
Keras 2 API support #87

Closed
4 tasks done
davinnovation opened this issue Apr 6, 2017 · 2 comments

Comments

@davinnovation
Contributor

davinnovation commented Apr 6, 2017

Title:
Keras 2 API support

  • klab-11-1-cnn_mnist.py
    line 61 : model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
    border_mode='valid',
    input_shape=input_shape))
    line 63 : model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1]))
    # In Keras 2, Convolution2D should be changed to Conv2D
  • klab-09-2-xor-nn.py
    line 14 : model.fit(x_data, y_data, nb_epoch=2000)
    # In Keras 2, nb_epoch should be changed to epochs
  • klab-05-2-logistic_regression_diabetes.py
    line 14 : model.fit(x_data, y_data, nb_epoch=2000)
    # In Keras 2, nb_epoch should be changed to epochs
  • klab-04-3-file_input_linear_regression.py
    line 15 : model.fit(x_data, y_data, nb_epoch=2000)
    # In Keras 2, nb_epoch should be changed to epochs
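The renames above can be summarized in a small sketch. `KERAS2_RENAMES` and `to_keras2_kwargs` are hypothetical names used only for illustration, not part of Keras; note that in Keras 2 the layer class itself also changes from `Convolution2D` to `Conv2D`.

```python
# Illustrative sketch (helper not from this repo): the Keras 1 -> Keras 2
# keyword-argument renames listed in this issue, applied mechanically.
KERAS2_RENAMES = {
    "nb_epoch": "epochs",      # model.fit(..., nb_epoch=...) -> epochs=...
    "border_mode": "padding",  # Convolution2D(border_mode=...) -> Conv2D(padding=...)
}

def to_keras2_kwargs(kwargs):
    """Return a copy of kwargs with Keras 1 names mapped to Keras 2 names."""
    return {KERAS2_RENAMES.get(key, key): value for key, value in kwargs.items()}

print(to_keras2_kwargs({"nb_epoch": 2000}))  # {'epochs': 2000}
```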
@hunkim
Owner

hunkim commented Apr 6, 2017

Good point. Feel free to send us a PR.

@kkweon kkweon changed the title klab-11-1-cnn_mnist.py [ improvement ] Keras 2 API support Apr 7, 2017
jihobak added a commit to jihobak/DeepLearningZeroToAll that referenced this issue Apr 7, 2017
- change 'Convolution2D' to 'Conv2D'
- change 'nb_epoch' to 'epochs'

	modified:   klab-04-3-file_input_linear_regression.py
	modified:   klab-05-2-logistic_regression_diabetes.py
	modified:   klab-09-2-xor-nn.py
	modified:   klab-11-1-cnn_mnist.py
@jihobak
Contributor

jihobak commented Apr 7, 2017

@kkweon

Adding a minor issue:

klab-11-1-cnn_mnist.py should be changed as well.
line 79: nb_epoch -> epochs

kkweon pushed a commit that referenced this issue Apr 7, 2017
- change 'Convolution2D' to 'Conv2D'
- change 'nb_epoch' to 'epochs'

	modified:   klab-04-3-file_input_linear_regression.py
	modified:   klab-05-2-logistic_regression_diabetes.py
	modified:   klab-09-2-xor-nn.py
	modified:   klab-11-1-cnn_mnist.py
@kkweon kkweon closed this as completed Apr 7, 2017
hunkim added a commit that referenced this issue Feb 25, 2021
* no plot for travis

* no plot for travis

* Added travis_wait

* Added travis_wait

* Added travis_wait

* Addressed #16

* Added tf.gradient

* Added char-seq

* PEP8

* gradient example

* Added backprop using tensorflow

* back prop added

* back prop added

* back prop added

* Added shape asset

* Fixed yaml

* fix: label and backprop (#21)

* hotfix: label and backprop

* fix: data label must start from 0

tf.one_hot requires Y data labels begin with 0
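A minimal NumPy sketch of that constraint; the `one_hot` helper below is hypothetical, written only to illustrate shifting labels so they begin at 0 (tf.one_hot yields all-zero rows for out-of-range indices, which is why labels such as 1..N must first be mapped to 0..N-1).

```python
import numpy as np

def one_hot(labels, num_classes):
    """Shift integer labels so they start at 0, then one-hot encode them."""
    labels = np.asarray(labels)
    labels = labels - labels.min()  # assumption: labels form a contiguous range
    return np.eye(num_classes)[labels]

print(one_hot([1, 2, 3], 3))
```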

* Fix to get the correct next close price (#22)

* refactor: softmax (#23)

1) Fix softmax cross entropy
2) Pretty Print Format
3) Fix typo

* Fix to get the correct next close price (#24)

* Fix to get the correct next close price

* Fix to get the correct next close price #2

* Minor first pass consistency fixes. (#17)

* Ensure data consistency over Keras and TF in 02-1.

* Renamed file (flow, not fflow)

* Changed X and y to [1, 2, 3]

* Let's fix travis

* change comment cost -> cost/loss (#26)

* Let's fix travis

	cost -> cost/loss

Revert "cost -> cost/loss"

This reverts commit 92a12aa.

annotation change cost -> cost/loss

* change comment cost -> cost/loss

* Update lab-13-3-mnist_save_restore.py

Comment out the "import matplotlib.pyplot as plt" line.
Modified the comment describing this file on line 1.
Erased useless tags on the tf.variable_scope() lines.
Fixed "Savor" -> "Saver" on line 113.

* Update lab-04-1-multi_variable_linear_regression.py (#28)

to avoid redundant operations while the loop runs.

* Update lab-04-2-multi_variable_matmul_linear_regression.py (#29)

to avoid redundant operations while the loop runs.

* Update lab-04-3-file_input_linear_regression.py (#30)

to avoid redundant operations while the loop runs.

* Update lab-12-3-char-seq-softmax-only.py (#46)

to add a comment describing this file.

* Update lab-12-2-char-seq-rnn.py (#45)

to add a first line describing this file.

* Update lab-10-3-mnist_nn_xavier.py (#40)

* Update lab-10-3-mnist_nn_xavier.py

to delete the unused "import numpy as np" line and to comment out the "import matplotlib.pyplot as plt" line.
Fixed the first line describing this file.

* Update lab-10-3-mnist_nn_xavier.py

* Update lab-10-1-mnist_softmax.py (#38)

to delete the unused "import numpy as np" line and to comment out the "import matplotlib.pyplot as plt" line.
When I run this file, the result is slightly different every time. I guess it comes from lines 41 to 45, but I don't know exactly.

* Update lab-11-1-mnist_cnn.py (#43)

to delete the unused "import numpy as np" line and to comment out the "import matplotlib.pyplot as plt" line.
Fixed the first line describing this file.

* Update lab-10-2-mnist_nn.py (#39)

to delete the unused "import numpy as np" line and to comment out the "import matplotlib.pyplot as plt" line.
Fixed the first line describing this file.

* Update lab-10-4-mnist_nn_deep.py (#41)

to delete the unused "import numpy as np" line and to comment out the "import matplotlib.pyplot as plt" line.
Fixed the first line describing this file.

* Update lab-11-2-mnist_deep_cnn.py (#44)

to delete the unused "import numpy as np" line and to comment out the "import matplotlib.pyplot as plt" line.
Fixed the first line describing this file.

* Update lab-10-5-mnist_nn_dropout.py (#42)

to delete the unused "import numpy as np" line and to comment out the "import matplotlib.pyplot as plt" line.
Fixed the first line describing this file.

* Added plot

* Let's run only labs for now

* edit lab-02-3 (#52)

* refactor: variable init method updated (#53)

tf.initialize_all_variables() method has been deprecated
use tf.global_variables_initializer() instead

* I have cleaned up the code. (#54)

To reduce use of the time-consuming sess.run() call,
duplicate calls are avoided,
and the hard-coded W feed value is split out into feed_W.

* refactor: lab-09-5-sigmoid_back_prop.py (#55)

* rename: softmax -> sigmoid

* refactor: reference, comment, loss,

1. Description of file was added
  - add Prof. Sung KIM's slides and others
  - define loss function and network architecture

2. Loss function change
  - In Prof. Sung KIM's slides, reduce_sum was used instead of
  reduce_mean. In this file, it's also modified to sync the slides

3. Refactor X, y variable name
  - Scikit Learn's way to differentiate matrices and vectors
  - When we say "X", the naming convention is to use uppercase because it's a multidimensional matrix
  - When we say "y", the naming convention is to use lowercase because it's a single vector

* doc: reference change

* Added more examples for lab01

* Moved files to Keras

* Moved files to Keras

* clean up the code

* Minor updates

* Added deep and wide

* Update lab-12-6-rnn_softmax_stock_prediction.py (#57)

to replace the `MinMaxScaler` function with a simpler one.

* Update lab-12-5-rnn_stock_prediction.py (#56)

to replace the `MinMaxScaler` function with a simpler one.

* trivial changes

* Rnn (#58)

* progress on lab 12

* updates on lab-12

* added optimizer

* updates on lab-12-1

* updated lab-12-01

* progress on lab-12-4-rnn_long_char

* updates on lab-12-4

* updates on lab-12-4

* with softmax

* fix softmax

* added accuracy

* added back multirnncell

* Fixed 12-5

* minor comments update

* Minor format changes

* 12-1 delete

* Fix travis

* updated req

* lab-07-2/lab-09-1/lab-09-2/lab-09-3 (#60)

* Update lab-07-2-learning_rate_and_evaluation.py

The concept of an epoch is not introduced in this code; you probably meant to express steps.

* Update lab-09-1-xor.py

For consistency with the other DeepLearningZeroToAll code and for readability and understandability, it would be good to add the shape.

* Update lab-09-2-xor-nn.py

* Update lab-09-3-xor-nn-wide-deep.py

* Update lab-07-2-learning_rate_and_evaluation.py (#61)

Added code showing the concept of an epoch to lab-07-2-learning_rate_and_evaluation.py.

* refactor: MinMaxScaler, doc, pep (#59)

1. MinMaxScaler
  - Vectorization

2. Doc change
  - Data file: [Open, High, Low, Volume, Close]
  - Old comments were in the wrong order
    - # Open, High Low, Close, Volume

3. Do not follow Max Length 79
  - for code readability
  - it's okay to increase to 100
  - https://www.python.org/dev/peps/pep-0008/#maximum-line-length
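The vectorized scaler described above can be sketched as follows; this is a minimal version of the idea, where the per-column axis handling and the small epsilon (to guard constant columns) are the details worth noting.

```python
import numpy as np

def min_max_scaler(data):
    """Scale each column of `data` to [0, 1] with vectorized NumPy ops.

    The small epsilon avoids division by zero when a column is constant.
    """
    data = np.asarray(data, dtype=float)
    numerator = data - data.min(axis=0)
    denominator = data.max(axis=0) - data.min(axis=0)
    return numerator / (denominator + 1e-7)

# Toy two-column input, e.g. [Open, Close]-style price data:
prices = np.array([[10.0, 1.0], [20.0, 3.0], [30.0, 2.0]])
print(min_max_scaler(prices))
```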

* Use Adam

* more interesting examples

* Added slide locations

* Added TF csv reader

* Added results

* Updated lab 05 for lectures

* Update lab-11-1-mnist_cnn.py and lab-11-2-mnist_deep_cnn.py (#65)

* Update lab-11-1-mnist_cnn.py

I think you probably originally tried to express it this way.

print('Learning started. It takes sometime.')
c, _ = sess.run([cost, optimizer], feed_dict=feed_dict)

* Update lab-11-2-mnist_deep_cnn.py

I think you probably originally tried to express it this way.

print('Learning started. It takes sometime.')
c, _ = sess.run([cost, optimizer], feed_dict=feed_dict)

* Update lab-11-1-mnist_cnn.py (#66)

I think it's probably W3, not W2.

* Updates for unity and understandability (#67)

* Update lab-10-1-mnist_softmax.py

* Update lab-10-2-mnist_nn.py

* Update lab-10-3-mnist_nn_xavier.py

* Update lab-10-4-mnist_nn_deep.py

* Update lab-10-5-mnist_nn_dropout.py

* Update lab-11-1-mnist_cnn.py

* Update lab-11-2-mnist_deep_cnn.py

* Update lab-13-1-mnist_using_scope.py

* Update lab-13-2-mnist_tensorboard.py

* Update lab-13-3-mnist_save_restore.py

* Added lab 08

* Update lab-08-tensor_transformations.py (#68)

'tf.transpose ' is added before tf.shape
identity transformation with tf.transpose

* Put advanced ones bottom

* Updated code for lab7

* Updated code for lab7

* Updated lab07 files

* Added class and ensemble

* Added Ensemble results

* Update lab-03-2-minimizing_cost_gradient_update.py (#70)

Changed tf.reduce_sum to tf.reduce_mean.
About issue #69

* Added lab08 notebook

* Update tensor handling

* Remove W.eval for simplicity

* Rename softmax to FC

* Added logits for loss

* refactor: add a FC layer after RNN (#74)

* refactor: add a FC layer

1. Add a FC Layer
  - #73
2. Add a numpy doc style
3. Explicitly define an activation function in LSTMCell
  - default is tanh
4. Change to AdamOptimizer
5. Use `with` statement else the session must be closed explicitly
6. No linebreak for the 79 char limit

* refactor: nn.tanh -> tanh

* Added layers

* FC as a default

* Added FC after LSTM

* FC layers

* Feature/batchnorm (#76)

* refactor: add a FC layer

1. Add a FC Layer
  - #73
2. Add a numpy doc style
3. Explicitly define an activation function in LSTMCell
  - default is tanh
4. Change to AdamOptimizer
5. Use `with` statement else the session must be closed explicitly
6. No linebreak for the 79 char limit

* refactor: nn.tanh -> tanh

* add: MNIST Batchnormalization layer

* fix: image

* XOR Tensorboard (#77)

* Added XOR Tensorboard
Updated .gitignore file to ignore logs directory

* Renamed lab-09-7 to lab-09-3

* several commit in this PR (#80)

* modify code for compatibility with keras 2.x.

* change requirement Keras 1.2.2 to 2.0.2.

* fix Travis CI Build Error(First try)

* Fix Travis CI Build Error(Second try).

* Fix Travis CI Build Error(Third try).

* Fix Travis CI Build Error(Fourth try).

* feat: Python2 Support(print)

* fix: Travis-ci script

* add: pylint

* fix: Keras v2 support

* fix: travis-ci

1. pylint does not support py3.6...

* renamed numbers and touch on tensorboard

* Changed file names

* Added mxnet

* keras code PR (#81)

* - add klab-10-1-mnist_softmax.py based on lab-10-1-mnist_softmax.py (tested on Ubuntu 14.04)

* Update klab-10-1-mnist_softmax.py

- Update to version Keras 2.0.2
- Tested on Ubuntu 16.04

* PyTorch version lab codes  (#88)

* Added PyTorch

PyTorch code based on TF-Lab code
Rest of lab codes will be added.

* Update lab-02-1&2-linear_regression.py

* Update lab-04-2-multi_variable_linear_regression.py

* Update lab-04-3-file_input_linear_regression.py

* Update lab-05-1-logistic_regression.py

* Update lab-05-2-logistic_regression_diabetes.py

* Update lab-06-1-softmax_classifier.py

* Update lab-06-2-softmax_zoo_classifier.py

* Update lab-09-1-xor.py

* Update lab-09-2-xor-nn.py

* Lab 10 MNIST and High-level TF API(dropout, batchnorm, xavier) (#89)

* Lab 10 MNIST and High-level TF API(dropout, batchnorm, xavier)

* PyTorch version lab codes  (#88)

* Added PyTorch

PyTorch code based on TF-Lab code
Rest of lab codes will be added.

* Update lab-02-1&2-linear_regression.py

* Update lab-04-2-multi_variable_linear_regression.py

* Update lab-04-3-file_input_linear_regression.py

* Update lab-05-1-logistic_regression.py

* Update lab-05-2-logistic_regression_diabetes.py

* Update lab-06-1-softmax_classifier.py

* Update lab-06-2-softmax_zoo_classifier.py

* Update lab-09-1-xor.py

* Update lab-09-2-xor-nn.py

* add Lab 10 MNIST and High-level TF API(dropout, batchnorm, xavier)

* add Lab 10 MNIST and High-level TF API(dropout, batchnorm, xavier)

* Remove Pytorch files

* Revert "Lab 10 MNIST and High-level TF API(dropout, batchnorm, xavier) (#89)" (#92)

This reverts commit de74aab.

* Add Lab 10 MNIST and High-level TF API(dropout, batchnorm, xavier) (#93)

* Update lab-03-X-minimizing_cost_tf_gradient.py (#95)

Update parameters of compute_gradients for TypeError.

* reshape (#94)

* Keras 2 API support #87 (#96)

- change 'Convolution2D' to 'Conv2D'
- change 'nb_epoch' to 'epochs'

	modified:   klab-04-3-file_input_linear_regression.py
	modified:   klab-05-2-logistic_regression_diabetes.py
	modified:   klab-09-2-xor-nn.py
	modified:   klab-11-1-cnn_mnist.py

* change name of W_val and cost_val  (#99)

* Change Variable name of W_val and cost_val

W_val and cost_val to W_history and cost_history
To understand the purpose of variables easily

* Revert "Change Variable name of W_val and cost_val"

This reverts commit d75f37f.

* Change Variable name of W_val and cost_val

W_val and cost_val to W_record and cost_record
To understand the purpose of variables easily

* Change name of W_val and cost_val

W_val and cost_val to W_record and cost_record
To understand the purpose of variables easily

#99 wasn't refactored by mistake

* Change W_val cost_val

W_val and cost_val to W_history and cost_history
To understand the purpose of variables easily

* Updated CNN basics

* change list indentation style. (#101)

* modify code for compatibility with keras 2.x.

* change requirement Keras 1.2.2 to 2.0.2.

* fix Travis CI Build Error(First try)

* Fix Travis CI Build Error(Second try).

* Fix Travis CI Build Error(Third try).

* Fix Travis CI Build Error(Fourth try).

* change list indentation style.

* sync with original repo.

* Need one ensemble. Also class and layers sound better.

* Added lab-10 codes (#100)

* Added lab-10 codes

PyTorch codes for lab-10-1 ~ lab-10-5 are added.

Thank you.

* run autopep8 check

* add: CONTRIBUTING guide (#102)

1. It's a guideline to let people know how to contribute

* add: lab-11 PyTorch codes (#104)

* Added lab-10 codes

PyTorch codes for lab-10-1 ~ lab-10-5 are added.

Thank you.

* run autopep8 check

* Added lab-11 codes

* [WIP] Add MXNet examples (#106)

* lab 11

mxnet support first attempt

fix lab 11

fix lab 11

fix lab 11

fix lab 11

fix lab 11

fix

fix lab-11

fix

Add readme

update readme

* update

* add new line

* Create/pytorch/lab12 (#108)

* Add lab-12-1-hello-rnn.py for PyTorch

* Add lab-12-1,2,4,5 for PyTorch
* imported from Tensorflow codes in repo

* [MXNet] Add lab-04-3 and lab-12-4 (#110)

* lab 11

mxnet support first attempt

fix lab 11

fix lab 11

fix lab 11

fix lab 11

fix lab 11

fix

fix lab-11

fix

Add readme

update readme

* lab 11

mxnet support first attempt

fix lab 11

fix lab 11

fix lab 11

fix lab 11

fix lab 11

fix

fix lab-11

fix

Add readme

update readme

add lab-11-5

fix lab-11-2

fix 11-5

fix 11-5

* update

* add new line

* add lab-04-3

* remove tf line

* add lab-12-4

* add data

* add output reference

* using forward_backward directly

* Starting the lab related to the Chainer framework. (#111)

* [MXNet] Simplify regression examples using fit + add logistic regression and softmax classification (#114)

* Revise mxlab-04-3, 12-4 + add mxlab-05-2, 06-2

* add result

* Chainer/lab1and2 (#115)

* Starting the lab related to the Chainer framework.

* Adding Chainer labs 1-1 (basics) and 2(linear regression).

* add lab-12-5 (#117)

* add klab 07 #120 (#121)

* add klab 07 #120

* update klab-07-3-linear_regression_min_max.py
using sklearn's MinMaxScaler instead of the custom MinMaxScaler function

* fix: travis ci (#124)

1> drop pylint

* fix: requirements.txt (#125)

1. add torch, chainer
2. drop pylint
3. update Keras

* Revert "fix: requirements.txt (#125)" (#126)

This reverts commit 7c50009.

* refactor: PEP8 FLAKE8 (#127)

1. Format to PEP8
2. Remove unused var/package

* Chainer/lab5and10 (#118)

* Starting the lab related to the Chainer framework.

* Adding Chainer labs 1-1 (basics) and 2(linear regression).

* Adding lab 5-2 logistic regression (mnist) and lab 10-2 nn (mnist)

* Fixed typo

* modify list indentation style. (#128)

* supplement code for "Out of Memory" issue (#131)

fix: OOM issue

The MNIST.test data set is too big for some systems, which causes an Out of
Memory issue.
The commented code splits the dataset and predicts in chunks to avoid the "OOM" issue.

1. Leave comments referring to the optional file
2. Add the optional file (lab-11-X-mnist_cnn_low_memory.py)
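The workaround can be sketched generically; `predict_in_batches` is a hypothetical helper (not the repo's code) standing in for running `sess.run` over fixed-size chunks of the test set and concatenating the results, which bounds peak memory use.

```python
import numpy as np

def predict_in_batches(predict_fn, data, batch_size=512):
    """Run predict_fn over `data` in chunks to bound peak memory use."""
    outputs = [predict_fn(data[i:i + batch_size])
               for i in range(0, len(data), batch_size)]
    return np.concatenate(outputs)

# Toy stand-in for a model's forward pass:
double = lambda batch: batch * 2
result = predict_in_batches(double, np.arange(10), batch_size=4)
print(result)  # doubles every element, same as one full-batch call
```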

* add Keras/klab-10-2-mnist_nn.py (#139)

* remove UserWarning(remove learning rate from compile()) (#133)

1. Explicitly define an optimizer

* modify learning rate for linear regression. (#143)

* remove the bad comment (#145)

* remove the bad comment for issue #138 (#146)

* remove the bad comment for issue #138 (#147)

* delete the "# This example does not work" at line2 (#151)

* Small typo in lab-01-basics.ipynb (#123) (#158)

Thank you!

* add klab-13-2-mnist_tensorboard.py. (#156)

* Small typos corrected. (#160)

* Exercise answer and style changes (#163)

* Small typos corrected.

* Exercise answer

* Remove useless variable

* Change some comments, change some style

* Update lab-12-5-rnn_stock_prediction.py (#165)

Modify rmse bug in Test Step

* Update lab-12-5-rnn_stock_prediction.py (#166)

change variable name learing_rate to learning_rate

* fix: GradientDescent --> AdamOptimizer (#168)

* Modify MultiRNNCell problems (#164)

* Modify MultiRNNCell problems

* Modify duplicated code

* fix: dropout layers in lab-10-7-mnist_nn_higher_level_API.py (#170)

- Layers were not connected correctly

* updated x-xor

* fix the output sample to be compatible with ‘cost = tf.reduce_mean’ instead of ‘cost = tf.reduce_sum’, which was a buggy version. (#175)

* Fix the result sample to be compatible with not only ‘W’ and ‘learning_rate’ pair but also those of the lecture. (#176)

* Fixed XOR back prop. TODO: Fix other back props

* Adding chlab-10-3 MNIST and MLP with Dropout (#136)

* Added Selu (WIP)

* add: pure numpy (#180)

add: Numpy README

* remove redundant an initializing global variables line (#181)

* Update lab-12-2-char-seq-rnn.py (#183)

X_for_fc should be used instead of output

* Hotfix/lab09: Results (#185)

* fix a result of ‘lab-09-3’
* fix a result of ‘lab-09-4’
* fix a result of ‘lab-09-x-xor-nn-back_prop.py’

* add ipynb directory (#162)

* add ipynb lab-02-1, lab-02-2

* add ipynb lab-02-3, lab-03-1

* add ipynb lab-03-2, lab-03-3, lab-03-X

* Rename to lower case for consistency (#188)

* Unblock using matplotlib.pyplot (#190)

I have no problem with matplotlib.pyplot in my environment. 
Is there any problem?

* for conciseness (#193)

for conciseness

* Fixes deprecated `tf.arg_max` (#195)

tf.arg_max is deprecated -> tf.argmax should be used instead

Related issue: #194

* Update lab-11-4-mnist_cnn_layers.py (#205)

dropout rate = 1 - keep_prob
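That relationship is easy to get backwards, so here is a small pure-NumPy sketch (standing in for the TF call): `rate` is the fraction of units dropped, `keep_prob` the fraction kept, hence `rate = 1 - keep_prob`.

```python
import numpy as np

keep_prob = 0.7        # older placeholder-style argument: fraction of units kept
rate = 1 - keep_prob   # tf.layers.dropout-style argument: fraction dropped

# Simulate a dropout mask: True where a unit survives.
rng = np.random.default_rng(0)
mask = rng.random(100_000) >= rate
print(round(mask.mean(), 2))  # close to keep_prob
```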

* Update lab-11-5-mnist_cnn_ensemble_layers.py

dropout rate = 1 - keep_prob

* Delete unnecessary codes

* move calculating total_batch to outside

* fix: sigmoid function comment

* Update README.md (#211)

replace the long URL with the shortened one, which is optimized for sharing and officially supported by YouTube.

* revise typo states -> _states

* Modify data scaling problem

* Update lab-03-2-minimizing_cost_gradient_update.py

Fix the comment.

* modify softmax_cross_entropy_with_logits function

Revise the warning from softmax_cross_entropy_with_logits

* Update lab-04-1-multi_variable_linear_regression.py

Deleting this code because it was judged unnecessary.

* Update pylint

* Fix .pylintrc

* Fix .pylintrc

* Update lab-02-2-linear_regression_feed.py

edit comment

* Update lab-03-3-minimizing_cost_tf_optimizer.py

Fixed duplicate 'sess.run' when training. (This seems more intuitive.)

* Update lab-03-2-minimizing_cost_gradient_update.py

Fixed duplicate 'sess.run' when training.

* Update lab-04-3-file_input_linear_regression.py

1. Add data output and train output example.
2. Change the data output format to make it easier to read.

* Update lab-03-X-minimizing_cost_tf_gradient.py

1. The value of W changes because of 'apply_gradients', not because of 'gradient'. Therefore, it is not necessary to print W twice.
2. Modified the output comment.
3. 'train = optimizer.minimize(cost)' is an unnecessary statement.

* Add logistic regression for ipynb

* Update lab-04-4-tf_reader_linear_regression.py

Actual results are very different. (I've tried it several times.) So I fixed the result output comment.

* Update lab-06-2-softmax_zoo_classifier.py (#231)

* Update lab-06-2-softmax_zoo_classifier.py

1. Some line change for readability.
2. Add more detail output.

* Update lab-06-2-softmax_zoo_classifier.py

* Update lab-06-2-softmax_zoo_classifier.py

* Update lab-06-1-softmax_classifier.py (#232)

* Update lab-06-1-softmax_classifier.py

More simplified code and added missing output.

* Update lab-06-1-softmax_classifier.py

* Update lab-07-1-learning_rate_and_evaluation.py

1. Modified to a learning rate that is appropriate for learning.
2. The arg_max is modified to argmax.

* Update lab-07-3-linear_regression_min_max.py

1. Changed the function name because there are many inquiries that it is confused with 'MinMaxScaler' of 'sklearn.preprocessing'.
2. Added normalized output.

* Update lab-07-4-mnist_introduction.py

1. Change arg_max to argmax.
2. Change the name of the variable 'total_batch' to 'num_iterations' to understand the concept of 'iterations' mentioned in the lecture.

* Update lab-09-1-xor.py (#236)

* Update lab-09-1-xor.py

Removed unnecessary codes.

* Update lab-09-1-xor.py

Add numpy again.

* Update lab-09-1-xor.py

Commit suggestion.

* Update lab-09-2-xor-nn.py (#237)

* Update lab-09-2-xor-nn.py

1. Removed unnecessary codes.

* Update lab-09-2-xor-nn.py

Add numpy again.

* Update lab-09-2-xor-nn.py

Add numpy again.

* Update lab-09-2-xor-nn.py

Commit suggestion.

Co-Authored-By: qoocrab <qoocrab@gmail.com>

* Update lab-09-2-xor-nn.py

Commit Suggestion.

Co-Authored-By: qoocrab <qoocrab@gmail.com>

* Update lab-09-2-xor-nn.py

Commit Suggestion.

Co-Authored-By: qoocrab <qoocrab@gmail.com>

* Update lab-09-2-xor-nn.py

Add f-string output.

* Update lab-09-2-xor-nn.py

* Update lab-09-3-xor-nn-wide-deep.py (#238)

* Update lab-09-3-xor-nn-wide-deep.py

Removed unnecessary codes.

* Update lab-09-3-xor-nn-wide-deep.py

Add numpy again.

* Update lab-09-3-xor-nn-wide-deep.py

Add numpy again.

* Update lab-09-3-xor-nn-wide-deep.py

Co-Authored-By: qoocrab <qoocrab@gmail.com>

* Update lab-09-3-xor-nn-wide-deep.py

Co-Authored-By: qoocrab <qoocrab@gmail.com>

* Update lab-09-4-xor_tensorboard.py (#239)

* Update lab-09-4-xor_tensorboard.py

Simplify code and delete unnecessary variables.

* Update lab-09-4-xor_tensorboard.py

Co-Authored-By: qoocrab <qoocrab@gmail.com>

* Update lab-09-4-xor_tensorboard.py

Co-Authored-By: qoocrab <qoocrab@gmail.com>

* Update lab-09-4-xor_tensorboard.py

Co-Authored-By: qoocrab <qoocrab@gmail.com>

* Update lab-09-4-xor_tensorboard.py

Co-Authored-By: qoocrab <qoocrab@gmail.com>

* Update lab-09-4-xor_tensorboard.py

Add f-string output.

* Drop 2.7 support

* Update lab-10-1-mnist_softmax.py

1. change epochs, iterations variable name.
2. change optimizer to softmax_cross_entropy_with_logits_v2.

* Update Lab-02, Lab-03  (#240)

* Update lab-03-X-minimizing_cost_tf_gradient.py

1. The value of W changes because of 'apply_gradients', not because of 'gradient'. Therefore, it is not necessary to print W twice.
2. Modified the output comment.
3. 'train = optimizer.minimize(cost)' is an unnecessary statement.

* Modify the code for readability. (Black formatting)

* Update lab-12-0-rnn_basics.ipynb

1. tf.contrib.rnn.BasicRNNCell will be removed in a future version. so changed to tf.keras.layers.SimpleRNNCell.
2. tf.contrib.rnn.BasicLSTMCell will be removed in a future version. so changed to tf.nn.rnn_cell.LSTMCell.

* Conv2d ensemble model using MNIST data

* tensorflow 2.0 conversion, initial work. (tried to port as close as possible from tf1 code)

* Update lab-12-2-char-seq-rnn.py

a minor typo revision

* Conv2d ensemble model using MNIST data

* wide and deep LSTM layer

accuracy 1.0 in 200 epochs

* add cifar100 klab; performance is 99%

* modified error: test  pass sk server

* change epoch 1-> 10

* Fix tests/test_square.py to use TF 2.0 Eager Sessions

* Update .travis.yml to remove pylint, which does not do anything useful anyway other than false positives.

* Update tf2-12-5-rnn_stock_prediction.py

* Create stale.yml

* Create label.yml

* Create labeler.yml

* Update stale.yml

Update `days-before-stale` and `days-before-close`

* 30 days no activity -> stale
* 5 days since it's labeled as stale -> close

* fix a typing error (#258)

Thanks

* updated keras models (#259)

* updated keras models

* few more changes

* Update labeler.yml

* Update .github/labeler.yml to fix a failure.

* add model.fit func, tf2-10-2-mnist.nn.py

* Delete label.yml

* fix typo : lab-11-4-mnist_cnn_layers.py

* SyntaxWarning tf2-12-4-rnn_long_char.py (#268)

* Created via Colaboratory

* Created via Colaboratory

* Created via Colaboratory

Co-authored-by: Sung Kim <hunkim@gmail.com>
Co-authored-by: Mo Kweon <kkweon@gmail.com>
Co-authored-by: Jin Gyu Chong <jingyu.chong@gmail.com>
Co-authored-by: Sangwhan "fish" Moon <innodb@gmail.com>
Co-authored-by: Seung Hyun Jeon <shtowever@gmail.com>
Co-authored-by: Jongmin <jijupax@gmail.com>
Co-authored-by: zeran4 <jehoonshin@naver.com>
Co-authored-by: Jenny <jennykang@users.noreply.github.com>
Co-authored-by: LeeTaeMin <dnflxoals@gmail.com>
Co-authored-by: maestrojeong <legend4020@snu.ac.kr>
Co-authored-by: BaekSeungYun <bkryusim@gmail.com>
Co-authored-by: Dongjun Lee <redongjun@gmail.com>
Co-authored-by: skyer9 <skyer9@gmail.com>
Co-authored-by: davinnovation <davinnovation@gmail.com>
Co-authored-by: togheppi <licslegend@gmail.com>
Co-authored-by: piper <jihoBak0@gmail.com>
Co-authored-by: kaka120011 <kaka120011@gmail.com>
Co-authored-by: Lewis Kim (Sunghyun Kim) <kshr2d2@gmail.com>
Co-authored-by: Xingjian Shi <xshiab@ust.hk>
Co-authored-by: Min-je Choi <devnote5676@naver.com>
Co-authored-by: surfertas <tasuku@gmail.com>
Co-authored-by: JaeSeok <qpark99@users.noreply.github.com>
Co-authored-by: JEONG HYUN SEOK <nicewook@hotmail.com>
Co-authored-by: HongCheng <kwchenghong@gmail.com>
Co-authored-by: Dong-ryull Shin / Ryan <1982sdr@hanmail.net>
Co-authored-by: Jeff-HOU <1085363891@qq.com>
Co-authored-by: astriker <astriker@gmail.com>
Co-authored-by: antil1 <loveskywhy@naver.com>
Co-authored-by: EunSik Park <kongse92@gmail.com>
Co-authored-by: yonghwee.kim <fklh15@naver.com>
Co-authored-by: Syen Park <syenpark@gmail.com>
Co-authored-by: Soonmok Kwon <ep1804@users.noreply.github.com>
Co-authored-by: dextto <zd.gary@gmail.com>
Co-authored-by: Jinman Chang <jinman190@gmail.com>
Co-authored-by: wizardbc <wizardbc@gmail.com>
Co-authored-by: Myoungdo Park <cuspymd@gmail.com>
Co-authored-by: HanbumKo <37624747+HanbumKo@users.noreply.github.com>
Co-authored-by: forybm <forybm1@naver.com>
Co-authored-by: jjangga0214 <jjangga@kookmin.ac.kr>
Co-authored-by: Chungmin Park(Chuck) <chuck.m.park@gmail.com>
Co-authored-by: Kyu Chul Kim <gooodcheer@gmail.com>
Co-authored-by: qoocrab <qoocrab@gmail.com>
Co-authored-by: woongs <ccw1021.dev@gmail.com>
Co-authored-by: healess <healess1@gmail.com>
Co-authored-by: LukeSungukJung <xrjseka615@gmail.com>
Co-authored-by: jayjun911 <jayjun911@gmail.com>
Co-authored-by: Sihyeon Kim <sihyeonkim0923@gmail.com>
Co-authored-by: UnKuk Joung <jukyellow@gmail.com>
Co-authored-by: Mo Kweon <kkweon@google.com>
Co-authored-by: HanSeokhyeon <38755868+HanSeokhyeon@users.noreply.github.com>
Co-authored-by: Gaurav Ghati <gauravghati225@gmail.com>
Co-authored-by: Suwan Kim <ksw3337@neople.co.kr>
Co-authored-by: godpeny <slsnsepdpd@gmail.com>
Co-authored-by: JinsubPark <72424093+JinsubPark@users.noreply.github.com>