This repository has been archived by the owner on Oct 5, 2020. It is now read-only.

Commit

Improve logging
Taras Kushnir authored and Taras Kushnir committed Nov 9, 2018
1 parent 7d9718d commit 13d4c79
Showing 2 changed files with 11 additions and 13 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -19,7 +19,7 @@ After this you will be able to understand code in the repo.

C++ code in the repo is simple enough to work in Windows/Mac/Linux. You can use CMake to compile it (check out `.travis.yml` or `appveyor.yml` to see how it's done in Linux or Windows).

-In order to use MNIST data you will need to unzip archives in the `data/` directory first.
+In order to use MNIST data you will need to unzip archives in the `data/` directory first. Also compiled executable accepts path to this `data/` directory as first command line argument.

# See
Main learning loop (as defined in `network2_t::backpropagate()`) looks like this:
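The README addition above can be sketched as a small helper; this is hypothetical (the repo's actual `main()` is not part of this diff), showing one way an executable might accept the `data/` path as its first command line argument:

```cpp
#include <string>

// Hypothetical helper -- the repo's real argument handling is not shown
// in this diff, so the name and the "data/" default are illustrative only.
std::string pick_data_dir(int argc, char *argv[]) {
    // Use the first command line argument when present, otherwise
    // fall back to the conventional "data/" directory.
    return (argc > 1) ? std::string(argv[1]) : std::string("data/");
}
```

A `main()` would then call `pick_data_dir(argc, argv)` once and load the unzipped MNIST archives from the returned directory.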
22 changes: 10 additions & 12 deletions src/network/network2.cpp
@@ -12,30 +12,28 @@ network2_t::network2_t(std::initializer_list<network2_t::layer_type> layers):
{ }

 void network2_t::train(network2_t::training_data const &data,
-                       const optimizer_t<data_type> &strategy,
+                       optimizer_t<data_type> const &optimizer,
                        size_t epochs,
                        size_t minibatch_size) {
     log("Training using %d inputs", data.size());
     // big chunk of data is used for training while
     // small chunk - for validation after some epochs
     const size_t training_size = 5 * data.size() / 6;
     std::vector<size_t> eval_indices(data.size() - training_size);
     // generate indices of the held-out evaluation inputs
     // (training_size .. data.size() - 1)
     std::iota(eval_indices.begin(), eval_indices.end(), training_size);
 
-    for (size_t j = 0; j < epochs; j++) {
-        size_t k = 0;
+    for (size_t e = 0; e < epochs; e++) {
         auto indices_batches = batch_indices(training_size, minibatch_size);
-        for (auto &indices: indices_batches) {
-            if (k++ % 50 == 0) { log("Batched indices %d out of %d", k, indices_batches.size()); }
-            update_mini_batch(data, indices, strategy);
+        const size_t batches_size = indices_batches.size();
+
+        for (size_t b = 0; b < batches_size; b++) {
+            update_mini_batch(data, indices_batches[b], optimizer);
+            if (b % (batches_size/4) == 0) { log("Processed batch %d out of %d", b, batches_size); }
         }
 
-        //if (j % 2 == 0) {
-        auto result = evaluate(data, eval_indices);
-        log("Epoch %d: %d / %d", j, result, eval_indices.size());
-        //} else {
-        //    log("Epoch %d ended", j);
-        //}
+        auto result = evaluate(data, eval_indices);
+        log("Epoch %d: %d / %d", e, result, eval_indices.size());
     }
 
     auto result = evaluate(data, eval_indices);
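One caveat with the commit's new progress check: `b % (batches_size/4)` has a zero divisor whenever an epoch contains fewer than four batches, which is undefined behaviour for the `%` operator. A guarded variant (a suggestion, not part of the commit) might look like:

```cpp
#include <algorithm>
#include <cstddef>

// Guarded variant of the commit's progress check: clamp the step to at
// least 1 so that small epochs (fewer than four batches) never produce
// a zero divisor for %.
bool should_log_batch(std::size_t b, std::size_t batches_size) {
    // Log roughly four times per epoch.
    const std::size_t step = std::max<std::size_t>(std::size_t{1}, batches_size / 4);
    return b % step == 0;
}
```

In the loop above this would replace the inline check: `if (should_log_batch(b, batches_size)) { log("Processed batch %d out of %d", b, batches_size); }`. Note also that `%d` with `size_t` arguments is technically mismatched; `%zu` would be the portable choice if `log` forwards its format string to printf.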

