
Release 0.3 #46
Merged · 146 commits · Nov 13, 2018

Commits
cf9935a
Added PR template back.
vmarois Oct 24, 2018
bafd1b4
Clean up .gitignore.
vmarois Oct 24, 2018
18ef169
Trying out to build doc without setup.py.
vmarois Oct 24, 2018
736013e
Typo.
vmarois Oct 24, 2018
dbb101c
Remove version constraint on torchvision.
vmarois Oct 24, 2018
04fb1cd
Updated install & doc update notes.
vmarois Oct 25, 2018
1f82de0
Light tweaking.
vmarois Oct 25, 2018
7fe64f9
Started page on architecture of mip.
vmarois Oct 25, 2018
138575d
Continue writing.
vmarois Oct 25, 2018
8f971df
Continue working on doc.
vmarois Oct 25, 2018
814dab2
Finished including paper content.
vmarois Oct 25, 2018
1f1f47d
Added sampler factory to trainers, index selection not used/fully fun…
tkornuta-ibm Oct 25, 2018
9acf38a
Basic workers using SamplerRactory, unified build() method for factories
tkornuta-ibm Oct 25, 2018
49669f9
Unified ControllerFactory.build() with other factories.build()
tkornuta-ibm Oct 25, 2018
bf13f76
Finished enhancing the doc.
vmarois Oct 26, 2018
aed7668
Sampler working with indices: a) range [0 11], b) list [1 2 3], c) ca…
tkornuta-ibm Oct 26, 2018
7764b48
Added accuracy entry to stat_agg in ImageTextToClass.
vmarois Oct 26, 2018
caf29b5
Collect batch_size.
vmarois Oct 26, 2018
7251164
Some small cleanup.
vmarois Oct 26, 2018
70fdd44
Added small script to locally build doc.
vmarois Oct 26, 2018
9f1cb55
Merge pull request #33 from IBM/feat/doc-enhancement
tkornuta-ibm Oct 26, 2018
08be517
Merge branch 'feat/sampler' of github.com:IBM/mi-prometheus
tkornuta-ibm Oct 26, 2018
5961e67
Addressed the raised issues: yaml list, error when non supported samp…
tkornuta-ibm Oct 26, 2018
96dc483
Small typos.
vmarois Oct 26, 2018
b687184
Merge pull request #32 from IBM/feat/sampler
vmarois Oct 26, 2018
790b733
Tiny name refactor OnLine -> Online, OffLine -> Offline
tkornuta-ibm Oct 26, 2018
07f6f24
Work in progress on splitter
tkornuta-ibm Oct 26, 2018
6b78164
Merge branch 'develop' of github.com:IBM/mi-prometheus into feat/sampler
tkornuta-ibm Oct 26, 2018
3f8639f
Added default variables to vision problems, tested on everything ASID…
tkornuta-ibm Oct 26, 2018
22d47ae
index_splitter operational
tkornuta-ibm Oct 27, 2018
ab8a73d
Fixed bug in SamplerFactory.
vmarois Oct 27, 2018
f1404af
Moved index_splitter function to utils
tkornuta-ibm Oct 29, 2018
8bcd6b6
Merge branch 'feat/sampler' of github.com:IBM/mi-prometheus into feat…
tkornuta-ibm Oct 29, 2018
894438c
Merge pull request #38 from IBM/master
tkornuta-ibm Oct 30, 2018
49bc3be
LGTM catched warning fix
tkornuta-ibm Oct 30, 2018
3b30ea1
Merge branch 'develop' of github.com:IBM/mi-prometheus into feat/sampler
tkornuta-ibm Oct 30, 2018
8416095
Bugfix: bringing back export of experiment config yo yaml file
tkornuta-ibm Oct 30, 2018
1eb1bee
Moved configuration dumping/logging to end of configuration setup
tkornuta-ibm Oct 30, 2018
1b5e8b8
Lots of cleanup, polishing and doc enhancement.
vmarois Oct 30, 2018
d8fe3cf
Splitting workers into grid_workers, workers and helpers
tkornuta-ibm Oct 30, 2018
07813e6
Changed the organization of workers, grid workers and helpers, added …
tkornuta-ibm Oct 30, 2018
cf3eeb2
Fixing bugs in SamplerFactory, plus removing additional options due t…
tkornuta-ibm Oct 30, 2018
8487ad6
Fixed setup script with proper links to grid_workers and helpers
tkornuta-ibm Oct 30, 2018
ffd819c
Tested splitting in MNIST, options: range in config, range from file,…
tkornuta-ibm Oct 30, 2018
5b8aa1f
Fixed printing the right number of samples in 'validating/testing ove…
tkornuta-ibm Oct 30, 2018
1b9cdaf
Some light polishing & cleaning.
vmarois Oct 31, 2018
1b51341
Updated the doc.
vmarois Oct 31, 2018
f4d4b84
Merge pull request #36 from IBM/feat/sampler
vmarois Oct 31, 2018
4cb398f
Trying to build the documentation.
vmarois Oct 31, 2018
941d680
Update readthedocs.yml
vmarois Oct 31, 2018
2440b6d
trying use_system_site_packages=true
vmarois Oct 31, 2018
9d765f4
comment
tkornuta-ibm Oct 31, 2018
b29825f
Added second option of loading model to trainers, fixed exception han…
tkornuta-ibm Oct 31, 2018
aba29b7
Added better error handling when loading the model to tester
tkornuta-ibm Oct 31, 2018
3966afe
Update tester.py
tkornuta-ibm Oct 31, 2018
1083806
Update trainer.py
tkornuta-ibm Oct 31, 2018
639541b
Confirmation handling - in try except
tkornuta-ibm Oct 31, 2018
56f57ed
Fixing 'old-style' classes in DNC
tkornuta-ibm Oct 31, 2018
a0ff954
Merge pull request #43 from IBM/fix/lgtm_errors
tkornuta-ibm Oct 31, 2018
446a9f6
MAES fix - super() enclosing class
tkornuta-ibm Oct 31, 2018
3617c8c
General fix: Except block directly handles BaseException
tkornuta-ibm Oct 31, 2018
8883640
Merge pull request #44 from IBM/fix/lgtm_errors
tkornuta-ibm Oct 31, 2018
9149045
name fix in make.bat
tkornuta-ibm Oct 31, 2018
1ef2a44
Merge pull request #42 from IBM/feat/model_loading
vmarois Nov 2, 2018
7d39594
Cleaned up MNIST
tkornuta-ibm Nov 5, 2018
7aa3ec4
3 vision models working on updated MNIST problem
tkornuta-ibm Nov 6, 2018
9272e6d
MNIST description cleanup
tkornuta-ibm Nov 6, 2018
0bdfb99
MNIST and CIFAR10 working with AlexNet and SimpleCNN models
tkornuta-ibm Nov 6, 2018
21d9342
First fixes of number of available CPUs, grid_trainer and tester work…
tkornuta-ibm Nov 6, 2018
b38d244
Merge branch 'master' of github.com-tkornut:IBM/mi-prometheus into de…
tkornuta-ibm Nov 6, 2018
6faaca7
Merge branch 'develop' of github.com-tkornut:IBM/mi-prometheus into d…
tkornuta-ibm Nov 6, 2018
2b54025
Fixed epoch size handling in both trainers and testers
tkornuta-ibm Nov 6, 2018
034ee5a
Fix typo
vmarois Nov 6, 2018
95b5422
Merge pull request #54 from IBM/fix/workers_epoch_size
vmarois Nov 6, 2018
cd9c312
[Fix] Documentation building (#55)
vmarois Nov 6, 2018
b844d32
Some cleanup.
vmarois Nov 6, 2018
a02df75
Merge branch 'develop' into lenet5
vmarois Nov 6, 2018
49ada6e
Merge pull request #53 from IBM/lenet5
vmarois Nov 6, 2018
e1cceb0
Add link to external docs.
vmarois Nov 6, 2018
ef509e4
Trying to fix LGTM warnings.
vmarois Nov 6, 2018
f2332ae
Cleaned up grid testers, now relying onf the fact whether best_model.…
tkornuta-ibm Nov 7, 2018
0b8105d
hints -> hint
tkornuta-ibm Nov 7, 2018
818e9c0
Work on mip-grid-analyzer, working up to the point when statistics ar…
tkornuta-ibm Nov 7, 2018
af5fba4
Removed spannig many processes, commented file content analysis (for …
tkornuta-ibm Nov 7, 2018
ac350c6
Merge pull request #56 from IBM/feat/link_to_external_docs
tkornuta-ibm Nov 7, 2018
c2eabc2
Merge branch 'develop' into fix/grid-analyzer
tkornuta-ibm Nov 7, 2018
722c469
Merge pull request #57 from IBM/fix/lgtm_alerts
vmarois Nov 7, 2018
a24b4af
Merge branch 'develop' of github.com:IBM/mi-prometheus into fix/grid-…
tkornuta-ibm Nov 7, 2018
0f727c5
Added status to checkpoint saving/loading
tkornuta-ibm Nov 7, 2018
1805075
online trainer logging status and storing validation agg during parti…
tkornuta-ibm Nov 7, 2018
e0e12b6
simple_cnn.py fixed
sesevgen Nov 7, 2018
41c4ca1
Fixed incorrect spelling of Conv2d in lenet5
sesevgen Nov 7, 2018
194cede
Merge pull request #70 from sesevgen/Fix_numpy_int_conv
vmarois Nov 7, 2018
866a6d9
Merge branch 'develop' of github.com:IBM/mi-prometheus into feat/trai…
tkornuta-ibm Nov 7, 2018
59b8989
Updated video_to_class.py and sequential_pixel_mnist.py
sesevgen Nov 8, 2018
def377e
both trainers exporting agg validation statistics along with model sa…
tkornuta-ibm Nov 8, 2018
c1293d6
Fix masks, improve unit tests, fix some data issues
sesevgen Nov 8, 2018
f7ef5fd
Fixed bug with increment of episode in offline trainer
tkornuta-ibm Nov 8, 2018
bb777aa
Modified permuted sequential row mnist, better comments throughout
sesevgen Nov 8, 2018
5b8e77c
config polishing
tkornuta-ibm Nov 8, 2018
d4e35b3
Final cleanup
sesevgen Nov 8, 2018
c372dd1
Merge branch 'feat/trainers_save_status' into fix/grid-analyzer
tkornuta-ibm Nov 8, 2018
b7eb77e
analyzer - processing data from checkpoint and csv file
tkornuta-ibm Nov 8, 2018
c7e4606
Improved unit tests to check whether tensors are correct size. Fixed …
sesevgen Nov 8, 2018
4886de5
Commented out unused variable, but left the commented line in case it…
sesevgen Nov 8, 2018
1411276
timestamp
tkornuta-ibm Nov 8, 2018
f9e2bdc
Model saves training and validation stats
tkornuta-ibm Nov 8, 2018
7657d50
episode limit - 1000
tkornuta-ibm Nov 8, 2018
f1ad3c4
Merge branch 'feat/trainers_save_status' into fix/grid-analyzer
tkornuta-ibm Nov 8, 2018
b4d424e
Addressed comments
sesevgen Nov 9, 2018
87942dc
Merge pull request #71 from sesevgen/refactor_videotoclass
tkornuta-ibm Nov 9, 2018
7de3773
Update maes_model.py
tkornuta-ibm Nov 9, 2018
21e0759
Comment
tkornuta-ibm Nov 9, 2018
32bf7a0
Reading training and validation from checkpoint
tkornuta-ibm Nov 9, 2018
16f10c8
Merge branch 'develop' of github.com:IBM/mi-prometheus into fix/grid-…
tkornuta-ibm Nov 9, 2018
8d6cb59
Polished experiment confirmation
tkornuta-ibm Nov 9, 2018
d36ac12
Standardization in vision mnist configs
tkornuta-ibm Nov 9, 2018
3cad6ca
Refined confirmation, added handling of termination with ctrl-c - exc…
tkornuta-ibm Nov 9, 2018
0626484
Micro-cleanup in commandline arguments, changed output to expdir - fi…
tkornuta-ibm Nov 9, 2018
632f668
Removed partial validation aggregation from offline trainer
tkornuta-ibm Nov 9, 2018
32a45d6
Added export to checkpoint method to stats objects that fixes problem…
tkornuta-ibm Nov 9, 2018
cd14f34
Update model.py
tkornuta-ibm Nov 9, 2018
d8c20f1
Merge pull request #72 from IBM/feat/trainers_save_status
tkornuta-ibm Nov 9, 2018
c58fc86
gid analyzer working
tkornuta-ibm Nov 9, 2018
3c3ecec
gid analyzer working
tkornuta-ibm Nov 9, 2018
ad5d187
Merge branch 'develop' of github.com:IBM/mi-prometheus into fix/grid-…
tkornuta-ibm Nov 9, 2018
1ae12d3
Analyzer cleanup + grid-training-mnist config
tkornuta-ibm Nov 9, 2018
28ea631
Removed empty exception handling around user input()
tkornuta-ibm Nov 9, 2018
dc672a8
Removed unused imports
tkornuta-ibm Nov 9, 2018
b8e96b4
Model cleanup
tkornuta-ibm Nov 9, 2018
e2ee0cc
Missing try-except in analyzer
tkornuta-ibm Nov 9, 2018
2e7cfb6
Added option to indicate trainer from grid training configuration fil…
tkornuta-ibm Nov 10, 2018
840f571
Lots of small clean up / polishing.
vmarois Nov 10, 2018
ad67e90
Merge branch 'fix/grid-analyzer' of github.com:IBM/mi-prometheus into…
vmarois Nov 10, 2018
f6dc6af
Fixed grid analyzer comments
tkornuta-ibm Nov 10, 2018
de98f88
Merge branch 'fix/grid-analyzer' of github.com:IBM/mi-prometheus into…
tkornuta-ibm Nov 10, 2018
1e6165f
input at the end of configuration
tkornuta-ibm Nov 10, 2018
5b41532
Made some methods static + polishing.
vmarois Nov 12, 2018
0ef9ee4
Fixed few bugs raised by Vincent: 1) saving proper status when conver…
tkornuta-ibm Nov 12, 2018
d00fa16
Added psutil to config and readme
tkornuta-ibm Nov 12, 2018
a9cf3d2
if added to doc_build
tkornuta-ibm Nov 12, 2018
292213f
Standardizes statuses accross trainers, updates statuses in checkpoin…
tkornuta-ibm Nov 13, 2018
9d0a1e2
Merge pull request #81 from IBM/fix/grid-analyzer
vmarois Nov 13, 2018
602444d
Added psutil to mocked modules
tkornuta-ibm Nov 13, 2018
904ac5d
Minor change in stop threshold in MNIST configs
tkornuta-ibm Nov 13, 2018
2a92103
Merge pull request #88 from IBM/fix/grid-analyzer
tkornuta-ibm Nov 13, 2018
Files changed
23 changes: 23 additions & 0 deletions .github/PULL_REQUEST_TEMPLATE/PULL_REQUEST_TEMPLATE.md
@@ -0,0 +1,23 @@
## Pull Request template
Please go through these steps before you submit a PR.

1. Make sure that your PR is not a duplicate.
2. If not, then make sure that:

2.1. You have done your changes in a separate branch. Branches should have descriptive names that start with either the `fix/` or `feature/` prefixes. Good examples are: `fix/signin-issue` or `feature/new-model`.

2.2. You have descriptive commit messages with short titles (first line).

2.3. You have only one commit (if not, squash them into one commit).

3. **After** these steps, you're ready to open a pull request.

3.1. Give a descriptive title to your PR.

3.2. Provide a description of your changes.

3.3. Put `closes #XXXX` in your comment to auto-close the issue that your PR fixes (if such).

Important: Please review the [CONTRIBUTING.md](../CONTRIBUTING.md) file for detailed contributing guidelines.

*Please remove this template before submitting.*
12 changes: 7 additions & 5 deletions .gitignore
@@ -7,19 +7,21 @@
!Readme.md
!readthedocs.yml
!setup.py
!doc_build.sh
!__init__.py
!/configs/**
!/docs/**
!/miprometheus/**
!.github/**

# You can be specific with these rules
__pycache__*
*.swp
*.vector_cache
!.gitignore

# not sure if those are needed
problems/.DS_Store
problems/image_text_to_class/.DS_Store
problems/image_text_to_class/CLEVR_v1.0
CLEVR_v1.0/
# Ignore every DS_Store
**/.DS_Store

# Ignore build directory
/build/**
4 changes: 2 additions & 2 deletions README.md
@@ -51,15 +51,15 @@ The dependencies of MI-prometheus are:

* pytorch (v. 0.4.0)
* numpy
* torchvision (v. 0.2.0)
* torchvision
* torchtext
* tensorboardx
* matplotlib
* psutil (enables grid-* to span child processes on MacOS and Ubuntu)
* PyYAML
* tqdm
* nltk
* h5py
* six
* pyqt5 (v. 5.10.1)


15 changes: 0 additions & 15 deletions configs/example_trainer_gpu.yaml

This file was deleted.

4 changes: 4 additions & 0 deletions configs/maes_baselines/default_training.yaml
@@ -5,6 +5,10 @@ training:
initial_max_sequence_length: 5
# must_finish: false

# Sampler.
sampler:
name: RandomSampler

# Optimizer parameters:
optimizer:
# Exact name of the pytorch optimizer function
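The `sampler` section added above is consumed by the new `SamplerFactory` introduced in this release (see the commit log). A minimal pure-Python sketch of the idea, using a stand-in sampler class rather than the real `torch.utils.data` one — the registry layout and the `build()` signature are assumptions for illustration only:

```python
import random

class RandomSampler:
    """Stand-in for torch.utils.data.RandomSampler: yields a permutation of all indices."""
    def __init__(self, data_source_len):
        self.length = data_source_len
    def __iter__(self):
        return iter(random.sample(range(self.length), self.length))

class SamplerFactory:
    """Builds a sampler from the parsed 'sampler' section of a YAML config."""
    _registry = {'RandomSampler': RandomSampler}

    @staticmethod
    def build(params, dataset_len):
        name = params['name']
        if name not in SamplerFactory._registry:
            # Mirrors the "error when non-supported sampler" fix in the commit log.
            raise KeyError("Unsupported sampler: {}".format(name))
        return SamplerFactory._registry[name](dataset_len)

# Usage: mimic the parsed YAML section above.
sampler = SamplerFactory.build({'name': 'RandomSampler'}, dataset_len=5)
print(sorted(sampler))  # -> [0, 1, 2, 3, 4]: every index appears exactly once
```

The unified `build()` entry point matches the "unified build() method for factories" commit; the real factory also validates the config and handles `SubsetRandomSampler` indices.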
19 changes: 11 additions & 8 deletions configs/vision/alexnet_cifar10.yaml
@@ -5,10 +5,12 @@ training:
problem:
name: &name CIFAR10
batch_size: &b 64
index: [0, 40000]
use_train_data: True
padding: &p [0,0,0,0] # ex: (x1, x2, x3, x4) pad last dim by (x1, x2) and 2nd to last by (x3, x4)
up_scaling: &scale True # if up_scale true the image is resized to 224 x 224
resize: [224, 224]
# Use sampler that operates on a subset.
sampler:
name: SubsetRandomSampler
indices: [0, 45000]
# optimizer parameters:
optimizer:
name: Adam
@@ -22,19 +24,20 @@ validation:
problem:
name: *name
batch_size: *b
index: [40000, 49999]
use_train_data: True # True because we are splitting the training set to: validation and training
padding: *p
up_scaling: *scale
resize: [224, 224]
# Use sampler that operates on a subset.
sampler:
name: SubsetRandomSampler
indices: [45000, 50000]

# Problem parameters:
testing:
problem:
name: *name
batch_size: *b
use_train_data: False
padding: *p
up_scaling: *scale
resize: [224, 224]

# Model parameters:
model:
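Per the commit log ("Sampler working with indices: a) range [0 11], b) list [1 2 3] …"), the `indices` value for `SubsetRandomSampler` can be either an explicit list or, as in the config above, a two-element `[low, high)` range that is expanded before the sampler is built. A hedged sketch of that expansion — the exact disambiguation rule used by the real `SamplerFactory` is an assumption:

```python
def expand_indices(indices):
    """Expand a two-element ascending [low, high) pair into explicit indices;
    pass longer (or non-ascending) lists through unchanged.

    Note: a literal two-element ascending list like [1, 2] is ambiguous under
    this rule and would be treated as a range.
    """
    if len(indices) == 2 and indices[0] < indices[1]:
        return list(range(indices[0], indices[1]))
    return list(indices)

print(expand_indices([0, 5]))     # -> [0, 1, 2, 3, 4]
print(expand_indices([1, 2, 3]))  # -> [1, 2, 3]
```

This is why the validation split above can be written compactly as `indices: [45000, 50000]` instead of listing 5000 integers.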
17 changes: 8 additions & 9 deletions configs/vision/alexnet_mnist.yaml
@@ -5,17 +5,19 @@ training:
problem:
name: &name MNIST
batch_size: &b 64
index: [0, 54999]
use_train_data: True
padding: &p [0,0,0,0] # ex: (x1, x2, x3, x4) pad last dim by (x1, x2) and 2nd to last by (x3, x4)
up_scaling: &scale True # if up_scale true, the image is resized to 224 x 224
resize: [224, 224]
# Use sampler that operates on a subset.
sampler:
name: SubsetRandomSampler
indices: [0, 55000]
# optimizer parameters:
optimizer:
name: Adam
lr: 0.01
# settings parameters
terminal_conditions:
loss_stop: 1.0e-5
loss_stop: 1.0e-2
episode_limit: 50000
epochs_limit: 10

@@ -24,19 +26,16 @@ validation:
problem:
name: *name
batch_size: *b
index: [54999, 59999]
use_train_data: True # True because we are splitting the training set to: validation and training
padding: *p
up_scaling: *scale
resize: [224, 224]

# Problem parameters:
testing:
problem:
name: *name
batch_size: *b
use_train_data: False
padding: *p
up_scaling: *scale
resize: [224, 224]

# Model parameters:
model:
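The `terminal_conditions` block above decides when a trainer stops: success when the loss drops below `loss_stop`, or a hard cutoff at `episode_limit`/`epochs_limit`. A small illustrative helper — the function name and exact combination logic are assumptions; the real trainers evaluate these conditions internally:

```python
def should_stop(loss, episode, epoch, conds):
    """Return True when any terminal condition from the config is met."""
    return (loss <= conds['loss_stop']          # converged
            or episode >= conds['episode_limit']  # hard episode cutoff
            or epoch >= conds['epochs_limit'])    # hard epoch cutoff

conds = {'loss_stop': 1.0e-2, 'episode_limit': 50000, 'epochs_limit': 10}
print(should_stop(0.5, 100, 0, conds))    # -> False: nothing reached yet
print(should_stop(0.009, 100, 0, conds))  # -> True: loss below loss_stop
```

Note the PR also relaxes `loss_stop` from `1.0e-5` to `1.0e-2` in the MNIST configs, which makes convergence-based stopping reachable in practice.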
43 changes: 43 additions & 0 deletions configs/vision/grid_trainer_mnist.yaml
@@ -0,0 +1,43 @@
grid_tasks:
-
default_configs: configs/vision/lenet5_mnist.yaml
-
default_configs: configs/vision/simplecnn_mnist.yaml

# Set exactly the same experiment conditions for the 2 tasks.
grid_overwrite:
training:
problem:
batch_size: &b 1000
sampler:
name: SubsetRandomSampler
indices: [0, 55000]
# Set the same optimizer parameters.
optimizer:
name: Adam
lr: 0.01
# Set the same terminal conditions.
terminal_conditions:
loss_stop: 4.0e-2
episode_limit: 10000
epoch_limit: 10

# Problem parameters:
validation:
problem:
batch_size: *b
sampler:
name: SubsetRandomSampler
indices: [55000, 60000]

testing:
problem:
batch_size: *b

grid_settings:
# Set number of repetitions of each experiments.
experiment_repetitions: 5
# Set number of concurrent running experiments.
max_concurrent_runs: 4
# Set trainer.
trainer: mip-online-trainer
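The `grid_overwrite` section forces identical experiment conditions onto every task's default config before the grid trainer launches it. This is presumably a recursive dictionary merge in which the overwrite wins on conflicts; a minimal sketch (the function name is an assumption):

```python
def recursive_overwrite(base, overwrite):
    """Recursively merge `overwrite` into `base`; overwrite values win on conflicts."""
    merged = dict(base)
    for key, value in overwrite.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = recursive_overwrite(merged[key], value)
        else:
            merged[key] = value
    return merged

# One task's defaults vs. the grid-wide overwrite, as in the YAML above.
task = {'training': {'problem': {'name': 'MNIST', 'batch_size': 64}}}
grid = {'training': {'problem': {'batch_size': 1000}}}
print(recursive_overwrite(task, grid))
# -> {'training': {'problem': {'name': 'MNIST', 'batch_size': 1000}}}
```

Keys unique to the task config (like `name` here) survive, while shared keys (like `batch_size`) are standardized across all grid tasks.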
48 changes: 48 additions & 0 deletions configs/vision/lenet5_mnist.yaml
@@ -0,0 +1,48 @@
# Training parameters:
training:
problem:
name: &name MNIST
batch_size: &b 64
use_train_data: True
data_folder: &folder '~/data/mnist'
resize: [32, 32]
# Use sampler that operates on a subset.
sampler:
name: SubsetRandomSampler
indices: [0, 55000]
# optimizer parameters:
optimizer:
name: Adam
lr: 0.01
# settings parameters
terminal_conditions:
loss_stop: 1.0e-2
episode_limit: 10000
epoch_limit: 10

# Validation parameters:
validation:
#partial_validation_interval: 100
problem:
name: *name
batch_size: *b
use_train_data: True # True because we are splitting the training set to: validation and training
data_folder: *folder
resize: [32, 32]
# Use sampler that operates on a subset.
sampler:
name: SubsetRandomSampler
indices: [55000, 60000]

# Testing parameters:
testing:
problem:
name: *name
batch_size: *b
use_train_data: False
data_folder: *folder
resize: [32, 32]

# Model parameters:
model:
name: LeNet5
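The `resize: [32, 32]` entries above exist because the classic LeNet-5 stack expects 32×32 input, while MNIST images are 28×28. A quick shape check, assuming the textbook layer sizes (5×5 convolutions, 2×2 max pooling, 16 feature maps after the second convolution):

```python
def conv2d_out(size, kernel, stride=1, padding=0):
    """Spatial output size of a square conv/pool layer."""
    return (size + 2 * padding - kernel) // stride + 1

size = 32                      # input after resize: [32, 32]
size = conv2d_out(size, 5)     # conv1 5x5 -> 28
size = conv2d_out(size, 2, 2)  # pool 2x2  -> 14
size = conv2d_out(size, 5)     # conv2 5x5 -> 10
size = conv2d_out(size, 2, 2)  # pool 2x2  -> 5
print(size * size * 16)        # -> 400 flattened features into the first fc layer
```

Without the resize, a 28×28 input would flatten to 16·4·4 = 256 features and the fully connected layers would no longer line up.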
16 changes: 8 additions & 8 deletions configs/vision/simplecnn_cifar10.yaml
Expand Up @@ -5,10 +5,11 @@ training:
problem:
name: &name CIFAR10
batch_size: &b 64
index: [0, 40000]
use_train_data: True
padding: &p [0,0,0,0] # ex: (x1, x2, x3, x4) pad last dim by (x1, x2) and 2nd to last by (x3, x4)
up_scaling: &scale False # if up_scale true the image is resized to 224 x 224
# Use sampler that operates on a subset.
sampler:
name: SubsetRandomSampler
indices: [0, 45000]
# optimizer parameters:
optimizer:
name: Adam
@@ -22,19 +23,18 @@ validation:
problem:
name: *name
batch_size: *b
index: [40000, 49999]
use_train_data: True # True because we are splitting the training set to: validation and training
padding: *p
up_scaling: *scale
# Use sampler that operates on a subset.
sampler:
name: SubsetRandomSampler
indices: [45000, 50000]

# Problem parameters:
testing:
problem:
name: *name
batch_size: *b
use_train_data: False
padding: *p
up_scaling: *scale

# Model parameters:
model:
32 changes: 21 additions & 11 deletions configs/vision/simplecnn_mnist.yaml
@@ -5,38 +5,48 @@ training:
problem:
name: &name MNIST
batch_size: &b 64
index: [0, 54999]
data_folder: &folder '~/data/mnist'
use_train_data: True
padding: &p [0,0,0,0] # ex: (x1, x2, x3, x4) pad last dim by (x1, x2) and 2nd to last by (x3, x4)
up_scaling: &scale False # if up_scale true, the image is resized to 224 x 224
resize: [32, 32]
sampler:
name: SubsetRandomSampler
indices: [0, 55000]
#indices: ~/data/mnist/split_a.txt
# optimizer parameters:
optimizer:
name: Adam
lr: 0.01
# settings parameters
terminal_conditions:
loss_stop: 1.0e-5
episode_limit: 50000
epochs_limit: 10
loss_stop: 1.0e-2
episode_limit: 1000
epoch_limit: 1

# Problem parameters:
validation:
problem:
name: *name
batch_size: *b
index: [54999, 59999]
data_folder: *folder
use_train_data: True # True because we are splitting the training set to: validation and training
padding: *p
up_scaling: *scale
resize: [32, 32]
sampler:
name: SubsetRandomSampler
indices: [55000, 60000]
#indices: ~/data/mnist/split_b.txt
#dataloader:
# drop_last: True

# Problem parameters:
testing:
#seed_numpy: 4354
#seed_torch: 2452
problem:
name: *name
batch_size: *b
data_folder: *folder
use_train_data: False
padding: *p
up_scaling: *scale
resize: [32, 32]


# Model parameters:
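The commented-out `indices: ~/data/mnist/split_a.txt` lines above show the other option tested in the commit log ("range in config, range from file, …"): reading the subset indices from a file instead of listing them inline. A sketch, assuming a plain whitespace- or comma-separated list of integers — the exact file format accepted by the real loader is an assumption:

```python
import os
import re

def load_indices(spec):
    """Return a list of indices from either an in-config list or a file path."""
    if isinstance(spec, list):
        return list(spec)
    path = os.path.expanduser(spec)  # handles the '~/data/...' form used in the config
    with open(path) as f:
        return [int(tok) for tok in re.split(r'[,\s]+', f.read().strip()) if tok]

print(load_indices([0, 1, 2]))  # -> [0, 1, 2]: inline lists pass straight through
```

Keeping both code paths behind one `indices` key lets the same config field describe tiny inline splits and large precomputed split files.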
14 changes: 14 additions & 0 deletions doc_build.sh
@@ -0,0 +1,14 @@
#!/usr/bin/env bash

cd docs
rm -rf build

# create html pages
sphinx-build -b html source build
make html

# open a web browser at the master table of contents
if which firefox
then
firefox build/index.html
fi