Commit
Added tensorflownet basic framework (ver 0.1.0)
1 parent 6a311a0 · commit 4769a8a
Showing 27 changed files with 2,101 additions and 1 deletion.
# tensorflownet

# Welcome to *TensorFlowNet*!

**TensorFlowNet** is a Machine Learning framework built on top of [TensorFlow](https://github.com/tensorflow/tensorflow). It uses TensorFlow's [Eager](https://www.tensorflow.org/programmers_guide/eager) execution for fast research and experimentation. Visualization is done using [TensorBoard](https://github.com/tensorflow/tensorboard).

TensorFlowNet is easy to customize by creating the necessary classes:
1. **Data Loading**: a dataset class is required to load the data.
2. **Model Design**: a tf.keras.Model subclass that represents the network model.
3. **Loss Method**: an appropriate class for the loss, for example CrossEntropyLoss or MSELoss.
4. **Evaluation Metric**: a class to measure the accuracy of the results.
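TensorFlow specifics aside, the contract these four pieces follow can be sketched as plain Python classes. The names below are illustrative, not TensorFlowNet's actual API; in the framework the model would subclass tf.keras.Model.

```python
class ToyDataset:
    """Data Loading: yields (input, target) pairs."""
    def __init__(self, samples):
        self.samples = samples
    def __iter__(self):
        return iter(self.samples)


class ToyModel:
    """Model Design: maps an input to a prediction (stands in for tf.keras.Model)."""
    def __init__(self, weight=2.0):
        self.weight = weight
    def __call__(self, x):
        return self.weight * x


class MSELoss:
    """Loss Method: mean squared error between predictions and targets."""
    def __call__(self, preds, targets):
        return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)


class Accuracy:
    """Evaluation Metric: fraction of exact matches."""
    def __call__(self, preds, targets):
        return sum(p == t for p, t in zip(preds, targets)) / len(preds)


data = ToyDataset([(1.0, 2.0), (2.0, 4.0), (3.0, 7.0)])
model = ToyModel()
xs, ys = zip(*data)
preds = [model(x) for x in xs]
print(MSELoss()(preds, ys))   # only the last sample is off, by 1.0
print(Accuracy()(preds, ys))  # 2 of 3 predictions match exactly
```

A training loop then only needs to iterate the dataset, call the model, score with the loss, and report the metric; swapping any one piece leaves the others untouched.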

# Structure
TensorFlowNet consists of the following packages:
## Datasets
This package is for loading and transforming datasets.
## Models
Network models are kept in this package. It already includes [ResNet](https://arxiv.org/abs/1512.03385), [PreActResNet](https://arxiv.org/abs/1603.05027), [Stacked Hourglass](https://arxiv.org/abs/1603.06937) and [SphereFace](https://arxiv.org/abs/1704.08063).
## Losses
There are a number of choices available for classification or regression. New loss methods can be put here.
## Evaluates
There are a number of accuracy metrics available for classification or regression. New metrics can be put here.
## Plugins
As of now, the following plugins are available:
1. **ProgressBar**: a progress-bar style logger, selected with `--log-type progressbar`.
## Root
- main
- dataloader
- checkpoints
- model
- train
- test

# Setup
First, download TensorFlowNet with the following command:
> git clone --recursive https://github.com/human-analysis/tensorflownet.git

Since TensorFlowNet relies on several Python packages, make sure the requirements are installed by executing the following command in the *tensorflownet* directory:
> pip install -r requirements.txt

**Notice**
* If you do not have TensorFlow or your version does not meet the requirements, please follow the instructions on [the TensorFlow website](https://www.tensorflow.org/install/).

Congratulations! You are now ready to use TensorFlowNet!

# Usage
TensorFlowNet comes with a classification example in which a [ResNet](https://arxiv.org/abs/1512.03385) model is trained on the [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset.
> python [main.py](https://github.com/human-analysis/tensorflownet/blob/dev/main.py)
# Configuration
TensorFlowNet loads its parameters at startup from a config file and/or the command line.
## Config file
When TensorFlowNet runs, it automatically loads all parameters from [args.txt](https://github.com/human-analysis/tensorflownet/blob/master/args.txt) by default, if it exists. To load a custom config file, pass the following parameter:
> python main.py --config custom_args.txt
### args.txt
> [Arguments]
>
> log_type = traditional\
> save_results = No\
> \
> \# dataset options\
> dataroot = ./data\
> dataset_train = CIFAR10\
> dataset_test = CIFAR10\
> batch_size = 64

## Command line
Parameters can also be set on the command line when invoking [main.py](https://github.com/human-analysis/tensorflownet/blob/master/main.py). Parameters given there take precedence over those in the configuration file.
> python [main.py](https://github.com/human-analysis/tensorflownet/blob/master/main.py) --log-type progressbar
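This precedence rule can be sketched with the standard library: config-file values become argparse defaults, so anything typed on the command line overrides them, while the config file fills everything else in. The config text below is a stand-in for args.txt.

```python
import argparse
import configparser

# a config file in the same [Arguments] format as args.txt
config_text = """\
[Arguments]
log_type = traditional
batch_size = 64
"""

config = configparser.ConfigParser()
config.read_string(config_text)

parser = argparse.ArgumentParser()
parser.add_argument('--log-type', type=str, default='traditional')
parser.add_argument('--batch-size', type=int, default=None)
# config values become parser defaults; argparse re-applies `type`
# to string defaults, so batch_size still ends up an int
parser.set_defaults(**dict(config.items('Arguments')))

# simulate: python main.py --log-type progressbar
args = parser.parse_args(['--log-type', 'progressbar'])
print(args.log_type)    # the command line wins
print(args.batch_size)  # the config file fills the rest
```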
# checkpoints.py

import os
import tensorflow as tf
import tensorflow.contrib.eager as tfe


class Checkpoints:
    def __init__(self, args):
        self.save_path = args.save
        self.resume_path = args.resume_path
        self.save_results = args.save_results

        if self.save_results and not os.path.isdir(self.save_path):
            os.makedirs(self.save_path)

    def latest(self):
        return tf.train.latest_checkpoint(self.resume_path)

    def save(self, epoch, model, best):
        model_objects = {'model': model}
        if best:  # only the best-performing epoch is checkpointed
            ckpt = tfe.Checkpoint(**model_objects)
            ckpt.save('%s/model_epoch_%d' % (self.save_path, epoch))

    def load(self, model, filename):
        model_objects = {'model': model}
        print("=> loading checkpoint '{}'".format(filename))
        ckpt = tfe.Checkpoint(**model_objects)
        ckpt.restore(filename)

        return model_objects['model']
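The layout this class manages (`model_epoch_N` entries under `save_path`) can be sketched without TensorFlow. The `latest` helper below is hypothetical; in the class above, `tf.train.latest_checkpoint` plays that role using TensorFlow's own checkpoint bookkeeping.

```python
import os
import re
import tempfile

# create the results directory on demand, mirroring the save_results branch
save_path = os.path.join(tempfile.mkdtemp(), 'results')
if not os.path.isdir(save_path):
    os.makedirs(save_path)

# simulate a best-model save at epochs 1, 2 and 10
for epoch in (1, 2, 10):
    open(os.path.join(save_path, 'model_epoch_%d' % epoch), 'w').close()

def latest(path):
    """Hypothetical helper: newest checkpoint by epoch number, or None."""
    entries = [f for f in os.listdir(path) if re.match(r'model_epoch_\d+$', f)]
    if not entries:
        return None
    return max(entries, key=lambda f: int(f.rsplit('_', 1)[1]))

print(latest(save_path))  # model_epoch_10 (a plain lexical sort would pick 2)
```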
# config.py
import os
import datetime
import argparse
import json
import configparser
import utils
import re
from ast import literal_eval as make_tuple


def parse_args():
    result_path = "results/"
    now = datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
    result_path = os.path.join(result_path, now)

    parser = argparse.ArgumentParser(description='Your project title goes here')

    # the following two parameters can only be provided at the command line.
    parser.add_argument('--result-path', type=str, default=result_path, metavar='', help='full path to store the results')
    parser.add_argument("-c", "--config", "--args-file", dest="config_file", default="args.txt", help="Specify a config file", metavar="FILE")
    args, remaining_argv = parser.parse_known_args()

    result_path = args.result_path
    # add date and time to the result directory name
    # if now not in result_path:
    #     result_path = os.path.join(result_path, now)

    # ======================= Data Settings ====================================
    parser.add_argument('--dataroot', type=str, default=None, help='path of the data')
    parser.add_argument('--dataset-test', type=str, default=None, help='name of testing dataset')
    parser.add_argument('--dataset-train', type=str, default=None, help='name of training dataset')
    parser.add_argument('--split_test', type=float, default=None, help='test split')
    parser.add_argument('--split_train', type=float, default=None, help='train split')
    parser.add_argument('--test-dev-percent', type=float, default=None, metavar='', help='percentage of dev in test')
    parser.add_argument('--train-dev-percent', type=float, default=None, metavar='', help='percentage of dev in train')
    parser.add_argument('--resume-path', type=str, default=None, help='full path of models to resume training')
    parser.add_argument('--nclasses', type=int, default=None, metavar='', dest='noutputs', help='number of classes for classification')
    parser.add_argument('--nchannels', type=int, default=3, metavar='', help='number of input channels')
    parser.add_argument('--noutputs', type=int, default=None, metavar='', help='number of outputs, i.e. number of classes for classification')
    parser.add_argument('--input-filename-test', type=str, default=None, help='input test filename for filelist and folderlist')
    parser.add_argument('--label-filename-test', type=str, default=None, help='label test filename for filelist and folderlist')
    parser.add_argument('--input-filename-train', type=str, default=None, help='input train filename for filelist and folderlist')
    parser.add_argument('--label-filename-train', type=str, default=None, help='label train filename for filelist and folderlist')
    parser.add_argument('--loader-input', type=str, default=None, help='input loader')
    parser.add_argument('--loader-label', type=str, default=None, help='label loader')
    parser.add_argument('--dataset-options', type=json.loads, default=None, metavar='', help='additional dataset-specific parameters, e.g. \'{"gauss": 1}\'')

    # ======================= Network Model Settings ===========================
    parser.add_argument('--model-type', type=str, default=None, help='type of network')
    parser.add_argument('--model-options', type=json.loads, default={}, metavar='', help='additional model-specific parameters, e.g. \'{"nstack": 1}\'')
    parser.add_argument('--loss-type', type=str, default='Classification', help='loss method')
    parser.add_argument('--loss-options', type=json.loads, default={}, metavar='', help='loss-specific parameters, e.g. \'{"wsigma": 1}\'')
    parser.add_argument('--evaluation-type', type=str, default='Classification', help='evaluation method')
    parser.add_argument('--evaluation-options', type=json.loads, default={}, metavar='', help='evaluation-specific parameters, e.g. \'{"topk": 1}\'')
    parser.add_argument('--resolution-high', type=int, default=None, help='image resolution height')
    parser.add_argument('--resolution-wide', type=int, default=None, help='image resolution width')
    parser.add_argument('--ndim', type=int, default=None, help='number of feature dimensions')
    parser.add_argument('--nunits', type=int, default=None, help='number of units in hidden layers')
    parser.add_argument('--dropout', type=float, default=None, help='dropout parameter')
    parser.add_argument('--length-scale', type=float, default=None, help='length scale')
    parser.add_argument('--tau', type=float, default=None, help='Tau')

    # ======================= Training Settings ================================
    parser.add_argument('--cuda', action='store_true', default=False, help='run on gpu')
    parser.add_argument('--ngpu', type=int, default=None, help='number of gpus to use')
    parser.add_argument('--batch-size', type=int, default=None, help='batch size for training')
    parser.add_argument('--nepochs', type=int, default=None, help='number of epochs to train')
    parser.add_argument('--niters', type=int, default=None, help='number of iterations at test time')
    parser.add_argument('--epoch-number', type=int, default=None, help='epoch number')
    parser.add_argument('--nthreads', type=int, default=None, help='number of threads for data loading')
    parser.add_argument('--manual-seed', type=int, default=None, help='manual seed for randomness')

    # ===================== Visualization Settings =============================
    parser.add_argument('-p', '--port', type=int, default=None, metavar='', help='port for visualizing training at http://localhost:port')
    parser.add_argument('--env', type=str, default='', metavar='', help='environment for visualizing training at http://localhost:port')

    # ======================= Hyperparameter Settings ==========================
    parser.add_argument('--learning-rate', type=float, default=None, help='learning rate')
    parser.add_argument('--optim-method', type=str, default="SGD", help='the optimization routine')
    parser.add_argument('--optim-options', type=json.loads, default={}, metavar='', help='optimizer-specific parameters, e.g. \'{"lr": 0.001}\'')
    parser.add_argument('--scheduler-method', type=str, default=None, help='cosine, step, exponential, plateau')
    parser.add_argument('--scheduler-options', type=json.loads, default={}, metavar='', help='scheduler-specific parameters')

    # ======================== Main Settings ===================================
    parser.add_argument('--log-type', type=str, default='traditional', metavar='', help='logger type, traditional or progressbar')
    parser.add_argument('--same-env', type=utils.str2bool, default='No', metavar='', help='does not add date and time to the visdom environment name')
    parser.add_argument('-s', '--save', '--save-results', action='store_true', dest="save_results", default=False, help='save the arguments and the results')

    # values from the config file become defaults, so the command line wins
    if os.path.exists(args.config_file):
        config = configparser.ConfigParser()
        config.read([args.config_file])
        defaults = dict(config.items("Arguments"))
        parser.set_defaults(**defaults)

    args = parser.parse_args(remaining_argv)

    # add date and time to the name of the Visdom environment and the results
    if args.env == '':
        args.env = args.model_type
    if not args.same_env:
        args.env += '_' + now
    args.result_path = result_path

    # refine tuple arguments: this section converts tuples that are
    # passed as strings back into actual tuples.
    pattern = re.compile(r'^\(.+\)')

    for arg_name in vars(args):
        arg_value = getattr(args, arg_name)
        if isinstance(arg_value, str) and pattern.match(arg_value):
            setattr(args, arg_name, make_tuple(arg_value))
        elif isinstance(arg_value, dict):
            dict_changed = False
            for key, value in arg_value.items():
                if isinstance(value, str) and pattern.match(value):
                    dict_changed = True
                    arg_value[key] = make_tuple(value)
            if dict_changed:
                setattr(args, arg_name, arg_value)

    return args
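The tuple-refinement loop at the end of parse_args can be exercised in isolation. `refine` below is a hypothetical helper that mirrors its string-to-tuple conversion for a single value:

```python
import re
from ast import literal_eval as make_tuple

# raw string avoids an invalid-escape warning on the parentheses
pattern = re.compile(r'^\(.+\)')

def refine(value):
    """Hypothetical helper mirroring the refinement loop in parse_args."""
    if isinstance(value, str) and pattern.match(value):
        return make_tuple(value)
    return value

print(refine('(32, 32)'))     # a stringly-typed tuple becomes a real one
print(refine('traditional'))  # other strings pass through unchanged
```

This matters because values read from args.txt via configparser always arrive as strings, even when the user meant a tuple such as an image resolution.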
# dataloader.py

import tensorflow as tf
import datasets


class Dataloader:
    """Builds train/test tf.data pipelines for the configured datasets."""

    def __init__(self, args):
        self.args = args

        self.dataset_test_name = args.dataset_test
        self.dataset_train_name = args.dataset_train
        self.dataroot = args.dataroot
        self.batch_size = args.batch_size

        if self.dataset_train_name == "CELEBA":
            self.dataset_train, self.dataset_train_len = datasets.ImageFolder(root=self.dataroot + "/train")
        elif self.dataset_train_name == "MNIST":
            self.dataset_train, self.dataset_train_len = datasets.MNIST(self.dataroot).train()
        else:
            raise Exception("Unknown Dataset")

        if self.dataset_test_name == "CELEBA":
            self.dataset_test, self.dataset_test_len = datasets.ImageFolder(root=self.dataroot + "/test")
        elif self.dataset_test_name == "MNIST":
            self.dataset_test, self.dataset_test_len = datasets.MNIST(self.dataroot).test()
        else:
            raise Exception("Unknown Dataset")

    def create(self, shuffle=False, flag=None):
        dataloader = {}
        if flag == "Train":
            dataloader['train'] = (self.dataset_train.batch(self.batch_size).shuffle(self.dataset_train_len), self.dataset_train_len)
        elif flag == "Test":
            dataloader['test'] = (self.dataset_test.batch(self.batch_size), self.dataset_test_len)
        elif flag is None:
            dataloader['train'] = (self.dataset_train.batch(self.batch_size).shuffle(self.dataset_train_len), self.dataset_train_len)
            dataloader['test'] = (self.dataset_test.batch(self.batch_size), self.dataset_test_len)

        return dataloader
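What create returns is, in essence, an iterable of fixed-size batches plus the dataset length. The batching step can be sketched in plain Python; tf.data's `batch` does the equivalent on tensors:

```python
def batch(samples, batch_size):
    """Group consecutive samples into lists of at most batch_size items."""
    for i in range(0, len(samples), batch_size):
        yield samples[i:i + batch_size]

samples = list(range(10))
batches = list(batch(samples, 4))
print(batches)  # the final batch is partial, as with tf.data
```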
# __init__.py

from .imagefolder import ImageFolder
from .mnist import MNIST
# imagefolder.py

import numpy as np
import os
from PIL import Image
import tensorflow as tf


def read_image_file(filename, label):
    with Image.open(filename) as image:
        im_numpy = np.array(image)
    return im_numpy, label


def ImageFolder(root, transform=None):
    """Scan root/<class>/<file> and return (tf.data.Dataset, length)."""
    filenames = []
    labels = []
    label = 0

    # each immediate subfolder of root is one class
    for _, folders, _ in os.walk(root):
        for folder in folders:
            for _, _, files in os.walk("{}/{}".format(root, folder)):
                for file in files:
                    filenames.append("{}/{}/{}".format(root, folder, file))
                    labels.append(label)
                break
            label += 1
        break

    length = len(filenames)
    print("Total = {}".format(length))

    dataset = tf.data.Dataset.from_tensor_slices((filenames, labels))
    # labels is a plain Python list (it has no .dtype), so the py_func
    # output type is given explicitly as tf.int32
    dataset = dataset.map(lambda filename, label: tuple(tf.py_func(read_image_file, [filename, label], [tf.uint8, tf.int32])))
    return dataset, length
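The directory scan in ImageFolder assigns one integer label per immediate subfolder of root. A stdlib-only sketch of that labeling, using a temporary directory with empty files as a stand-in for an image tree:

```python
import os
import tempfile

# build a stand-in image tree: root/cat/{a,b}.png and root/dog/{a,b}.png
root = tempfile.mkdtemp()
for cls in ('cat', 'dog'):
    os.makedirs(os.path.join(root, cls))
    for name in ('a.png', 'b.png'):
        open(os.path.join(root, cls, name), 'w').close()

filenames, labels = [], []
for label, folder in enumerate(sorted(os.listdir(root))):
    for file in sorted(os.listdir(os.path.join(root, folder))):
        filenames.append(os.path.join(root, folder, file))
        labels.append(label)

print(labels)  # one integer label per subfolder: cat -> 0, dog -> 1
```

Note that os.walk (as used above) does not guarantee folder order, so label-to-class assignment can vary between runs; sorting, as in this sketch, makes it deterministic.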