
Commit

Add documentation to project.
datitran committed May 23, 2019
1 parent 0cd97f9 commit 204c772
Showing 11 changed files with 367 additions and 7 deletions.
10 changes: 10 additions & 0 deletions .gitignore
@@ -111,3 +111,13 @@ venv.bak/
train_jobs
_medium
_readme/graphs.ipynb
mkdocs/docs/_readme/
mkdocs/docs/evaluater/
mkdocs/docs/handlers/
mkdocs/docs/tests/
mkdocs/docs/trainer/
mkdocs/docs/utils/
mkdocs/docs/index.md
mkdocs/docs/CONTRIBUTING.md
mkdocs/docs/LICENSE.md
/docs
21 changes: 17 additions & 4 deletions .travis.yml
@@ -1,8 +1,21 @@
language: python
python:
- "3.6"
- '3.6'
install:
- pip install -r src/requirements.txt
- pip install tensorflow==1.13.1
- pip install -r src/requirements.txt
- pip install tensorflow==1.13.1
- pip install mkdocs==1.0.4 mkdocs-material==4.3.0
script:
- nosetests -vs src/tests
- nosetests -vs src/tests
- cd mkdocs && sh build_docs.sh
deploy:
provider: pages
skip_cleanup: true
github_token: "$GITHUB_TOKEN"
local-dir: docs/
on:
branch: master
target_branch: gh-pages
env:
global:
secure: fvEEhJ4M7WCK5LwlcZBaneI0kwMKmySxGjVG4nNOlyi53YREKMgFpu8erQLcbrMwXrqcna4F0MmhTAW3OsjFwPiWzspqDF8ptIeBpPAdFdGkZUZ2Ylpu/J3iT+mQbKBuFKtm5znxvzvWyMLd0uyTr7U4lKEdVa8EJ4AM3AQuIab/4l7REGVTQqH+oY7phnPqKf2PcbzUcyw6ZWJ6nsHJnh2ql1nD2/26NQWx2FEM37GdIT0qZTbJRgHW37GOCfdTjMdUzdlEAs9CQew9wTxj+dcgWKWGBnZbmeExi/2QM7ITddHlx+y994tg9Gz1Zh6GmcQqj/vGwF3O8jDgtbB8yaQZlAeuoBGBRYSqzcLbita06gaKEOxo0u0/rtXzCpKziwTXaWyf9ZHySb3CAlwWkwlvA8zJQLaqTxWvmupABn0WBUmE/keIKxgeW5OAwJ9yE4EKrJPmaEDwm1BWjnCcBeyspfc+00VBSnMId3ZUvd/zGiBjRaAHaOqfwpYc4bEozmgToLbB3+OQdTAcFBnT+ownt8KFmDVFGMvE1tfCvA6oAOEebXFHC5rV27DIH1sIG+L4LDFhKKw+Pw7aeO+Ljk4JCUChPLsl1wUw5UGx1tFSfld3n1NPf9O59eD5jgACUM3j4X5q8bOZGNh06HIITFeKiaatjGRWHNwtDwlU4Ao=
33 changes: 33 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,33 @@
# Contribution Guide

We welcome any contributions, whether it's:

- Submitting feedback
- Fixing bugs
- Implementing a new feature

Please read this guide before making any contributions.

#### Submit Feedback
The feedback should be submitted by creating an issue at [GitHub issues](https://github.com/idealo/image-quality-assessment/issues).
Select the related template (bug report, feature request, or custom) and add the corresponding labels.

#### Fix Bugs
You may look through the [GitHub issues](https://github.com/idealo/image-quality-assessment/issues) for bugs.

#### Implement Features
You may look through the [GitHub issues](https://github.com/idealo/image-quality-assessment/issues) for feature requests.

## Pull Requests (PR)
1. Fork the repository and create a new branch from the master branch.
2. For bug fixes, add new tests; for new features, please update the documentation.
3. Open a PR from your new branch to the `dev` branch of the original Image Quality Assessment repo.

## Documentation
- Make sure any new function or class you introduce has proper docstrings.

## Testing
- We use [nosetests](https://nose.readthedocs.io/en/latest/) for our testing. Make sure to write tests for any new features and/or bug fixes, as in the sketch below.
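A minimal sketch of what such a test could look like (purely illustrative; the helper and test names are hypothetical, not part of the repo). nose picks up `test_*` functions from modules under `src/tests`:

```python
# Hypothetical example only: the helper and the test are placeholders, not repo code.
def normalize(scores):
    """Scale a list of scores so they sum to one."""
    total = sum(scores)
    return [s / total for s in scores]


def test_normalize_sums_to_one():
    # nose discovers functions prefixed with test_ in modules under src/tests
    assert abs(sum(normalize([1, 2, 3])) - 1.0) < 1e-9
```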

## Main Contributor List
We maintain a list of main contributors to appreciate all the contributions.
14 changes: 11 additions & 3 deletions README.md
@@ -1,4 +1,4 @@
# Image quality assessment
# Image Quality Assessment

[![Build Status](https://travis-ci.org/idealo/image-quality-assessment.svg?branch=master)](https://travis-ci.org/idealo/image-quality-assessment)
[![License](https://img.shields.io/badge/License-Apache%202.0-orange.svg)](https://github.com/idealo/image-quality-assessment/blob/master/LICENSE)
@@ -7,9 +7,16 @@ This repository provides an implementation of an aesthetic and technical image q

NIMA consists of two models that aim to predict the aesthetic and technical quality of images, respectively. The models are trained via transfer learning, where ImageNet pre-trained CNNs are used and fine-tuned for the classification task.
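As a rough illustration, here is a minimal Keras sketch of that transfer-learning setup (an assumption-laden sketch, not the repository's actual model code; the 10-bucket softmax head follows the NIMA paper, and the import paths assume standalone Keras):

```python
# Minimal sketch, not the repo's exact code: an ImageNet-pretrained CNN is used
# as the base and fine-tuned with a 10-way softmax head that predicts a
# distribution over quality scores 1..10, as described in the NIMA paper.
from keras.applications.mobilenet import MobileNet
from keras.layers import Dense, Dropout
from keras.models import Model

base = MobileNet(include_top=False, pooling='avg', weights='imagenet')
x = Dropout(0.75)(base.output)               # dropout rate is a hyperparameter
scores = Dense(10, activation='softmax')(x)  # predicted score distribution
model = Model(inputs=base.input, outputs=scores)
```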

For more information on how we used NIMA for our specific problem, see our two blog posts:

* NVIDIA Developer Blog: [Deep Learning for Classifying Hotel Aesthetics Photos](https://devblogs.nvidia.com/deep-learning-hotel-aesthetics-photos/)
* Medium: [Using Deep Learning to automatically rank millions of hotel images](https://medium.com/idealo-tech-blog/using-deep-learning-to-automatically-rank-millions-of-hotel-images-c7e2d2e5cae2)

The provided code allows you to use any of the pre-trained models in [Keras](https://keras.io/applications/). We further provide Docker images for local CPU training and remote GPU training on AWS EC2, as well as pre-trained models on the [AVA](https://github.com/ylogx/aesthetics/tree/master/data/ava) and [TID2013](http://www.ponomarenko.info/tid2013.htm) datasets.

We welcome all kinds of contributions, especially new model architectures and/or hyperparameter combinations that improve the performance of the currently published models (see [Contribute](#contribute)).
Read the full documentation at: [https://idealo.github.io/image-quality-assessment/](https://idealo.github.io/image-quality-assessment/).

Image quality assessment is compatible with Python 3.6 and is distributed under the Apache 2.0 license. We welcome all kinds of contributions, especially new model architectures and/or hyperparameter combinations that improve the performance of the currently published models (see [Contribute](#contribute)).


## Trained models
@@ -151,6 +158,7 @@ We welcome all kinds of contributions and will publish the performances from new

For example, to train a new aesthetic NIMA model based on InceptionV3 ImageNet weights, you just have to change the `base_model_name` parameter in the config file `models/MobileNet/config_mobilenet_aesthetic.json` to "InceptionV3". You can also control all major hyperparameters in the config file, like learning rate, batch size, or dropout rate.
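As an illustration, a small sketch of such a change (only `base_model_name` is a key name taken from this README; the remaining hyperparameter keys live in the config file and are not spelled out here):

```python
# Sketch only: load the existing config, swap the ImageNet backbone, write it
# back. Other keys (learning rate, batch size, dropout rate, ...) stay untouched.
import json

path = 'models/MobileNet/config_mobilenet_aesthetic.json'
with open(path) as f:
    config = json.load(f)

config['base_model_name'] = 'InceptionV3'

with open(path, 'w') as f:
    json.dump(config, f, indent=4)
```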

See the [Contribution](CONTRIBUTING.md) guide for more details.

## Datasets
This project uses two datasets to train the NIMA model:
@@ -228,7 +236,7 @@ Please cite Image Quality Assessment in your publications if this is useful for
## Maintainers
* Christopher Lennan, github: [clennan](https://github.com/clennan)
* Hao Nguyen, github: [MrBanhBao](https://github.com/MrBanhBao)

* Dat Tran, github: [datitran](https://github.com/datitran)

## Copyright

6 changes: 6 additions & 0 deletions mkdocs/README.md
@@ -0,0 +1,6 @@
# Image Quality Assessment Documentation

## Building the documentation
- Install MkDocs: `pip install mkdocs mkdocs-material`
- Run `python autogen.py` to auto-generate the code documentation
- Serve MkDocs: `mkdocs serve` and then go to `http://127.0.0.1:8000/` to view the docs
239 changes: 239 additions & 0 deletions mkdocs/autogen.py
@@ -0,0 +1,239 @@
# Heavily borrowed from the Auto-Keras project:
# https://github.com/jhfjhfj1/autokeras/blob/master/mkdocs/autogen.py

import ast
import os
import re


def delete_space(parts, start, end):
if start > end or end >= len(parts):
return None
count = 0
while count < len(parts[start]):
if parts[start][count] == ' ':
count += 1
else:
break
return '\n'.join(y for y in [x[count:] for x in parts[start : end + 1] if len(x) > count])


def change_args_to_dict(string):
if string is None:
return None
ans = []
strings = string.split('\n')
ind = 1
start = 0
while ind <= len(strings):
if ind < len(strings) and strings[ind].startswith(" "):
ind += 1
else:
if start < ind:
ans.append('\n'.join(strings[start:ind]))
start = ind
ind += 1
d = {}
for line in ans:
if ":" in line and len(line) > 0:
lines = line.split(":")
d[lines[0]] = lines[1].strip()
return d


def remove_next_line(comments):
for x in comments:
if comments[x] is not None and '\n' in comments[x]:
comments[x] = ' '.join(comments[x].split('\n'))
return comments


def skip_space_line(parts, ind):
while ind < len(parts):
if re.match(r'^\s*$', parts[ind]):
ind += 1
else:
break
return ind


# return an empty dict if the docstring is missing or empty
def parse_func_string(comment):
if comment is None or len(comment) == 0:
return {}
comments = {}
paras = ('Args', 'Attributes', 'Returns', 'Raises')
comment_parts = [
'short_description',
'long_description',
'Args',
'Attributes',
'Returns',
'Raises',
]
for x in comment_parts:
comments[x] = None

parts = re.split(r'\n', comment)
ind = 1
while ind < len(parts):
if re.match(r'^\s*$', parts[ind]):
break
else:
ind += 1

comments['short_description'] = '\n'.join(
['\n'.join(re.split('\n\s+', x.strip())) for x in parts[0:ind]]
).strip(':\n\t ')
ind = skip_space_line(parts, ind)

start = ind
while ind < len(parts):
if parts[ind].strip().startswith(paras):
break
else:
ind += 1
long_description = '\n'.join(
['\n'.join(re.split('\n\s+', x.strip())) for x in parts[start:ind]]
).strip(':\n\t ')
comments['long_description'] = long_description

ind = skip_space_line(parts, ind)
while ind < len(parts):
if parts[ind].strip().startswith(paras):
start = ind
start_with = parts[ind].strip()
ind += 1
while ind < len(parts):
if parts[ind].strip().startswith(paras):
break
else:
ind += 1
part = delete_space(parts, start + 1, ind - 1)
if start_with.startswith(paras[0]):
comments[paras[0]] = change_args_to_dict(part)
elif start_with.startswith(paras[1]):
comments[paras[1]] = change_args_to_dict(part)
elif start_with.startswith(paras[2]):
comments[paras[2]] = change_args_to_dict(part)
elif start_with.startswith(paras[3]):
comments[paras[3]] = part
ind = skip_space_line(parts, ind)
else:
ind += 1

remove_next_line(comments)
return comments


def md_parse_line_break(comment):
comment = comment.replace(' ', '\n\n')
return comment.replace(' - ', '\n\n- ')


def to_md(comment_dict):
doc = ''
if 'short_description' in comment_dict:
doc += comment_dict['short_description']
doc += '\n\n'

if 'long_description' in comment_dict:
doc += md_parse_line_break(comment_dict['long_description'])
doc += '\n'

if 'Args' in comment_dict and comment_dict['Args'] is not None:
doc += '##### Args\n'
for arg, des in comment_dict['Args'].items():
doc += '* **' + arg + '**: ' + des + '\n\n'

if 'Attributes' in comment_dict and comment_dict['Attributes'] is not None:
doc += '##### Attributes\n'
for arg, des in comment_dict['Attributes'].items():
doc += '* **' + arg + '**: ' + des + '\n\n'

if 'Returns' in comment_dict and comment_dict['Returns'] is not None:
doc += '##### Returns\n'
if isinstance(comment_dict['Returns'], str):
doc += comment_dict['Returns']
doc += '\n'
else:
for arg, des in comment_dict['Returns'].items():
doc += '* **' + arg + '**: ' + des + '\n\n'
return doc


def parse_func_args(function):
args = [a.arg for a in function.args.args if a.arg != 'self']
kwargs = []
if function.args.kwarg:
kwargs = ['**' + function.args.kwarg.arg]

return '(' + ', '.join(args + kwargs) + ')'


def get_func_comments(function_definitions):
doc = ''
for f in function_definitions:
temp_str = to_md(parse_func_string(ast.get_docstring(f)))
doc += ''.join(
[
'### ',
f.name.replace('_', '\\_'),
'\n',
'```python',
'\n',
'def ',
f.name,
parse_func_args(f),
'\n',
'```',
'\n',
temp_str,
'\n',
]
)

return doc


def get_comments_str(file_name):
with open(file_name) as fd:
file_contents = fd.read()
module = ast.parse(file_contents)

function_definitions = [node for node in module.body if isinstance(node, ast.FunctionDef)]

doc = get_func_comments(function_definitions)

class_definitions = [node for node in module.body if isinstance(node, ast.ClassDef)]
for class_def in class_definitions:
temp_str = to_md(parse_func_string(ast.get_docstring(class_def)))

# exclude private methods (single leading underscore); dunder methods are kept
method_definitions = [
node
for node in class_def.body
if isinstance(node, ast.FunctionDef) and (node.name[0] != '_' or node.name[:2] == '__')
]

temp_str += get_func_comments(method_definitions)
doc += '## class ' + class_def.name + '\n' + temp_str
return doc


def extract_comments(directory):
for parent, dir_names, file_names in os.walk(directory):
for file_name in file_names:
if os.path.splitext(file_name)[1] == '.py' and file_name != '__init__.py':
# write markdown for this module, mirroring its source path under docs/
doc = get_comments_str(os.path.join(parent, file_name))
directory = os.path.join('docs', parent.replace('../src/', ''))
if not os.path.exists(directory):
os.makedirs(directory)

output_file = open(os.path.join(directory, file_name[:-3] + '.md'), 'w')
output_file.write(doc)
output_file.close()


if __name__ == '__main__':
    extract_comments('../src/')
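For context, the generator recognizes Google-style docstring sections (`Args`, `Attributes`, `Returns`, `Raises`) and converts them into the markdown pages under `docs/`. A minimal illustration of a docstring it can parse (the function itself is hypothetical, not repo code):

```python
# Hypothetical example of a docstring in the format autogen.py parses.
def nima_score(distribution):
    """Compute the mean score from a NIMA output distribution.

    Args:
        distribution: softmax output over the ten score buckets.

    Returns:
        score: the weighted mean of the distribution.
    """
    return sum((i + 1) * p for i, p in enumerate(distribution))
```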
9 changes: 9 additions & 0 deletions mkdocs/build_docs.sh
@@ -0,0 +1,9 @@
#!/usr/bin/env bash

cp ../README.md docs/index.md
cp -r ../_readme docs/_readme
cp ../CONTRIBUTING.md docs/CONTRIBUTING.md
cp ../LICENSE docs/LICENSE.md
python autogen.py
mkdir -p ../docs
mkdocs build -c -d ../docs/
Binary file added mkdocs/docs/img/favicon.ico
Binary file not shown.
1 change: 1 addition & 0 deletions mkdocs/docs/img/logo.svg
File not shown.
