Development longregression (#3)
* New refactor code. Initial push

* Allow specifying the network type in include (automl#78)

* Allow specifying the network type in include

* Fix test flake 8

* fix test api

* increased time for func eval in cross validation

* Addressed comments

Co-authored-by: Ravin Kohli <kohliravin7@gmail.com>

* Search space update (automl#80)

* Added Hyperparameter Search space updates

* added test for search space update

* Added Hyperparameter Search space updates

* added test for search space update

* Added hyperparameter search space updates to network, trainer and improved check for search space updates

* Fix mypy, flake8

* Fix tests and silly mistake in base_pipeline

* Fix flake

* added _cs_updates to dummy component

* fixed indentation and isinstance comment

* fixed silly error

* Addressed comments from Francisco

* added value error for search space updates

* ADD tests for setting range of config space

* fix utils search space update

* Make sure the performance of pipeline is at least 0.8

* Early stop fixes

* Network Cleanup (automl#81)

* removed old supported_tasks dictionary from heads, added some docstrings and some small fixes

* removed old supported_tasks attribute and updated doc strings in base backbone and base head components

* removed old supported_tasks attribute from network backbones

* put time series backbones in separate files, add doc strings and refactored search space arguments

* split image networks into separate files, add doc strings and refactor search space

* fix typo

* add an initial simple backbone test similar to the network head test

* fix flake8

* fixed imports in backbones and heads

* added new network backbone and head tests

* enabled tests for adding custom backbones and heads, added required properties to base head and base backbone

* First documentation

* Default to ubuntu-18.04

* Comment enhancements

* Feature preprocessors, Loss strategies (automl#86)

* ADD Weighted loss

* Now?

* Fix tests, flake, mypy

* Fix tests

* Fix mypy

* change back sklearn requirement

* Assert for fast ica sklearn bug

* Forgot to add skip

* Fix tests, changed num only data to float

* removed fast ica

* change num only dataset

* Increased number of features in num only

* Increase timeout for pytest

* ADD tensorboard to requirement

* Fix bug with small_preprocess

* Fix bug in pytest execution

* Fix tests

* ADD error is raised if default not in include

* Added dynamic search space for deciding n components in feature preprocessors, add test for pipeline include

* Moved back to random configs in tabular test

* Added floor and ceil and handling of logs

* Fix flake

* Remove TruncatedSVD from cs if num numerical ==1

* ADD flakiness to network accuracy test

* fix flake

* remove cla to pytest

* Validate the input to autopytorch

* Bug fixes after rebase

* Move to new scikit learn

* Remove dangerous convert dtype

* Try to remove random float error again and make data picklable

* Test pickle on versions higher than 3.6

* Test pickle on versions higher than 3.6

* Comment fixes

* Adding tabular regression pipeline (automl#85)

* removed old supported_tasks dictionary from heads, added some docstrings and some small fixes

* removed old supported_tasks attribute and updated doc strings in base backbone and base head components

* removed old supported_tasks attribute from network backbones

* put time series backbones in separate files, add doc strings and refactored search space arguments

* split image networks into separate files, add doc strings and refactor search space

* fix typo

* add an initial simple backbone test similar to the network head test

* fix flake8

* fixed imports in backbones and heads

* added new network backbone and head tests

* enabled tests for adding custom backbones and heads, added required properties to base head and base backbone

* adding tabular regression pipeline

* fix flake8

* adding tabular regression pipeline

* fix flake8

* fix regression test

* fix indentation and comments, undo change in base network

* pipeline fitting tests now check the expected output shape dynamically based on the input data

* refactored trainer tests, added trainer test for regression

* remove regression from mixup unitest

* use pandas unique instead of numpy

* [IMPORTANT] added proper target casting based on task type to base trainer

* adding tabular regression task to api

* adding tabular regression example, some small fixes

* new/more tests for tabular regression

* fix mypy and flake8 errors from merge

* fix issues with new weighted loss and regression tasks

* change tabular column transformer to use new fit_dictionary_tabular fixture

* fixing tests, replaced num_classes with output_shape

* fixes after merge

* adding voting regressor wrapper

* fix mypy and flake

* updated example

* lower r2 target

* address comments

* increasing timeout

* increase number of labels in test_losses because it occasionally failed if one class was not in the labels

* lower regression lr in score test until seeding properly works

* fix randomization in feature validator test

* Make sure the performance of pipeline is at least 0.8

* Early stop fixes

* [REFACTORING]: no change in functionality, inputs, or returns

* Modified an error message

* [Test error fix]: Fixed the error caused by flake8

* [Test error fix]: Fixed the error caused by flake8

* FIX weighted loss issue (automl#94)

* Changed tests for losses and how weighted strategy is handled in the base trainer

* Addressed comments from Francisco

* Fix training test

* Re-arranged tests and moved test_setup to pytest

* Reduced search space for dummy forward backward pass of backbones

* Fix typo

* ADD Doc string to loss function

* Logger enhancements

* show_models

* Move to spawn

* Adding missing logger line

* Feedback from comments

* ADD_109

* No print allow

* [PR response]: deleted unneeded changes from merge and fixed the doc-string.

* fixed the for loop in type_check based on samuel's review

* deleted blank space pointed out by flake8

* Try no autouse

* handle nans in categorical columns (automl#118)

* handle nans in categorical columns

* Fixed error in self dtypes

* Addressed comments from Francisco

* Forgot to commit

* Fix flake

* Embedding layer (automl#91)

* work in progress

* in progress

* Working network embedding

* ADD tests for network embedding

* Removed ordinal encoder

* Removed ordinal encoder

* Add seed for test_losses for reproducibility

* Addressed comments

* fix flake

* fix test import training

* ADD_109

* No print allow

* Fix tests and move to boston

* Debug issue with python 3.6

* Debug for python3.6

* Run only debug file

* print paths of parent dir

* Trying to run examples

* Trying to run examples

* Add success model

* Added parent directory for printing paths

* Try no autouse

* print log file to see if backend is saving num run

* Setup logger in backend

* try without embeddings

* no embedding for python 3.6

* Deleted debug example

* Fix test for evaluation

* Deleted utils file

Co-authored-by: chico <francisco.rivera.valverde@gmail.com>

* Fixes to address automlbenchmark problems

* Fix trajectory file output

* modified the doc-string in TransformSubset in base_dataset.py

* change config_id to config_id+1 (automl#129)

* move to a minimization problem (automl#113)

* move to a minimization problem

* Fix missing test loss file

* Missed regression

* More robust test

* Try signal timeout

* Kernel PCA failures

* Feedback from Ravin

* Better debug msg

* Feedback from comments

* Doc string request

* Feedback from comments

* Enhanced doc string

* FIX_123 (automl#133)

* FIX_123

* Better debug msg

* at least 1 config in regression

* Return self in _fit()

* Adds more examples to customise AutoPyTorch. (automl#124)

* 3 examples plus doc update

* Forgot the examples

* Added example for resampling strategy

* Update example workflow

* Fixed bugs in example and resampling strategies

* Addressed comments

* Addressed comments

* Addressed comments from Shuhei, better documentation

* [Feat] Better traditional pipeline cutoff time (automl#141)

* [Feat] Better traditional pipeline cutoff time

* Fix unit testing

* Better failure msg

* bug fix catboost

* Feedback from Ravin

* First batch of feedback from comments

* Missed examples

* Syntax fix

* Hyperparameter Search Space updates now with constant and include ability (automl#146)

* In progress, add_hyperparameter

* Added SearchSpace working functionality

* Working search space update with test for __choice__ and fix flake

* fixed mypy bug and bug in making constant float hyperparameters

* Add test for fitting pipeline with constant updates

* fix flake

* bug in int for feature preprocessors and minor bugs in hyperparameter search space fixed

* Forgot to add a file

* Addressed comments, better documentation and better tests for search space updates

* Fix flake

* [Bug] Fix random halt problems on traditional pipelines (automl#147)

* [feat] Fix random halt problems on traditional pipelines

* Documentation update

* Fix flake

* Flake due to kernel pca errors

* Run history traditional (automl#121)

* In progress, issue with failed traditional

* working traditional classifiers

* Addressed comments from Francisco

* Changed test loop in test_api

* Add .autopytorch runs back again

* Addressed comments, better documentation and dict for runhistory

* Fix flake

* Fix tests and add additional run info for crossval

* fix tests for train evaluator and api

* Addressed comments

* Addressed comments

* Addressed comments from Shuhei, removed deleting from additional info

* [FIX] Enables backend to track the num run  (automl#162)

* AA_151

* doc the peek attr

* [ADD] Relax constant pipeline performance

* [Doc] First push of the developer documentation (automl#127)

* First push of the developer documentation

* Feedback from Ravin

* Document scikit-learn develop guide

* Feedback from Ravin

* Delete extra point

* Refactoring base dataset splitting functions (automl#106)

* [Fork from automl#105] Made CrossValFuncs and HoldOutFuncs class to group the functions

* Modified time_series_dataset.py to be compatible with resampling_strategy.py

* [fix]: back to the renamed version of CROSS_VAL_FN from temporal SplitFunc typing.

* fixed flake8 issues in three files

* fixed the flake8 issues

* [refactor] Address Francisco's comments

* [refactor] Address Francisco's comments

* [refactor] Address the doc-string issue in TransformSubset class

* [fix] Address flake8 issues

* [fix] Fix flake8 issue

* [fix] Fix mypy issues raised by github check

* [fix] Fix a mypy issue

* [fix] Fix a contradiction in holdout_stratified_validation

Since stratified splitting requires shuffling by default,
and this raised an error in the GitHub check,
I fixed the issue.

* [fix] Address Francisco's review

* [fix] Fix a mypy issue in tabular_dataset.py

* [fix] Address Francisco's comment about self.dataset_name

Since we may want to use a dataset which does not have a name,
I decided to change self.dataset_name back to Optional[str].

* [fix] Fix mypy issues

* [Fix] Refactor development reproducibility (automl#172)

* [Fix] pass random state to randomized algorithms

* [Fix] double instantiation of random state

* [fix] Flaky for sample configuration

* [FIX] Runtime warning

* [FIX] hardcoded budget

* [FIX] flake

* [Fix] try forked

* [Fix] try forked

* [FIX] budget

* [Fix] missing random_state in trainer

* [Fix] overwrite in random_state

* [FIX] fix seed in splits

* [Rebase]

* [FIX] Update cv score after split num change

* [FIX] CV split

* [ADD] Extra visualization example (automl#189)

* [ADD] Extra visualization example

* Update docs/manual.rst

Co-authored-by: Ravin Kohli <13005107+ravinkohli@users.noreply.github.com>

* Update docs/manual.rst

Co-authored-by: Ravin Kohli <13005107+ravinkohli@users.noreply.github.com>

* [Fix] missing version

* Update examples/tabular/40_advanced/example_visualization.py

Co-authored-by: Ravin Kohli <13005107+ravinkohli@users.noreply.github.com>

* [FIX] make docs more clear to the user

Co-authored-by: Ravin Kohli <13005107+ravinkohli@users.noreply.github.com>

* [Fix] docs links (automl#201)

* [Fix] docs links

* Update README.md

Co-authored-by: Ravin Kohli <13005107+ravinkohli@users.noreply.github.com>

* Update examples check

* Remove tmp in examples

Co-authored-by: Ravin Kohli <13005107+ravinkohli@users.noreply.github.com>

* [Refactor] Use the backend implementation from automl common (automl#185)

* [ADD] First push to enable common backend

* Fix unit test

* Try public https

* [FIX] conftest prefix

* [fix] unit test

* [FIX] Fix fixture in score

* [Fix] pytest collection

* [FIX] flake

* [FIX] regression also!

* Update README.md

Co-authored-by: Ravin Kohli <13005107+ravinkohli@users.noreply.github.com>

* Update .gitmodules

Co-authored-by: Ravin Kohli <13005107+ravinkohli@users.noreply.github.com>

* [FIX] Regression time

* Make flaky in case memout doesn't happen

* Refacto development automl common backend debug (#2)

* [ADD] debug information

* [FIX] try fork for more stability

Co-authored-by: Ravin Kohli <13005107+ravinkohli@users.noreply.github.com>

* [DOC] Adds documentation to the abstract evaluator (automl#160)

* DOC_153

* Changes from Ravin

* [FIX] improve clarity of msg in commit

* [FIX] Update Readme (automl#208)

* Reduce run time of the test  (automl#205)

* In progress, changing tests

* Reduce time for tests

* Fix flake in tests

* Patch train in other tests also

* Address comments from Shuhei and Francisco

* Move base training to pytest

* Fix flake in tests

* forgot to pass n_samples

* stupid error

* Address comments from Shuhei, remove hardcoding and fix bug in dummy eval function

* Skip ensemble test for python >=3.7 and introduce random state for feature processors

* fix flake

* Remove example workflow

* Remove  from __init__ in feature preprocessing

* [refactor] Getting dataset properties from the dataset object (automl#164)

* Use get_required_dataset_info of the dataset when needing required info for getting dataset requirements

* Fix flake

* Fix bug in getting dataset requirements

* Added doc string to explain dataset properties

* Update doc string in utils pipeline

* Change ubuntu version in docs workflow (automl#237)

* Add dist check workflow (automl#238)

* [feature] Greedy Portfolio (automl#200)

* initial configurations added

* In progress, adding flag in search function

* Adds documentation, example and fixes setup.py

* Address comments from Shuhei, change run_greedy to portfolio_selection

* Address comments from Francisco, move portfolio to configs

* Address comments from Francisco, add tests for greedy portfolio

* fix flake tests

* Simplify portfolio selection

* Update autoPyTorch/optimizer/smbo.py

Co-authored-by: Francisco Rivera Valverde <44504424+franchuterivera@users.noreply.github.com>

* Address comments from Francisco, path exception handling and test

* fix flake

* Address comments from Shuhei

* fix bug in setup.py

* fix tests in base trainer evaluate, increase n samples and add seed

* fix tests in base trainer evaluate, increase n samples (fix)

Co-authored-by: Francisco Rivera Valverde <44504424+franchuterivera@users.noreply.github.com>

* [ADD] Forkserver as default multiprocessing strategy (automl#223)

* First push of forkserver

* [Fix] Missing file

* [FIX] mypy

* [Fix] rename choice to init

* [Fix] Unit test

* [Fix] bugs in examples

* [Fix] ensemble builder

* Update autoPyTorch/pipeline/components/preprocessing/image_preprocessing/normalise/__init__.py

Co-authored-by: Ravin Kohli <13005107+ravinkohli@users.noreply.github.com>

* Update autoPyTorch/pipeline/components/preprocessing/image_preprocessing/normalise/__init__.py

Co-authored-by: Ravin Kohli <13005107+ravinkohli@users.noreply.github.com>

* Update autoPyTorch/pipeline/components/preprocessing/tabular_preprocessing/encoding/__init__.py

Co-authored-by: Ravin Kohli <13005107+ravinkohli@users.noreply.github.com>

* Update autoPyTorch/pipeline/components/preprocessing/image_preprocessing/normalise/__init__.py

Co-authored-by: Ravin Kohli <13005107+ravinkohli@users.noreply.github.com>

* Update autoPyTorch/pipeline/components/preprocessing/tabular_preprocessing/feature_preprocessing/__init__.py

Co-authored-by: Ravin Kohli <13005107+ravinkohli@users.noreply.github.com>

* Update autoPyTorch/pipeline/components/preprocessing/tabular_preprocessing/scaling/__init__.py

Co-authored-by: Ravin Kohli <13005107+ravinkohli@users.noreply.github.com>

* Update autoPyTorch/pipeline/components/setup/network_head/__init__.py

Co-authored-by: Ravin Kohli <13005107+ravinkohli@users.noreply.github.com>

* Update autoPyTorch/pipeline/components/setup/network_initializer/__init__.py

Co-authored-by: Ravin Kohli <13005107+ravinkohli@users.noreply.github.com>

* Update autoPyTorch/pipeline/components/setup/network_embedding/__init__.py

Co-authored-by: Ravin Kohli <13005107+ravinkohli@users.noreply.github.com>

* [FIX] improve doc-strings

* Fix rebase

Co-authored-by: Ravin Kohli <13005107+ravinkohli@users.noreply.github.com>

* [ADD] Get incumbent config (automl#175)

* In progress get_incumbent_results

* [Add] get_incumbent_results to base task, changed additional info in abstract evaluator, and  tests

* In progress addressing Francisco's comment

* Proper check for include_traditional

* Fix flake

* Mock search of estimator

* Fixed path of run history test_api

* Addressed comments from Francisco, making better tests

* fix flake

* After rebase fix issues

* fix flake

* Added debug information for API

* filtering only successful runs in get_incumbent_results

* Address comments from Francisco

* Revert changes made to run history assertion in base task #1257

* fix flake issue

* [ADD] Coverage calculation (automl#224)

* [ADD] Coverage calculation

* [Fix] Flake8

* [fix] rebase artifacts

* [Fix] smac reqs

* [Fix] Make traditional test robust

* [Fix] unit test

* [Fix] test_evaluate

* [Fix] Try more time for cross validation

* Fix mypy post rebase

* Fix unit test

* [ADD] Pytest schedule (automl#234)

* add schedule for pytests workflow

* Add ref to development branch

* Add scheduled test

* update schedule workflow to run on python 3.8

* omit test, examples, workflow from coverage and remove unnecessary code from schedule

* Fix call for python3.8

* Fix call for python3.8 (2)

* fix code cov call in python 3.8

* Finally fix cov call

* [fix] Dropout bug fix (automl#247)

* fix dropout bug

* fix dropout shape discrepancy

* Fix unit test bug

* Add tests for dropout shape as per comments from Francisco

* Fix flake

* Early stop on metric

* Enable long run regression

Co-authored-by: Ravin Kohli <kohliravin7@gmail.com>
Co-authored-by: Ravin Kohli <13005107+ravinkohli@users.noreply.github.com>
Co-authored-by: bastiscode <sebastian.walter98@gmail.com>
Co-authored-by: nabenabe0928 <shuhei.watanabe.utokyo@gmail.com>
Co-authored-by: nabenabe0928 <47781922+nabenabe0928@users.noreply.github.com>
6 people committed Jun 14, 2021
1 parent e22a374 commit 6030aeb
Showing 685 changed files with 36,260 additions and 25,907 deletions.
2 changes: 2 additions & 0 deletions .binder/apt.txt
@@ -0,0 +1,2 @@
build-essential
swig
43 changes: 43 additions & 0 deletions .binder/postBuild
@@ -0,0 +1,43 @@
#!/bin/bash

set -e

python -m pip install .[docs,examples]

# Taken from https://github.com/scikit-learn/scikit-learn/blob/22cd233e1932457947e9994285dc7fd4e93881e4/.binder/postBuild
# under BSD3 license, copyright the scikit-learn contributors

# This script is called in a binder context. When this script is called, we are
# inside a git checkout of the automl/Auto-PyTorch repo. This script
# generates notebooks from the Auto-PyTorch python examples.

if [[ ! -f /.dockerenv ]]; then
echo "This script was written for repo2docker and is supposed to run inside a docker container."
echo "Exiting because this script can delete data if run outside of a docker container."
exit 1
fi

# Copy content we need from the Auto-PyTorch repo
TMP_CONTENT_DIR=/tmp/Auto-PyTorch
mkdir -p $TMP_CONTENT_DIR
cp -r examples .binder $TMP_CONTENT_DIR
# delete everything in current directory including dot files and dot folders
find . -delete

# Generate notebooks and remove other files from examples folder
GENERATED_NOTEBOOKS_DIR=examples
cp -r $TMP_CONTENT_DIR/examples $GENERATED_NOTEBOOKS_DIR

find $GENERATED_NOTEBOOKS_DIR -name 'example_*.py' -exec sphx_glr_python_to_jupyter.py '{}' +
# Keep __init__.py and custom_metrics.py
NON_NOTEBOOKS=$(find $GENERATED_NOTEBOOKS_DIR -type f | grep -v '\.ipynb' | grep -v 'init' | grep -v 'custom_metrics')
rm -f $NON_NOTEBOOKS

# Modify path to be consistent by the path given by sphinx-gallery
mkdir notebooks
mv $GENERATED_NOTEBOOKS_DIR notebooks/

# Put the .binder folder back (may be useful for debugging purposes)
mv $TMP_CONTENT_DIR/.binder .
# Final clean up
rm -rf $TMP_CONTENT_DIR
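The filter that postBuild uses to keep only notebooks (plus `__init__.py` and `custom_metrics.py`) can be tried in isolation; the sketch below runs the same `find | grep -v` pipeline against a throwaway directory with made-up file names:

```shell
# Recreate the postBuild filter in a scratch directory (names are dummies).
d=$(mktemp -d)
touch "$d/example_a.ipynb" "$d/example_a.py" "$d/__init__.py" "$d/custom_metrics.py"
# Same pipeline as the script: exclude notebooks, __init__ and custom_metrics
# from the deletion list, then remove everything that is left on it.
NON_NOTEBOOKS=$(find "$d" -type f | grep -v '\.ipynb' | grep -v 'init' | grep -v 'custom_metrics')
rm -f $NON_NOTEBOOKS
ls "$d"   # example_a.py is gone; the other three files remain
```

Note the unquoted `$NON_NOTEBOOKS`: the script relies on word splitting, so it would misbehave on file names containing spaces.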
1 change: 1 addition & 0 deletions .binder/requirements.txt
@@ -0,0 +1 @@
-r ../requirements.txt
42 changes: 42 additions & 0 deletions .codecov.yml
@@ -0,0 +1,42 @@
#see https://github.com/codecov/support/wiki/Codecov-Yaml
codecov:
notify:
require_ci_to_pass: yes

coverage:
precision: 2 # 2 = xx.xx%, 0 = xx%
round: nearest # how coverage is rounded: down/up/nearest
range: 10...90 # custom range of coverage colors from red -> yellow -> green
status:
# https://codecov.readme.io/v1.0/docs/commit-status
project:
default:
against: auto
target: 70% # specify the target coverage for each commit status
threshold: 50% # allow this little decrease on project
# https://github.com/codecov/support/wiki/Filtering-Branches
# branches: master
if_ci_failed: error
# https://github.com/codecov/support/wiki/Patch-Status
patch:
default:
against: auto
target: 30% # specify the target "X%" coverage to hit
threshold: 50% # allow this much decrease on patch
changes: false

parsers:
gcov:
branch_detection:
conditional: true
loop: true
macro: false
method: false
javascript:
enable_partials: false

comment:
layout: header, diff
require_changes: false
behavior: default # update if exists else create new
branches: *
27 changes: 27 additions & 0 deletions .coveragerc
@@ -0,0 +1,27 @@
# .coveragerc to control coverage.py
[run]
branch = True
include = "autoPyTorch/*"

[report]
# Regexes for lines to exclude from consideration
exclude_lines =
# Have to re-enable the standard pragma
pragma: no cover

# Don't complain about missing debug-only code:
def __repr__
if self\.debug

# Don't complain if tests don't hit defensive assertion code:
raise AssertionError
raise NotImplementedError

# Don't complain if non-runnable code isn't run:
if 0:
if __name__ == .__main__.:

ignore_errors = True

[html]
directory = coverage_html_report
7 changes: 7 additions & 0 deletions .flake8
@@ -0,0 +1,7 @@
[flake8]
max-line-length = 120
show-source = True
application-import-names = autoPyTorch
exclude =
venv
build
31 changes: 31 additions & 0 deletions .github/workflows/dist.yml
@@ -0,0 +1,31 @@
name: dist-check

on: [push, pull_request]

jobs:
dist:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Build dist
run: |
python setup.py sdist
- name: Twine check
run: |
pip install twine
last_dist=$(ls -t dist/autoPyTorch-*.tar.gz | head -n 1)
twine_output=`twine check "$last_dist"`
if [[ "$twine_output" != "Checking $last_dist: PASSED" ]]; then echo $twine_output && exit 1;fi
- name: Install dist
run: |
last_dist=$(ls -t dist/autoPyTorch-*.tar.gz | head -n 1)
pip install $last_dist
- name: PEP 561 Compliance
run: |
pip install mypy
cd .. # required to use the installed version of autoPyTorch
if ! python -c "import autoPyTorch"; then exit 1; fi
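The job picks the freshest build with an `ls -t | head -n 1` idiom; below is a self-contained sketch of just that selection step, with dummy tarballs standing in for a real `python setup.py sdist` run:

```shell
# Newest-file selection as in the dist-check job; tarball names are dummies.
tmp=$(mktemp -d)
touch -t 202101010000 "$tmp/autoPyTorch-0.0.1.tar.gz"   # older mtime
touch -t 202106140000 "$tmp/autoPyTorch-0.0.2.tar.gz"   # newer mtime
# ls -t sorts newest first, so head -n 1 yields the latest build artifact.
last_dist=$(ls -t "$tmp"/autoPyTorch-*.tar.gz | head -n 1)
echo "$last_dist"
```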
44 changes: 44 additions & 0 deletions .github/workflows/docs.yml
@@ -0,0 +1,44 @@
name: Docs
on: [pull_request, push]

jobs:
build-and-deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Install dependencies
run: |
git submodule update --init --recursive
pip install -e .[docs,examples]
- name: Make docs
run: |
cd docs
make html
- name: Pull latest gh-pages
if: (contains(github.ref, 'develop') || contains(github.ref, 'master')) && github.event_name == 'push'
run: |
cd ..
git clone https://github.com/automl/Auto-PyTorch.git --branch gh-pages --single-branch gh-pages
- name: Copy new doc into gh-pages
if: (contains(github.ref, 'develop') || contains(github.ref, 'master')) && github.event_name == 'push'
run: |
branch_name=${GITHUB_REF##*/}
cd ../gh-pages
rm -rf $branch_name
cp -r ../Auto-PyTorch/docs/build/html $branch_name
- name: Push to gh-pages
if: (contains(github.ref, 'develop') || contains(github.ref, 'master')) && github.event_name == 'push'
run: |
last_commit=$(git log --pretty=format:"%an: %s")
cd ../gh-pages
branch_name=${GITHUB_REF##*/}
git add $branch_name/
git config --global user.name 'Github Actions'
git config --global user.email 'not@mail.com'
git remote set-url origin https://x-access-token:${{ secrets.GITHUB_TOKEN }}@github.com/${{ github.repository }}
git commit -am "$last_commit"
git push
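Both the copy and push steps derive the target folder from `GITHUB_REF` with a shell parameter expansion; standalone, the expansion behaves like this (the ref value is an example of what Actions sets on a push):

```shell
# ${GITHUB_REF##*/} removes the longest prefix ending in '/', leaving
# only the branch name that the built docs are published under.
GITHUB_REF="refs/heads/development"   # example value
branch_name=${GITHUB_REF##*/}
echo "$branch_name"   # prints: development
```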
35 changes: 35 additions & 0 deletions .github/workflows/long_regression_test.yml
@@ -0,0 +1,35 @@
name: Tests

on:
schedule:
# Every Tuesday at 7AM UTC
# TODO: temporarily set to every day just for the PR
#- cron: '0 07 * * 2'
- cron: '0 07 * * *'


jobs:
ubuntu:

runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.8]
fail-fast: false

steps:
- uses: actions/checkout@v2
with:
ref: development
- name: Setup Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install test dependencies
run: |
git submodule update --init --recursive
python -m pip install --upgrade pip
pip install -e .[test]
- name: Run tests
run: |
python -m pytest --durations=200 cicd/test_preselected_configs.py -vs
23 changes: 23 additions & 0 deletions .github/workflows/pre-commit.yaml
@@ -0,0 +1,23 @@
name: pre-commit

on: [push, pull_request]

jobs:
run-all-files:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Setup Python 3.7
uses: actions/setup-python@v2
with:
python-version: 3.7
- name: Init Submodules
run: |
git submodule update --init --recursive
- name: Install pre-commit
run: |
pip install pre-commit
pre-commit install
- name: Run pre-commit
run: |
pre-commit run --all-files
55 changes: 55 additions & 0 deletions .github/workflows/pytest.yml
@@ -0,0 +1,55 @@
name: Tests

on: [push, pull_request]

jobs:
ubuntu:

runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.6, 3.7, 3.8]
include:
- python-version: 3.8
code-cov: true
fail-fast: false
max-parallel: 2

steps:
- uses: actions/checkout@v2
- name: Setup Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install test dependencies
run: |
git submodule update --init --recursive
python -m pip install --upgrade pip
pip install -e .[test]
- name: Store repository status
id: status-before
run: |
echo "::set-output name=BEFORE::$(git status --porcelain -b)"
- name: Run tests
run: |
if [ ${{ matrix.code-cov }} ]; then
codecov='--cov=autoPyTorch --cov-report=xml --cov-config=.coveragerc';
fi
python -m pytest --forked --durations=20 --timeout=600 --timeout-method=signal -v $codecov test
- name: Check for files left behind by test
if: ${{ always() }}
run: |
before="${{ steps.status-before.outputs.BEFORE }}"
after="$(git status --porcelain -b)"
if [[ "$before" != "$after" ]]; then
echo "git status from before: $before"
echo "git status from after: $after"
echo "Not all generated files have been deleted!"
exit 1
fi
- name: Upload coverage
if: matrix.code-cov && always()
uses: codecov/codecov-action@v1
with:
fail_ci_if_error: true
verbose: true
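The "files left behind" guard reduces to comparing two `git status --porcelain -b` snapshots taken before and after the test run; the sketch below substitutes literal strings for the git output to show the comparison logic:

```shell
# Before/after tree comparison as in the workflow's clean-tree check.
# The status strings are stand-ins for real `git status --porcelain -b` output.
before="## develop"
after="## develop
?? leftover.txt"
if [ "$before" != "$after" ]; then
  echo "Not all generated files have been deleted!"
  tree_state="dirty"
else
  tree_state="clean"
fi
echo "$tree_state"   # prints: dirty
```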
34 changes: 34 additions & 0 deletions .github/workflows/scheduled_test.yml
@@ -0,0 +1,34 @@
name: Tests

on:
schedule:
# Every Monday at 7AM UTC
- cron: '0 07 * * 1'


jobs:
ubuntu:

runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.8]
fail-fast: false
max-parallel: 2

steps:
- uses: actions/checkout@v2
with:
ref: development
- name: Setup Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install test dependencies
run: |
git submodule update --init --recursive
python -m pip install --upgrade pip
pip install -e .[test]
- name: Run tests
run: |
python -m pytest --forked --durations=20 --timeout=600 --timeout-method=signal -v test
