
CI actions
* add CI testing
* add CI experiments
* add badge
* fix minimal req.
* fix re-label
* fix minor msgs
Borda committed Oct 12, 2020
1 parent 985b113 commit 5481070
Showing 17 changed files with 232 additions and 36 deletions.
88 changes: 88 additions & 0 deletions .github/workflows/ci-experiment.yml
@@ -0,0 +1,88 @@
name: CI experiments

# see: https://help.github.com/en/actions/reference/events-that-trigger-workflows
on: # Trigger the workflow on push or pull request, but only for the master branch
push:
branches: [master]
pull_request:
branches: [master]

jobs:
pytest:

runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-18.04]
python-version: [2.7, 3.6, 3.7, 3.8]

# Timeout: https://stackoverflow.com/a/59076067/4521646
timeout-minutes: 35

steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}

# Note: This uses an internal pip API and may not always work
# https://github.com/actions/cache/blob/master/examples.md#multiple-oss-in-a-workflow
- name: Cache pip
uses: actions/cache@v2
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-py${{ matrix.python-version }}-${{ hashFiles('requirements.txt') }}
restore-keys: |
${{ runner.os }}-pip-py${{ matrix.python-version }}-
- name: Set py2.7 dependencies
if: matrix.python-version == 2.7
run: |
cp -r requirements-py27.txt requirements.txt
- name: install ImSegm
run: |
python --version
pip --version
pip install wheel --quiet
pip install https://github.com/Borda/pyGCO/archive/master.zip
pip install . --user
rm -rf imsegm
pip list
# create folders
mkdir libs
mkdir output
mkdir results
- name: ANNOTATION section
env:
DISPLAY: ""
run: |
bash handling_annotations/test_annotations.sh
- name: SEGMENTATION section
env:
DISPLAY: ""
run: |
bash experiments_segmentation/test_segmentations.sh
- name: CENTER DETECT. section
env:
DISPLAY: ""
run: |
bash experiments_ovary_centres/test_ovary_centers.sh
- name: REGION GROWING section
env:
DISPLAY: ""
run: |
bash experiments_ovary_detect/test_ovary_detect.sh
- name: remove ImSegm
env:
DISPLAY: ""
run: |
# uninstall so the package does not stay cached
pip uninstall -y imsegm
107 changes: 107 additions & 0 deletions .github/workflows/ci-testing.yml
@@ -0,0 +1,107 @@
name: CI testing

# see: https://help.github.com/en/actions/reference/events-that-trigger-workflows
on: # Trigger the workflow on push or pull request, but only for the master branch
push:
branches: [master]
pull_request:
branches: [master]

jobs:
pytest:

runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-18.04, macOS-10.15] # , windows-2019
python-version: [2.7, 3.6, 3.8]
requires: ['minimal', 'latest']
exclude:
- python-version: 2.7
requires: 'minimal'
- python-version: 3.8
requires: 'minimal'

# Timeout: https://stackoverflow.com/a/59076067/4521646
timeout-minutes: 35

steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}

- name: Set py2.7 dependencies
if: matrix.python-version == 2.7
run: |
cp -r requirements-py27.txt requirements.txt
# required for matplotlib @py2
pip install -U backports.functools_lru_cache
- name: Set min. dependencies
if: matrix.requires == 'minimal'
run: |
python -c "fname = 'requirements.txt' ; req = open(fname).read().replace('>=', '==') ; open(fname, 'w').write(req)"
python -c "fname = 'tests/requirements.txt' ; req = open(fname).read().replace('>=', '==') ; open(fname, 'w').write(req)"
# Note: This uses an internal pip API and may not always work
# https://github.com/actions/cache/blob/master/examples.md#multiple-oss-in-a-workflow
- name: Get pip cache
id: pip-cache
run: |
python -c "from pip._internal.locations import USER_CACHE_DIR; print('::set-output name=dir::' + USER_CACHE_DIR)"
- name: Cache pip
uses: actions/cache@v2
with:
path: ${{ steps.pip-cache.outputs.dir }}
key: ${{ runner.os }}-pip-py${{ matrix.python-version }}-${{ matrix.requires }}-${{ hashFiles('requirements.txt') }}
restore-keys: |
${{ runner.os }}-pip-py${{ matrix.python-version }}-${{ matrix.requires }}-
- name: Install dependencies
run: |
pip install wheel --quiet
pip install --requirement requirements.txt --quiet --upgrade
pip install --requirement tests/requirements.txt --quiet --upgrade
python --version
pip --version
pip list
mkdir output
mkdir results
shell: bash

- name: Tests
env:
DISPLAY: ""
run: |
check-manifest
python setup.py check --metadata --strict
python setup.py install --dry-run --user
python setup.py build_ext --inplace
coverage run --source imsegm -m py.test imsegm tests --durations=25 --junitxml=junit/test-results-${{ runner.os }}-${{ matrix.python-version }}-${{ matrix.requires }}.xml
flake8 .
- name: Upload pytest test results
uses: actions/upload-artifact@master
with:
name: pytest-results-${{ runner.os }}-${{ matrix.python-version }}-${{ matrix.requires }}
path: junit/test-results-${{ runner.os }}-${{ matrix.python-version }}-${{ matrix.requires }}.xml
# Use always() to always run this step to publish test results when there are test failures
if: always()

- name: Statistics
if: success()
run: |
coverage report
coverage xml
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v1
if: success()
with:
token: ${{ secrets.CODECOV_TOKEN }}
file: coverage.xml
fail_ci_if_error: false
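
The `minimal` legs of the matrix rely on a small trick: before installing, every `>=` specifier in the requirements files is rewritten to `==`, so the lowest declared versions are what actually get tested. A readable sketch of what the inline `python -c` calls above do (same file names as in the workflow; illustration only, not part of the commit):

```python
# Pin each requirement to its declared lower bound, e.g.
# "numpy >= 1.13.3" becomes "numpy == 1.13.3".
for fname in ("requirements.txt", "tests/requirements.txt"):
    with open(fname) as fp:
        pinned = fp.read().replace(">=", "==")
    with open(fname, "w") as fp:
        fp.write(pinned)
```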
16 changes: 3 additions & 13 deletions .travis.yml
@@ -27,8 +27,8 @@ matrix:
- dist: bionic # Ubuntu 18.04
python: 3.6
env: MIN_REQUIREMENTS=1
- dist: bionic # Ubuntu 18.04
python: 3.7
#- dist: bionic # Ubuntu 18.04
# python: 3.7
- dist: bionic # Ubuntu 18.04
python: 3.8

@@ -63,7 +63,7 @@ before_script:
script:
- python setup.py build_ext --inplace
# - pytest imsegm -v --doctest-modules
- coverage run -m py.test imsegm tests --doctest-modules --flake8
- coverage run --source imsegm -m py.test imsegm tests # --flake8
# - nosetests imsegm tests -v --exe --with-doctest --with-xunit --with-coverage --cover-package=imsegm
- python setup.py install --dry-run

@@ -72,13 +72,3 @@ after_success:
- coverage xml
- python-codacy-coverage -r coverage.xml
- coverage report
# ANNOTATION section
#- bash handling_annotations/test_annotations.sh
# SEGMENTATION section
#- bash experiments_segmentation/test_segmentations.sh
# CENTER DETECT. section
#- bash experiments_ovary_centres/test_ovary_centers.sh
# REGION GROWING section
#- bash experiments_ovary_detect/test_ovary_detect.sh
# test installed package
#- cd .. && python -c "import imsegm.descriptors"
3 changes: 3 additions & 0 deletions README.md
@@ -5,8 +5,11 @@
[![Codacy Badge](https://api.codacy.com/project/badge/Grade/48b7976bbe9d42bc8452f6f9e573ee70)](https://www.codacy.com/app/Borda/pyImSegm?utm_source=github.com&utm_medium=referral&utm_content=Borda/pyImSegm&utm_campaign=Badge_Grade)
[![CircleCI](https://circleci.com/gh/Borda/pyImSegm.svg?style=svg&circle-token=a30180a28ae7e490c0c0829d1549fcec9a5c59d0)](https://circleci.com/gh/Borda/pyImSegm)
[![CodeFactor](https://www.codefactor.io/repository/github/borda/pyimsegm/badge)](https://www.codefactor.io/repository/github/borda/pyimsegm)
[![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/Borda/pyImSegm.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/Borda/pyImSegm/context:python)

[![Documentation Status](https://readthedocs.org/projects/pyimsegm/badge/?version=latest)](https://pyimsegm.readthedocs.io/en/latest/?badge=latest)
[![Gitter](https://badges.gitter.im/pyImSegm/community.svg)](https://gitter.im/pyImSegm/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)

<!--
[![Run Status](https://api.shippable.com/projects/5962ea48a125960700c197f8/badge?branch=master)](https://app.shippable.com/github/Borda/pyImSegm)
[![Coverage Badge](https://api.shippable.com/projects/5962ea48a125960700c197f8/coverageBadge?branch=master)](https://app.shippable.com/github/Borda/pyImSegm)
8 changes: 4 additions & 4 deletions circle.yml
@@ -7,7 +7,7 @@ references:
name: Install PyPI dependencies
command: |
sudo apt-get update -qq
sudo apt-get install git-lfs
#sudo apt-get install git-lfs
sudo apt-get install tk-dev pkg-config python-dev python-tk
sudo pip install --upgrade pip setuptools
sudo pip install -r ./tests/requirements.txt
@@ -21,7 +21,7 @@
command: |
unset DISPLAY
mkdir output && mkdir results && mkdir test-reports
coverage run --source imsegm -m py.test imsegm tests -v --doctest-modules --junitxml=test-reports/pytest_junit.xml
coverage run --source imsegm -m py.test imsegm tests -v --junitxml=test-reports/pytest_junit.xml
coverage report
python setup.py check --metadata --strict
@@ -149,6 +149,6 @@ workflows:
- Formatting
- Py2-Tests
- Py3-Tests
- Py2-Experiments
- Py3-Experiments
#- Py2-Experiments
#- Py3-Experiments
- Build-Docs
2 changes: 1 addition & 1 deletion experiments_ovary_detect/run_ellipse_cut_scale.py
@@ -87,7 +87,7 @@ def perform_stage(df_group, stage, path_images, path_out):
stat_a = NORM_FUNC(df_group['ellipse_a'])
stat_b = NORM_FUNC(df_group['ellipse_b'])
norm_size = (int(stat_b), int(stat_a))
logging.info('normal dimension is %r' % norm_size)
logging.info('normal dimension is {}'.format(norm_size))

path_out_stage = os.path.join(path_out, str(stage))
if not os.path.isdir(path_out_stage):
2 changes: 1 addition & 1 deletion handling_annotations/run_overlap_images_segms.py
@@ -84,7 +84,7 @@ def visualise_overlap(path_img, path_seg, path_out,
segm_alpha = tl_visu.norm_aplha(segm_alpha)

if b_relabel:
seg, _, _ = segmentation.relabel_sequential(seg)
seg, _, _ = segmentation.relabel_sequential(seg.copy())

if img.ndim == 2: # for gray images of ovary
img = np.rollaxis(np.tile(img, (3, 1, 1)), 0, 3)
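
The re-label fix above passes a defensive copy into `relabel_sequential` so the caller's segmentation array is left untouched. A minimal sketch of the call on toy data (assuming skimage's usual three-value return, as used in the diff):

```python
import numpy as np
from skimage import segmentation

seg = np.array([[0, 0, 5],
                [5, 9, 9]])
# Labels are compacted to a consecutive range; background (0) is preserved.
relabeled, fw_map, inv_map = segmentation.relabel_sequential(seg.copy())
print(relabeled)
# [[0 0 1]
#  [1 2 2]]
```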
3 changes: 1 addition & 2 deletions imsegm/annotation.py
@@ -279,8 +279,7 @@ def quantize_image_nearest_color(img, colors):

def image_inpaint_pixels(img, valid_mask):
assert img.shape == valid_mask.shape, \
'image size %r and mask size %r should be equal' \
% (img.shape, valid_mask.shape)
'image size %r and mask size %r should be equal' % (img.shape, valid_mask.shape)
coords = np.array(np.nonzero(valid_mask)).T
values = img[valid_mask]
it = interpolate.NearestNDInterpolator(coords, values)
4 changes: 2 additions & 2 deletions imsegm/classification.py
@@ -508,7 +508,7 @@ def feature_scoring_selection(features, labels, names=None, path_out=''):
>>> import shutil
>>> shutil.rmtree(path_out, ignore_errors=True)
"""
logging.info('Feature selection for %r', names)
logging.info('Feature selection for %s', names)
features = np.array(features) if not isinstance(features, np.ndarray) else features
labels = np.array(labels) if not isinstance(labels, np.ndarray) else labels
logging.debug('Features: %r and labels: %r', features.shape, labels.shape)
@@ -571,7 +571,7 @@ def save_classifier(path_out, classif, clf_name, params, feature_names=None,
'TESTINNG'
>>> os.remove(p_clf)
"""
assert os.path.isdir(path_out), 'missing folder: %r' % path_out
assert os.path.isdir(path_out), 'missing folder: %s' % path_out
dict_classif = {
'params': params,
'name': clf_name,
7 changes: 4 additions & 3 deletions imsegm/graph_cuts.py
@@ -5,14 +5,15 @@
"""

import logging
from warnings import warn

import numpy as np

try:
from gco import cut_general_graph
except Exception:
print('WARNING: Missing Graph-Cut (GCO) library,'
' please install it from https://github.com/Borda/pyGCO.')
except ImportError:
warn('Missing Graph-Cut (GCO) library,'
' please install it from https://github.com/Borda/pyGCO.')
from skimage import filters
from sklearn import metrics, preprocessing
from sklearn import pipeline, cluster, mixture, decomposition
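
Replacing the bare `print` with `warnings.warn` (and catching only `ImportError`) means downstream code can control how the missing-GCO message is handled. A hedged sketch of how a consumer script could promote it to a hard failure:

```python
import warnings

# Turn the missing-GCO UserWarning into an exception so an absent optional
# dependency fails fast in environments where graph cuts are required.
with warnings.catch_warnings():
    warnings.simplefilter("error", UserWarning)
    import imsegm.graph_cuts  # noqa: F401  -- warns (here: raises) if pyGCO is missing
```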
7 changes: 4 additions & 3 deletions imsegm/region_growing.py
@@ -7,6 +7,7 @@
"""

import logging
from warnings import warn

import numpy as np
from scipy import stats, ndimage, interpolate
@@ -15,9 +16,9 @@

try:
from gco import cut_general_graph, cut_grid_graph
except Exception:
print('WARNING: Missing Graph-Cut (GCO) library,'
' please install it from https://github.com/Borda/pyGCO.')
except ImportError:
warn('Missing Graph-Cut (GCO) library,'
' please install it from https://github.com/Borda/pyGCO.')

from imsegm.graph_cuts import MAX_PAIRWISE_COST, get_vertexes_edges, compute_spatial_dist
from imsegm.labeling import histogram_regions_labels_norm
4 changes: 2 additions & 2 deletions imsegm/utilities/data_io.py
@@ -173,8 +173,8 @@ def load_landmarks_csv(path_file):
assert os.path.exists(path_file), 'missing "%s"' % path_file
df = pd.read_csv(path_file, index_col=0)
landmarks = df[COLUMNS_COORDS].values.tolist()
logging.debug(' load_landmarks_csv (%i): \n%r', len(landmarks),
np.asarray(landmarks).astype(int).tolist())
logging.debug(' load_landmarks_csv (%i): \n%r',
len(landmarks), np.asarray(landmarks).astype(int).tolist())
return landmarks


2 changes: 1 addition & 1 deletion imsegm/utilities/experiments.py
@@ -392,7 +392,7 @@ def __init__(self, wrap_func, iterate_vals, nb_workers=CPU_COUNT, desc='',
def __iter__(self):
tqdm_bar = None
if self.desc is not None:
desc = '%r @%i-threads' % (self.desc, self.nb_workers)
desc = '%s @%i-threads' % (self.desc, self.nb_workers)
tqdm_bar = tqdm.tqdm(total=len(self), desc=desc)

if self.nb_workers > 1:
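
The `%r` → `%s` changes in the log and progress messages simply drop the extra quoting that `repr()` adds around string values; a quick illustration with a hypothetical description:

```python
desc = "Tiling images"
print('%r @%i-threads' % (desc, 4))  # 'Tiling images' @4-threads  (repr keeps the quotes)
print('%s @%i-threads' % (desc, 4))  # Tiling images @4-threads
```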
2 changes: 1 addition & 1 deletion requirements.txt
@@ -3,7 +3,7 @@ Cython >= 0.28 # 0.27 fails with python 3.7
numpy >= 1.13.3 # version 1.16 breaks skimage 0.14
scipy >= 1.0
pandas >= 0.19
pillow >= 2.1.0, < 7 # fail loading JPG images
pillow >= 4.0, < 7 # fail loading JPG images
matplotlib >= 2.0.2
scikit-learn >= 0.18.1
scikit-image >= 0.13.0
1 change: 1 addition & 0 deletions setup.cfg
@@ -17,6 +17,7 @@ verbose = 2
# max-complexity = 10

[tool:pytest]
addopts = --doctest-modules
log_cli = 1
log_cli_level = CRITICAL
#log_cli_format = %(message)s
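
With `addopts = --doctest-modules` now set in `[tool:pytest]`, a plain `pytest` run also executes the examples embedded in docstrings, which is why the explicit `--doctest-modules` flag could be dropped from the Travis and CircleCI commands. A hypothetical example of the kind of docstring test this option collects:

```python
def add_one(x):
    """Increment a number.

    >>> add_one(41)
    42
    """
    return x + 1
```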
