Commit: Bunch of small refactorings and updates

Ayaz Salikhov committed Apr 17, 2021
1 parent e28c630 commit bf9660a
Showing 13 changed files with 32 additions and 36 deletions.
8 changes: 4 additions & 4 deletions .pre-commit-config.yaml
@@ -1,12 +1,12 @@
 ---
 repos:
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v3.1.0
+    rev: v3.4.0
     hooks:
       - id: check-yaml
         files: .*\.(yaml|yml)$
   - repo: https://github.com/adrienverge/yamllint.git
-    rev: v1.23.0
+    rev: v1.26.1
     hooks:
       - id: yamllint
         args: ['-d {extends: relaxed, rules: {line-length: disable}}', '-s']
@@ -17,10 +17,10 @@ repos:
       - id: bashate
         args: ['--ignore=E006']
   - repo: https://gitlab.com/pycqa/flake8
-    rev: 3.8.3
+    rev: 3.9.1
     hooks:
       - id: flake8
   - repo: https://github.com/pre-commit/mirrors-autopep8
-    rev: v1.5.4
+    rev: v1.5.6
     hooks:
       - id: autopep8
2 changes: 1 addition & 1 deletion .readthedocs.yml
@@ -12,6 +12,6 @@ sphinx:
 formats: all

 python:
-  version: 3.7
+  version: 3.8
   install:
     - requirements: requirements-dev.txt
4 changes: 3 additions & 1 deletion Makefile
@@ -90,7 +90,7 @@ git-commit: ## commit outstanding git changes and push to remote
 	@git config --global user.name "GitHub Actions"
 	@git config --global user.email "actions@users.noreply.github.com"

-	@echo "Publishing outstanding changes in $(LOCAL_PATH) to $(GITHUB_REPOSITORY)"
+	@echo "Publishing outstanding changes in $(LOCAL_PATH) to $(GITHUB_REPOSITORY)"
 	@cd $(LOCAL_PATH) && \
 	git remote add publisher https://$(GITHUB_TOKEN)@github.com/$(GITHUB_REPOSITORY).git && \
 	git checkout master && \
@@ -152,6 +152,8 @@ pull/%: DARGS?=
 pull/%: ## pull a jupyter image
 	docker pull $(DARGS) $(OWNER)/$(notdir $@)

+pull-all: $(foreach I,$(ALL_IMAGES),pull/$(I) ) ## pull all images
+
 push/%: DARGS?=
 push/%: ## push all tags for a jupyter image
 	docker push --all-tags $(DARGS) $(OWNER)/$(notdir $@)
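The new `pull-all` target simply fans `pull/%` out over `ALL_IMAGES` with `$(foreach ...)`. For readers less fluent in Make, a rough Python equivalent of what it expands to (the image list here is an illustrative subset, not the real `ALL_IMAGES`):

```python
import subprocess

OWNER = "jupyter"
# Illustrative subset; the Makefile computes ALL_IMAGES itself.
ALL_IMAGES = ["base-notebook", "minimal-notebook", "scipy-notebook"]

# Each pull/% target runs `docker pull $(DARGS) $(OWNER)/$(notdir $@)`.
for image in ALL_IMAGES:
    subprocess.run(["docker", "pull", f"{OWNER}/{image}"], check=True)
```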
2 changes: 1 addition & 1 deletion README.md
@@ -31,7 +31,7 @@ in working on the project.
 ## Jupyter Notebook Deprecation Notice

 Following [Jupyter Notebook notice](https://github.com/jupyter/notebook#notice), we encourage users to transition to JupyterLab.
-This can be done by passing the environment variable `JUPYTER_ENABLE_LAB=yes` at container startup,
+This can be done by passing the environment variable `JUPYTER_ENABLE_LAB=yes` at container startup,
 more information is available in the [documentation](https://jupyter-docker-stacks.readthedocs.io/en/latest/using/common.html#docker-options).

 At some point, JupyterLab will become the default for all of the Jupyter Docker stack images, however a new environment variable will be introduced to switch back to Jupyter Notebook if needed.
2 changes: 1 addition & 1 deletion base-notebook/Dockerfile
@@ -79,7 +79,7 @@ RUN chmod a+rx /usr/local/bin/fix-permissions
 # hadolint ignore=SC2016
 RUN sed -i 's/^#force_color_prompt=yes/force_color_prompt=yes/' /etc/skel/.bashrc && \
     # Add call to conda init script see https://stackoverflow.com/a/58081608/4413446
-    echo 'eval "$(command conda shell.bash hook 2> /dev/null)"' >> /etc/skel/.bashrc
+    echo 'eval "$(command conda shell.bash hook 2> /dev/null)"' >> /etc/skel/.bashrc

 # Create NB_USER with name jovyan user with UID=1000 and in the 'users' group
 # and make sure these dirs are writable by the `users` group.
4 changes: 2 additions & 2 deletions datascience-notebook/Dockerfile
@@ -62,10 +62,10 @@ RUN conda install --quiet --yes \
     'r-caret=6.0*' \
     'r-crayon=1.4*' \
     'r-devtools=2.4*' \
-    'r-forecast=8.14*' \
+    'r-forecast=8.14*' \
     'r-hexbin=1.28*' \
     'r-htmltools=0.5*' \
-    'r-htmlwidgets=1.5*' \
+    'r-htmlwidgets=1.5*' \
     'r-irkernel=1.1*' \
     'r-nycflights13=1.0*' \
     'r-randomforest=4.6*' \
2 changes: 1 addition & 1 deletion docs/using/common.md
@@ -116,7 +116,7 @@ mamba install some-package

 ### Using alternative channels

-Conda is configured by default to use only the [`conda-forge`](https://anaconda.org/conda-forge) channel.
+Conda is configured by default to use only the [`conda-forge`](https://anaconda.org/conda-forge) channel.
 However, alternative channels can be used either ad hoc, by overriding the default channel in the installation command, or persistently, by configuring `conda` to use different channels.
 The examples below show how to use the [anaconda default channels](https://repo.anaconda.com/pkgs/main) instead of `conda-forge` to install packages.

2 changes: 1 addition & 1 deletion docs/using/recipes.md
@@ -130,7 +130,7 @@ RUN $CONDA_DIR/envs/${conda_env}/bin/python -m ipykernel install --user --name=$
     fix-permissions /home/$NB_USER

 # any additional pip installs can be added by uncommenting the following line
-# RUN $CONDA_DIR/envs/${conda_env}/bin/pip install
+# RUN $CONDA_DIR/envs/${conda_env}/bin/pip install

 # prepend conda environment to path
 ENV PATH $CONDA_DIR/envs/${conda_env}/bin:$PATH
2 changes: 1 addition & 1 deletion docs/using/running.md
@@ -76,7 +76,7 @@ Executing the command: jupyter notebook

 Pressing `Ctrl-C` shuts down the notebook server and immediately destroys the Docker container. Files written to `~/work` in the container remain intact. Any other changes made in the container are lost.

-**Example 3** This command pulls the `jupyter/all-spark-notebook` image currently tagged `latest` from Docker Hub if an image tagged `latest` is not already present on the local host. It then starts a container named `notebook` running a JupyterLab server and exposes the server on a randomly selected port.
+**Example 3** This command pulls the `jupyter/all-spark-notebook` image currently tagged `latest` from Docker Hub if an image tagged `latest` is not already present on the local host. It then starts a container named `notebook` running a JupyterLab server and exposes the server on a randomly selected port.

 ```
 docker run -d -P --name notebook jupyter/all-spark-notebook
 ```
8 changes: 4 additions & 4 deletions docs/using/specifics.md
@@ -82,7 +82,7 @@ sc <- sparkR.session("local")
 # Sum of the first 100 whole numbers
 sdf <- createDataFrame(list(1:100))
 dapplyCollect(sdf,
-              function(x)
+              function(x)
               { x <- sum(x)}
               )
 # 5050
@@ -102,7 +102,7 @@ conf$spark.sql.catalogImplementation <- "in-memory"
 sc <- spark_connect(master = "local", config = conf)

 # Sum of the first 100 whole numbers
-sdf_len(sc, 100, repartition = 1) %>%
+sdf_len(sc, 100, repartition = 1) %>%
     spark_apply(function(e) sum(e))
 # 5050
 ```
@@ -171,7 +171,7 @@ sc <- sparkR.session("spark://master:7077")
 # Sum of the first 100 whole numbers
 sdf <- createDataFrame(list(1:100))
 dapplyCollect(sdf,
-              function(x)
+              function(x)
               { x <- sum(x)}
               )
 # 5050
@@ -190,7 +190,7 @@ conf$spark.sql.catalogImplementation <- "in-memory"
 sc <- spark_connect(master = "spark://master:7077", config = conf)

 # Sum of the first 100 whole numbers
-sdf_len(sc, 100, repartition = 1) %>%
+sdf_len(sc, 100, repartition = 1) %>%
     spark_apply(function(e) sum(e))
 # 5050
 ```
2 changes: 1 addition & 1 deletion examples/docker-compose/README.md
@@ -85,7 +85,7 @@ NAME=your-notebook PORT=9001 WORK_VOLUME=our-work notebook/up.sh

 ### How do I run over HTTPS?

-To run the notebook server with a self-signed certificate, pass the `--secure` option to the `up.sh` script. You must also provide a password, which will be used to secure the notebook server. You can specify the password by setting the `PASSWORD` environment variable, or by passing it to the `up.sh` script.
+To run the notebook server with a self-signed certificate, pass the `--secure` option to the `up.sh` script. You must also provide a password, which will be used to secure the notebook server. You can specify the password by setting the `PASSWORD` environment variable, or by passing it to the `up.sh` script.

 ```bash
 PASSWORD=a_secret notebook/up.sh --secure
 ```
16 changes: 7 additions & 9 deletions test/helpers.py
@@ -75,7 +75,7 @@ def specified_packages(self):
         if self.specs is None:
             LOGGER.info("Grabbing the list of specifications ...")
             self.specs = CondaPackageHelper._packages_from_json(
-                self._execute_command(CondaPackageHelper._conda_export_command(True))
+                self._execute_command(CondaPackageHelper._conda_export_command(from_history=True))
             )
         return self.specs
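Replacing the bare `True` with `from_history=True` names the flag at the call site. A minimal sketch of what the helper plausibly builds — the exact command string is an assumption; the real `_conda_export_command` lives elsewhere in `test/helpers.py`:

```python
from typing import List


def conda_export_command(from_history: bool) -> List[str]:
    """Build a `conda env export` command; `from_history` restricts the
    output to packages the user explicitly requested."""
    cmd = ["conda", "env", "export", "--no-build", "--json"]
    if from_history:
        cmd.append("--from-history")
    return cmd


# The keyword makes the boolean self-documenting at the call site:
conda_export_command(from_history=True)
```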

@@ -130,7 +130,7 @@ def _extract_available(lines):
         return ddict

     def check_updatable_packages(self, specifications_only=True):
-        """Check the updatables packages including or not dependencies"""
+        """Check the updatable packages including or not dependencies"""
         specs = self.specified_packages()
         installed = self.installed_packages()
         available = self.available_packages()
@@ -145,9 +145,10 @@ check_updatable_packages(self, specifications_only=True):
                 current = min(inst_vs, key=CondaPackageHelper.semantic_cmp)
                 newest = avail_vs[-1]
                 if avail_vs and current != newest:
-                    if CondaPackageHelper.semantic_cmp(
-                        current
-                    ) < CondaPackageHelper.semantic_cmp(newest):
+                    if (
+                        CondaPackageHelper.semantic_cmp(current) <
+                        CondaPackageHelper.semantic_cmp(newest)
+                    ):
                         self.comparison.append(
                             {"Package": pkg, "Current": current, "Newest": newest}
                         )
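`semantic_cmp` serves as a sort key so version strings compare numerically rather than lexicographically (plain string ordering would put "1.10" before "1.9"). A sketch of the kind of key function involved, assuming it splits versions into numeric and alphabetic chunks around the `try_int` helper visible in the next hunk header — the real implementation may differ:

```python
import re


def semantic_cmp(version_string):
    """Sort key: turn "1.10.0" into (1, 10, 0) so versions compare
    chunk by chunk instead of character by character."""

    def try_int(chunk):
        try:
            return int(chunk)
        except ValueError:
            # Non-numeric chunks (e.g. "rc1") fall back to a
            # string-derived ordinal so the tuple stays comparable.
            return sum(ord(c) for c in chunk)

    chunks = re.findall(r"[A-Za-z]+|\d+", version_string)
    return tuple(try_int(c) for c in chunks)


# Plain string ordering would call "1.10.0" the minimum; this does not:
assert min(["1.9.0", "1.10.0"], key=semantic_cmp) == "1.9.0"
```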
@@ -180,10 +181,7 @@ def try_int(version_str):

     def get_outdated_summary(self, specifications_only=True):
         """Return a summary of outdated packages"""
-        if specifications_only:
-            nb_packages = len(self.specs)
-        else:
-            nb_packages = len(self.installed)
+        nb_packages = len(self.specs if specifications_only else self.installed)
         nb_updatable = len(self.comparison)
         updatable_ratio = nb_updatable / nb_packages
         return f"{nb_updatable}/{nb_packages} ({updatable_ratio:.0%}) packages could be updated"
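The `:.0%` format specifier multiplies the ratio by 100 and renders it as a whole-number percentage, so no manual arithmetic is needed:

```python
nb_updatable, nb_packages = 10, 40
print(f"{nb_updatable}/{nb_packages} ({nb_updatable / nb_packages:.0%}) packages could be updated")
# 10/40 (25%) packages could be updated
```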
14 changes: 5 additions & 9 deletions test/test_packages.py
@@ -7,12 +7,12 @@
 This test module tests if R and Python packages installed can be imported.
 It's a basic test aiming to prove that each package is working properly.

-The goal is to detect import errors that can be caused by incompatibilities between packages for example:
+The goal is to detect import errors that can be caused by incompatibilities between packages, for example:

 - #1012: issue importing `sympy`
 - #966: issue importing `pyarrow`

-This module checks dynmamically, through the `CondaPackageHelper`, only the specified packages i.e. packages requested by `conda install` in the `Dockerfiles`.
+This module checks dynamically, through the `CondaPackageHelper`, only the specified packages, i.e. packages requested by `conda install` in the `Dockerfile`s.
 This means that it does not check dependencies. This choice is a tradeoff to cover the main requirements while achieving a reasonable test duration.
 However, it could easily be changed (or extended) to also cover dependencies, using `package_helper.installed_packages()` instead of `package_helper.specified_packages()`.
@@ -86,10 +86,7 @@ def packages(package_helper):

def package_map(package):
"""Perform a mapping between the python package name and the name used for the import"""
_package = package
if _package in PACKAGE_MAPPING:
_package = PACKAGE_MAPPING.get(_package)
return _package
return PACKAGE_MAPPING.get(package, package)


def excluded_package_predicate(package):
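`dict.get(key, default)` collapses `package_map`'s check-then-lookup dance into a single expression. A quick illustration with a made-up subset of `PACKAGE_MAPPING` (the real mapping is defined in `test/test_packages.py`):

```python
# Hypothetical subset: conda package name -> Python import name.
PACKAGE_MAPPING = {"scikit-learn": "sklearn", "beautifulsoup4": "bs4"}


def package_map(package):
    """Map a conda package name to its import name, defaulting to itself."""
    return PACKAGE_MAPPING.get(package, package)


assert package_map("scikit-learn") == "sklearn"
assert package_map("numpy") == "numpy"  # unmapped names pass through
```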
@@ -136,9 +133,8 @@ def _import_packages(package_helper, filtered_packages, check_function, max_failures):
     for package in filtered_packages:
         LOGGER.info(f"Trying to import {package}")
         try:
-            assert (
-                check_function(package_helper, package) == 0
-            ), f"Package [{package}] import failed"
+            assert check_function(package_helper, package) == 0, \
+                f"Package [{package}] import failed"
         except AssertionError as err:
             failures[package] = err
     if len(failures) > max_failures:
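The surrounding function (truncated here) collects failures instead of failing fast, so one broken import cannot mask the rest. A minimal sketch of that pattern with the names simplified:

```python
def import_all(packages, check, max_failures=0):
    """Try every package, record each failure, and raise only at the
    end if more than `max_failures` imports broke."""
    failures = {}
    for package in packages:
        try:
            assert check(package) == 0, f"Package [{package}] import failed"
        except AssertionError as err:
            failures[package] = err
    if len(failures) > max_failures:
        raise AssertionError(failures)
```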