This repository was archived by the owner on Aug 28, 2025. It is now read-only.
Merged
52 changes: 31 additions & 21 deletions .github/workflows/ci_docs.yml
@@ -10,11 +10,19 @@ on: # Trigger the workflow on push or pull request

concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}

defaults:
run:
shell: bash

jobs:
build-docs:
make-docs:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
check: ["html", "linkcheck"]
env:
PUB_BRANCH: publication
PATH_DATASETS: ${{ github.workspace }}/.datasets
@@ -35,16 +43,17 @@ jobs:
key: pip-${{ hashFiles('requirements.txt') }}-${{ hashFiles('_requirements/docs.txt') }}
restore-keys: pip-

- name: Install dependencies
- name: Install Texlive & tree
run: |
sudo apt-get update --fix-missing
sudo apt-get install -y tree
# install Texlive, see https://linuxconfig.org/how-to-install-latex-on-ubuntu-20-04-focal-fossa-linux
sudo apt-get install -y cmake pandoc texlive-latex-extra dvipng texlive-pictures
sudo apt-get install -y cmake tree pandoc texlive-latex-extra dvipng texlive-pictures

- name: Install dependencies
run: |
pip --version
pip install -q -r requirements.txt -r _requirements/docs.txt
pip list
shell: bash

- name: Process folders
run: |
@@ -54,21 +63,21 @@ jobs:
python .actions/assistant.py group-folders master-diff.txt
printf "Changed folders:\n"
cat changed-folders.txt
shell: bash

- name: ">> output"
id: changed
run: python -c "lines = open('changed-folders.txt').readlines(); print(f'::set-output name=nb_dirs::{len(lines)}')"
- name: Count changed notebooks
run: python -c "lines = open('changed-folders.txt').readlines(); print(f'NB_DIRS={len(lines)}')" >> $GITHUB_ENV

- uses: oleksiyrudenko/gha-git-credentials@v2.1
with:
token: "${{ secrets.GITHUB_TOKEN }}"
token: ${{ secrets.GITHUB_TOKEN }}
global: true
- name: Sync to pub
run: git merge -s resolve origin/$PUB_BRANCH

- name: Generate notebooks
if: steps.changed.outputs.nb_dirs != 0
if: ${{ env.NB_DIRS != 0 }}
env:
DRY_RUN: 1
run: |
# second half with || [...] is needed for reading the last line
while read -r line || [ -n "$line" ]; do
@@ -77,12 +86,9 @@ jobs:
cat .actions/_ipynb-render.sh
bash .actions/_ipynb-render.sh
done <<< $(cat changed-folders.txt)
env:
DRY_RUN: 1
shell: bash

- name: Copy notebooks
if: steps.changed.outputs.nb_dirs != 0
if: ${{ env.NB_DIRS != 0 }}
run: |
# second half with || [...] is needed for reading the last line
while read -r line || [ -n "$line" ]; do
@@ -91,22 +97,26 @@ jobs:
cp .notebooks/${line}.ipynb changed-notebooks/${dir}/
done <<< $(cat changed-folders.txt)
tree changed-notebooks
shell: bash

- uses: actions/upload-artifact@v3
if: steps.changed.outputs.nb_dirs != 0
if: ${{ matrix.check == 'html' && env.NB_DIRS != 0 }}
with:
name: notebooks-${{ github.sha }}
path: changed-notebooks/

- name: Link check
working-directory: ./_docs
if: ${{ matrix.check == 'linkcheck' }}
run: make linkcheck --jobs $(nproc) --debug SPHINXOPTS="--keep-going"

- name: Make Documentation
working-directory: ./_docs
run: make html --jobs $(nproc) --debug SPHINXOPTS="-W --keep-going" linkcheck
if: ${{ matrix.check == 'html' }}
run: make html --jobs $(nproc) --debug SPHINXOPTS="-W --keep-going"

- name: Upload built docs
if: ${{ matrix.check == 'html' }}
uses: actions/upload-artifact@v3
with:
name: docs-html-${{ github.sha }}
path: _docs/build/html/
# Use always() to always run this step to publish test results when there are test failures
if: success()
15 changes: 15 additions & 0 deletions .github/workflows/ci_schema.yml
@@ -0,0 +1,15 @@
name: Check Schema

on:
push:
branches: [main]
pull_request:
branches: [main]

jobs:
check:
uses: Lightning-AI/utilities/.github/workflows/check-schema.yml@main
with:
# skip azure due to the wrong schema file by MSFT
# https://github.com/Lightning-AI/lightning-flash/pull/1455#issuecomment-1244793607
azure-dir: ".azure"
3 changes: 2 additions & 1 deletion .github/workflows/ci_test-acts.yml
@@ -2,7 +2,8 @@ name: CI internal

# see: https://help.github.com/en/actions/reference/events-that-trigger-workflows
on: # Trigger the workflow on push or pull request, but only for the main branch
push: {}
push:
branches: [main]
pull_request:
branches: [main]

@@ -172,8 +172,7 @@
# The trick is that we approximate $Z_{\theta}$ by a single Monte-Carlo sample.
# This gives us the exact same objective as written above.
#
# Visually, we can look at the objective as follows (figure credit
# - [Stefano Ermon and Aditya Grover](https://deepgenerativemodels.github.io/assets/slides/cs236_lecture11.pdf)):
# Visually, we can look at the objective as follows (figure credit - Stefano Ermon and Aditya Grover: lecture cs236/11):
#
# <center width="100%"><img src="contrastive_divergence.svg" width="700px"></center>
#
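The comment above describes approximating the intractable log-partition term $\log Z_{\theta}$ with a single Monte-Carlo sample from the model, which yields the contrastive-divergence objective: lower the energy of real data while raising the energy of model samples. A minimal NumPy sketch of that objective, using a hypothetical toy quadratic energy (the function names and distributions here are illustrative, not from the tutorial):

```python
import numpy as np

def energy(theta, x):
    # Toy quadratic energy E_theta(x) = theta * x^2 (hypothetical model).
    return theta * x ** 2

def contrastive_divergence_loss(theta, x_real, x_sampled):
    # log Z_theta is approximated by a single batch of Monte-Carlo samples
    # x_sampled drawn from the model; the resulting contrastive objective
    # pushes the energy of real data down and of model samples up.
    return energy(theta, x_real).mean() - energy(theta, x_sampled).mean()

rng = np.random.default_rng(0)
x_real = rng.normal(0.0, 0.5, size=128)  # "data", concentrated near 0
x_fake = rng.normal(0.0, 2.0, size=128)  # model samples, more spread out
loss = contrastive_divergence_loss(1.0, x_real, x_fake)
```

Since the real data sits in a lower-energy region than the model samples, the loss is negative here; training would update `theta` to widen that gap.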
Expand Down Expand Up @@ -206,8 +205,7 @@
# Modeling the probability distribution for sampling new data is not the only application of energy-based models.
# Any application which requires us to compare two elements is much simpler to learn
# because we just need to go for the higher energy.
# A couple of examples are shown below (figure credit
# - [Stefano Ermon and Aditya Grover](https://deepgenerativemodels.github.io/assets/slides/cs236_lecture11.pdf)).
# A couple of examples are shown below (figure credit - Stefano Ermon and Aditya Grover: lecture cs236/11).
# A classification setup like object recognition or sequence labeling can be considered as an energy-based
# task as we just need to find the $Y$ input that minimizes the output $E(X, Y)$ (hence maximizes probability).
# Similarly, a popular application of energy-based models is denoising of images.
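The comparison-based view above, i.e. classification as finding the label $Y$ that minimizes $E(X, Y)$, can be sketched in a few lines. The linear energy and the weight matrix below are hypothetical placeholders, not part of the tutorial's model:

```python
import numpy as np

def joint_energy(x, y, W):
    # Hypothetical linear energy E(x, y) = -(x . W[:, y]);
    # lower energy means x and label y are more compatible.
    return -(x @ W)[y]

W = np.array([[2.0, -1.0],
              [0.5,  3.0]])       # toy weights, one column per label
x = np.array([1.0, 0.0])          # toy input

energies = [joint_energy(x, y, W) for y in range(W.shape[1])]
y_hat = int(np.argmin(energies))  # predicted label = lowest-energy Y
```

Here prediction never requires the partition function: comparing energies across candidate labels is enough, which is exactly why such comparison tasks are simpler than sampling.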