New documentation frontend #203

Merged · 11 commits · Mar 22, 2022
34 changes: 0 additions & 34 deletions .github/workflows/Pollen.yml

This file was deleted.

55 changes: 55 additions & 0 deletions .github/workflows/pollenbuild.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,55 @@
name: pollenbuild

on:
  push:
    branches: ['pollen']

jobs:
  pollen:
    name: "Pollen documentation build"
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          ref: pollen
          path: data
      - uses: actions/checkout@v2
        with:
          ref: "gh-pages"
          path: "pages"
      - uses: actions/checkout@v2
        with:
          repository: lorenzoh/pollenjl-frontend
          path: frontend
          ref: main
      - name: Move Pollen data to static folder
        run: |
          ls
          rm -rf data/.git
          cp -r data/ frontend/static/
      - uses: actions/setup-node@v2
      - name: Install dependencies
        run: |
          cd frontend
          npm install
      - name: Build
        run: |
          cd frontend
          npm run build
      - name: Build search index
        run: |
          cd frontend
          cat static/data/dev/documents.json | node buildindex.cjs > ../data/dev/searchindex.json
      - name: Deploy changes
        run: |
          ls -lt
          rm -r frontend/build/data
          cp -r frontend/build/** pages
          cp -r data pages/
          cd pages
          git config user.name github-actions
          git config user.email github-actions@github.com
          git add -f .
          git commit -m "Deploy documentation (Pollen.jl)"
          git push

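The "Build search index" step pipes `documents.json` through `buildindex.cjs`, a script that is not part of this diff. As a rough illustration of what such a step produces (the real script likely uses a proper search library; the `{id, text}` document shape and the naive inverted index here are assumptions, not the actual format):

```javascript
// Naive stand-in for buildindex.cjs: build an inverted index mapping
// lowercase terms to the ids of the documents that contain them.
// The {id, text} document shape is an assumption for illustration only.
function buildIndex(documents) {
  const index = {};
  for (const doc of documents) {
    for (const term of doc.text.toLowerCase().split(/\W+/)) {
      if (!term) continue;
      if (!index[term]) index[term] = [];
      if (!index[term].includes(doc.id)) index[term].push(doc.id);
    }
  }
  return index;
}
```

In the workflow this would sit behind the `cat … | node buildindex.cjs > …` pipe: parse the JSON from stdin, call `buildIndex`, and write the resulting index as JSON to stdout for the frontend's search feature.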
33 changes: 33 additions & 0 deletions .github/workflows/pollenexport.yml
@@ -0,0 +1,33 @@
name: pollenexport

on:
  push:
    branches: ['master']

jobs:
  pollen:
    name: "Pollen documentation export"
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/checkout@v2
        with:
          ref: pollen
          path: pollen
      - uses: julia-actions/setup-julia@latest
        with:
          version: '1.7'
      - name: "Install package and docs dependencies"
        run: |
          julia --color=yes --project=docs/ -e 'using Pkg; Pkg.add([Pkg.PackageSpec(path="."), Pkg.PackageSpec(url="https://github.com/lorenzoh/ModuleInfo.jl"), Pkg.PackageSpec(url="https://github.com/lorenzoh/Pollen.jl", rev="frontend")]); Pkg.instantiate();'
      - name: Build
        run: |
          julia --color=yes --project=docs/ ./docs/make.jl ./pollen/dev/
      - name: Deploy
        run: |
          cd pollen
          git config user.name github-actions
          git config user.email github-actions@github.com
          git add .
          git commit -m "Build documentation data (Pollen.jl)"
          git push
5 changes: 5 additions & 0 deletions docs/Project.toml
@@ -2,12 +2,17 @@
BSON = "fbb218c0-5317-5bc6-957e-2ee96dd4b1f0"
CairoMakie = "13f3f980-e62b-5c42-98c6-ff1f3baf88f0"
Colors = "5ae59095-9a9b-59fe-a467-6f913c188581"
Crayons = "a8cc5b0e-0ffa-5ad4-8c14-923d3ee1735f"
DLPipelines = "e6530d7c-7faa-4ede-a0d6-9eff9baad396"
DataAugmentation = "88a5189c-e7ff-4f85-ac6b-e6158070f02e"
FilePathsBase = "48062228-2e41-5def-b9a4-89aafe57970f"
Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c"
FluxTraining = "7bf95e4d-ca32-48da-9824-f0dc5310474f"
ImageIO = "82e4d734-157c-48bb-816b-45c225c6df19"
ImageMagick = "6218d12a-5da1-5696-b52f-db25d2ecc6d1"
ImageShow = "4e3cecfd-b093-5904-9786-8bbb286a6a31"
Images = "916415d5-f1e6-5110-898d-aaa5f9f070e0"
ModuleInfo = "3c3ff5e7-c68c-4a09-80d1-9526a1e9878a"
Pollen = "c88717ad-5130-4874-a664-5a9aba5ec443"
StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"
TestImages = "5e47fb64-e119-507b-a336-dd2b206d9990"
8 changes: 4 additions & 4 deletions docs/api.md
@@ -37,16 +37,16 @@ Quickly download and load task data containers from the fastai dataset library.

{.tight}
- `load
- [`Datasets.DATASETS`](#)
- [`FastAI.Datasets.DATASETS`](#)

### Mid-level

Load and transform data containers.

{.tight}
- [`Datasets.datasetpath`](#)
- [`Datasets.FileDataset`](#)
- [`Datasets.TableDataset`](#)
- [`FastAI.Datasets.datasetpath`](#)
- [`FastAI.Datasets.FileDataset`](#)
- [`FastAI.Datasets.TableDataset`](#)
- [`mapobs`](#)
- [`groupobs`](#)
- [`joinobs`](#)
3 changes: 1 addition & 2 deletions docs/interfaces.md
@@ -18,7 +18,6 @@ Enables training and prediction. Prerequisite for other, optional learning task
- [`decodeŷ`](#)
- Optional tasks:
- [`shouldbatch`](#)
- [`encode!`](#) or both [`encodeinput!`](#) and [`encodetarget!`](#).
- Enables use of:
- [`taskdataset`](#)
- [`taskdataloaders`](#)
@@ -34,7 +33,7 @@ For visualizing observations and predictions using [Makie.jl](https://github.com
Convenience for creating [`Learner`](#)s.

{.tight}
- Required tasks:
- Required methods:
- [`tasklossfn`](#)
- [`taskmodel`](#)
- Enables use of:
2 changes: 1 addition & 1 deletion docs/learning_methods.md
@@ -166,7 +166,7 @@ Now, with a makeshift model, an optimizer and a loss function we can create a [`

{cell=main}
```julia
using FastAI: Flux
using FastAI, Flux

model = Chain(
Models.xresnet18(),
31 changes: 19 additions & 12 deletions docs/make.jl
@@ -1,16 +1,23 @@
using Pkg
using Pollen
using FastAI
using FastAI: Image
Image
using FluxTraining
using DLPipelines
import DataAugmentation
using FilePathsBase

import CairoMakie
# Create target folder
DIR = abspath(mkpath(ARGS[1]))

ENV["DATADEPS_ALWAYS_ACCEPT"] = "true"

refmodules = [FastAI, FluxTraining, DLPipelines, DataAugmentation, FastAI.Datasets, FastAI.Models]
project = Pollen.documentationproject(FastAI; refmodules = refmodules)
Pollen.fullbuild(project, Pollen.FileBuilder(Pollen.HTML(), p"dev/"))
# Create Project
project = include("project.jl")


@info "Rewriting documents..."
Pollen.rewritesources!(project)

@info "Writing to disk at \"$DIR\"..."
builder = Pollen.FileBuilder(
    Pollen.JSON(),
    DIR,
)
Pollen.build(
    builder,
    project,
)
28 changes: 28 additions & 0 deletions docs/project.jl
@@ -0,0 +1,28 @@
using Pollen
using Pkg
using Crayons
Crayons.COLORS[:nothing] = 67

using FastAI, Flux, FluxTraining
import DataAugmentation
m = FastAI
ms = [
    DataAugmentation,
    Flux,
    FluxTraining,
    m,
]


project = Project(
    Pollen.Rewriter[
        Pollen.DocumentFolder(pkgdir(m), prefix = "documents"),
        Pollen.ParseCode(),
        Pollen.ExecuteCode(),
        Pollen.PackageDocumentation(ms),
        Pollen.DocumentGraph(),
        Pollen.SearchIndex(),
        Pollen.SaveAttributes((:title,)),
        Pollen.LoadFrontendConfig(pkgdir(m))
    ],
)
26 changes: 8 additions & 18 deletions docs/serve.jl
@@ -1,23 +1,13 @@
using FastAI
import FastAI: Image
import CairoMakie
using Pollen
using FluxTraining
using DLPipelines
import DataAugmentation
using FilePathsBase
using Colors

function serve(lazy=true; kwargs...)
    refmodules = [FastAI, FluxTraining, DLPipelines, DataAugmentation, DataLoaders, FastAI.Datasets]
    project = Pollen.documentationproject(FastAI; refmodules, watchpackage=true, kwargs...)
    Pollen.serve(project, lazy=lazy)
end
serve()
# using MyPackage

##

# Create Project
project = include("project.jl")

#=
project = Pollen.documentationproject(FastAI; refmodules, inlineincludes = false, )
=#

@info "Rewriting documents..."
Pollen.rewritesources!(project)

Pollen.serve(project, lazy=get(ENV, "POLLEN_LAZY", "false") == "true", format = Pollen.JSON())
32 changes: 32 additions & 0 deletions docs/toc.json
@@ -0,0 +1,32 @@
{
  "Overview": "documents/README.md",
  "Setup": "documents/docs/setup.md",
  "Quickstart": "documents/notebooks/quickstart.ipynb",
  "Tutorials": {
    "Beginner": {
      "Introduction": "documents/docs/introduction.md",
      "Discovering functionality": "documents/docs/discovery.md"
    },
    "Intermediate": {
      "Data containers": "documents/docs/data_containers.md"
    },
    "Advanced": {
      "Unsupervised learning tasks": "documents/notebooks/vae.ipynb"
    }
  },
  "Computer vision": {
    "Tasks": {
      "Image segmentation": "documents/notebooks/imagesegmentation.ipynb",
      "Keypoint regression": "documents/notebooks/keypointregression.ipynb"
    },
    "How-To": {
      "Use augmentations": "documents/docs/augmentvision.md",
      "Presize vision datasets": "notebooks/presizing.ipynb"
    }
  },
  "Tabular data": {
    "Tasks": {
      "Tabular classification": "documents/notebooks/tabularclassification.ipynb"
    }
  }
}
8 changes: 4 additions & 4 deletions notebooks/how_to_visualize.ipynb
@@ -33,7 +33,7 @@
"tags": []
},
"source": [
"To add support for these to a learning task, you have to implement the plotting interface, consisting of [`plotsample!`](#), [`plotxy!`](#) and [`plotprediction!`](#)."
"To add support for these to a learning task, you have to implement the plotting interface for a block: [`showblock!`](#)."
]
},
{
@@ -139,7 +139,7 @@
}
],
"source": [
"plotsamples(task, samples)"
"showsamples(task, samples)"
]
},
{
@@ -161,7 +161,7 @@
}
],
"source": [
"plotbatch(task, xs, ys)"
"showbatch(task, xs, ys)"
]
},
{
@@ -183,7 +183,7 @@
}
],
"source": [
"plotpredictions(task, xs, ŷs, ys)"
"showpredictions(task, xs, ŷs, ys)"
]
}
],
2 changes: 1 addition & 1 deletion notebooks/keypointregression.ipynb
@@ -754,7 +754,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The loss is going down which is a good sign, but visualizing the predictions against the ground truth will give us a better idea of how well the model performs. We'll use [`plotpredictions`](#) to compare batches of encoded targets and model outputs. For this we can run the model on a batch from the validation dataset and see how it performs."
"The loss is going down which is a good sign, but visualizing the predictions against the ground truth will give us a better idea of how well the model performs. We'll use [`showoutputs`](#) to compare batches of encoded targets and model outputs. For this we can run the model on a batch from the validation dataset and see how it performs."
]
},
{
3 changes: 2 additions & 1 deletion src/FastAI.jl
@@ -5,7 +5,7 @@ using Base: NamedTuple
using Reexport
@reexport using FluxTraining
@reexport using DataLoaders
@reexport using Flux
using Flux

using Animations
import DataAugmentation
@@ -164,6 +164,7 @@ export
Training,
Validation,
Inference,
Context,

# blocks
Label,
3 changes: 2 additions & 1 deletion src/Tabular/Tabular.jl
@@ -24,7 +24,8 @@ import ..FastAI:

import DataAugmentation
import DataFrames: DataFrame
import Flux: Embedding, Chain, Dropout, Dense, Parallel
import Flux
import Flux: Embedding, Chain, Dropout, Dense, Parallel, BatchNorm
import PrettyTables
import Requires: @require
import ShowCases: ShowCase