diff --git a/.gitignore b/.gitignore
index 827884d..02fec36 100644
--- a/.gitignore
+++ b/.gitignore
@@ -5,3 +5,5 @@
 template.pdf
 *.egg-info
 _build
+Exports
+
diff --git a/README.md b/README.md
index 5f9a6cd..baa56b9 100644
--- a/README.md
+++ b/README.md
@@ -6,7 +6,7 @@ I use [pixi](https://pixi.sh) for managing the environment, so if you don't have
 - clone the repository and `cd` into it
 - run `pixi install`, which will install `typst` and `mystmd`.
 - run `pixi shell` to open a pixi shell (or prefix your commands with `pixi run`)
-
+ /Users/roaldarbol/Filen/Thesis/typst-thesis.md
 ### Typst
 To render the `typst` example, run:
 ```sh
diff --git a/_toc.yml b/_toc.yml
new file mode 100644
index 0000000..1601b34
--- /dev/null
+++ b/_toc.yml
@@ -0,0 +1,32 @@
+# Table of Contents
+#
+# Myst will respect:
+# 1. New pages
+#    - file: relative/path/to/page
+# 2. New sections without an associated page
+#    - title: Folder Title
+#      sections: ...
+# 3. New sections with an associated page
+#    - file: relative/path/to/page
+#      sections: ...
+#
+# Note: Titles defined on pages here are not recognized.
+#
+# This spec is based on the JupyterBook table of contents.
+# Learn more at https://jupyterbook.org/customize/toc.html
+
+format: jb-book
+root: examples/example-myst/00-Front-Matter/00-Front-Page.md
+chapters:
+  - title: Front Matter
+    sections:
+      - file: examples/example-myst/00-Front-Matter/01-Declaration.md
+      - file: examples/example-myst/00-Front-Matter/02-Abstract.md
+      - file: examples/example-myst/00-Front-Matter/03-Preface.md
+      - file: examples/example-myst/00-Front-Matter/04-Acknowledgements.md
+      - file: examples/example-myst/00-Front-Matter/05-Dedication.md
+      - file: examples/example-myst/00-Front-Matter/06-Glossary.md
+  - title: Chapters
+    sections:
+      - file: examples/example-myst/01-Chapters/01-Introduction.md
+      - file: examples/example-myst/01-Chapters/02-Chapter.md
diff --git a/examples/example-myst/_toc.yml b/examples/example-myst/_toc.yml
deleted file mode 100644
index ee11533..0000000
--- a/examples/example-myst/_toc.yml
+++ /dev/null
@@ -1,32 +0,0 @@
-# Table of Contents
-#
-# Myst will respect:
-# 1. New pages
-#    - file: relative/path/to/page
-# 2. New sections without an associated page
-#    - title: Folder Title
-#      sections: ...
-# 3. New sections with an associated page
-#    - file: relative/path/to/page
-#      sections: ...
-#
-# Note: Titles defined on pages here are not recognized.
-#
-# This spec is based on the JupyterBook table of contents.
-# Learn more at https://jupyterbook.org/customize/toc.html
-
-format: jb-book
-root: 00-Front-Matter/00-Front-Page
-chapters:
-  - title: Front Matter
-    sections:
-      - file: 00-Front-Matter/01-Declaration.md
-      - file: 00-Front-Matter/02-Abstract.md
-      - file: 00-Front-Matter/03-Preface.md
-      - file: 00-Front-Matter/04-Acknowledgements.md
-      - file: 00-Front-Matter/05-Dedication.md
-      - file: 00-Front-Matter/06-Glossary.md
-  - title: Chapters
-    sections:
-      - file: 01-Chapters/01-Introduction.md
-      - file: 01-Chapters/02-Chapter.md
diff --git a/examples/example-typst/main.pdf b/examples/example-typst/main.pdf
index e769aca..47ee7c3 100644
Binary files a/examples/example-typst/main.pdf and b/examples/example-typst/main.pdf differ
diff --git a/examples/example-typst/main.typ b/examples/example-typst/main.typ
index bdfa78b..fd9c177 100644
--- a/examples/example-typst/main.typ
+++ b/examples/example-typst/main.typ
@@ -5,31 +5,7 @@
   title: "vak: a neural network framework for researchers studying animal acoustic communication",
   subtitle: "This is my subtitle.",
   abstract: [
-How is speech like birdsong? What do we mean when we say an animal learns their vocalizations?
-Questions like these are answered by studying how animals communicate with sound.
-As in many other fields, the study of acoustic communication is being revolutionized by deep neural network models.
-These models enable answering questions that were previously impossible to address,
-in part because the models automate analysis of very large datasets. Acoustic communication researchers
-have developed multiple models for similar tasks, often implemented as research code with one of several libraries,
-such as Keras and Pytorch. This situation has created a real need for a framework
-that allows researchers to easily benchmark multiple models,
-and test new models, with their own data. To address this need, we developed vak (#link("https://github.com/vocalpy/vak")[https://github.com/vocalpy/vak]),
-a neural network framework designed for acoustic communication researchers.
-("vak" is pronounced like "talk" or "squawk" and was chosen
-for its similarity to the Latin root _voc_, as in "vocal".)
-Here we describe the design of the vak,
-and explain how the framework makes it easy for researchers to apply neural network models to their own data.
-We highlight enhancements made in version 1.0 that significantly improve user experience with the library.
-To provide researchers without expertise in deep learning access to these models,
-vak can be run via a command-line interface that uses configuration files.
-Vak can also be used directly in scripts by scientist-coders. To achieve this, vak adapts design patterns and
-an API from other domain-specific PyTorch libraries such as torchvision, with modules representing
-neural network operations, models, datasets, and transformations for pre- and post-processing.
-vak also leverages the Lightning library as a backend,
-so that vak developers and users can focus on the domain.
-We provide proof-of-concept results showing how vak can be used to
-test new models and compare existing models from multiple model families.
-In closing we discuss our roadmap for development and vision for the community of users.
+    How is speech like birdsong? What do we mean when we say an animal learns their vocalizations? Questions like these are answered by studying how animals communicate with sound. As in many other fields, the study of acoustic communication is being revolutionized by deep neural network models. These models enable answering questions that were previously impossible to address, in part because the models automate analysis of very large datasets. Acoustic communication researchers have developed multiple models for similar tasks, often implemented as research code with one of several libraries, such as Keras and Pytorch. This situation has created a real need for a framework that allows researchers to easily benchmark multiple models, and test new models, with their own data. To address this need, we developed vak (#link("https://github.com/vocalpy/vak")[https://github.com/vocalpy/vak]), a neural network framework designed for acoustic communication researchers. ("vak" is pronounced like "talk" or "squawk" and was chosen for its similarity to the Latin root _voc_, as in "vocal".) Here we describe the design of the vak, and explain how the framework makes it easy for researchers to apply neural network models to their own data. We highlight enhancements made in version 1.0 that significantly improve user experience with the library. To provide researchers without expertise in deep learning access to these models, vak can be run via a command-line interface that uses configuration files. Vak can also be used directly in scripts by scientist-coders. To achieve this, vak adapts design patterns and an API from other domain-specific PyTorch libraries such as torchvision, with modules representing neural network operations, models, datasets, and transformations for pre- and post-processing. vak also leverages the Lightning library as a backend, so that vak developers and users can focus on the domain. We provide proof-of-concept results showing how vak can be used to test new models and compare existing models from multiple model families. In closing we discuss our roadmap for development and vision for the community of users.
   ],
   date: datetime(
     year: 2023,
@@ -68,9 +44,10 @@ In closing we discuss our roadmap for development and vision for the community o
 )
 
 /* Written by MyST v1.1.37 */
-
 #set page(columns: 1, margin: (x: 1.5cm, y: 2cm),)
+
+// Here it needs to create each of the front matter sections with the
+= vak: a neural network framework for researchers studying animal acoustic communication
 #include "Chapter-1/Introduction.typ"
 #include "Chapter-1/Discussion.typ"
diff --git a/myst.yml b/myst.yml
index 805abcd..a79f72e 100644
--- a/myst.yml
+++ b/myst.yml
@@ -16,9 +16,19 @@ project:
   github: https://github.com/roaldarbol/thesis-template
   bibliography: []
   exports:
-  - format: typst
-    template: typst-thesis.typ
-    toc: examples/example-myst/_toc.yml
+    - format: typst
+      template: typst-thesis.typ
+      output: Exports/MyST-Thesis.pdf
+      # toc: _toc.yml
+      articles:
+        - file: examples/example-myst/00-Front-Matter/01-Declaration.md
+        - file: examples/example-myst/00-Front-Matter/02-Abstract.md
+        - file: examples/example-myst/00-Front-Matter/03-Preface.md
+        - file: examples/example-myst/00-Front-Matter/04-Acknowledgements.md
+        - file: examples/example-myst/00-Front-Matter/05-Dedication.md
+        - file: examples/example-myst/00-Front-Matter/06-Glossary.md
+        - file: examples/example-myst/01-Chapters/01-Introduction.md
+        - file: examples/example-myst/01-Chapters/02-Chapter.md
 site:
   template: book-theme
   # title:
diff --git a/pixi.toml b/pixi.toml
index cd3b7c8..c1d7216 100644
--- a/pixi.toml
+++ b/pixi.toml
@@ -7,7 +7,7 @@ channels = ["conda-forge"]
 platforms = ["osx-64"]
 
 [tasks]
-render-typst = "typst compile examples/examploe-typst/main.typ --root ../.."
+render-typst = "typst compile examples/example-typst/main.typ --root ../.." render-myst = "myst build --typst" [dependencies] diff --git a/template.typ b/template.typ index 2a2b8e0..729bab9 100644 --- a/template.typ +++ b/template.typ @@ -6,67 +6,13 @@ frontmatter: ( title: "[-doc.title-]", abstract: [ - [-parts.abstract-] - ], - [# if doc.subtitle #] - subtitle: "[-doc.subtitle-]", - [# endif #] - [# if doc.short_title #] - short-title: "[-doc.short_title-]", - [# endif #] - [# if doc.open_access !== undefined #] - open-access: [-doc.open_access-], - [# endif #] - [# if doc.github !== undefined #] - github: "[-doc.github-]", - [# endif #] - [# if doc.doi #] - doi: "[-doc.doi-]", - [# endif #] - [# if doc.date #] - date: datetime( - year: [-doc.date.year-], - month: [-doc.date.month-], - day: [-doc.date.day-], - ), - [# endif #] - [# if doc.keywords #] - keywords: ( - [#- for keyword in doc.keywords -#]"[-keyword-]",[#- endfor -#] - ), - [# endif #] - authors: ( - [# for author in doc.authors #] - ( - name: "[-author.name-]", - [# if author.orcid #] - orcid: "[-author.orcid-]", - [# endif #] - [# if author.email #] - email: "[-author.email-]", - [# endif #] - [# if author.affiliations #] - affiliations: ([#- for aff in author.affiliations -#]"[-aff.index-]"[#- if not loop.last -#],[#- endif -#][#- endfor -#]), - [# endif #] - ), - [# endfor #] - ), - affiliations: ( - [# for aff in doc.affiliations #] - ( - id: "[-aff.index-]", - name: "[-aff.name-]", - ), - [# endfor #] - ), - [# if doc.license.content #] - license: (id: "[-doc.license.content.id-]", name: "[-doc.license.content.name-]", url: "[-doc.license.content.url-]"), - [# endif #] + [-parts.abstract-] + ] ), ) // This may be moved below the first paragraph to start columns later -#set page(columns: 2, margin: (x: 1.5cm, y: 2cm),) +#set page(columns: 1, margin: (x: 1.5cm, y: 2cm),) [-CONTENT-] diff --git a/template.yml b/template.yml index 58cf92c..73bbb90 100644 --- a/template.yml +++ b/template.yml @@ -34,10 +34,6 @@ doc: - id: keywords - id: doi - id: github -options: - - id: conference-year - type: number - description: What year is the conference, for example, 2024 parts: - id: abstract description: > diff --git a/thumbnail.png b/thumbnail.png deleted file mode 100644 index d95a265..0000000 Binary files a/thumbnail.png and /dev/null differ diff --git a/typst-thesis.typ b/typst-thesis.typ index 2502ab9..b2e3939 100644 --- a/typst-thesis.typ +++ b/typst-thesis.typ @@ -4,7 +4,7 @@ set text(size: 8pt) set align(left) set par(justify: true) - text(weight: "bold")[#it.supplement #it.counter.display(it.numbering)] + text(weight: "bold")[it.supplement it.counter.display(it.numbering)] "." h(4pt) set text(fill: black.lighten(20%), style: "italic") @@ -45,22 +45,22 @@ counter(figure.where(kind: image)).update(0) counter(figure.where(kind: table)).update(0) counter(math.equation).update(0) - [#numbering("1.", ..nums)] + [numbering("1.", ..nums)] } else { - [#numbering("1.1.1", ..nums)] + [numbering("1.1.1", ..nums)] } }) // Configure figure numbering set figure(numbering: (..args) => { let chapter = counter(heading).display((..nums) => nums.pos().at(0)) - [#chapter.#numbering("1", ..args.pos())] + [chapter.numbering("1", ..args.pos())] }) // Configure equation numbering and spacing. 
  set math.equation(numbering: (..args) => {
     let chapter = counter(heading).display((..nums) => nums.pos().at(0))
-    [(#chapter.#numbering("1)", ..args.pos())]
+    [(#chapter.#numbering("1)", ..args.pos())]
   })
 
   show math.equation: set block(spacing: 1em)
   show figure.caption: leftCaption
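For reference, a minimal sketch of how the build tasks touched by this diff can be exercised, assuming the commands are run from the repository root with pixi installed; per the new `exports` entry in `myst.yml`, the MyST build should write its PDF to `Exports/MyST-Thesis.pdf`:

```sh
# Render the plain Typst example; the task now points at examples/example-typst/main.typ
pixi run render-typst

# Run `myst build --typst`; with the updated exports entry this should produce Exports/MyST-Thesis.pdf
pixi run render-myst
```

Alternatively, open a `pixi shell` first and run the same tasks from inside it, as described in the README.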