This repository was archived by the owner on Aug 28, 2025. It is now read-only.
Merged
2 changes: 1 addition & 1 deletion .actions/assistant.py
@@ -188,7 +188,7 @@ def _load_meta(folder: str, strict: bool = False) -> Optional[dict]:
 
         Args:
             folder: path to the folder with python script, meta and artefacts
-            strict: raise error if meta is missing required feilds
+            strict: raise error if meta is missing required fields
         """
         fpath = AssistantCLI._find_meta(folder)
         assert fpath, f"Missing meta file in folder: {folder}"
12 changes: 6 additions & 6 deletions .pre-commit-config.yaml
@@ -23,21 +23,21 @@ repos:
       - id: detect-private-key
 
   - repo: https://github.com/asottile/pyupgrade
-    rev: v3.8.0
+    rev: v3.14.0
     hooks:
       - id: pyupgrade
         args: ["--py38-plus"]
         name: Upgrade code
 
   - repo: https://github.com/codespell-project/codespell
-    rev: v2.2.5
+    rev: v2.2.6
     hooks:
       - id: codespell
         additional_dependencies: [tomli]
         #args: ["--write-changes"]
 
   - repo: https://github.com/PyCQA/docformatter
-    rev: v1.7.3
+    rev: v1.7.5
     hooks:
       - id: docformatter
         args:
@@ -53,7 +53,7 @@ repos:
         args: ["--print-width=120"]
 
   - repo: https://github.com/psf/black
-    rev: 23.3.0
+    rev: 23.9.1
     hooks:
       - id: black
         name: Format code
@@ -64,7 +64,7 @@
       - id: yesqa
 
   - repo: https://github.com/executablebooks/mdformat
-    rev: 0.7.16
+    rev: 0.7.17
     hooks:
       - id: mdformat
         additional_dependencies:
@@ -73,7 +73,7 @@
           - mdformat_frontmatter
 
   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.0.276
+    rev: v0.0.292
     hooks:
       - id: ruff
         args: ["--fix"]
2 changes: 1 addition & 1 deletion course_UvA-DL/06-graph-neural-networks/GNN_overview.py
@@ -843,7 +843,7 @@ def print_results(result_dict):
 # In this case, we will use the average pooling.
 # Hence, we need to know which nodes should be included in which average pool.
 # Using this pooling, we can already create our graph network below.
-# Specifically, we re-use our class `GNNModel` from before,
+# Specifically, we reuse our class `GNNModel` from before,
 # and simply add an average pool and single linear layer for the graph prediction task.


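As a sketch of what that tutorial passage describes, assuming the node-level `GNNModel` defined earlier in the notebook and PyTorch Geometric's `global_mean_pool`; the class name and hyperparameters here are illustrative, not the tutorial's exact code:

import torch.nn as nn
from torch_geometric.nn import global_mean_pool


class GraphLevelGNN(nn.Module):
    def __init__(self, c_in, c_hidden, c_out, **kwargs):
        super().__init__()
        # Node-level backbone reused from the tutorial (assumed defined earlier).
        self.gnn = GNNModel(c_in=c_in, c_hidden=c_hidden, c_out=c_hidden, **kwargs)
        self.head = nn.Linear(c_hidden, c_out)  # single linear layer for the graph task

    def forward(self, x, edge_index, batch_idx):
        x = self.gnn(x, edge_index)         # per-node embeddings
        x = global_mean_pool(x, batch_idx)  # average the nodes within each graph
        return self.head(x)                 # one prediction per graph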
@@ -315,7 +315,7 @@ def forward(self, x):
 # inside the MCMC sampling to obtain reasonable samples.
 # However, there is a training trick that significantly reduces the sampling cost: using a sampling buffer.
 # The idea is that we store the samples of the last couple of batches in a buffer,
-# and re-use those as the starting point of the MCMC algorithm for the next batches.
+# and reuse those as the starting point of the MCMC algorithm for the next batches.
 # This reduces the sampling cost because the model requires a significantly
 # lower number of steps to converge to reasonable samples.
 # However, to not solely rely on previous samples and allow novel samples as well,
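A rough sketch of that sampling-buffer trick; the class name, buffer capacity, and the fraction of freshly initialized noise samples are illustrative assumptions, not the tutorial's exact code:

import random

import torch


class SamplingBuffer:
    def __init__(self, img_shape, capacity=8192, frac_new=0.05):
        self.img_shape = img_shape
        self.capacity = capacity
        self.frac_new = frac_new
        # Seed the buffer with uniform noise in [-1, 1].
        self.examples = [torch.rand(img_shape) * 2 - 1 for _ in range(128)]

    def starting_points(self, batch_size):
        # Mostly reuse buffered samples, but mix in a few fresh noise images
        # so the MCMC chains can still discover novel samples.
        n_new = int(batch_size * self.frac_new)
        new = [torch.rand(self.img_shape) * 2 - 1 for _ in range(n_new)]
        old = random.choices(self.examples, k=batch_size - n_new)
        return torch.stack(new + old, dim=0)

    def push(self, samples):
        # Store the final MCMC samples for reuse as starting points next batch.
        self.examples = list(samples.detach().cpu()) + self.examples
        self.examples = self.examples[: self.capacity]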
@@ -905,7 +905,7 @@ def autocomplete_image(img):
 # potentially undesirable behavior. For instance, the value 242 has a
 # 1000x lower likelihood than 243 although they are extremely close and
 # can often not be distinguished. This shows that the model might have not
-# generlized well over pixel values. The better solution to this problem
+# generalized well over pixel values. The better solution to this problem
 # is to use discrete logitics mixtures instead of a softmax distribution.
 # A discrete logistic distribution can be imagined as discretized, binned
 # Gaussians. Using a mixture of discrete logistics instead of a softmax
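To make the alternative concrete: a discretized logistic assigns each pixel value the probability mass of its unit-wide bin. A minimal sketch (mixture weighting and the edge bins at 0 and 255 are omitted for brevity):

import torch


def discretized_logistic_log_prob(x, mean, log_scale):
    # Probability mass of the bin [x - 0.5, x + 0.5] under a logistic
    # distribution; x holds integer pixel values, mean/log_scale come
    # from the model.
    inv_scale = torch.exp(-log_scale)
    cdf_plus = torch.sigmoid((x + 0.5 - mean) * inv_scale)
    cdf_minus = torch.sigmoid((x - 0.5 - mean) * inv_scale)
    return torch.log((cdf_plus - cdf_minus).clamp(min=1e-12))

Because the logistic CDF is smooth, neighboring values such as 242 and 243 receive similar mass, unlike independent softmax logits.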
@@ -5,7 +5,7 @@
 #
 # Finetuning consists of four steps:
 #
-# - 1. Train a source neural network model on a source dataset. For text classication, it is traditionally a transformer model such as BERT [Bidirectional Encoder Representations from Transformers](https://arxiv.org/abs/1810.04805) trained on wikipedia.
+# - 1. Train a source neural network model on a source dataset. For text classification, it is traditionally a transformer model such as BERT [Bidirectional Encoder Representations from Transformers](https://arxiv.org/abs/1810.04805) trained on wikipedia.
 # As those model are costly to train, [Transformers](https://github.com/huggingface/transformers) or [FairSeq](https://github.com/pytorch/fairseq) libraries provides popular pre-trained model architectures for NLP. In this notebook, we will be using [tiny-bert](https://huggingface.co/prajjwal1/bert-tiny).
 #
 # - 2. Create a new neural network the target model. Its architecture replicates all model designs and their parameters on the source model, expect the latest layer which is removed. This model without its latest layers is traditionally called a backbone
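A minimal sketch of steps 1-2 with the Transformers library mentioned above; `num_labels=2` is an illustrative assumption for a binary task:

from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-tiny")
# The pre-trained encoder is kept as the backbone; the sequence-classification
# head on top is freshly initialized, replacing the original output layer.
model = AutoModelForSequenceClassification.from_pretrained(
    "prajjwal1/bert-tiny", num_labels=2
)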
@@ -235,8 +235,7 @@ def __init__(
         tokenizers_parallelism: bool = True,
         **dataloader_kwargs: Any,
     ):
-        r"""Initialize the ``LightningDataModule`` designed for both the RTE or BoolQ SuperGLUE Hugging Face
-        datasets.
+        r"""Initialize the ``LightningDataModule`` designed for both the RTE or BoolQ SuperGLUE Hugging Face datasets.
 
         Args:
             model_name_or_path (str):