This repository serves as a template for the projects of ISC 301 week 1.
- Install `uv` on your system.
- Create the Python environment by running:

  ```
  uv sync
  ```

- Install pre-commit:

  ```
  uv add pre-commit
  ```

  Then, install the pre-commit hooks:

  ```
  uv run pre-commit install
  ```

- Try to compile the report template:

  ```
  uv run quarto render reports/presentation_project1/presentation.qmd
  ```
The template includes the following tools:

- `uv` for dependency management.
- `ruff` for code formatting.
- `pyright` for type checking.
- pre-commit hooks for automated validation.
We use `uv` for dependency management. It is just as full-featured as Poetry, but much faster.

- Run `uv sync` to synchronize the environment. This will create a virtual environment and install the needed development dependencies.
- New dependencies can be installed with `uv add`, for example:

  ```
  uv add polars
  ```

For further references, see uv's Getting Started guide.
First, install pre-commit:

```
uv add pre-commit
```

Then, install the pre-commit hooks:

```
uv run pre-commit install
```

This will create a `.git/hooks/pre-commit` file that runs the pre-commit hooks every time you commit. The hook environments are set up upon the first commit.

Some hooks output error messages that require a manual fix (e.g., linting errors). Other hooks apply fixes automatically. Either way, you need to re-run the commit command:

```
git commit -m "My message"
```

Among the pre-commit hooks, you will find one that runs `ruff` on every Python file. It is also warmly recommended that you set up ruff in your IDE (e.g., Visual Studio Code, PyCharm).
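As an illustration (a hypothetical snippet, not part of the template), here is the kind of issue ruff catches. The "before" version in the comment triggers an unused-import warning (F401) and a comparison-to-`None` warning (E711), and `ruff format` normalizes the spacing; the function below shows the cleaned-up result:

```python
# Before (ruff flags F401 for the unused import, E711 for `== None`,
# and `ruff format` fixes the spacing):
#
#   import os
#   def greet( name ):
#       if name == None: name = "world"
#       return "Hello, "+name
#
# After applying ruff's suggestions:
def greet(name):
    if name is None:
        name = "world"
    return "Hello, " + name


print(greet(None))  # Hello, world
```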
We recommend the use of type hints throughout your code. One of the pre-commit hooks is `pyright`, which performs type checking where hints are available. This greatly reduces the risk of bugs and improves the maintainability of the code.
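For instance (a hypothetical function, not taken from the template), the annotations below let pyright reject a call like `mean("abc")` before the code ever runs:

```python
def mean(values: list[float]) -> float:
    """Return the arithmetic mean of a non-empty list of floats."""
    if not values:
        raise ValueError("mean() requires at least one value")
    return sum(values) / len(values)


# pyright accepts this call, since the argument matches list[float]:
print(mean([1.0, 2.0, 3.0]))  # 2.0

# pyright would reject mean("abc") at check time: str is not list[float].
```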
Inspired by the cookiecutter data science template.
├── README.md <- The top-level README for developers using this project.
├── data
│ ├── external <- Data from third party sources.
│ ├── interim <- Intermediate data that has been transformed.
│ ├── processed <- The final, canonical data sets for modeling.
│ └── raw <- The original, immutable data dump.
│
├── docs <- Project documentation
│
├── models <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks <- Jupyter notebooks. Naming convention is a number (for ordering),
│ the creator's initials, and a short `-` delimited description, e.g.
│ `1.0-jqp-initial-data-exploration`.
│
├── pyproject.toml <- Project configuration file
│
├── reports <- Generated analysis as HTML, PDF, LaTeX, etc.
│ └── figures <- Generated graphics and figures to be used in reporting
│
├── requirements.txt <- The requirements file for reproducing the analysis environment, e.g.
│ generated with `pip freeze > requirements.txt`
│
│
└── {{ src/module_name }} <- Source code for use in this project.
│
├── __init__.py <- Makes {{ cookiecutter.module_name }} a Python module
│
├── config.py <- Store useful variables and configuration
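A minimal `config.py` might look like the sketch below. It assumes the file lives two levels below the project root (inside the source module under `src/`); the directory names mirror the template's layout, but the constant names are illustrative:

```python
from pathlib import Path

# Project root, assuming this file sits at <root>/src/<module_name>/config.py.
PROJ_ROOT = Path(__file__).resolve().parents[2]

# Data directories matching the template's layout.
DATA_DIR = PROJ_ROOT / "data"
RAW_DATA_DIR = DATA_DIR / "raw"
PROCESSED_DATA_DIR = DATA_DIR / "processed"
```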