This repository provides reference data-processing pipelines and examples for Open Data Hub / Red Hat OpenShift AI. It focuses on document conversion and chunking using the Docling toolkit, packaged as Kubeflow Pipelines (KFP), example Jupyter Notebooks, and helper scripts.
The `custom-workbench-image` directory also provides a guide on how to create a custom workbench image to run Docling and the example notebooks in this repository.
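At their core, the pipelines and notebooks here wrap Docling conversion and chunking calls. A minimal sketch of that flow, assuming a locally available `sample.pdf` (a hypothetical input; the exact chunking defaults are illustrative):

```python
# Convert a document with Docling, export it, and chunk the result.
# "sample.pdf" is a placeholder; Docling also accepts URLs.
from docling.document_converter import DocumentConverter
from docling.chunking import HybridChunker

converter = DocumentConverter()
result = converter.convert("sample.pdf")

# Export the converted document, then split it into retrieval-friendly chunks.
print(result.document.export_to_markdown())
for chunk in HybridChunker().chunk(result.document):
    print(chunk.text)
```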
## Repository Structure

```text
odh-data-processing
|
|- kubeflow-pipelines
|  |- docling-standard
|  |- docling-vlm
|
|- notebooks
|  |- tutorials
|  |- use-cases
|
|- custom-workbench-image
```

Refer to the ODH Data Processing Kubeflow Pipelines documentation for instructions on how to install, run, and customize the Standard and VLM pipelines.
## Contributing

We welcome issues and pull requests. Please:

- Open an issue describing the change.
- Include testing instructions.
- For pipeline/component changes, recompile the pipeline and update the generated YAML if applicable (see the sketch after this list).
- Keep parameter names and docs consistent between the code and the README.
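Recompiling is typically a one-liner with the KFP SDK; a minimal sketch, assuming a pipeline function named `docling_pipeline` defined in `pipeline.py` (both names hypothetical, not this repo's actual layout):

```python
# Regenerate the pipeline YAML after editing the pipeline definition.
# The module and function names below are illustrative placeholders.
from kfp import compiler

from pipeline import docling_pipeline  # hypothetical import

compiler.Compiler().compile(
    pipeline_func=docling_pipeline,
    package_path="docling_pipeline.yaml",
)
```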
## Code Style

This repo enforces Python style and clean notebooks via pre-commit and a GitHub Actions workflow.

What runs:

- Ruff (lint, autofixes)
- Black (format)
- isort (import order, Black profile)
- nbstripout (removes Jupyter outputs)
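A plausible `.pre-commit-config.yaml` wiring up those four hooks; the `rev` pins are illustrative, not this repository's actual versions:

```yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.9  # illustrative pin
    hooks:
      - id: ruff
        args: [--fix]
  - repo: https://github.com/psf/black
    rev: 24.8.0  # illustrative pin
    hooks:
      - id: black
  - repo: https://github.com/PyCQA/isort
    rev: 5.13.2  # illustrative pin
    hooks:
      - id: isort
        args: [--profile, black]
  - repo: https://github.com/kynan/nbstripout
    rev: 0.7.1  # illustrative pin
    hooks:
      - id: nbstripout
```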
Where it runs:

- On every pull request
- Once post-merge to `main` (final validation)
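Those triggers map to a small workflow file; a sketch, assuming the stock `pre-commit/action` (the file name and pinned action versions are illustrative):

```yaml
# .github/workflows/pre-commit.yaml (illustrative name)
name: pre-commit
on:
  pull_request:
  push:
    branches: [main]
jobs:
  pre-commit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
      - uses: pre-commit/action@v3.0.1
```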
Quick start (local):

```bash
pip install pre-commit
pre-commit install          # install the git hook
pre-commit run --all-files  # run all checks on the repo
```
## 📄 License
Apache License 2.0