Kinoko AI models may make mistakes. You are solely responsible for what you eat 🙈.
🇯🇵 KinokoのAIモデルは誤りを生じる可能性があります。食べるものについては、ご自身の責任でご判断ください🙇。
🇫🇷 Les modèles IA de Kinoko peuvent faire des erreurs. Vous êtes seul responsable de ce que vous mangez 🙈.
Kinoko Lab is an end-to-end machine & deep learning application designed to classify mushrooms as edible or poisonous, combining:
- 📊 Exploratory Data Analysis on tabular mushroom datasets
- 🧠 Tabular Machine Learning models
- 📷 Image-based Deep Learning (CNN & DINOv2 fine-tuning)
- ⚡ Real-time predictions via a FastAPI backend
Our objective was to build a complete ML & DL pipeline — from raw data exploration to production-ready deployment.
```mermaid
flowchart LR
    A[Mushroom Image / Tabular Data] --> B[Preprocessing]
    B --> C[CNN]
    B --> D[DINOv2 Fine-tuned]
    B --> E[Tabular Model]
    C --> F[FastAPI Backend]
    D --> F
    E --> F
    F --> G[Streamlit UI]
```
- Preprocessing handles both image augmentation and tabular feature engineering
- CNN is a custom convolutional network trained from scratch
- DINOv2 is a Vision Transformer fine-tuned on the mushroom image dataset
- Tabular Model handles tabular classification from structured mushroom features
- FastAPI exposes prediction endpoints consumed by the frontend
- Streamlit provides an interactive web interface for end users
```
.
├── api/                   # FastAPI backend
│   ├── fast.py            # API routes & prediction endpoints
│   └── utils.py           # Helper functions
├── data/
│   ├── image_dataset/     # Mushroom images (edible / poisonous), with augmented versions
│   └── table_dataset/     # CSV tabular datasets & metadata
├── models/
│   ├── images/
│   │   ├── baseline/      # CNN baseline (model, train, evaluate, preprocess, results, logs)
│   │   └── dinov2/        # DINOv2 fine-tuning (model, inference, preprocess)
│   └── tabular/
│       └── XGBoost/       # Tabular pipeline (data, model, preprocess, registry)
├── Notebooks/             # Exploration, prototyping & dataviz notebooks
├── streamlit/
│   ├── app.py             # Streamlit web application
│   └── assets/            # UI assets (images for UI elements)
├── Dockerfile
├── docker-compose.yml
├── requirements.txt
├── requirements_api.txt
└── README.md
```
- tensorflow / keras / keras_hub → CNN model training and inference
- transformers / timm → DINOv2 Vision Transformer fine-tuning
- xgboost → Tabular mushroom classification
- scikit-learn → Preprocessing, metrics & evaluation utilities
- pandas / numpy → Data manipulation and numerical operations
- matplotlib / seaborn / plotly → Data visualization and training history plots
- fastapi / uvicorn / python-multipart → REST API backend for serving predictions
- streamlit / streamlit-option-menu / altair → Interactive web interface
- jupyterlab / ipywidgets / ipdb → Notebook-based exploration and prototyping
- pytest / pylint → Testing and code quality
- Clone the repository:

```bash
git clone https://github.com/ClemPera/Kinoko.git
cd Kinoko
```

- Create and activate a Python 3.12+ virtual environment:

```bash
pyenv virtualenv 3.12.9 kinoko
pyenv local kinoko
```

- Install the dependencies:

```bash
pip install -r requirements.txt
```

| Command | Description |
|---|---|
| `make install_dep` | Install dependencies |
| `make run_api` | Run the FastAPI backend |
| `make run_streamlit` | Run the Streamlit interface |
| `make run_docker` | Run with Docker |
You can try the app directly on Streamlit without installing anything locally:
This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0). See the LICENSE file for full details.