ML experiments, tracked & notified.
A Python toolkit for the full lifecycle of machine learning experiments — experiment tracking with SwanLab, notifications & data management with Lark (Feishu), and local storage.
📈 Metrics and Tracking: Add a few lines to your ML pipeline to track and record key training metrics via SwanLab.
📊 Data Management: Automatically organize your experiment directory structure based on experiment type and tags, enabling better management of experimental data.
📢 Message Notifications: Automatic push notifications are sent when the experiment starts, ends, or is interrupted, keeping you informed of the latest progress.
💾 Backup: Back up your data in the cloud and locally to prevent data loss.
```bash
pip install owlab
# or: uv pip install owlab
# or: install from source
git clone https://github.com/Lounwb/OwLab.git && cd OwLab && pip install -e .
```
To enable Lark and SwanLab, you need to configure the relevant tokens and secrets. OwLab supports both configuration files and environment variables, giving you flexible options:
- Configuration file:
`~/.owlab/config.json` or `./.owlab/config.json`; here is an example:
```json
{
  "lark": {
    "webhook": {
      "webhook_url": "<your webhook url>",
      "signature": "<your webhook signature>"
    },
    "api": {
      "app_id": "<your app id>",
      "app_secret": "<your app secret>",
      "root_folder_token": "<your root folder token>"
    }
  },
  "swanlab": {
    "api_key": "<your swanlab api key>"
  },
  "storage": {
    "local_path": "./output",
    "csv_path": "./output/csv",
    "model_path": "./output/models"
  },
  "logging": {
    "level": "INFO",
    "format": null,
    "file": "./logs/owlab.log"
  }
}
```
- Environment variables:
`OWLAB_LARK__WEBHOOK__WEBHOOK_URL`, `OWLAB_LARK__API__APP_ID`, etc.
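The double underscore in these variable names separates nesting levels of the config shown above. OwLab performs this mapping internally; the helper below is only an illustrative sketch of the naming convention (`env_to_config` is a hypothetical name, not part of the package):

```python
def env_to_config(environ, prefix="OWLAB_"):
    """Illustrative only: map OWLAB_SECTION__KEY-style variables
    onto a nested config dict, assuming the double-underscore
    convention shown above."""
    config = {}
    for name, value in environ.items():
        if not name.startswith(prefix):
            continue
        # OWLAB_LARK__API__APP_ID -> ["lark", "api", "app_id"]
        parts = [p.lower() for p in name[len(prefix):].split("__")]
        node = config
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return config

cfg = env_to_config({"OWLAB_LARK__API__APP_ID": "cli_xxx"})
# cfg == {"lark": {"api": {"app_id": "cli_xxx"}}}
```

Values from environment variables would override the file-based config under this scheme.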
```python
from owlab import OwLab

owlab = OwLab()
owlab.init(
    project="my_project",            # Required
    experiment_name="exp_001",       # Optional; defaults to project
    description="Short description",
    type="baseline",                 # e.g. baseline / debug / ablation; used for folder naming
    version="1.0",                   # Experiment version
    tags=["baseline"],               # Optional tags
    config={
        "methods": [...],            # Method definitions for result tables
        "datasets": [...],
        "metrics": [...],
        "measures": [...],
        "experiment_params": {"learning_rate": 0.01, "batch_size": 32},
        "seed": 42,
    },
)

for epoch in range(100):
    # ... train one epoch, producing loss and acc ...
    owlab.log({"loss": loss, "accuracy": acc}, step=epoch)
```

Call `finish(results=...)` with a list of result rows. Each row can include method, dataset, measure, and metric values. These are written to local files and, when configured, to Feishu spreadsheets.
```python
owlab.finish(results=[
    {
        "method": "method1",
        "dataset1": {"measure": "MCM", "accuracy": 0.95, "loss": 0.05},
        "dataset2": {"measure": "MCM", "accuracy": 0.92, "loss": 0.08},
        "Average": {"measure": "MCM", "accuracy": 0.935, "loss": 0.065},
    },
    # ...
])
```

Like SwanLab's `swanlab.sync_tensorboard_torch()`: call it after `init()` and before creating a `SummaryWriter`. Calls to `writer.add_scalar()` / `add_scalars()` then also log to the current SwanLab run.
```python
owlab.init(project="my_project", experiment_name="exp_1", ...)
owlab.sync_tensorboard_torch()

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="./runs")
writer.add_scalar("loss", loss, step)  # also sent to SwanLab
writer.add_scalar("acc", acc, step)
```

- Local: `./output/<type>/<experiment_name>_<timestamp>/` containing `results.csv`, `results.json`, `owlab.log`, and `model/`
- Lark: Notifications via webhook; result tables written to Feishu via API (when configured).
- SwanLab: Metrics and runs visible in your SwanLab project (when `api_key` is set).
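The "Average" entry in a result row (as in the `finish()` example above) can be derived from the per-dataset entries instead of typed by hand. A minimal sketch, where `average_row` is a hypothetical helper and not part of the OwLab API:

```python
def average_row(row, metrics=("accuracy", "loss")):
    """Append an "Average" entry computed over the dataset entries
    of one result row. Hypothetical helper, not part of OwLab."""
    # Dataset entries are the dict-valued fields (skip "method" etc.)
    datasets = {k: v for k, v in row.items()
                if isinstance(v, dict) and k != "Average"}
    avg = {"measure": next(iter(datasets.values()))["measure"]}
    for m in metrics:
        avg[m] = round(sum(d[m] for d in datasets.values()) / len(datasets), 4)
    row["Average"] = avg
    return row

row = average_row({
    "method": "method1",
    "dataset1": {"measure": "MCM", "accuracy": 0.95, "loss": 0.05},
    "dataset2": {"measure": "MCM", "accuracy": 0.92, "loss": 0.08},
})
# row["Average"] == {"measure": "MCM", "accuracy": 0.935, "loss": 0.065}
```

The resulting list of rows can be passed straight to `owlab.finish(results=...)`.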
- PyPI: pypi.org/project/owlab
- License: MIT
- Repository: github.com/Lounwb/OwLab
- Issues: github.com/Lounwb/OwLab/issues
- SwanLab — experiment tracking
- Lark / Feishu — notifications and spreadsheets
