Debug · Visualize · Analyze your VLA deployments in the real world

Quick Start · Documentation · Features · Installation
Deploying VLA models to real robots is hard. You face:
- **Black-box inference** – Can't see what the model "sees" or why it fails
- **Hidden latencies** – Transport delays, inference bottlenecks, control-loop timing issues
- **No unified logging** – Every framework logs differently, making cross-model comparison painful
- **Tedious debugging** – Replaying failures requires manual log parsing and visualization
VLA-Lab solves this. One unified toolkit for all your VLA deployment needs.
```
┌──────────────────────────────────────────────────────────────────────────┐
│                           VLA-Lab Architecture                           │
├──────────────────────────────────────────────────────────────────────────┤
│                                                                          │
│  ┌────────────┐     ┌──────────────────────┐     ┌────────────────────┐  │
│  │   Robot    │     │   Inference Server   │     │      VLA-Lab       │  │
│  │   Client   │────▶│   (DP / GR00T / …)   │────▶│     RunLogger      │  │
│  └────────────┘     └──────────────────────┘     └─────────┬──────────┘  │
│                                                            │             │
│                                                            ▼             │
│                        ┌─────────────────────────────────────────────┐   │
│                        │             Unified Run Storage             │   │
│                        │  ┌───────────┬──────────────┬────────────┐  │   │
│                        │  │ meta.json │ steps.jsonl  │ artifacts/ │  │   │
│                        │  └───────────┴──────────────┴────────────┘  │   │
│                        └──────────────────────┬──────────────────────┘   │
│                                               │                          │
│                                               ▼                          │
│  ┌────────────────────────────────────────────────────────────────────┐  │
│  │                        Visualization Suite                         │  │
│  │  ┌─────────────┐   ┌──────────────────┐   ┌───────────────────┐    │  │
│  │  │  Inference  │   │     Latency      │   │      Dataset      │    │  │
│  │  │   Viewer    │   │     Analyzer     │   │      Browser      │    │  │
│  │  └─────────────┘   └──────────────────┘   └───────────────────┘    │  │
│  └────────────────────────────────────────────────────────────────────┘  │
│                                                                          │
└──────────────────────────────────────────────────────────────────────────┘
```
| Unified Logging | Inference Viewer | Latency Analyzer | Dataset Browser |
| --- | --- | --- | --- |
| Standardized run structure with JSONL + image artifacts. Works across all VLA frameworks. | Step-by-step playback with multi-camera views, 3D trajectory visualization, and action overlays. | Profile transport delays, inference time, and control-loop frequency. Find your bottlenecks. | Explore Zarr-format training/evaluation datasets with an intuitive UI. |
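The standardized run structure can be sketched in a few lines of plain Python. This is an illustration, not VLA-Lab's implementation: the file names (`meta.json`, `steps.jsonl`, `artifacts/`) follow the run layout described in this README, while the `create_run` helper itself is hypothetical.

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

def create_run(base: Path, project: str, name: str, meta: dict) -> Path:
    """Create a run directory with the standardized layout:
    meta.json, steps.jsonl, and an artifacts/ subfolder (hypothetical helper)."""
    run_dir = base / project / name
    (run_dir / "artifacts" / "images").mkdir(parents=True)  # also creates run_dir
    (run_dir / "meta.json").write_text(json.dumps(meta, indent=2))
    (run_dir / "steps.jsonl").touch()  # one JSON object per line is appended here
    return run_dir

with TemporaryDirectory() as tmp:
    run = create_run(Path(tmp), "pick_and_place", "run_0001",
                     {"model": "diffusion_policy"})
    print(sorted(p.name for p in run.iterdir()))
    # → ['artifacts', 'meta.json', 'steps.jsonl']
```

Because every framework adapter writes this same layout, downstream tools (viewer, analyzer, browser) never need to know which model produced a run.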
```shell
pip install vlalab
```

Or install from source:

```shell
git clone https://github.com/ky-ji/VLA-Lab.git
cd VLA-Lab
pip install -e .
```

```python
import vlalab

# Initialize a run
run = vlalab.init(project="pick_and_place", config={"model": "diffusion_policy"})

# Log during inference
vlalab.log({"state": obs["state"], "action": action, "images": {"front": obs["image"]}})
```

```python
import time

import vlalab

# Initialize with detailed config
run = vlalab.init(
    project="pick_and_place",
    config={
        "model": "diffusion_policy",
        "action_horizon": 8,
        "inference_freq": 10,
    },
)

# Access config anywhere
print(f"Action horizon: {run.config.action_horizon}")

# Inference loop
for step in range(100):
    obs = get_observation()

    t_start = time.time()
    action = model.predict(obs)
    latency = (time.time() - t_start) * 1000

    # Log everything in one call
    vlalab.log({
        "state": obs["state"],
        "action": action,
        "images": {"front": obs["front_cam"], "wrist": obs["wrist_cam"]},
        "inference_latency_ms": latency,
    })

    robot.execute(action)

# Auto-finishes on exit, or call finish() manually
vlalab.finish()
```

```shell
# One command to view all your runs
vlalab view
```

Screenshots (click to expand)
Coming soon: Inference Viewer, Latency Analyzer, Dataset Browser screenshots
- **Run** – A single deployment session (one experiment, one episode, one evaluation)
- **Step** – A single inference timestep with observations, actions, and timing
- **Artifacts** – Images, point clouds, and other media saved alongside logs
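As an illustration of the Step concept, one line of `steps.jsonl` might round-trip like this. The field names mirror the `vlalab.log()` examples in this README; the exact on-disk schema is an assumption, and replacing image arrays with artifact paths is one plausible design.

```python
import json

# Hypothetical step record, mirroring the fields passed to vlalab.log();
# image arrays are assumed to be saved as artifacts and referenced by path.
step = {
    "step_idx": 0,
    "state": [0.5, 0.2, 0.3, 0.0, 0.0, 0.0, 1.0, 1.0],
    "action": [[0.51, 0.21, 0.31, 0.0, 0.0, 0.0, 1.0, 1.0]],
    "images": {"front": "artifacts/images/step_000000_front.jpg"},
    "inference_latency_ms": 32.1,
}

line = json.dumps(step)          # one JSON object per line (JSONL)
assert json.loads(line) == step  # round-trips losslessly
```

Keeping each step as an independent JSON line means a crashed run is still readable up to its last completed step.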
`vlalab.init()` – Initialize a run

```python
run = vlalab.init(
    project: str = "default",    # Project name (creates subdirectory)
    name: str = None,            # Run name (auto-generated if None)
    config: dict = None,         # Config accessible via run.config.key
    dir: str = "./vlalab_runs",  # Base directory (or $VLALAB_DIR)
    tags: list = None,           # Optional tags
    notes: str = None,           # Optional notes
)
```

`vlalab.log()` – Log a step
```python
vlalab.log({
    # Robot state
    "state": [...],                     # Full state vector
    "pose": [x, y, z, qx, qy, qz, qw],  # Position + quaternion
    "gripper": 0.5,                     # Gripper opening (0-1)

    # Actions
    "action": [...],                    # Single action or action chunk

    # Images (multi-camera support)
    "images": {
        "front": np.ndarray,            # HWC numpy array
        "wrist": np.ndarray,
    },

    # Timing (any *_ms field auto-captured)
    "inference_latency_ms": 32.1,
    "transport_latency_ms": 5.2,
    "custom_metric_ms": 10.0,
})
```

`RunLogger` – Advanced API
For fine-grained control over logging:
```python
from vlalab import RunLogger

logger = RunLogger(
    run_dir="runs/experiment_001",
    model_name="diffusion_policy",
    model_path="/path/to/checkpoint.pt",
    task_name="pick_and_place",
    robot_name="franka",
    cameras=[
        {"name": "front", "resolution": [640, 480]},
        {"name": "wrist", "resolution": [320, 240]},
    ],
    inference_freq=10.0,
)

logger.log_step(
    step_idx=0,
    state=[0.5, 0.2, 0.3, 0, 0, 0, 1, 1.0],
    action=[[0.51, 0.21, 0.31, 0, 0, 0, 1, 1.0]],
    images={"front": image_rgb},
    timing={
        "client_send": t1,
        "server_recv": t2,
        "infer_start": t3,
        "infer_end": t4,
    },
)

logger.close()
```

```shell
# Launch visualization dashboard
vlalab view [--port 8501]

# Convert legacy logs (auto-detects format)
vlalab convert /path/to/old_log.json -o /path/to/output

# Inspect a run
vlalab info /path/to/run_dir
```

```
vlalab_runs/
└── pick_and_place/              # Project
    └── run_20240115_103000/     # Run
        ├── meta.json            # Metadata (model, task, robot, cameras)
        ├── steps.jsonl          # Step records (one JSON object per line)
        └── artifacts/
            └── images/          # Saved images
                ├── step_000000_front.jpg
                ├── step_000000_wrist.jpg
                └── ...
```
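The four timestamps in the `RunLogger` timing example (`client_send`, `server_recv`, `infer_start`, `infer_end`) are enough to separate transport cost from inference cost. A minimal sketch, assuming the timestamps are wall-clock seconds; the `latency_breakdown_ms` helper is hypothetical, not part of the vlalab API:

```python
def latency_breakdown_ms(timing: dict) -> dict:
    """Split a step's timestamps (in seconds) into latency components (in ms)."""
    return {
        # Time on the wire from robot client to inference server
        "transport_ms": round((timing["server_recv"] - timing["client_send"]) * 1000, 1),
        # Time spent inside the model
        "inference_ms": round((timing["infer_end"] - timing["infer_start"]) * 1000, 1),
        # End-to-end: send to final prediction
        "total_ms": round((timing["infer_end"] - timing["client_send"]) * 1000, 1),
    }

timing = {"client_send": 10.000, "server_recv": 10.005,
          "infer_start": 10.006, "infer_end": 10.038}
print(latency_breakdown_ms(timing))
# → {'transport_ms': 5.0, 'inference_ms': 32.0, 'total_ms': 38.0}
```

Running this over every record in `steps.jsonl` is one way to spot whether a slow control loop comes from the network or from the model.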
- Core logging API
- Streamlit visualization suite
- Diffusion Policy adapter
- GR00T adapter
- Pi adapter
- Cloud sync & team collaboration
- Real-time streaming dashboard
- Automatic failure detection
- Integration with robot simulators
We welcome contributions!
```shell
git clone https://github.com/ky-ji/VLA-Lab.git
cd VLA-Lab
pip install -e .
```

MIT License – see LICENSE for details.
⭐ Star us on GitHub if VLA-Lab helps your research!

Built with ❤️ for the robotics community