Building generative AI creative pipelines – image, video, multimodal, and everything in between.
- 🎨 Image generation – BFL Flux workflows, ComfyUI custom nodes, and multimodal pipelines for AI image creation and manipulation
- 🎬 Video generation – agentic video workflows with Kling, Seedance, and other video models; using 3D as a control surface (not the final render) to drive generative output
- 📝 Prompt engineering – crafting and refining prompts for LLMs and image/video models; multimodal (image-to-text, text-to-image) experimentation
- 🤖 AI automations – MCP-based agentic workflows, Claude + Blender pipelines, and tools that connect generative models to real production processes
- 🔬 Multimodal AI – bridging vision models, language models, and creative tooling
- 3D as a control layer for video model output (Blender → Kling / Seedance)
- Agentic camera control via MCP + Claude
- BFL Flux custom workflows and LoRA training
- Multimodal prompt design (vision → language)
- Generative AI for brand and advertising production
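A taste of the agentic camera work: a minimal sketch of the orbit math an MCP tool might expose to an agent for Blender camera placement. Pure Python with no Blender or MCP dependencies; `orbit_camera` is a hypothetical helper name, not a real `bpy` or MCP API.

```python
import math

def orbit_camera(target, radius, azimuth_deg, elevation_deg):
    """Place a camera on a spherical orbit around `target`.

    Returns the camera's world-space position. An MCP tool wrapping
    this would hand the result to Blender (e.g. set camera.location)
    and then aim the camera back at `target`.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = target[0] + radius * math.cos(el) * math.cos(az)
    y = target[1] + radius * math.cos(el) * math.sin(az)
    z = target[2] + radius * math.sin(el)
    return (x, y, z)

# A 90-degree azimuth sweep at constant elevation, e.g. for a turntable shot:
keyframes = [orbit_camera((0.0, 0.0, 0.0), 5.0, step * 30.0, 20.0)
             for step in range(4)]
```

The point of a helper like this is that the agent reasons in shot language (radius, azimuth, elevation) while the tool owns the coordinate math, keeping prompts short and the camera moves deterministic.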
Always experimenting. Always one step beyond.