This is an n8n community node. It lets you use VLM Run in your n8n workflows.
VLM Run is a unified gateway for Visual AI that lets you extract structured data from unstructured content such as images, videos, audio, and documents using Vision Language Models (VLMs).
Follow the installation guide in the n8n community nodes documentation, or install the package directly with npm:

```shell
npm i @vlm-run/n8n-nodes-vlmrun
```
- Analyze Audio: Analyze audio files for transcription, speaker identification, sentiment analysis, and more.
- Analyze Document: Extract structured data from documents such as resumes, invoices, presentations, and more.
- Analyze Image: Extract information or generate captions from images.
- Analyze Video: Extract insights or transcribe content from video files.
- Manage Files: List uploaded files or upload new files to VLM Run.
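Under the hood, each operation resolves to an HTTP call against the VLM Run API. As a rough sketch of what an "Analyze Document" call might assemble, here is a minimal Python example; note that the endpoint path (`/document/generate`) and the body field names (`file_id`, `domain`) are illustrative assumptions, not the documented VLM Run API:

```python
# Hypothetical sketch of the request an "Analyze Document" operation might
# build. Endpoint path and body fields are ASSUMPTIONS for illustration only.
import json

API_BASE = "https://api.vlm.run/v1"  # default base URL from the credentials setup


def build_document_request(file_id: str, domain: str, api_key: str) -> dict:
    """Assemble URL, headers, and JSON body for a document-analysis call."""
    return {
        "url": f"{API_BASE}/document/generate",  # assumed path
        "headers": {
            "Authorization": f"Bearer {api_key}",  # API key from the dashboard
            "Content-Type": "application/json",
        },
        "body": json.dumps({"file_id": file_id, "domain": domain}),
    }


req = build_document_request("file_123", "document.invoice", "sk-test")
print(req["url"])
```

The n8n node hides this plumbing: you supply the file data, domain, and credentials in the node UI, and it issues the request for you.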
Get an API Key:
- Sign up for a VLM Run account at vlm.run
- Get your API key from the dashboard
- Use the API key in the n8n VLM Run node credentials
Configure Credentials:
- Add your VLM Run API credentials in n8n
- Set the API base URL (default: https://api.vlm.run/v1)
Add VLM Run Node:
- Search for "VLM Run" in the n8n nodes panel
- Add it to your workflow
Configure Node:
- Select the operation (Analyze Audio, Analyze Document, Analyze Image, Analyze Video, or Manage Files)
- Provide the required input fields (such as file data, model, or domain)
- Configure any additional parameters as needed
Run the workflow to process your visual or audio data with VLM Run's AI models.
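For orientation, an exported n8n workflow might contain a node entry along these lines; the `type` string and parameter names below are assumptions based on common community-node conventions, not the node's actual definition:

```json
{
  "nodes": [
    {
      "name": "VLM Run",
      "type": "@vlm-run/n8n-nodes-vlmrun.vlmRun",
      "parameters": {
        "operation": "analyzeDocument",
        "domain": "document.invoice"
      },
      "credentials": {
        "vlmRunApi": "VLM Run account"
      }
    }
  ]
}
```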
Here are some screenshots of n8n-nodes-vlmrun in action:
- Overview of the VLM Run node in n8n
- Example workflow using the VLM Run node
- Extracting structured data from resumes, invoices, or utility bills
- Cataloging and captioning product images
- Transcribing and analyzing audio interviews or calls
- Extracting insights from video content
- Managing files in your VLM Run account from n8n workflows