inferpipe is currently in alpha and under active development. It is not ready for production use: features may change, and there may be bugs or incomplete functionality. Use it at your own risk, for development and testing purposes only.
inferpipe is a visual AI workflow builder that enables you to build, test, and deploy AI pipelines without complex development or infrastructure setup. Add intelligent automation to any application through an intuitive drag-and-drop interface.
Our goal is to democratize AI workflow creation, allowing teams to rapidly prototype and iterate on AI-powered features using structured outputs, multi-step processing, and integration with various AI models.
- Provide a visual, no-code interface for designing complex AI workflows
- Support structured JSON schema outputs for reliable data extraction
- Enable testing and debugging of AI pipelines in real-time
- Facilitate seamless deployment and integration via SDK and APIs
- Support a range of AI models with specialized capabilities like web search and audio processing
inferpipe supports a variety of OpenAI models, categorized as follows:
- GPT-4.1: Latest flagship reasoning model
- GPT-4.1 Mini: Lightweight reasoning with lower cost
- GPT-4o: Balanced quality for multimodal tasks
- GPT-4o Mini (default): Fast, cost-effective general model
- GPT-4 Turbo: High-quality GPT-4 generation
- GPT-3.5 Turbo: Legacy fast model
- GPT-4o Search Preview: Web-enabled GPT-4o preview
- GPT-4o Mini Search Preview: Web-enabled GPT-4o mini preview
- GPT-4o Transcribe: High quality speech-to-text
- GPT-4o Mini Transcribe: Fast speech-to-text
- GPT-4o Mini TTS: Text-to-speech generation
These models support various capabilities including text generation, web search integration, and audio processing.
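As a sketch, the catalog above could be expressed as a TypeScript union of model identifiers. The exact id strings inferpipe accepts are an assumption here, mirroring OpenAI's naming:

```typescript
// Hypothetical model identifiers mirroring OpenAI's naming; the ids
// inferpipe actually accepts may differ.
type InferPipeModel =
  | 'gpt-4.1'
  | 'gpt-4.1-mini'
  | 'gpt-4o'
  | 'gpt-4o-mini'
  | 'gpt-4-turbo'
  | 'gpt-3.5-turbo'
  | 'gpt-4o-search-preview'
  | 'gpt-4o-mini-search-preview'
  | 'gpt-4o-transcribe'
  | 'gpt-4o-mini-transcribe'
  | 'gpt-4o-mini-tts';

// GPT-4o Mini is the default model per the list above.
const DEFAULT_MODEL: InferPipeModel = 'gpt-4o-mini';
```

A union type like this lets the compiler catch typos in model names at build time rather than at request time.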
The core of inferpipe is the visual Workflow Builder, which lets you:
- Create AI workflows through a drag-and-drop interface
- Connect nodes for multi-step processing (e.g., input → extraction → search → output)
- Use pre-built nodes for common operations such as data extraction, content generation, and analysis
- Define structured outputs with a UI schema editor, including data types, nested objects, arrays, and validation
- Rely on automatic parse retries to ensure schema compliance
- Test prompts with built-in, real-time results
- Debug step by step with execution tracing
- A/B test prompts and models
- Monitor performance and cost metrics
- Inspect node configurations and intermediate results
- View step-by-step output for each workflow stage
- Visualize structured data for easy validation
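To illustrate the structured-output and parse-retry ideas above, here is a minimal, self-contained sketch; it is not inferpipe's implementation. The schema shape and the `callModel` stand-in are assumptions for illustration: a JSON Schema-like definition with nested objects and arrays, plus a retry loop that re-invokes the model until its response parses and contains the required fields.

```typescript
// A schema with nested objects and arrays, similar in spirit to what a
// UI schema editor produces (the shape here is illustrative, not inferpipe's).
const invoiceSchema = {
  type: 'object',
  required: ['vendor', 'total', 'lineItems'],
  properties: {
    vendor: { type: 'object', properties: { name: { type: 'string' } } },
    total: { type: 'number' },
    lineItems: {
      type: 'array',
      items: {
        type: 'object',
        properties: {
          description: { type: 'string' },
          amount: { type: 'number' },
        },
      },
    },
  },
};

// Naive compliance check: all top-level required keys must be present.
function matchesSchema(value: unknown, schema: typeof invoiceSchema): boolean {
  if (typeof value !== 'object' || value === null) return false;
  return schema.required.every((key) => key in (value as Record<string, unknown>));
}

// Parse-retry loop: call the model again until output parses and complies.
// `callModel` is a stand-in for a real model invocation.
async function extractWithRetry(
  callModel: () => Promise<string>,
  maxAttempts = 3,
): Promise<unknown> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const raw = await callModel();
    try {
      const parsed = JSON.parse(raw);
      if (matchesSchema(parsed, invoiceSchema)) return parsed;
    } catch {
      // Malformed JSON: fall through and retry.
    }
  }
  throw new Error(`No schema-compliant output after ${maxAttempts} attempts`);
}
```

For example, a stubbed `callModel` that returns malformed text once and valid JSON on the second call would succeed on attempt two, which is the behavior a parse-retry mechanism is meant to guarantee.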
- SaaS Applications: Customer support automation, content generation for marketing
- E-commerce: Product description generation, review analysis, personalized campaigns
- Data Processing: Extraction from documents, sentiment analysis, translation
- Access the builder at `/app` after signing in
- Start with templates for common workflows or build from scratch
- Test inputs and iterate on node configurations
For programmatic access, use the inferpipe SDK:
```bash
npm install @inferpipe/sdk
```

Basic setup:

```typescript
import { InferPipe } from '@inferpipe/sdk';

const inferpipe = new InferPipe({
  apiKey: process.env.INFERPIPE_API_KEY,
  workspaceId: process.env.INFERPIPE_WORKSPACE_ID,
});
```

Execute a workflow:
```typescript
const result = await inferpipe.execute({
  workflowId: 'your-workflow-id',
  input: {
    // your input data
  },
});
```

- Deploy workflows directly from the builder
- Integrate via API calls or webhooks for async processing
- Monitor executions and handle errors with built-in tools
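One common way to handle transient execution errors is retrying with exponential backoff. The sketch below is a generic wrapper, not a documented SDK feature: `runWorkflow` is a stand-in for a call such as `inferpipe.execute`, and the backoff policy is an assumption.

```typescript
// Generic retry with exponential backoff around a workflow execution.
// `runWorkflow` is a stand-in for inferpipe.execute; swap in the real call.
async function executeWithBackoff<T>(
  runWorkflow: () => Promise<T>,
  maxAttempts = 4,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await runWorkflow();
    } catch (err) {
      lastError = err;
      // Exponential backoff: 500 ms, 1 s, 2 s, ... before the next attempt.
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

Usage would look like `executeWithBackoff(() => inferpipe.execute({ workflowId, input }))`; in production you would typically also distinguish retryable errors (timeouts, rate limits) from permanent ones (invalid input) before retrying.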
inferpipe uses:
- Frontend: Next.js with React Flow for the visual builder
- Backend: Convex for real-time data and authentication
- Workflow Engine: Inngest for durable execution
- UI Components: Shared Shadcn/UI components
Contributions welcome! Please review our code of conduct and open issues for bugs or feature requests.
MIT License - see LICENSE file for details.
