
Add OSOP workflow example — portable chatflow format #6122

Closed

Archie0125 wants to merge 3 commits into FlowiseAI:main from Archie0125:add-osop-example


Conversation

@Archie0125

Summary

  • Adds examples/osop/ with a RAG chatflow represented in OSOP format
  • OSOP is a portable YAML format for describing AI workflows across platforms
  • Shows how Flowise chatflow components (document loader, text splitter, embedding, vector store, retriever, LLM, response) map to OSOP nodes and edges

Why

Flowise chatflows are powerful but stored in a platform-specific JSON format. An OSOP representation enables cross-tool portability (n8n, LangFlow, custom agents), clean YAML diffs in version control, and human-readable workflow documentation.

Details

  • Purely additive — no existing files are modified
  • Standalone — the example is self-contained and works without any OSOP tooling installed
  • Files added:
    • examples/osop/README.md — brief explanation of OSOP + Flowise
    • examples/osop/chatflow-example.osop.yaml — a RAG chatflow in OSOP format
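For context, a minimal OSOP-style representation of the RAG chatflow described above might look like the sketch below. The node ids, field names, and overall shape are illustrative assumptions based on this PR's description (document loader, text splitter, embedding, vector store, retriever, LLM, response, mapped to nodes and edges), not the actual contents of `chatflow-example.osop.yaml`:

```yaml
# Illustrative sketch only — node ids, fields, and structure are assumptions,
# not the actual contents of examples/osop/chatflow-example.osop.yaml.
name: rag-chatflow
nodes:
  - id: loader
    type: document-loader
  - id: splitter
    type: text-splitter
  - id: embedding
    type: embedding
    runtime:
      provider: openai
  - id: vector-store
    type: vector-store
    runtime:
      provider: pinecone
  - id: retriever
    type: retriever
  - id: llm
    type: llm
    runtime:
      provider: openai
      model: gpt-4o
  - id: response
    type: response
edges:
  - {from: loader, to: splitter}
  - {from: splitter, to: embedding}
  - {from: embedding, to: vector-store}
  - {from: vector-store, to: retriever}
  - {from: retriever, to: llm}
  - {from: llm, to: response}
```

The nodes-plus-edges shape is what makes the format diffable and portable: each platform-specific component becomes a typed node, and the wiring between components becomes explicit edges.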

Add an example showing how Flowise RAG chatflows can be represented
in OSOP (Open Standard for Orchestration Protocols), a portable YAML
format for AI workflows. Purely additive — no existing files modified.

@gemini-code-assist bot left a comment


Code Review

This pull request introduces an OSOP (Open Standard for Orchestration Protocols) workflow example for Flowise, including a README and a sample RAG chatflow YAML file. The review feedback identifies a structural error in the YAML configuration for the vector store provider and suggests updating the LLM model to a more current version for better cost-effectiveness.

Comment on lines +31 to +32:

      config:
        provider: pinecone

high

The provider key should be a direct child of runtime, not nested under config. This aligns with the OSOP specification and maintains consistency with how provider is defined for the embedding and llm nodes in this file.

      provider: pinecone
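To make the structural point concrete, here is a hedged before/after sketch of the vector-store node's runtime block. The surrounding field names are assumptions inferred from the snippets quoted in this review, not the actual file:

```yaml
# Before (flagged by the review): provider nested one level too deep.
runtime:
  config:
    provider: pinecone

# After (suggested): provider as a direct child of runtime,
# matching how the embedding and llm nodes declare their providers.
runtime:
  provider: pinecone
```

Keeping `provider` at the same depth across all node types means a consumer of the format can resolve providers with a single lookup path rather than special-casing the vector store.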

      description: Generate responses using retrieved context and conversation history.
      runtime:
        provider: openai
        model: gpt-4

medium

While gpt-4 is a valid model, consider updating this example to use a more recent and cost-effective model like gpt-4o. This would make the example more current and a better reference for users.

      model: gpt-4o

Addresses review feedback:
- vector-store: provider moved from runtime.config to runtime (consistency)
- llm: updated model from gpt-4 to gpt-4o (more current and cost-effective)
@Archie0125
Author

Good catches! Fixed both:

  • Updated vector store provider configuration structure
  • Changed LLM model to gpt-4o for cost-effectiveness

Thanks for the review!

@HenryHengZJ HenryHengZJ closed this Apr 3, 2026