A custom node collection for integrating various LLM (Large Language Model) providers with ComfyUI.
- True context-to-context node connections with a one-input / one-output design
- Generators run even if nothing is connected to them and they are the only node in the workflow
- Runs OpenAI's latest image model, GPT-Image-1, and includes various templates
- Text generation using various LLM providers (OpenAI, local models, etc.)
- Provider selection and configuration with dynamic model fetching
- API key management
- Seamless integration with ComfyUI workflows
The following Python packages are required:
- pyyaml
- python-dotenv
- requests
- openai
- Clone this repository into your ComfyUI custom_nodes directory:

  ```bash
  cd /path/to/ComfyUI/custom_nodes
  git clone https://github.com/yourusername/comfyui-llm-toolkit.git
  ```
- Install the required dependencies:

  ```bash
  cd comfyui-llm-toolkit
  pip install -r requirements.txt
  ```
- Restart ComfyUI
- API keys for various providers can be stored in a `.env` file in the root directory:

  ```
  OPENAI_API_KEY=your_openai_key
  # Add other provider keys as needed
  ```
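Since python-dotenv is among the dependencies, you can sanity-check that your key is picked up with a short snippet (a minimal sketch; the toolkit loads keys internally):

```python
import os
from dotenv import load_dotenv  # provided by the python-dotenv dependency

load_dotenv()  # reads .env from the current working directory
if os.getenv("OPENAI_API_KEY") is None:
    print("OPENAI_API_KEY not found; check your .env file")
```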
- Alternatively, you can provide API keys directly in the node interface.
Use this node to select the LLM provider and model. It outputs the provider configuration within the wildcard "context" output (see the sketch after the list below).
- When you change the provider, the model dropdown updates dynamically with available models
- IP and Port fields appear/disappear based on whether the provider needs them
- The output is a single "context" type that contains the provider configuration
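As a rough mental model, the data travelling through "context" can be pictured as a plain Python dictionary (key names here are illustrative assumptions, not the toolkit's actual schema):

```python
# Hypothetical shape of "context" after the provider selector runs;
# all key names are illustrative assumptions.
context = {
    "provider_config": {
        "provider": "openai",
        "model": "gpt-4o-mini",
        "api_key": "sk-...",  # from .env or the node interface
        "ip": None,           # only populated for providers that need it
        "port": None,
    },
}
```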
Basic text generation with the selected provider and model.
- Connect it to the LLM Provider Selector via their "context" ports
- The node automatically detects if provider config is present in the input
- You can override the model with the dropdown
- The "context" output contains both the original input data and the LLM response
The LLM Toolkit uses a single "context" input/output approach for maximum flexibility:
- Single Connection Point: Each node has just one wildcard input and one wildcard output named "context"
- Smart Data Handling:
  - Provider config is embedded within the "context" data structure
  - Each node intelligently extracts the data it needs from the "context" input
  - Nodes preserve all input data and add their own data to the "context" output
- Cascading Data Flow: As data flows through nodes, it accumulates in the "context" structure
For example, with nodes A → B → C:
- Node A creates a "context" with provider config
- Node B receives A's "context", extracts the provider config, and adds LLM response to the "context"
- Node C receives B's "context" which now contains both provider config and LLM response
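In dictionary terms, that accumulation might look like the following snapshots (key names again illustrative):

```python
# Hypothetical "context" snapshots for the A -> B -> C chain above.
ctx_a = {"provider_config": {"provider": "openai", "model": "gpt-4o-mini"}}
ctx_b = {**ctx_a, "llm_response": "..."}  # B keeps A's data and adds its own
# Node C receives both pieces in a single structure:
assert "provider_config" in ctx_b and "llm_response" in ctx_b
```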
This allows you to:
- Chain multiple LLM operations with a single connection
- Preserve and accumulate data throughout the workflow
- Easily integrate with other ComfyUI nodes
- OpenAI (Default: gpt-4o-mini)
- Ollama (local)
- And more... (Coming Soon)
If you encounter model list update issues:
- Make sure ComfyUI is running with the correct server configuration
- Check that JavaScript is enabled in your browser
- Verify that your API keys are correctly set in the .env file or provided in the node
If you encounter import errors:
- Make sure you've installed all dependencies: `pip install -r requirements.txt`
- Verify that you've placed the custom node in the correct directory
- Restart ComfyUI after installation
Run the included test script to verify your setup:
```bash
cd /path/to/ComfyUI/custom_nodes/comfyui-llm-toolkit
python test_js_integration.py
```