# MetaThinkingModels

A Python project that empowers LLMs with different meta-thinking models for solving real-life problems through a two-phase query handling system.
## Table of Contents

- Overview
- Features
- Getting Started
- Usage
- Deployment
- Project Structure
- MetaThinking Models
- Future Work
- License
## Overview

MetaThinkingModels is a framework that enhances the problem-solving capabilities of Large Language Models (LLMs) by integrating them with a curated collection of 140 thinking models. The system uses a two-phase query handling process to provide structured, insightful solutions to complex problems.
The project consists of three main components:

- **Core Engine:** A backend that parses thinking models, integrates with LLM APIs, and processes queries in two phases.
- **CLI Interface:** A feature-rich command-line tool for interactive and batch query processing.
- **Web Application:** A modern web interface with real-time updates for exploring models and solving problems.
The two-phase query handling works as follows:

1. **Model Selection (Phase 1):** When a user submits a query, the system first selects the most relevant thinking models from its library. This is done by sending a specially crafted prompt to the LLM with the user query and a summary of the available models.
2. **Solution Generation (Phase 2):** The selected models are then used to formulate a second prompt that guides the LLM to generate a comprehensive, structured solution. The LLM is instructed to use the thinking models as a framework for its response.
This two-phase approach ensures that the solutions are not just generic LLM responses, but are grounded in proven problem-solving methodologies, leading to more insightful and actionable answers.
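To make the flow concrete, here is a minimal sketch of the two phases. Every name in it (`call_llm`, `select_models`, `solve`) is hypothetical and illustrative, not the project's actual API:

```python
# Minimal sketch of the two-phase flow. All names here are illustrative;
# they are not the project's actual API.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an OpenAI-compatible endpoint."""
    raise NotImplementedError  # wire up a real client here

def select_models(query: str, model_summaries: str) -> str:
    """Phase 1: ask the LLM which thinking models fit the query."""
    prompt = (
        "From the thinking models summarized below, select the most "
        f"relevant ones for this query.\n\nModels:\n{model_summaries}\n\n"
        f"Query: {query}"
    )
    return call_llm(prompt)

def solve(query: str, model_summaries: str) -> str:
    """Phase 2: use the selected models as a framework for the answer."""
    selected = select_models(query, model_summaries)
    prompt = (
        "Answer the query below, using these thinking models as the "
        f"framework for your response.\n\nModels: {selected}\n\n"
        f"Query: {query}"
    )
    return call_llm(prompt)
```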
## Features

- **140 Thinking Models:** A comprehensive library of thinking models, from SWOT analysis to second-order thinking.
- **Two-Phase Query Processing:** Enhances LLM responses with structured problem-solving methodologies.
- **OpenAI-Compatible API:** Integrates with any OpenAI-compatible LLM API.
- **CLI & Web Interfaces:** Access the system through a command-line interface or a modern web application.
- **Real-time Updates:** Get live feedback during query processing via WebSockets.
- **Model Browser:** Explore, search, and filter thinking models through the web UI.
- **Result Export:** Save results to JSON for further analysis.
- **Easy Deployment:** Deploy the application with Docker and a simple launcher script.
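As an illustration of the "OpenAI-compatible" point above: any server that speaks the OpenAI chat-completions protocol can be targeted by setting a base URL. This sketch uses the official `openai` Python client; it is not the project's internal code, and the URL, key, and model name are placeholders:

```python
from openai import OpenAI  # pip install openai

# Placeholder endpoint and key; any OpenAI-compatible server works
# (the same values you would put in LLM_API_URL / LLM_API_KEY below).
client = OpenAI(
    base_url="https://your-llm-api-endpoint.com/v1",
    api_key="your-api-key",
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize SWOT analysis."}],
)
print(response.choices[0].message.content)
```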
## Getting Started

- Python 3.8+
- `pip` for package management
- An OpenAI-compatible LLM API endpoint
1. Clone the repository:

   ```bash
   git clone https://github.com/your-username/ThinkingModels.git
   cd ThinkingModels
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Configure your LLM API:

   Create a `.env` file in the project root and add your API credentials:

   ```env
   # Required
   LLM_API_URL=https://your-llm-api-endpoint.com

   # Optional
   LLM_API_KEY=your-api-key
   LLM_MODEL_NAME=gpt-3.5-turbo
   ```
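How the application consumes these values is not shown in this README; a common pattern (assumed here, not confirmed) is `python-dotenv` plus `os.environ`:

```python
import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads .env from the project root into the environment

LLM_API_URL = os.environ["LLM_API_URL"]                        # required
LLM_API_KEY = os.environ.get("LLM_API_KEY", "")                # optional
LLM_MODEL_NAME = os.environ.get("LLM_MODEL_NAME", "gpt-3.5-turbo")
```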
## Usage

You can use ThinkingModels through either the command-line interface or the web application.
The CLI provides a powerful way to interact with the system, with support for single queries, batch processing, and various output formats.
Start interactive mode:
```bash
python thinking_models.py interactive
```

Process a single query:

```bash
python thinking_models.py query "How can I improve my startup's marketing strategy?"
```

For more details, see the CLI Documentation.
The web application provides a user-friendly interface for exploring thinking models and processing queries in real-time.
Start the web server:
```bash
python web_server.py
```

Then open your browser to http://127.0.0.1:8000.
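The Features section notes that live feedback arrives over WebSockets. The actual endpoint path is not documented here, so the URL below is a guess for illustration only, using the `websockets` library:

```python
import asyncio
import websockets  # pip install websockets

async def watch_updates():
    # Hypothetical endpoint; check the web app for the real path.
    async with websockets.connect("ws://127.0.0.1:8000/ws") as ws:
        async for message in ws:
            print("update:", message)

asyncio.run(watch_updates())
```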
## Deployment

The easiest way to deploy the ThinkingModels application is with Docker.
1. Build the Docker image:

   ```bash
   docker build -t thinking-models .
   ```

2. Run the Docker container:

   ```bash
   docker run -d -p 8000:8000 \
     -e LLM_API_URL="https://your-llm-api-endpoint.com" \
     -e LLM_API_KEY="your-api-key" \
     --name thinking-models-app \
     thinking-models
   ```
This will start the web application on port 8000.
Alternatively, to run without Docker:

1. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

2. Set environment variables:

   ```bash
   export LLM_API_URL="https://your-llm-api-endpoint.com"
   export LLM_API_KEY="your-api-key"
   ```

3. Run the web server:

   ```bash
   python web_server.py
   ```
## Project Structure

```
ThinkingModels/
├── models/            # Thinking model definitions
├── src/
│   ├── core/          # Core application logic
│   ├── cli/           # Command-line interface
│   └── web/           # Web application
├── tests/             # Test suite
├── requirements.txt   # Dependencies
├── config.py          # Configuration management
└── README.md          # Project documentation
```
## MetaThinking Models

The project includes a library of 140 thinking models, currently divided broadly into two types: **explain** and **solve**. Explain models help the LLM explain a phenomenon, while solve models guide it in solving a problem. (A hypothetical sketch of a model definition follows the category list below.)
The meta-thinking models include:
- **Problem Solving:** SWOT Analysis, First Principles Thinking, 5 Whys
- **Decision Making:** Pareto Principle, Eisenhower Matrix, Cost-Benefit Analysis
- **Creativity:** Lateral Thinking, Brainstorming, SCAMPER
- **Systems Thinking:** Feedback Loops, Emergence, Systems Mapping
- And many more...
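The on-disk format of the files under `models/` is not specified in this README. Purely as a hypothetical illustration, a definition could carry a name, a type (`explain` or `solve`), a description, and the kind of metadata discussed under Future Work:

```python
# Hypothetical shape of a thinking-model definition; the real files under
# models/ may differ.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ThinkingModel:
    name: str
    type: str          # "explain" or "solve"
    description: str
    domain: str = ""   # optional metadata (see Future Work)
    keywords: List[str] = field(default_factory=list)

swot = ThinkingModel(
    name="SWOT Analysis",
    type="solve",
    description="Assess Strengths, Weaknesses, Opportunities, and Threats.",
    domain="business strategy",
    keywords=["strategy", "assessment", "planning"],
)
```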
## Future Work

- **More models:** The 140 thinking models represent only a small portion of humanity's "metacognitive" system. Many similar core concepts, frameworks, and cognitive models exist across industries and fields and should be incorporated into this database. We hope open-sourcing it will enable the community to contribute and continuously enrich it. For example, in software development each programming language has many accumulated gems (best practices); when a coding agent generates code, it can pull the gems for the chosen language from this database to guide it toward high-quality output.
- **More metadata for models:** The model selection process is currently quite crude: the LLM chooses directly from the full set of models. Adding metadata such as domain, problem type, and keywords would let the LLM first categorize a query and narrow the search scope before selecting a suitable model.
- **A new paradigm, maybe:** This database can become a component for various agents, and could even be packaged as a cloud-service API to support agents in different fields. If thinking traces of this kind reach sufficient scale, post-training could be used to "internalize" these thinking models into the LLM. The output of a reasoning model would then change from the current:

  ```
  <think>Thinking Trace ...</think>
  Final result is here
  ```

  to:

  ```
  <meta>Meta-thinking models to use ...</meta>
  <think>Thinking Trace ...</think>
  Final result is here
  ```

  This way, the LLM first determines the direction of thought (the meta-model), then thinks along that direction, and finally outputs the result to the user.
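If thinking traces of this type were collected for post-training, a single training record might look like the sketch below. This schema is entirely speculative, invented here for illustration:

```python
import json

# Entirely speculative schema for a post-training example that pairs a
# meta-thinking selection with the resulting reasoning trace.
record = {
    "query": "How can I improve my startup's marketing strategy?",
    "meta": ["SWOT Analysis", "Pareto Principle"],  # selected thinking models
    "think": "Strengths: ... Weaknesses: ...",      # reasoning trace
    "answer": "Focus on the channels that ...",     # final result
}
print(json.dumps(record, indent=2))
```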
## License

This project is licensed under the MIT License - see the LICENSE file for details.