
An Open-Source Assistants API and GPTs alternative. Dify.AI is an LLM application development platform. It integrates the concepts of Backend as a Service and LLMOps, covering the core tech stack required for building generative AI-native applications, including a built-in RAG engine.

English | 简体中文 | 日本語 | Español | Klingon | Français


Dify.AI Upcoming Meetup Event [👉 Click to Join the Event Here 👈]

  • US EST: 09:00 (9:00 AM)
  • CET: 15:00 (3:00 PM)
  • CST: 22:00 (10:00 PM)

Dify.AI Unveils AI Agent: Creating GPTs and Assistants with Various LLMs

Dify is an LLM application development platform that has helped build over 100,000 applications. It integrates BaaS and LLMOps, covering the essential tech stack for building generative AI-native applications, including a built-in RAG engine. Dify allows you to deploy your own version of the Assistants API and GPTs, based on any LLM.
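
Once you have created an application in a self-hosted Dify instance, it exposes an HTTP API that your own backend can call. Below is a minimal sketch assuming the upstream Dify application API: the endpoint path, headers, and request fields follow the public Dify API documentation and may differ between versions, and the base URL and API key are placeholders.

# Hypothetical example: call a chat application on a local Dify deployment.
# The Bearer token is the application's own API key from the Dify console,
# not your model provider's key.
curl -X POST 'http://localhost/v1/chat-messages' \
  -H 'Authorization: Bearer {YOUR_APP_API_KEY}' \
  -H 'Content-Type: application/json' \
  -d '{
    "inputs": {},
    "query": "What can you do?",
    "response_mode": "blocking",
    "user": "example-user"
  }'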

Using our Cloud Services

You can try out Dify.AI Cloud now. It provides all the capabilities of the self-deployed version, and includes 200 free requests to OpenAI GPT-3.5.

Dify vs. LangChain vs. Assistants API

Feature               Dify.AI        Assistants API   LangChain
Programming Approach  API-oriented   API-oriented     Python Code-oriented
Ecosystem Strategy    Open Source    Closed Source    Open Source
RAG Engine            Supported      Supported        Not Supported
Prompt IDE            Included       Included         None
Supported LLMs        Rich Variety   OpenAI-only      Rich Variety
Local Deployment      Supported      Not Supported    Not Applicable

Features

1. LLM Support: Integration with OpenAI's GPT family of models and open-source models such as the Llama 2 family. Dify supports mainstream commercial models as well as open-source models, whether locally deployed or accessed through MaaS providers.

2. Prompt IDE: Visually orchestrate LLM-based applications and services together with your team.

3. RAG Engine: Includes various RAG capabilities based on full-text indexing or vector database embeddings, allowing direct upload of PDFs, TXTs, and other text formats.

4. AI Agent: Built on Function Calling and ReAct, the Agent inference framework lets users customize their own tools in a what-you-see-is-what-you-get way. Dify provides more than a dozen built-in tool-calling capabilities, such as Google Search, DALL·E, Stable Diffusion, and WolframAlpha.

5. Continuous Operations: Monitor and analyze application logs and performance, continuously improving Prompts, datasets, or models using production data.

Before You Start

Star us on GitHub to be instantly notified of new releases!


Install the Community Edition

System Requirements

Before installing Dify, make sure your machine meets the following minimum system requirements:

  • CPU >= 2 cores
  • RAM >= 4 GB

Quick Start

The easiest way to start the Dify server is to use our docker-compose.yml file. Before running the installation commands, make sure Docker and Docker Compose are installed on your machine:

cd docker
docker compose up -d

After running, you can access the Dify dashboard in your browser at http://localhost/install and begin the initialization process.
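
To confirm that all of the containers started successfully, you can also check them from the command line. This assumes the Docker Compose v2 syntax used above; the api service name is taken from the stock compose file and may differ in your copy.

# Run these from the docker/ directory that contains docker-compose.yml.
docker compose ps
docker compose logs -f api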

Helm Chart

Big thanks to @BorisPolonsky for providing us with a Helm Chart version, which allows Dify to be deployed on Kubernetes. You can go to https://github.com/BorisPolonsky/dify-helm for deployment information.
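
As a rough sketch, a Helm-based install looks like the following. The repository URL, chart name, and release name are taken from the dify-helm project's README at the time of writing and may change, so treat that repository as the source of truth.

helm repo add dify https://borispolonsky.github.io/dify-helm
helm repo update
helm install my-release dify/dify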

Configuration

If you need to customize the configuration, please refer to the comments in our docker-compose.yml file and set the environment variables manually. After making the changes, run docker compose up -d again. You can find the full list of environment variables in our docs.
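
For example, after editing an environment value in docker-compose.yml, recreate the containers and verify that the new value reached the running service. The api service and SECRET_KEY variable below are illustrative names from the stock compose file; substitute whatever you changed.

# Recreate the containers so the edited configuration takes effect,
# then confirm the value inside the running api container.
cd docker
docker compose up -d
docker compose exec api env | grep SECRET_KEY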

Star History

Star History Chart

Contributing

For those who'd like to contribute code, see our Contribution Guide.

At the same time, please consider supporting Dify by sharing it on social media and at events and conferences.

Contributors

Translations

We are looking for contributors to help with translating Dify to languages other than Mandarin or English. If you are interested in helping, please see the i18n README for more information, and leave us a comment in the global-users channel of our Discord Community Server.

Community & Support

  • Canny. Best for: sharing feedback and checking out our feature roadmap.
  • GitHub Issues. Best for: bugs you encounter using Dify.AI, and feature proposals. See our Contribution Guide.
  • Email Support. Best for: questions you have about using Dify.AI.
  • Discord. Best for: sharing your applications and hanging out with the community.
  • Twitter. Best for: sharing your applications and hanging out with the community.
  • Business Contact. Best for: business inquiries about licensing Dify.AI for commercial use.

Direct Meetings

Help us make Dify better. Reach out directly to us.

  • Product design feedback, user experience discussions, feature planning and roadmaps.
  • Technical support, issues, or feature requests.

Security Disclosure

To protect your privacy, please avoid posting security issues on GitHub. Instead, send your questions to security@dify.ai and we will provide you with a more detailed answer.

License

This repository is available under the Dify Open Source License, which is essentially Apache 2.0 with a few additional restrictions.
