
FusionGPT

FusionGPT is a web application that splits a large text into smaller parts and sends each part, along with the user's request, to the OpenAI API. The application is built with Next.js, uses LangChain's chain-of-thought approach, and is deployed on Vercel.
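As a sketch of the splitting step, the input can be chunked so that each part stays within the token budget. The function name, chunk size, and characters-per-token heuristic below are illustrative assumptions, not FusionGPT's actual code:

// Illustrative chunking sketch: sizes and the ~4 chars/token heuristic are assumptions.
const MAX_TOKENS_PER_REQUEST = 4000; // the ~4K token limit per request
const APPROX_CHARS_PER_TOKEN = 4;    // rough heuristic for English text
const MAX_CHARS = MAX_TOKENS_PER_REQUEST * APPROX_CHARS_PER_TOKEN;

function splitText(text: string): string[] {
  const parts: string[] = [];
  // Split on sentence boundaries so each chunk stays coherent.
  const sentences = text.split(/(?<=[.!?])\s+/);
  let current = "";
  for (const sentence of sentences) {
    // Flush the current chunk when the next sentence would exceed the budget.
    if (current.length + sentence.length > MAX_CHARS && current.length > 0) {
      parts.push(current.trim());
      current = "";
    }
    current += sentence + " ";
  }
  if (current.trim().length > 0) parts.push(current.trim());
  return parts;
}

Splitting on sentence boundaries keeps each part readable on its own; a single sentence longer than the budget would still produce an oversized chunk, so a real implementation would need a fallback.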

Requirements

To use FusionGPT, you will need an OpenAI API key, which can be obtained at https://beta.openai.com/docs/api-reference/authentication.

Usage

1. Enter your text and request into the provided fields.
2. Click the "Split Text" button to break the text into smaller parts.
3. Click the "Send Requests" button to send each part of the text, along with the request, to the OpenAI API.

Your API key and request are stored in the browser's local storage, so there is no need to re-enter them when you revisit the page. The API key is never stored on the server.
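The flow behind the two buttons might look like this minimal client-side sketch; the /api route shape, JSON field names, and localStorage keys are assumptions:

// Minimal client-side sketch of the Usage flow above; names are assumed.
async function sendRequests(parts: string[], request: string): Promise<string> {
  // The API key and request live in the browser's local storage only.
  const apiKey = localStorage.getItem("apiKey") ?? "";
  localStorage.setItem("request", request);

  const responses: string[] = [];
  for (const part of parts) {
    // One API request per part, as in the sequence diagram below.
    const res = await fetch("/api", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ apiKey, request, text: part }),
    });
    const data = await res.json();
    responses.push(data.result); // each part's answer can be shown as it arrives
  }
  // The combined responses are presented to the user in their entirety.
  return responses.join("\n\n");
}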

Deployment

The FusionGPT application is deployed on Vercel at https://fusiongpt.vercel.app/.

Sequence Diagram

sequenceDiagram
    participant User
    participant Browser
    participant Server
    participant OpenAI
    User->>Browser: Enter API Key, request and text
    Browser->>Server: POST 1st part to /api
    Server->>OpenAI: POST /v1/engines/davinci/completions
    OpenAI->>Server: Response
    Server->>Browser: Response
    Browser->>User: Display response for part 1
    Browser->>Server: POST 2nd part to /api
    Server->>OpenAI: POST /v1/engines/davinci/completions
    OpenAI->>Server: Response
    Server->>Browser: Response
    Browser->>User: Display response for part 2
    Browser->>Server: POST Nth part to /api
    Server->>OpenAI: POST /v1/engines/davinci/completions
    OpenAI->>Server: Response
    Server->>Browser: Response
    Browser->>User: Display response for part N
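On the server side, a minimal Next.js API route matching the diagram could forward one part per request to the legacy engines completions endpoint shown above. This is a sketch under that assumption; the prompt format and max_tokens value are illustrative:

import type { NextApiRequest, NextApiResponse } from "next";

// Sketch of a pages-router API route; request/response field names are assumed.
export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const { apiKey, request, text } = req.body;
  // Forward this part to the (legacy) completions endpoint from the diagram.
  const openaiRes = await fetch("https://api.openai.com/v1/engines/davinci/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // key is supplied by the browser, not stored server-side
    },
    body: JSON.stringify({
      prompt: `${request}\n\n${text}`, // user's request prepended to this part of the text
      max_tokens: 256,                 // assumed budget for the response
    }),
  });
  const data = await openaiRes.json();
  res.status(200).json({ result: data.choices?.[0]?.text ?? "" });
}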

Getting Started

First, install the dependencies, then start the development server:

npm i
npm run dev

About

The OpenAI API has a token limit of 4K per request. This application addresses this limitation by dividing large text requests into smaller parts, each processed as a separate API request. The resulting responses are combined and presented to the user in their entirety. The approach enhances manageability and ensures adherence to the token limit.
