
Retrieval-Augmented Generation (RAG) Bootstrap Application UI

This is a LlamaIndex project bootstrapped with create-llama that acts as a full stack UI to accompany the Retrieval-Augmented Generation (RAG) Bootstrap Application, which can be found in its own repository at https://github.com/tyrell/llm-ollama-llamaindex-bootstrap

My blog post provides more context, motivation and thinking behind these projects.

UI Screenshot

The backend code of this application has been modified as follows:

  1. Loading the Vector Store Index previously created by the Retrieval-Augmented Generation (RAG) Bootstrap Application and using it to answer user queries submitted through the frontend UI.
    • Refer to backend/app/utils/index.py and the code comments to understand the modifications.
  2. Querying the index with streaming enabled (see the sketch after this list).
    • Refer to backend/app/api/routers/chat.py and the code comments to understand the modifications.
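For orientation, here is a minimal sketch of what those two modifications amount to in LlamaIndex terms. It is not the repository code: the storage directory and function names are assumptions, and the actual implementations live in the two files referenced above.

```python
# Minimal sketch (not the repository code). Assumes the index built by the
# RAG Bootstrap Application was persisted to ./storage, LlamaIndex's default.
from llama_index import StorageContext, load_index_from_storage

STORAGE_DIR = "./storage"  # assumed persist location


def get_index():
    # Load the previously persisted Vector Store Index from disk instead of
    # re-indexing the source documents on every request or server start.
    storage_context = StorageContext.from_defaults(persist_dir=STORAGE_DIR)
    return load_index_from_storage(storage_context)


def stream_answer(user_query: str):
    # streaming=True makes query() return a streaming response whose
    # response_gen yields tokens as the LLM produces them, so the frontend
    # can render the answer incrementally.
    query_engine = get_index().as_query_engine(streaming=True)
    streaming_response = query_engine.query(user_query)
    for token in streaming_response.response_gen:
        yield token
```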

Running the full stack application

First, start the backend as described in the backend README.

Then, run the frontend development server as described in the frontend README.

Open http://localhost:3000 with your browser to see the result.
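With the default create-llama layout, the steps typically look like the following; the backend and frontend READMEs remain the authoritative reference.

```shell
# Terminal 1: start the FastAPI backend (assumes the default
# create-llama Poetry setup; see the backend README for specifics).
cd backend
poetry install
poetry run python main.py

# Terminal 2: start the Next.js frontend development server.
cd frontend
npm install
npm run dev
```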

License

Apache 2.0

~ Tyrell Perera
