
# OutlookLLM

![OutlookLLM](<https://github.com/fgblanch/OutlookLLM/OutlookLLM Add-in/assets/OutlookLLM.png>)

Outlook Add-in that brings generative AI features (email composition, email thread summarization (WIP), inbox Q&A (WIP)) to Outlook securely and privately. It uses a local LLM served via NVIDIA TensorRT-LLM.

## Installation and getting started (Windows)

This system has two components:

1. An Outlook Add-in front end (React, Office Add-in framework)
2. An LLM inference backend (Python, Flask, TensorRT-LLM)

To get the system running:

1. Clone this repository:

       git clone https://github.com/fgblanch/OutlookLLM.git

2. Install the LLM dependencies:

   2.1 Install TensorRT-LLM for Windows using the instructions here.

   2.2 Download or build your TensorRT-LLM model of choice. The model needs to be instruct-tuned (Llama format).
   - I used Mistral 7B Instruct from HuggingFace, [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), converted to TensorRT-LLM using the instructions here.
   - Other models tested are [Llama2 7B HF Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) and Gemma 7B IT.

3. Generate and sideload the Outlook Add-in. (WIP)

4. Install the LLM backend dependencies:

       pip install -r requirements.txt

5. Configure the LLM backend HTTPS certificates. (WIP)

6. Run the LLM backend. (WIP)

7. Enjoy! ;)
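The backend steps above are still WIP, so here is a minimal sketch of what a Flask inference endpoint might look like. The `/compose` route, the `generate()` stub, and the certificate file names are assumptions for illustration, not the repository's actual API; a real backend would call a TensorRT-LLM runner instead of returning a canned reply.

```python
# minimal_backend.py -- hypothetical sketch of the Flask LLM backend.
# The /compose route and the stubbed generate() are assumptions; the real
# backend would wrap a TensorRT-LLM runner instead of echoing the prompt.
from flask import Flask, jsonify, request

app = Flask(__name__)

def generate(prompt: str) -> str:
    # Placeholder for a TensorRT-LLM inference call.
    return f"[draft email for prompt: {prompt}]"

@app.route("/compose", methods=["POST"])
def compose():
    data = request.get_json()
    prompt = data.get("prompt", "")
    return jsonify({"completion": generate(prompt)})

# Office Add-ins require the backend to be served over HTTPS, so when
# running this, point ssl_context at a cert/key pair (file names here are
# assumptions), e.g.:
#   app.run(host="127.0.0.1", port=5000, ssl_context=("cert.pem", "key.pem"))
```

Flask's built-in `ssl_context` is convenient for local development; for a throwaway self-signed dev certificate you can also pass `ssl_context="adhoc"` (requires the `cryptography` package).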

## Next steps and roadmap

- Build RAG on the backend for Inbox and Calendar Q&A
