kapria/llama-knowledge-query


Historical QA using Llama-2 7B Chat (HuggingFace Transformers)

This project demonstrates how to use the Llama-2-7B-Chat model to answer questions based on a provided text corpus.
It loads the model, processes a historical document, and generates an answer to the user's question using causal language modeling.


🚀 Features

  • Uses Llama-2-7B-Chat via HuggingFace Transformers
  • Accepts user questions dynamically
  • Provides context-aware answers using your custom text corpus
  • Runs automatically on GPU if available
  • Clean decoded output without prompt repetition
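The flow behind these features can be sketched as follows. This is a minimal illustration, not the repository's actual code: the model id `meta-llama/Llama-2-7b-chat-hf` (a gated model requiring HuggingFace access approval), the Llama-2 chat prompt template, and the function names `build_prompt` / `answer` are all assumptions for the sketch.

```python
MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"  # assumed model id; gated on the HF Hub

def build_prompt(context: str, question: str) -> str:
    """Wrap the corpus and the user's question in the Llama-2 chat template."""
    system = "Answer the question using only the provided historical context."
    return (
        f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n"
        f"Context:\n{context}\n\nQuestion: {question} [/INST]"
    )

def answer(question: str, context: str, max_new_tokens: int = 256) -> str:
    # Heavy dependencies are imported lazily so the prompt helper stays
    # dependency-free; a real script would import them at module level.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    device = "cuda" if torch.cuda.is_available() else "cpu"  # GPU if available
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.float16 if device == "cuda" else torch.float32,
    ).to(device)

    inputs = tokenizer(build_prompt(context, question), return_tensors="pt").to(device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, so the prompt is not repeated
    # in the returned answer.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()
```

A typical call would be `answer("Who signed the treaty?", open("corpus.txt", encoding="utf-8").read())`.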

📦 Requirements

Install dependencies:

pip install -r requirements.txt

About

A contextual question-answering system built on Llama-2-7B-Chat. It reads your text corpus and generates context-grounded answers to historical questions using transformer-based inference.
