This project demonstrates the power and simplicity of NVIDIA NIM (NVIDIA Inference Microservices), a suite of optimized, cloud-native inference microservices, by setting up and running a Retrieval-Augmented Generation (RAG) pipeline.
nim
nvidia
nvidia-docker
nvidia-gpu
rag
enterpriseservices
large-language-models
langchain
llmops
genai
llm-inference
retrieval-augmented-generation
Updated Mar 21, 2024 - Python
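The RAG pipeline the description refers to can be sketched in miniature. This is a hedged, self-contained illustration of the retrieve-augment-generate pattern, not the repo's actual code: the word-overlap retriever is a toy stand-in for real embeddings (a production pipeline would use an embedding model and a vector store, with generation served by a NIM-hosted LLM endpoint), and all function names here are illustrative.

```python
# Toy sketch of the RAG pattern: retrieve relevant context, then
# augment the prompt before sending it to a language model.
# In a real NIM-based pipeline, retrieval would use embeddings and
# generation would call a NIM inference endpoint (not shown here).

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank documents by simple word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Augment the user query with the retrieved context."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

corpus = [
    "NIM microservices expose optimized LLM inference over standard APIs.",
    "LangChain chains together retrievers, prompts, and language models.",
]
query = "What does NIM expose?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)
```

The augmented prompt would then be passed to the LLM; the key idea is that the model answers from retrieved context rather than from its parameters alone.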