
Conversation

ofermend (Contributor)

No description provided.

ofermend requested a review from Copilot on September 30, 2025 at 15:31

Copilot AI left a comment


Pull Request Overview

This PR adds example notebooks demonstrating how to integrate Vectara's HHEM (Hughes Hallucination Evaluation Model) and VHC (Vectara Hallucination Corrector) with two popular RAG frameworks, LangChain and LlamaIndex.

  • Provides complete integration examples for both LangChain and LlamaIndex workflows
  • Demonstrates hallucination detection using HHEM and correction using VHC (a detection sketch follows this list)
  • Shows both a conservative RAG implementation (grounded in retrieved context, unlikely to hallucinate) and an expansive one (deliberately prone to hallucination)
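
For a concrete feel for the detection step, here is a minimal sketch using the open HHEM checkpoint on Hugging Face (vectara/hallucination_evaluation_model). This is an illustration under stated assumptions, not the notebooks' actual code: the context, answer, and 0.5 threshold are invented for the example, and the predict helper follows the HHEM-2.1-open model card, whose signature may differ in other versions. The correction step runs through Vectara's VHC API and is omitted here.

```python
# Sketch: score a RAG answer against its retrieved context with HHEM.
# Assumes the vectara/hallucination_evaluation_model checkpoint (HHEM-2.1-open).
from transformers import AutoModelForSequenceClassification

hhem = AutoModelForSequenceClassification.from_pretrained(
    "vectara/hallucination_evaluation_model", trust_remote_code=True
)

context = "Leonardo da Vinci painted the Mona Lisa in the early 16th century."
answer = "The Mona Lisa was painted by Leonardo da Vinci around 1930."

# predict() takes (premise, hypothesis) pairs and returns factual-consistency
# scores in [0, 1]; scores near 0 mean the answer is unsupported by the context.
score = hhem.predict([(context, answer)])[0].item()
if score < 0.5:  # illustrative threshold, not a Vectara recommendation
    print(f"Likely hallucination (HHEM score {score:.2f}); route to VHC for correction.")
```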

Reviewed Changes

Copilot reviewed 2 out of 3 changed files in this pull request and generated 1 comment.

Files reviewed:

  • notebooks/hallucination_mitigation/vhc-llamaindex-integration.ipynb: complete LlamaIndex integration example with VectaraClient, dual RAG engines, and a pipeline demonstration
  • notebooks/hallucination_mitigation/vhc-langchain-integration.ipynb: complete LangChain integration example with VectaraClient, dual RAG chains, and a pipeline demonstration (a sketch of the dual-chain pattern follows this list)
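
As a rough illustration of the dual-chain pattern described above, here is a minimal LangChain sketch. The prompts, model choice, and chain names are assumptions made for this example, not the notebooks' actual code, and the notebooks' VectaraClient wiring is omitted.

```python
# Sketch: a grounded ("conservative") chain and a free-wheeling ("expansive")
# chain over the same retrieved context. Prompts and names are illustrative.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def make_chain(system_prompt: str):
    prompt = ChatPromptTemplate.from_messages([
        ("system", system_prompt),
        ("human", "Context:\n{context}\n\nQuestion: {question}"),
    ])
    return prompt | llm | StrOutputParser()

# Conservative: answers strictly from the retrieved context.
conservative = make_chain(
    "Answer only from the provided context. "
    "If the context does not contain the answer, say you don't know."
)
# Expansive: invited to go beyond the context, hence prone to hallucination.
expansive = make_chain(
    "Answer in detail, adding any background you know beyond the context."
)

inputs = {"context": "The Mona Lisa hangs in the Louvre.",
          "question": "Who painted the Mona Lisa, and when?"}
print(conservative.invoke(inputs))  # should defer on details missing from context
print(expansive.invoke(inputs))     # may assert unsupported details
```

Answers from the expansive chain are exactly the ones worth screening with HHEM and, when flagged, passing to VHC.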


"## Summary\n",
"\n",
"This notebook demonstrated the integration of Vectara's HHEM and VHC with standard LangChain workflows.\n",
"We've seen that when a LangChain RAG pipeline hallcuinates, HHEM identifies the hallucination and VHC can correct it.\n",

Copilot AI Sep 30, 2025


There's a typo in 'hallcuinates' which should be 'hallucinates'.

Suggested change
"We've seen that when a LangChain RAG pipeline hallcuinates, HHEM identifies the hallucination and VHC can correct it.\n",
"We've seen that when a LangChain RAG pipeline hallucinates, HHEM identifies the hallucination and VHC can correct it.\n",


ofermend requested a review from Copilot on October 9, 2025 at 17:09
ofermend merged commit 3b4297d into main on Oct 9, 2025

Copilot AI left a comment


Pull Request Overview

Copilot reviewed 2 out of 3 changed files in this pull request and generated no new comments.

Comments suppressed due to low confidence (3)

notebooks/hallucination_mitigation/vhc-llamaindex-integration.ipynb:1
  • Corrected spelling of 'Devinci' to 'da Vinci'.

notebooks/hallucination_mitigation/vhc-langchain-integration.ipynb:1
  • Corrected spelling of 'Devinci' to 'da Vinci'.

notebooks/hallucination_mitigation/vhc-langchain-integration.ipynb:1
  • Corrected spelling of 'Devinci' to 'da Vinci'.

