
Integrate Language Model for chat functionality#2

Merged
CodeRifle merged 1 commit into main from feat/llm-chat-integration on Jun 22, 2025

Conversation

@CodeRifle
Owner

  • Modified script.js to use LanguageModel.create() and session.prompt() for chat responses.
  • Removed hardcoded responses.
  • Added a mock LanguageModel object in index.html for testing purposes, to be replaced with the actual SDK.
  • Implemented basic error handling for LLM session initialization and prompting.

@CodeRifle merged commit 8b07647 into main on Jun 22, 2025
Contributor

@gemini-code-assist (Bot) left a comment


Summary of Changes

Hello @CodeRifle, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly upgrades the chat functionality by transitioning from static, hardcoded responses to dynamic, AI-generated replies using a Language Model. It sets up the necessary asynchronous communication with the LLM, includes a convenient in-browser mock for development, and incorporates error handling to ensure a more robust user experience.

Highlights

  • Language Model Integration: The chat functionality now uses an external Language Model (LLM) via LanguageModel.create() and session.prompt() for generating responses, replacing previous static replies.
  • Hardcoded Response Removal: The responses object containing predefined answers has been removed from script.js, making the chat dynamic.
  • Mock LLM for Development: A mock LanguageModel object has been added directly into index.html. This allows for local development and testing of the LLM integration without requiring the actual SDK to be present.
  • Robust Error Handling: Basic error handling has been implemented for both the initialization of the LLM session and during the prompting process, providing user feedback in case of issues.
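The highlights above can be sketched as a single flow (a minimal illustration: only `LanguageModel.create()` and `session.prompt()` come from this PR; `demoChat` and the error strings are hypothetical stand-ins for the real `script.js` logic):

```javascript
// Minimal sketch of the new chat flow. The LanguageModel object is passed
// in so the function can be exercised against the mock or the real SDK.
async function demoChat(LanguageModel, message) {
  let session;
  try {
    // Session initialization can fail (e.g. SDK missing), so it is wrapped.
    session = await LanguageModel.create();
  } catch (error) {
    return "Error: Could not initialize the chat assistant.";
  }
  try {
    // Each user message is forwarded to the model session.
    return await session.prompt(message);
  } catch (error) {
    return "Sorry, I encountered an error trying to respond.";
  }
}
```

Injecting the model object this way also makes the error paths easy to test with a fake `create()` that throws.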
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

  • Code Review — /gemini review: Performs a code review for the current pull request in its current state.
  • Pull Request Summary — /gemini summary: Provides a summary of the current pull request in its current state.
  • Comment — @gemini-code-assist: Responds in comments when explicitly tagged, both in issue comments and review comments.
  • Help — /gemini help: Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

Contributor

@gemini-code-assist (Bot) left a comment


Code Review

The pull request integrates a language model for chat functionality, replacing hardcoded responses. A mock object is used for local development. The review suggests separating the mock implementation, removing dead code, refining error handling, and reordering operations to improve code quality and user experience.

Comment thread index.html
Comment on lines +120 to +142
<script type="module">
    // This is a placeholder. The actual LanguageModel library (e.g., from a CDN)
    // would make `LanguageModel` available globally or as a module.
    // For demonstration, we'll mock a simple version if it's not already defined.
    if (typeof window.LanguageModel === 'undefined') {
        console.log("Mocking LanguageModel for development purposes.");
        window.LanguageModel = {
            create: async () => {
                console.log("Mock LanguageModel.create() called");
                return {
                    prompt: async (message) => {
                        console.log(`Mock session.prompt() called with: "${message}"`);
                        await new Promise(resolve => setTimeout(resolve, 1000)); // Simulate network delay
                        if (message.toLowerCase().includes("error test")) {
                            throw new Error("Simulated LLM error.");
                        }
                        return `Mock response to: "${message}"`;
                    }
                };
            }
        };
    }
</script>
Contributor


medium

Consider moving the mock LanguageModel implementation to a separate file (e.g., mock-llm.js) and including it only during development. This prevents shipping development-only code to production.
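One way to realize this suggestion (a sketch only; the `mock-llm.js` filename comes from the comment above, and the global-install pattern mirrors the inline mock currently in index.html):

```javascript
// mock-llm.js — development-only stub, loaded via <script src="mock-llm.js">
// in dev builds and simply omitted from the production page.
// Installs a fake LanguageModel on the global object when the real SDK is absent.
if (typeof globalThis.LanguageModel === "undefined") {
  console.log("Mocking LanguageModel for development purposes.");
  globalThis.LanguageModel = {
    create: async () => ({
      // Same shape as the real session: a prompt() method returning text.
      prompt: async (message) => `Mock response to: "${message}"`,
    }),
  };
}
```

With this in place, the inline `<script type="module">` block in index.html can be deleted entirely, and whether the mock ships is controlled by whether the dev-only script tag is included.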

Comment thread script.js
Comment on lines +1 to +19
// const responses = {
// 'who is rishabh gupta?': `Rishabh Gupta is an accomplished AI Engineer with expertise in developing and implementing cutting-edge artificial intelligence solutions. He has a strong background in machine learning, deep learning, and natural language processing, with a proven track record of delivering innovative AI-powered applications.`,
//
// 'what are his skills?': `Rishabh's technical skills include:
// • Deep Learning & Machine Learning (PyTorch, TensorFlow)
// • Natural Language Processing
// • Computer Vision
// • Large Language Models (LLMs)
// • Python, JavaScript, and various AI/ML frameworks
// • Cloud platforms (AWS, GCP)`,
//
// 'show me his projects': `Here are some of Rishabh's notable projects:
// 1. Developed an advanced NLP system for automated customer support
// 2. Created a computer vision solution for real-time object detection
// 3. Implemented a recommendation engine using deep learning
// 4. Built scalable ML pipelines for production environments`,
//
// 'default': `I'm not sure about that specific query, but I'd be happy to tell you about Rishabh's experience, skills, or projects. Feel free to ask about those topics!`
// };
Contributor


medium

Remove the commented-out responses object to improve code clarity. Version control can be used to retrieve old code if needed.
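To illustrate the "version control can retrieve old code" point: git's pickaxe search finds every commit that added or removed a given string. (The throwaway repo below exists only so the sketch is self-contained; in the real repository you would run just the `git log -S` and `git show` lines.)

```shell
# Throwaway repo to demonstrate recovering deleted code from history.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email dev@example.com && git config user.name dev
echo 'const responses = { /* ... */ };' > script.js
git add script.js && git commit -qm "add responses"
echo '// responses removed' > script.js
git add script.js && git commit -qm "remove responses"

# Pickaxe search: which commits added or removed this string?
git log --oneline -S "const responses" -- script.js

# View the file as it was just before the removal:
git show HEAD^:script.js
```

So deleting the commented-out block loses nothing: the full `responses` object remains one `git show` away.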

Comment thread script.js
Comment on lines +29 to +45
async function initializeSession() {
    try {
        // Assuming LanguageModel is available globally e.g. via a <script> tag
        if (window.LanguageModel && typeof window.LanguageModel.create === 'function') {
            modelSession = await window.LanguageModel.create();
            console.log("LanguageModel session initialized.");
        } else {
            console.error("LanguageModel library not found or 'create' method is missing.");
            addMessage("Error: Chat functionality is currently unavailable. LanguageModel library not found.", false);
            typingIndicator.style.display = 'none';
        }
    } catch (error) {
        console.error("Error initializing LanguageModel session:", error);
        addMessage("Error: Could not initialize the chat assistant. Please try again later.", false);
        typingIndicator.style.display = 'none';
    }
}
Contributor


medium

Simplify error handling in initializeSession by throwing an error when LanguageModel is not found. This consolidates error handling and ensures a consistent user experience.

async function initializeSession() {
    try {
        if (!window.LanguageModel || typeof window.LanguageModel.create !== 'function') {
            throw new Error("LanguageModel library not found or 'create' method is missing.");
        }
        modelSession = await window.LanguageModel.create();
        console.log("LanguageModel session initialized.");
    } catch (error) {
        console.error("Error initializing LanguageModel session:", error);
        addMessage("Error: Could not initialize the chat assistant. Please try again later.", false);
        typingIndicator.style.display = 'none';
    }
}

Comment thread script.js
Comment on lines +69 to +74
// async function getResponse(message) {
// const normalizedMessage = message.toLowerCase().trim();
// await simulateTyping();
// typingIndicator.style.display = 'none';
// return responses[normalizedMessage] || responses['default'];
// }
Contributor


medium

Remove the commented-out getResponse function as it's no longer used.

Comment thread script.js
Comment on lines 76 to 99
async function sendMessage() {
    const message = userInput.value.trim();
    if (!message) return;

    addMessage(message, true);
    userInput.value = '';
    simulateTyping(); // Show typing indicator

    const response = await getResponse(message);
    addMessage(response);
    if (!modelSession) {
        addMessage("Chat assistant is not available. Session not initialized.", false);
        typingIndicator.style.display = 'none';
        return;
    }

    try {
        const result = await modelSession.prompt(message);
        addMessage(result);
    } catch (error) {
        console.error("Error getting response from LanguageModel:", error);
        addMessage("Sorry, I encountered an error trying to respond. Please try again.", false);
    } finally {
        typingIndicator.style.display = 'none'; // Hide typing indicator
    }
}
Contributor


medium

Check for modelSession availability before showing the typing indicator in sendMessage to prevent UI flicker if the session hasn't initialized.

async function sendMessage() {
    const message = userInput.value.trim();
    if (!message) return;

    addMessage(message, true);
    userInput.value = '';

    if (!modelSession) {
        addMessage("Chat assistant is currently unavailable. Please try again later.", false);
        typingIndicator.style.display = 'none';
        return;
    }

    simulateTyping();

    try {
        const result = await modelSession.prompt(message);
        addMessage(result);
    } catch (error) {
        console.error("Error getting response from LanguageModel:", error);
        addMessage("Sorry, I encountered an error trying to respond. Please try again.", false);
    } finally {
        typingIndicator.style.display = 'none';
    }
}
