
fix: train edit nlu sample #905


Open: wants to merge 1 commit into main
Conversation


@abdou6666 abdou6666 commented Apr 9, 2025

Motivation

This PR fixes editing an NLU sample by disabling the validate button when the required fields are not filled in.

fixes #786

Type of change:


  • Bug fix (non-breaking change which fixes an issue)

Checklist:

  • I have performed a self-review of my own code

Summary by CodeRabbit

  • New Features
    • Added visual feedback to indicate when sample creation or update operations are in progress.
    • The validate button is now disabled while required fields are empty or a mutation is loading, improving form usability.

@abdou6666 abdou6666 self-assigned this Apr 9, 2025

coderabbitai bot commented Apr 9, 2025

Walkthrough

The changes introduce a loading state variable for mutation operations in the NLU sample editing and creation flows. The isMutationLoading prop is added and propagated through relevant components, allowing the UI to reflect when a mutation is in progress. The logic for enabling or disabling the "validate" button is consolidated to account for both field completeness and mutation loading state. No exported or public entity declarations are altered except for the addition of the new prop.
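The prop flow described in the walkthrough can be sketched as follows. This is a minimal, framework-free sketch; the real code uses React components and mutation hooks, so the signatures below are assumptions, not the actual API:

```typescript
// Minimal sketch of the isMutationLoading flow described above.
// Names mirror the walkthrough; the signatures are assumptions, not the real API.
interface NlpDatasetSampleProps {
  isMutationLoading: boolean; // true while a create/update mutation is in flight
}

// Parent (e.g. NlpSampleForm): derives the prop from the mutation hook's loading flag.
function buildSampleProps(isUpdatingSample: boolean): NlpDatasetSampleProps {
  return { isMutationLoading: isUpdatingSample };
}

// Child (NlpDatasetSample): the validate button is disabled while the mutation runs.
function isValidateDisabled(props: NlpDatasetSampleProps): boolean {
  return props.isMutationLoading;
}
```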

Changes

Files/Paths and change summary:

  • frontend/src/components/nlp/components/NlpSampleForm.tsx: Added isUpdatingSample to track mutation loading; passed as isMutationLoading prop to NlpDatasetSample.
  • frontend/src/components/nlp/components/NlpTrainForm.tsx: Added isMutationLoading prop to NlpDatasetSampleProps; updated validate button logic to use consolidated state.
  • frontend/src/components/nlp/index.tsx: Added isLoading from useCreate hook; passed as isMutationLoading prop to NlpDatasetSample.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant NlpSampleForm
    participant useUpdate
    participant NlpDatasetSample

    User->>NlpSampleForm: Edit and submit sample
    NlpSampleForm->>useUpdate: Trigger updateSample mutation
    useUpdate-->>NlpSampleForm: Return isUpdatingSample (loading state)
    NlpSampleForm->>NlpDatasetSample: Pass isMutationLoading prop
    NlpDatasetSample->>User: Reflect loading state in UI (e.g., disable button)

Assessment against linked issues

  • Fix error on editing NLU sample and clicking validate (#786)

Suggested labels

needs-review

Suggested reviewers

  • marrouchi
  • MohamedAliBouhaouala

Poem

A sample to edit, a button to press,
Now loading states save us from UI distress.
No more errors when you validate—
The bunny’s code ensures you’ll celebrate!
With every hop and every fix,
The NLU page learns new tricks.
🐇✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d42ab6c and 9f0f5b0.

📒 Files selected for processing (3)
  • frontend/src/components/nlp/components/NlpSampleForm.tsx (2 hunks)
  • frontend/src/components/nlp/components/NlpTrainForm.tsx (4 hunks)
  • frontend/src/components/nlp/index.tsx (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • frontend/src/components/nlp/components/NlpSampleForm.tsx
  • frontend/src/components/nlp/index.tsx
  • frontend/src/components/nlp/components/NlpTrainForm.tsx
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: API-Tests
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: Frontend Tests


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🔭 Outside diff range comments (1)
api/src/nlp/services/nlp-sample.service.spec.ts (1)

56-363: 🛠️ Refactor suggestion

Missing test cases for new functionality

While the necessary dependencies for the new automate_inference functionality have been added, there are no test cases that verify this functionality. This could lead to untested code.

Add test cases for the new functionality that uses the automate_inference setting. I can help you generate appropriate test cases if needed.

🧹 Nitpick comments (3)
api/src/nlp/services/nlp-sample.service.ts (3)

282-282: Add null check for settings.chatbot_settings.

There's a potential issue if the settings object doesn't have the expected structure.

-    if (!settings.chatbot_settings.automate_inference) {
+    if (!settings.chatbot_settings?.automate_inference) {

290-290: Fix typo in log message.

-      this.logger.debug(`${helper.getName()} infered these entites`, entities);
+      this.logger.debug(`${helper.getName()} inferred these entities`, entities);

305-306: Add success log after sample update.

Adding a log message after successfully updating a sample would improve traceability and debugging.

      await this.repository.updateOne(foundSample.id, {
        type: 'train',
      });
+      this.logger.debug(`Sample ${foundSample.id} updated from inbox to train`);
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 208ddd4 and d7965dd.

📒 Files selected for processing (8)
  • api/src/nlp/services/nlp-sample.service.spec.ts (3 hunks)
  • api/src/nlp/services/nlp-sample.service.ts (3 hunks)
  • api/src/setting/seeds/setting.seed-model.ts (1 hunks)
  • frontend/public/locales/en/chatbot_settings.json (1 hunks)
  • frontend/public/locales/fr/chatbot_settings.json (1 hunks)
  • frontend/src/components/nlp/components/NlpSampleForm.tsx (2 hunks)
  • frontend/src/components/nlp/components/NlpTrainForm.tsx (4 hunks)
  • frontend/src/components/nlp/index.tsx (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: Frontend Tests
  • GitHub Check: API-Tests
🔇 Additional comments (15)
frontend/src/components/nlp/components/NlpSampleForm.tsx (2)

32-32: Good enhancement to track loading state

The addition of isLoading: isUpdatingSample properly captures the mutation loading state from the useUpdate hook, which is essential for providing user feedback during async operations.


66-70: Correctly passing loading state to child component

The loading state is now properly passed to the NlpDatasetSample component, enabling it to reflect the mutation state appropriately in the UI.

frontend/src/components/nlp/index.tsx (2)

65-65: Good enhancement to track creation loading state

The addition of isLoading destructured from the useCreate hook properly captures the mutation loading state, consistent with the approach in NlpSampleForm.


94-97: Correctly passing loading state to child component

The loading state from the create mutation is now properly passed to the NlpDatasetSample component, ensuring consistent UX during data submission.

frontend/src/components/nlp/components/NlpTrainForm.tsx (5)

52-52: Good enhancement to NlpDatasetSampleProps type

Adding the isMutationLoading prop to the component's props type ensures proper type checking and documents the new requirement.


58-58: Correctly destructuring the new prop

The isMutationLoading prop is properly destructured from the component props.


101-101: Added monitoring of language field

Good addition of the language field to the watched form values, which is now used in validation logic.


171-180: Improved validation logic with clear variable names

The code now uses descriptive variable names to check for empty values in various form fields, which enhances readability and maintainability. The shouldDisableValidateButton aggregates all validation checks including the mutation loading state.
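A hedged reconstruction of that consolidated check; the field names (text, language) are assumptions based on the review comments, not copied from NlpTrainForm.tsx:

```typescript
// Hypothetical reconstruction of shouldDisableValidateButton; the field
// names are assumptions, not taken verbatim from NlpTrainForm.tsx.
interface WatchedValues {
  text: string;
  language: string;
}

function shouldDisableValidateButton(
  values: WatchedValues,
  isMutationLoading: boolean,
): boolean {
  const isTextEmpty = values.text.trim() === "";
  const isLanguageEmpty = values.language === "";
  // Disable when any required field is empty or a mutation is still loading.
  return isTextEmpty || isLanguageEmpty || isMutationLoading;
}
```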


458-458: Simplified button disabled state logic

Using the comprehensive shouldDisableValidateButton variable instead of inline checks makes the code more readable and maintainable.

api/src/setting/seeds/setting.seed-model.ts (1)

88-94: LGTM! New setting properly structured

The addition of the automate_inference setting is well-structured and follows the existing pattern. It's correctly categorized under 'chatbot_settings' with an appropriate type and default value.
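For illustration, such a seed entry might look like the object below; the field names are assumptions and do not reproduce setting.seed-model.ts verbatim:

```typescript
// Hedged sketch of the new seed entry; every field name here is an
// assumption for illustration, not the actual seed-model schema.
const automateInferenceSetting = {
  group: "chatbot_settings",
  label: "automate_inference",
  type: "checkbox",
  value: false, // assumed default: automated inference disabled
};
```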

api/src/nlp/services/nlp-sample.service.spec.ts (3)

13-20: New imports added correctly

The imports for HelperService, SettingRepository, SettingModel, SettingSeeder, and SettingService have been properly added, which are necessary for the enhanced functionality.


77-78: SettingModel properly added to Mongoose feature imports

The SettingModel has been correctly added to the Mongoose module feature imports.


92-95: Added required services to test providers

The necessary services (SettingService, SettingRepository, SettingSeeder, HelperService) have been properly added to the providers array.

api/src/nlp/services/nlp-sample.service.ts (2)

18-19: New imports are clearly structured and well-organized.

The additions of HelperService, HelperType, and SettingService are appropriate for the new functionality being introduced.

Also applies to: 22-22


52-53: Constructor dependencies are properly injected.

The addition of helperService and settingService as private readonly dependencies follows the established pattern in this class and maintains consistency with dependency injection principles.

Comment on lines 11 to 12
"default_storage_helper": "Utilitaire de stockage par défaut",
"automate_inference": "Inférence Automatisée"

🛠️ Refactor suggestion

Missing help text for new setting

Unlike other settings in this file, there is no corresponding help text added in the "help" section for the new "automate_inference" setting. Users may not understand the purpose of this setting.

Add help text by including the following in the "help" section:

    "default_llm_helper": "Utilitaire responsable de l'intelligence artificielle générative avancée pour effectuer des tâches telles que la génération de texte, la complétion de chat et les réponses à des requêtes complexes.",
    "default_storage_helper": "Utilitaire de stockage définit l'emplacement où stocker les fichiers joints. Par défaut, le stockage local les conserve localement, mais vous pouvez choisir d'utiliser Minio ou toute autre solution de stockage.",
+   "automate_inference": "L'inférence automatisée permet au système de traiter et de comprendre automatiquement les entrées des utilisateurs sans intervention manuelle, améliorant ainsi le temps de réponse et la précision."
  }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"default_storage_helper": "Utilitaire de stockage par défaut",
"automate_inference": "Inférence Automatisée"
{
"help": {
"default_llm_helper": "Utilitaire responsable de l'intelligence artificielle générative avancée pour effectuer des tâches telles que la génération de texte, la complétion de chat et les réponses à des requêtes complexes.",
"default_storage_helper": "Utilitaire de stockage définit l'emplacement où stocker les fichiers joints. Par défaut, le stockage local les conserve localement, mais vous pouvez choisir d'utiliser Minio ou toute autre solution de stockage.",
"automate_inference": "L'inférence automatisée permet au système de traiter et de comprendre automatiquement les entrées des utilisateurs sans intervention manuelle, améliorant ainsi le temps de réponse et la précision."
}
}

@abdou6666 abdou6666 mentioned this pull request Apr 9, 2025
2 tasks
@abdou6666 abdou6666 force-pushed the fix/train-edit-nlu-sample-2 branch from 142bd28 to a8002e5 Compare April 14, 2025 09:48
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
api/src/nlp/services/nlp-sample.service.ts (2)

313-341: ⚠️ Potential issue

Add error handling to the handleInference method.

The method needs error handling around external service calls and database operations, similar to other methods in this class (like handleNewMessage). Without proper error handling, issues could propagate and cause unexpected behavior.

Apply this diff to implement error handling:

  @OnEvent('hook:message:postCreate')
  async handleInference(doc: THydratedDocument<Message>) {
-    const settings = await this.settingService.getSettings();
-    if (!settings.chatbot_settings.automate_inference) {
-      this.logger.log('Automated inference is disabled');
-      return;
-    }
-    this.logger.log('Automated inference running');
-    if ('text' in doc.message) {
-      const helper = await this.helperService.getDefaultHelper(HelperType.NLU);
-      const { entities = [] } = await helper.predict(doc.message.text);
-      this.logger.debug(`${helper.getName()} infered these entites`, entities);
-
-      const foundSample = await this.repository.findOne({
-        text: doc.message.text,
-        type: 'inbox',
-      });
-      if (!foundSample) {
-        return;
-      }
-      await this.nlpSampleEntityService.storeSampleEntities(
-        foundSample,
-        entities,
-      );
-      await this.repository.updateOne(foundSample.id, {
-        type: 'train',
-      });
-    }
+    try {
+      const settings = await this.settingService.getSettings();
+      if (!settings.chatbot_settings?.automate_inference) {
+        this.logger.log('Automated inference is disabled');
+        return;
+      }
+      this.logger.log('Automated inference running');
+      if ('text' in doc.message) {
+        const helper = await this.helperService.getDefaultHelper(HelperType.NLU);
+        const { entities = [] } = await helper.predict(doc.message.text);
+        this.logger.debug(`${helper.getName()} infered these entites`, entities);
+
+        const foundSample = await this.repository.findOne({
+          text: doc.message.text,
+          type: 'inbox',
+        });
+        if (!foundSample) {
+          return;
+        }
+        await this.nlpSampleEntityService.storeSampleEntities(
+          foundSample,
+          entities,
+        );
+        await this.repository.updateOne(foundSample.id, {
+          type: 'train',
+        });
+        this.logger.debug(`Sample ${foundSample.id} updated from inbox to train`);
+      }
+    } catch (error) {
+      this.logger.error('Error during automated inference', error);
+    }
  }

316-317: ⚠️ Potential issue

Add null check for chatbot_settings.

Add a null/undefined check for the settings.chatbot_settings object to prevent potential errors if the settings structure changes.

-    if (!settings.chatbot_settings.automate_inference) {
+    if (!settings.chatbot_settings?.automate_inference) {
🧹 Nitpick comments (3)
api/src/nlp/services/nlp-sample.service.ts (3)

313-314: Add documentation to explain the handleInference method.

Following the pattern in the rest of the codebase, consider adding JSDoc comments to explain the purpose and functionality of this method.

+  /**
+   * Handles automated inference for new messages
+   * 
+   * When a new message is created and the automate_inference setting is enabled,
+   * this method automatically predicts entities for the message text,
+   * finds a matching sample in the inbox, and updates it to be a training sample.
+   * 
+   * @param doc - The message document that was created
+   */
  @OnEvent('hook:message:postCreate')
  async handleInference(doc: THydratedDocument<Message>) {

324-325: Fix typo in debug message.

There's a typo in the debug message: "infered" should be "inferred".

-      this.logger.debug(`${helper.getName()} infered these entites`, entities);
+      this.logger.debug(`${helper.getName()} inferred these entities`, entities);

337-339: Add debug logging for successful sample update.

Consider adding a debug log entry to record when a sample is successfully updated from inbox to train, similar to other logging patterns in this class.

      await this.repository.updateOne(foundSample.id, {
        type: 'train',
      });
+      this.logger.debug(`Sample ${foundSample.id} updated from inbox to train`);
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d7965dd and a8002e5.

📒 Files selected for processing (8)
  • api/src/nlp/services/nlp-sample.service.spec.ts (3 hunks)
  • api/src/nlp/services/nlp-sample.service.ts (3 hunks)
  • api/src/setting/seeds/setting.seed-model.ts (1 hunks)
  • frontend/public/locales/en/chatbot_settings.json (1 hunks)
  • frontend/public/locales/fr/chatbot_settings.json (1 hunks)
  • frontend/src/components/nlp/components/NlpSampleForm.tsx (2 hunks)
  • frontend/src/components/nlp/components/NlpTrainForm.tsx (4 hunks)
  • frontend/src/components/nlp/index.tsx (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (7)
  • frontend/public/locales/en/chatbot_settings.json
  • frontend/public/locales/fr/chatbot_settings.json
  • api/src/setting/seeds/setting.seed-model.ts
  • frontend/src/components/nlp/components/NlpSampleForm.tsx
  • frontend/src/components/nlp/index.tsx
  • frontend/src/components/nlp/components/NlpTrainForm.tsx
  • api/src/nlp/services/nlp-sample.service.spec.ts
🧰 Additional context used
🧬 Code Graph Analysis (1)
api/src/nlp/services/nlp-sample.service.ts (2)
api/src/utils/types/filter.types.ts (1)
  • THydratedDocument (139-139)
api/src/extensions/channels/web/types.ts (1)
  • Message (221-221)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: API-Tests
🔇 Additional comments (2)
api/src/nlp/services/nlp-sample.service.ts (2)

19-20: Appropriately imported the necessary dependencies.

The additions of HelperService, HelperType, and SettingService imports support the new automated inference functionality.

Also applies to: 23-23


54-55: Dependencies properly injected into the constructor.

The constructor has been correctly updated to include the necessary services for the new automated inference functionality.

foundSample,
entities,
);
await this.repository.updateOne(foundSample.id, {
@MohamedAliBouhaouala MohamedAliBouhaouala commented Apr 14, 2025

The language should also be updated; it is currently left at the default value and should instead be derived from the entities variable.

this.logger.log('Automated inference running');
if ('text' in doc.message) {
const helper = await this.helperService.getDefaultHelper(HelperType.NLU);
const { entities = [] } = await helper.predict(doc.message.text);
@MohamedAliBouhaouala MohamedAliBouhaouala commented Apr 14, 2025

The language entity should be filtered out of entities before they are stored.

@MohamedAliBouhaouala

We need to handle the migration of newly added settings. Currently it's not working as far as I tested.

@abdou6666

@MohamedAliBouhaouala Thanks for the review, feedback applied!

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
api/src/nlp/services/nlp-sample.service.ts (1)

313-320: ⚠️ Potential issue

Add error handling and null safety check

While the event handler implementation looks good, it's missing error handling and a null check for chatbot_settings.

@OnEvent('hook:message:postCreate')
async handleInference(doc: THydratedDocument<Message>) {
-  const settings = await this.settingService.getSettings();
-  if (!settings.chatbot_settings.automate_inference) {
-    this.logger.log('Automated inference is disabled');
-    return;
-  }
-  this.logger.log('Automated inference running');
+  try {
+    const settings = await this.settingService.getSettings();
+    if (!settings.chatbot_settings?.automate_inference) {
+      this.logger.log('Automated inference is disabled');
+      return;
+    }
+    this.logger.log('Automated inference running');
🧹 Nitpick comments (2)
api/src/nlp/services/nlp-sample.service.ts (2)

321-331: Fix typo in debug log message

There's a typo in the debug log message.

const helper = await this.helperService.getDefaultHelper(HelperType.NLU);
let { entities = [] } = await helper.predict(doc.message.text);
this.logger.debug(
-  `${helper.getName()} infered these entites`,
+  `${helper.getName()} inferred these entities`,
  entities,
);

Also, consider renaming variables for better clarity when filtering out language entity:

- const inferredLanguage = entities.find((e) => e.entity === 'language');
- entities = entities.filter((e) => e.entity !== 'language');
+ const inferredLanguage = entities.find((e) => e.entity === 'language');
+ const filteredEntities = entities.filter((e) => e.entity !== 'language');

313-360: Add JSDoc documentation to the method

All other methods in this class have JSDoc documentation, but this new method doesn't. Add documentation to maintain consistency.

+/**
+ * Handles automated inference for new messages.
+ * When a message is created and automate_inference is enabled,
+ * this method predicts entities in the message text, sets them in the 
+ * corresponding NLP sample, and changes the sample type from 'inbox' to 'train'.
+ *
+ * @param doc - The message document that was created
+ */
@OnEvent('hook:message:postCreate')
async handleInference(doc: THydratedDocument<Message>) {
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a8002e5 and 3270ec5.

📒 Files selected for processing (1)
  • api/src/nlp/services/nlp-sample.service.ts (3 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
api/src/nlp/services/nlp-sample.service.ts (2)
api/src/utils/types/filter.types.ts (1)
  • THydratedDocument (139-139)
api/src/extensions/channels/web/types.ts (1)
  • Message (221-221)
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: Frontend Tests
  • GitHub Check: API-Tests
🔇 Additional comments (3)
api/src/nlp/services/nlp-sample.service.ts (3)

19-20: LGTM: Appropriate imports added for new functionality

The imports added for HelperService, HelperType, and SettingService align properly with the new functionality being implemented.

Also applies to: 23-23


54-55: LGTM: Constructor dependencies properly injected

The new dependencies for helperService and settingService are correctly injected via the constructor.


340-343:

✅ Verification successful

Verify entity filtering implementation

The code correctly filters out the language entity before storing entities in the sample, addressing the comment by MohamedAliBouhaouala.

Run this script to verify that language filtering is working properly in other NLU-related functions:


🏁 Script executed:

#!/bin/bash
# Check if language entity filtering is consistent across the codebase

# Look for other instances where entities are processed
rg -A 5 -B 5 "entities\s*=\s*.*filter" --type ts --glob "api/src/nlp/**"

Length of output: 898


Entity filtering implementation verified

The shell script output confirms that the language entity is correctly filtered out in api/src/nlp/services/nlp-sample.service.ts before storing the entities. The filtering operation:

  • Locates the language entity with:
    const inferredLanguage = entities.find((e) => e.entity === 'language');
  • Filters it out with:
    entities = entities.filter((e) => e.entity !== 'language');

This behavior matches the intention stated in the review comment by MohamedAliBouhaouala. No further changes are required.

Comment on lines 345 to 358
const language = await this.languageService.findOne(
{ code: inferredLanguage?.value },
undefined,
{ _id: 1 },
);

if (!language) {
this.logger.warn('Unable to find inferred language', inferredLanguage);
}

await this.repository.updateOne(foundSample.id, {
type: 'train',
...(language && { language: language.id }),
});

💡 Verification agent

🧩 Analysis chain

Check language update logic

The logic for updating the language based on inference looks correct, but there's a potential edge case if inferredLanguage is undefined.


🏁 Script executed:

#!/bin/bash
# Check how language detection is handled elsewhere in the codebase

# Look for language detection patterns
rg -A 3 -B 3 "entity === 'language'" --type ts
rg -A 3 -B 3 "e.entity === 'language'" --type ts

# Check for similar language update patterns
rg -A 5 "language: language" --type ts

Length of output: 9245


Action: Enhance Guard for Inferred Language

The language update logic generally works as expected. However, note that if inferredLanguage is undefined, the call to this.languageService.findOne({ code: inferredLanguage?.value }, …) will pass undefined for the language code. This could lead to unnecessary database lookups and a vague warning message. Consider adding an explicit check for the existence of inferredLanguage before performing the lookup and update.

  • File & Lines: api/src/nlp/services/nlp-sample.service.ts (Lines 345-358)
  • Issue: findOne is called with { code: undefined } when inferredLanguage is not set.
  • Suggestion: Guard against an undefined inferredLanguage (e.g., log and skip the lookup) to improve clarity and efficiency.
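The suggested guard can be sketched as a small predicate; the entity shape below is an assumption for illustration, not the actual type:

```typescript
// Sketch of the suggested guard: only look the language up in the DB
// when an inferred language entity with a non-empty value exists.
// The InferredEntity shape is an assumption, not the codebase's type.
interface InferredEntity {
  entity: string;
  value?: string;
}

function shouldLookUpLanguage(inferred?: InferredEntity): boolean {
  // Without this guard, findOne would be called with { code: undefined },
  // costing a pointless DB round trip and producing a vague warning.
  return typeof inferred?.value === "string" && inferred.value !== "";
}
```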

Comment on lines 332 to 359
const foundSample = await this.repository.findOne({
text: doc.message.text,
type: 'inbox',
});
if (!foundSample) {
return;
}

await this.nlpSampleEntityService.storeSampleEntities(
foundSample,
entities,
);

const language = await this.languageService.findOne(
{ code: inferredLanguage?.value },
undefined,
{ _id: 1 },
);

if (!language) {
this.logger.warn('Unable to find inferred language', inferredLanguage);
}

await this.repository.updateOne(foundSample.id, {
type: 'train',
...(language && { language: language.id }),
});
}

🛠️ Refactor suggestion

Complete error handling and add success logging

The code needs to complete the error handling wrapper and add a success log when a sample is updated.

const foundSample = await this.repository.findOne({
  text: doc.message.text,
  type: 'inbox',
});
if (!foundSample) {
  return;
}

await this.nlpSampleEntityService.storeSampleEntities(
  foundSample,
-  entities,
+  filteredEntities,
);

const language = await this.languageService.findOne(
  { code: inferredLanguage?.value },
  undefined,
  { _id: 1 },
);

if (!language) {
  this.logger.warn('Unable to find inferred language', inferredLanguage);
}

await this.repository.updateOne(foundSample.id, {
  type: 'train',
  ...(language && { language: language.id }),
});
+this.logger.debug(`Sample ${foundSample.id} updated from inbox to train`);
+  } catch (error) {
+    this.logger.error('Error during automated inference', error);
+  }

Committable suggestion skipped: line range outside the PR's diff.

await this.nlpSampleEntityService.storeSampleEntities(
  foundSample,
  entities,
);

const language = await this.languageService.findOne(
Contributor

@MohamedAliBouhaouala MohamedAliBouhaouala Apr 14, 2025

Just minor enhancement to avoid extra DB calls:

let languageId: Language | null = null;
if (languageEntity?.value) {
  languageId = await this.languageService.findOne(
    { code: languageEntity.value },
    undefined,
    { _id: 1 },
  );
}

@abdou6666
Contributor Author

This PR has been put on hold as other development tasks have taken higher priority

@marrouchi
Contributor

We should emit 'hook:chatbot:received' after the NLU prediction; that way we avoid making two requests.


      // Trigger message received event
      this.eventEmitter.emit('hook:chatbot:received', event);

      if (subscriber?.assignedTo) {
        this.logger.debug('Conversation taken over', subscriber.assignedTo);
        return;
      }

      if (event.getText() && !event.getNLP()) {
        try {
          const helper = await this.helperService.getDefaultNluHelper();
          const nlp = await helper.predict(event.getText(), true);
          event.setNLP(nlp);
        } catch (err) {
          this.logger.error('Unable to perform NLP parse', err);
        }
      }

@abdou6666 abdou6666 force-pushed the fix/train-edit-nlu-sample-2 branch from 3270ec5 to 5840841 Compare May 5, 2025 09:35
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
api/src/chat/services/chat.service.ts (1)

147-147: Fix typo in variable name.

There's a typo in the variable name "nlpEntites" that should be corrected.

-      const nlpEntites = event.getNLP();
+      const nlpEntities = event.getNLP();

Make sure to update the usage of this variable in the condition below as well.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5840841 and d42ab6c.

📒 Files selected for processing (4)
  • api/src/chat/chat.module.ts (2 hunks)
  • api/src/chat/services/chat.service.ts (4 hunks)
  • api/src/nlp/services/nlp-sample.service.spec.ts (0 hunks)
  • api/src/nlp/services/nlp-sample.service.ts (2 hunks)
💤 Files with no reviewable changes (1)
  • api/src/nlp/services/nlp-sample.service.spec.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • api/src/nlp/services/nlp-sample.service.ts
🧰 Additional context used
🧬 Code Graph Analysis (1)
api/src/chat/chat.module.ts (1)
api/src/nlp/schemas/nlp-sample.schema.ts (1)
  • NlpSampleModel (79-82)
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: API-Tests
  • GitHub Check: Frontend Tests
🔇 Additional comments (7)
api/src/chat/chat.module.ts (3)

19-20: Nice integration of NLP module.

This change adds the required imports for the NLP module and schema, allowing the chat module to use NLP functionality.


67-67: Good addition of NlpSampleModel to Mongoose features.

This correctly registers the NLP sample model with Mongoose, allowing it to be used within the chat module.


74-74: Proper module registration.

The NlpModule is correctly imported into the ChatModule, enabling injection of NLP-related services in the ChatService.

api/src/chat/services/chat.service.ts (4)

32-32: Correct import of NlpSampleService.

The import path follows the project's conventions and gives access to the NLP sample service functionality.


50-50: Proper dependency injection of NlpSampleService.

The service is correctly injected into the constructor, following NestJS best practices.


369-371: Event emission reordering aligns with PR objectives.

Moving the event emission ('hook:chatbot:received') to this position ensures NLP data is processed before notifying other components, which aligns with the PR objective of automating the inference process. This change matches marrouchi's suggestion to emit the event after NLU prediction is performed, avoiding duplicate requests.


358-368:

❓ Verification inconclusive

Consider enhancing NLP extraction with proposed settings.

The PR mentions adding an "automate inference" setting that defaults to true. However, this code doesn't appear to check that setting before running the NLP prediction. The current implementation always attempts to run the helper prediction if text exists without NLP data.

Could we verify if the "automate inference" setting mentioned in the PR should be checked here before running the prediction? This might look like:


🏁 Script executed:

#!/bin/bash
# Check if there's code in the codebase that checks the "automate inference" setting
rg -A 5 "automate.?inference" --no-ignore

Length of output: 43


Verify existence of automateInference flag before NLP execution
I searched the codebase for any reference to “automate inference” (and common variants) and found none. Please confirm that:

  • The automateInference (or correctly named) setting is defined (with a default of true) in your configuration.
  • This block in api/src/chat/services/chat.service.ts (around lines 358–368) checks that flag before calling helper.predict.

Without such a guard, NLP will always run on incoming text, regardless of your new setting.
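
If such a flag were introduced, the guard could look like this sketch (the `automate_inference` setting name and shape are assumptions based on the PR description, not existing code):

```typescript
type ChatbotSettings = { automate_inference?: boolean };

// Decide whether automated NLU prediction should run for this message.
// Defaults to true when the setting is absent, per the PR description.
function shouldRunInference(settings?: ChatbotSettings): boolean {
  return settings?.automate_inference ?? true;
}

// Hypothetical wrapper around the prediction step: only calls the
// injected predict function when there is text and the flag allows it.
async function predictIfEnabled(
  text: string,
  settings: ChatbotSettings | undefined,
  predict: (t: string) => Promise<string>,
): Promise<string | null> {
  if (!text || !shouldRunInference(settings)) {
    return null;
  }
  return predict(text);
}
```

In `chat.service.ts` this would sit around the existing `helper.predict(event.getText(), true)` call, so switching the setting off skips the NLU round trip entirely.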

Comment on lines 147 to 155
const nlpEntites = event.getNLP();

if (nlpEntites?.entities) {
  await this.nlpSampleService.upgradeSampleWithEntities(
    nlpEntites.entities,
    msg,
  );
}

Contributor

🛠️ Refactor suggestion

Consider adding error handling for NLP sample upgrades.

While the code correctly extracts NLP entities and processes them, errors in the upgradeSampleWithEntities method aren't caught here and could potentially crash the handleReceivedMessage method.

      const msg = await this.messageService.create(received);
      const nlpEntites = event.getNLP();

      if (nlpEntites?.entities) {
+       try {
          await this.nlpSampleService.upgradeSampleWithEntities(
            nlpEntites.entities,
            msg,
          );
+       } catch (err) {
+         this.logger.error('Failed to upgrade NLP sample with entities', err);
+       }
      }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
const nlpEntites = event.getNLP();

if (nlpEntites?.entities) {
  await this.nlpSampleService.upgradeSampleWithEntities(
    nlpEntites.entities,
    msg,
  );
}
const nlpEntites = event.getNLP();

if (nlpEntites?.entities) {
  try {
    await this.nlpSampleService.upgradeSampleWithEntities(
      nlpEntites.entities,
      msg,
    );
  } catch (err) {
    this.logger.error('Failed to upgrade NLP sample with entities', err);
  }
}

@abdou6666 abdou6666 force-pushed the fix/train-edit-nlu-sample-2 branch from d42ab6c to 9f0f5b0 Compare May 5, 2025 10:58
Development

Successfully merging this pull request may close these issues.

🐛 [BUG] - Error on editing NLU sample
4 participants