
@ErikBjare (Member) commented Aug 19, 2025

🎯 Overview

Implements auto-naming functionality for conversations in the gptme server API, addressing gptme-webui#15.

✨ Features

  • πŸ”„ Auto-generates contextual display names after the first assistant response
  • πŸ’° Cost optimization - Uses cheaper summary models (e.g., claude-3-haiku instead of claude-sonnet-4)
  • 🎯 Concise naming - Generates descriptive 2-4 word names that capture conversation topics
  • πŸ›‘οΈ Respects existing names - Only runs once per conversation, won't overwrite predefined names
  • ⚑ Real-time updates - Emits config_changed SSE events for immediate client notification

πŸ”§ Technical Implementation

  • ConfigChangedEvent - New general-purpose event type for config updates
  • Summary model selection - Leverages existing get_summary_model() infrastructure
  • Graceful error handling - Falls back to "New conversation" if generation fails
  • Comprehensive testing - Full test suite covering all scenarios

πŸ§ͺ Test Results

βœ… Auto-generates meaningful names - "Python script debugging", "React Todo List DnD"
βœ… Preserves existing names - Won't overwrite predefined display names
βœ… Contextually relevant - Names match conversation content and intent
βœ… Uses cheaper models - Significant cost savings while maintaining quality

πŸ’Έ Cost Savings Examples

| Provider  | Main Model      | Auto-naming Model | Savings      |
|-----------|-----------------|-------------------|--------------|
| OpenAI    | gpt-4o          | gpt-4o-mini       | ~20x cheaper |
| Anthropic | claude-sonnet-4 | claude-3-haiku    | ~12x cheaper |
| Gemini    | gemini-2.5-pro  | gemini-2.5-flash  | ~4x cheaper  |

🌐 Integration

The gptme-webui can now listen for config_changed events with changed_fields: ["name"] to automatically update conversation names in the sidebar, making the sidebar meaningful and user-friendly, similar to ChatGPT.
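A minimal sketch of the client-side check, assuming the SSE data payload is JSON with the fields named above (the function name is hypothetical; the real client is TypeScript in gptme-webui):

```python
import json


def should_update_sidebar_name(sse_data: str) -> bool:
    """Return True if an SSE payload is a config_changed event that touched
    the conversation name. Payload shape is an assumption from the PR text."""
    event = json.loads(sse_data)
    return (
        event.get("type") == "config_changed"
        and "name" in event.get("changed_fields", [])
    )
```

For example, `should_update_sidebar_name('{"type": "config_changed", "changed_fields": ["name"]}')` returns True, so the client would refresh that conversation's sidebar entry.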

πŸ“‹ Checklist

  • Auto-naming logic implemented
  • Cost optimization with summary models
  • SSE event integration
  • Comprehensive test coverage
  • Pre-commit hooks passing
  • Type safety with mypy

Important

Introduces auto-naming for conversations in the server API: contextual names are generated after the first assistant response using cheaper summary models, with real-time SSE updates and comprehensive testing.

  • Behavior:
    • Auto-generates conversation names after the first assistant response using generate_conversation_name() in auto_naming.py.
    • Uses cheaper summary models for cost efficiency, e.g., claude-3-haiku.
    • Emits config_changed SSE events for real-time client updates.
    • Respects existing names, only runs once per conversation.
  • Technical Implementation:
    • Adds ConfigChangedEvent to api_v2_common.py for config updates.
    • Utilizes generate_conversation_name() in auto_naming.py for name generation.
    • Updates step() in api_v2_sessions.py to handle auto-naming and emit events.
  • Testing:
    • Adds test_auto_naming.py to verify auto-naming functionality and event emissions.
    • Tests ensure names are contextually relevant and not overwritten if pre-set.
  • Misc:
    • Renames generate_name() to generate_llm_name() in commands.py and scripts/auto_rename_logs.py.
    • Updates get_summary_model() in models.py to use claude-3-5-haiku-20241022.

This description was created by Ellipsis for 42a394a.

- Auto-generates contextual display names after first assistant response
- Uses cheaper summary models (e.g. claude-haiku vs claude-sonnet) for cost optimization
- Generates concise 2-4 word names that capture conversation topics
- Only runs once per conversation, respects existing display names
- Emits config_changed SSE events for real-time client updates
- Adds ConfigChangedEvent type for general config update notifications
- Includes comprehensive tests covering all scenarios

Addresses gptme-webui#15 for meaningful conversation sidebar names
@ellipsis-dev bot left a comment

Important

Looks good to me! πŸ‘

Reviewed everything up to 6c032c3 in 1 minute and 49 seconds.
  • Reviewed 312 lines of code in 3 files
  • Skipped 0 files when reviewing.
  • Skipped posting 4 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with πŸ‘ or πŸ‘Ž to teach Ellipsis.
1. gptme/server/api_v2_common.py:44
  • Draft comment:
    Added the 'config_changed' event type and its corresponding TypedDict. Looks consistent with other event types. Ensure downstream clients handle the new event appropriately.
  • Reason this comment was not posted:
    Confidence changes required: 0% <= threshold 50% None
2. gptme/server/api_v2_sessions.py:172
  • Draft comment:
    The auto_generate_display_name function correctly builds a prompt from the last 4 messages and uses a summary model. Consider handling the edge case when the messages list is empty, and review if truncating the final result to 50 chars is sufficient for all cases.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None
3. gptme/server/api_v2_sessions.py:350
  • Draft comment:
    The auto-naming trigger in the step function (checking that there's exactly one assistant message and no preset name) is logical. Consider potential race conditions if multiple responses are processed concurrently.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None
4. tests/test_auto_naming.py:97
  • Draft comment:
    In the test 'test_auto_naming_only_runs_once', the wait for 'config_changed' event is used to confirm that no auto-naming occurs when a predefined name exists. It may be clearer to assert explicitly that no such event is received.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None

Workflow ID: wflow_wlMJV5rsheVe2FD6

You can customize Ellipsis by changing your verbosity settings, reacting with πŸ‘ or πŸ‘Ž, replying to comments, or adding code review rules.

Replace complex React component prompt with simple CSS question
to reduce test runtime while still validating contextual naming

codecov bot commented Aug 19, 2025

Codecov Report

❌ Patch coverage is 34.45378% with 78 lines in your changes missing coverage. Please review.
βœ… All tests successful. No failed tests found.

| Files with missing lines        | Patch % | Lines         |
|---------------------------------|---------|---------------|
| gptme/util/auto_naming.py       | 32.96%  | 61 Missing ⚠️ |
| gptme/server/api_v2_sessions.py | 11.76%  | 15 Missing ⚠️ |
| gptme/cli.py                    | 75.00%  | 1 Missing ⚠️  |
| gptme/eval/agents.py            | 50.00%  | 1 Missing ⚠️  |

πŸ“’ Thoughts on this report? Let us know!

@ellipsis-dev bot left a comment

Important

Looks good to me! πŸ‘

Reviewed cf1778e in 1 minute and 2 seconds.
  • Reviewed 55 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 3 draft comments. View those below.
1. tests/test_auto_naming.py:119
  • Draft comment:
    Changed the test message from a complex React todo-component request to a simpler CSS centering question. Ensure this change still adequately tests the auto-naming context and consider covering a range of topics.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None
2. tests/test_auto_naming.py:134
  • Draft comment:
    Reduced timeouts from 20 to 15 seconds for waiting on 'config_changed' and 'generation_complete' events. Confirm that 15 seconds is sufficient under all test conditions.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None
3. tests/test_auto_naming.py:149
  • Draft comment:
    Updated the list of relevant keywords to match the new CSS topic. Verify that these keywords reliably capture context in the generated names.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None

Workflow ID: wflow_HDor4opR6z2MUth0


The Anthropic API requires the first message to be a system message, which was causing an AssertionError. Added a system message to properly set context for the naming task.

Instead of setting a generic 'New conversation' name when auto-naming fails, leave the name unset so:
- UI shows a meaningful fallback (date-based name)
- Auto-naming can retry on subsequent assistant messages
- Conversations don't get stuck with non-descriptive names

Made the prompt much more explicit and directive to ensure the LLM provides only the title, without explanations or cut-off responses:
- Clear task definition
- Explicit 'ONLY the title' instruction
- Format examples
- Direct 'Title:' prompt ending
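That prompt structure, plus the cleanup needed when a model still replies with quotes or trailing text, might look roughly like this (wording and helper names are illustrative, not the exact prompt from the PR):

```python
def build_naming_prompt(snippet: str) -> str:
    # Explicit, directive structure: task, "ONLY the title" rule, examples,
    # and a direct "Title:" ending so the model completes just the title.
    return (
        "Generate a short title for this conversation.\n"
        "Respond with ONLY the title: 2-4 words, no explanation.\n"
        'Examples: "Python script debugging", "CSS centering question"\n\n'
        f"{snippet}\n\nTitle:"
    )


def sanitize_title(raw: str, max_len: int = 50) -> str:
    # Keep only the first line, strip whitespace and surrounding quotes, truncate.
    if not raw.strip():
        return ""
    title = raw.strip().splitlines()[0].strip().strip("\"'")
    return title[:max_len]
```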
@ellipsis-dev bot left a comment

Important

Looks good to me! πŸ‘

Reviewed 0f63486 in 1 minute and 39 seconds.
  • Reviewed 32 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 2 draft comments. View those below.
1. gptme/server/api_v2_sessions.py:206
  • Draft comment:
    The updated prompt provides clear and explicit instructions with rules and examples, which improves consistency. Consider aligning the term used here ('title') with the rest of the codebase (which uses 'display name') to avoid potential confusion.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 10% vs. threshold = 50% While consistency in terminology is generally good, this comment is about a very minor wording difference in a prompt string. The prompt is working correctly either way - it's just about whether to say "title" vs "display name". The comment doesn't point out a real issue that needs fixing. The rules say not to make purely informative comments or comments about obvious/unimportant things. The terminology inconsistency could potentially cause confusion for future developers reading the code. Maybe this is more important than I initially thought? No, this is exactly the kind of nitpicky comment the rules are trying to avoid. The prompt works fine either way, and the slight terminology difference doesn't impact functionality or maintainability in any meaningful way. Delete this comment. It's a purely informative comment about a very minor terminology inconsistency that doesn't impact functionality.
2. gptme/server/api_v2_sessions.py:223
  • Draft comment:
    The prompt now ends with 'Title:' which may be inconsistent with the notion of a 'display name' used elsewhere. Consider using consistent terminology to improve clarity.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 10% vs. threshold = 50% The comment points out a minor terminology inconsistency. However, this is a prompt to an LLM - what matters is clarity and effectiveness, not perfect terminology consistency with the rest of the codebase. The prompt was deliberately rewritten to be more concise and direct. The term "title" is perfectly clear and appropriate for this context. The suggestion would make the prompt slightly longer without adding value. I could be undervaluing the importance of consistent terminology across the codebase. Maybe this inconsistency could cause confusion for future developers. While consistency is good, this is an internal prompt to an LLM where clarity and conciseness matter more. The term "title" is clear and appropriate here. The inconsistency is minor and unlikely to cause real confusion. Delete this comment. The terminology change would not improve the code meaningfully, and the current prompt's conciseness and clarity is more important than perfect terminology consistency.

Workflow ID: wflow_u28L1hjxjvk0y72n


- Create unified auto_naming.py with function-based approach
- Support multiple strategies: random, llm, auto (automatic selection)
- Support both dash-separated (filesystem) and space-separated (display) formats
- Robust LLM fallback to random naming when API unavailable
- Reduce code duplication by 129 lines while maintaining all functionality
- Full backwards compatibility for existing code

Fixes server auto-naming failures mentioned in issue analysis
@ellipsis-dev bot left a comment

Caution

Changes requested ❌

Reviewed fe69e42 in 1 minute and 49 seconds.
  • Reviewed 394 lines of code in 4 files
  • Skipped 0 files when reviewing.
  • Skipped posting 5 draft comments. View those below.
1. gptme/server/api_v2_sessions.py:172
  • Draft comment:
    Nice consolidation: auto_generate_display_name now defers to util.auto_naming, reducing duplicated logic. Ensure LLM responses are robust.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment starts with a compliment about code consolidation, which is purely informative. The second part, "Ensure LLM responses are robust," is a vague request for verification, which violates the rules against asking the author to ensure behavior is intended or tested. Overall, the comment does not provide a specific code suggestion or point out a specific issue.
2. gptme/util/auto_naming.py:16
  • Draft comment:
    The strategy-based naming in generate_conversation_name is effective. Consider parameterizing truncation lengths or max output size if future adjustments are needed.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 10% vs. threshold = 50% The comment is making a suggestion about potential future improvements rather than pointing out a current issue. It's speculative ("if future adjustments are needed") and doesn't identify a concrete problem. The comment also seems to be praising existing code ("is effective") which isn't necessary in a review. The suggestion to parameterize lengths could be valid if there's evidence that these values need to be configurable. Maybe the current hardcoded values are causing problems? There's no evidence provided that the hardcoded values are problematic or that parameterization is needed. The comment is speculative about future needs rather than addressing current issues. This comment should be deleted as it's speculative, doesn't point out a concrete issue, and partially serves as praise rather than actionable feedback.
3. gptme/util/auto_naming.py:80
  • Draft comment:
    The _generate_llm_name function loops over summary and original models as fallback. Ensure that _chat_complete output is properly sanitized and that response length limits are configurable.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is asking the author to ensure that the output is sanitized and that response length limits are configurable. This is a request for confirmation and testing, which violates the rules. The comment does not provide a specific suggestion or point out a specific issue with the code.
4. scripts/auto_rename_logs.py:130
  • Draft comment:
    Raising NotImplementedError here signals a placeholder for renaming after chat config update. Consider adding clearer instructions or fallback to avoid runtime exceptions in production.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
5. gptme/cli.py:321
  • Draft comment:
    Typo in the docstring: "If name is starts with a date, uses it as is." should likely be "If name starts with a date, uses it as is."
  • Reason this comment was not posted:
    Comment was on unchanged code.

Workflow ID: wflow_ut7uU8ktEooBMN23


- Remove hardcoded "openai/gpt-4o-mini" from auto-naming tests
- Remove hardcoded "openai/gpt-4o-mini" from SSE tests
- Tests now use system default model configuration
- Allows tests to properly respect MODEL environment variable set by CI matrix
@ellipsis-dev bot left a comment

Important

Looks good to me! πŸ‘

Reviewed 919b9e5 in 2 minutes and 2 seconds.
  • Reviewed 34 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 2 draft comments. View those below.
1. tests/test_auto_naming.py:95
  • Draft comment:
    Consistency: The generation_complete event is awaited without an explicit timeout. Consider adding a timeout (e.g., timeout=15) as in other event assertions to prevent potential hangs.
  • Reason this comment was not posted:
    Comment was on unchanged code.
2. tests/test_auto_naming.py:137
  • Draft comment:
    Clarification: The timeout parameter was removed in waiting for the 'config_changed' event in the meaningful content test. Consider adding a comment explaining why waiting solely on 'config_changed' (without waiting for 'generation_complete') is sufficient, and optionally use an explicit timeout for consistency.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 10% vs. threshold = 50% The test already has a clear comment "Wait for config_changed event first (comes before generation_complete)". The timeout parameter difference is intentional - in one test we're asserting the event DOESN'T happen (hence timeout), in the other we're waiting for it to happen (no timeout needed). The test has @pytest.mark.timeout(30) at the function level anyway. I might be overlooking potential race conditions or timing issues that could make the test flaky without an explicit timeout. The test already has a global 30-second timeout decorator, and the event ordering is documented. The timeout parameter serves different purposes in the two tests - presence vs absence verification. The comment should be deleted as it misunderstands the intentional difference in timeout usage and requests redundant documentation.

Workflow ID: wflow_dsAabfWNOC0ua1a8


@ErikBjare force-pushed the feat/server-auto-naming branch from 919b9e5 to 48332a0 on August 19, 2025 14:09
@ellipsis-dev bot left a comment

Important

Looks good to me! πŸ‘

Reviewed 48332a0 in 1 minute and 9 seconds.
  • Reviewed 63 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 4 draft comments. View those below.
1. tests/test_auto_naming.py:27
  • Draft comment:
    The user prompt now appends 'Short answer.' in test_auto_naming_generates_display_name. This explicit instruction helps trigger concise responses. Ensure that the extra phrasing doesn't inadvertently alter the conversational context for auto-naming.
  • Reason this comment was not posted:
    Confidence changes required: 0% <= threshold 50% None
2. tests/test_auto_naming.py:95
  • Draft comment:
    In test_auto_naming_only_runs_once, an extra assert checks that no 'config_changed' event fires after generation_complete. This double-check improves robustness but might be flaky in environments with variable timing. Consider documenting the expected event order or relaxing the timeout if needed.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None
3. tests/test_auto_naming.py:124
  • Draft comment:
    The input in test_auto_naming_meaningful_content now includes 'Short answer please.' This reinforces a prompt for concise output. Verify that this modification consistently supports generating contextually relevant names without skewing content analysis.
  • Reason this comment was not posted:
    Confidence changes required: 0% <= threshold 50% None
4. tests/test_auto_naming.py:138
  • Draft comment:
    The timeout for waiting on the 'config_changed' event in test_auto_naming_meaningful_content was increased from 15 to 20 seconds. This adjustment may be necessary for slower responses, so consider adding a comment or configuration to clarify the reasoning for future maintainers.
  • Reason this comment was not posted:
    Confidence changes required: 0% <= threshold 50% None

Workflow ID: wflow_xrrCEY8yyqKiVEMH


openhands-ai bot commented Aug 19, 2025

Looks like there are a few issues preventing this PR from being merged!

  • GitHub Actions are failing:
    • Test

If you'd like me to help, just leave a comment, like

@OpenHands please fix the failing actions on PR #621 at branch `feat/server-auto-naming`

Feel free to include any additional details that might help me get this PR into a better state.


@ellipsis-dev bot left a comment

Important

Looks good to me! πŸ‘

Reviewed 2d29fac in 1 minute and 53 seconds.
  • Reviewed 63 lines of code in 2 files
  • Skipped 0 files when reviewing.
  • Skipped posting 3 draft comments. View those below.
1. gptme/cli.py:313
  • Draft comment:
    Removed the get_name() function in favor of generate_conversation_id. This refactor improves consistency. Ensure that no other parts of the CLI depend on the old get_name implementation.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 40% <= threshold 50% The comment is asking the PR author to ensure that no other parts of the CLI depend on the old implementation, which violates the rule against asking the author to double-check things. However, it also provides information about the refactor, which is useful. The comment could be improved by focusing on the suggestion or observation rather than asking for confirmation.
2. gptme/cli.py:362
  • Draft comment:
    Refactored get_logdir to use generate_conversation_id for 'random' names. This improves naming consistency. Verify that get_logs_dir() reliably returns a valid logs directory.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment starts by explaining a refactoring change, which is informative. However, it ends with a request to verify the reliability of a function, which is not allowed by the rules. The comment does not provide a specific suggestion or ask for a test to be written, so it should be removed.
3. gptme/eval/agents.py:44
  • Draft comment:
    Good use of generate_conversation_id in the GPTMe agent for auto-naming. Note: the system prompt message concatenated on line 71 contains a minor typo ('dont' should be "don't").
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 10% vs. threshold = 50% The first part of the comment is purely informative praise, which violates the rule about not making purely informative comments. The second part about the typo is about line 71, which isn't even part of the changes shown in the diff - it's unchanged code. Neither part of the comment requires any action from the author regarding the actual changes made. Maybe the typo fix is important enough to keep even though it's in unchanged code? Maybe the praise helps reinforce good practices? No - the rules clearly state not to comment on unchanged code and not to make purely informative comments. Both parts of this comment violate these rules. Delete this comment as it combines purely informative praise about a change with feedback about unchanged code.

Workflow ID: wflow_zrmL4fO6pe8ETdFR


- Remove duplicate generate_name() function from llm module
- Update commands.py to use robust auto_naming.generate_llm_name()
- Simplify model fallback logic in auto_naming (use summary model directly)
- Consolidate all conversation naming logic into auto_naming module

This eliminates 44 lines of duplicate code and ensures consistent,
robust name generation across CLI and server components.
- Add @trace_function decorator to generate_llm_name()
- Maintains telemetry tracing that was lost during refactoring
- Enables monitoring of name generation performance and usage
@ellipsis-dev bot left a comment

Important

Looks good to me! πŸ‘

Reviewed 9f2bb66 in 2 minutes and 9 seconds.
  • Reviewed 118 lines of code in 3 files
  • Skipped 0 files when reviewing.
  • Skipped posting 4 draft comments. View those below.
1. gptme/commands.py:392
  • Draft comment:
    Using generate_llm_name instead of llm.generate_name cleanly decouples auto-naming logic. Consider handling cases where the generated name contains spaces (e.g., by replacing spaces with dashes) rather than asserting, to improve resiliency.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. gptme/llm/__init__.py:242
  • Draft comment:
    Deprecated generate_name function has been removed from the llm module. Ensure all call sites now rely on the new auto-naming routines in the util module.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, as it is simply stating that a deprecated function has been removed and suggesting to ensure all call sites rely on a new routine. It does not provide a specific code suggestion or ask for a specific test to be written. It also does not point out a potential issue or ask for confirmation of an intentional change.
3. gptme/util/auto_naming.py:150
  • Draft comment:
    The fallback loop in _generate_llm_name has been replaced with a single call using the summary model. This cost-saving approach is clear but reduces resilience if the call fails; ensure this trade-off aligns with your requirements.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 20% <= threshold 50% The comment is pointing out a change in the code logic that could have implications on the system's resilience. It suggests that the author should ensure that the trade-off between cost-saving and resilience aligns with their requirements. However, it is asking the author to ensure that the trade-off aligns with their requirements, which is against the rules. The comment does not provide a specific suggestion or ask for a specific confirmation about the code logic itself.
4. gptme/util/auto_naming.py:148
  • Draft comment:
    The string 'Title:"""' on line 148 looks like it might be a typographical error. If this is intended to start a docstring, please verify its format and content. Otherwise, consider removing or correcting it.
  • Reason this comment was not posted:
    Comment was on unchanged code.

Workflow ID: wflow_Tic0uPH6sZQAOQxK


@ellipsis-dev bot left a comment

Important

Looks good to me! πŸ‘

Reviewed 42a394a in 1 minute and 9 seconds.
  • Reviewed 22 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 2 draft comments. View those below.
1. gptme/util/auto_naming.py:9
  • Draft comment:
    Import added for telemetry decorator. Confirm the import order matches coding guidelines.
  • Reason this comment was not posted:
    Confidence changes required: 20% <= threshold 50% None
2. gptme/util/auto_naming.py:185
  • Draft comment:
    Decorator @trace_function is correctly applied to generate_llm_name; ensure telemetry logs as expected.
  • Reason this comment was not posted:
    Confidence changes required: 0% <= threshold 50% None

Workflow ID: wflow_MfLjMUxua83op2bz


@ErikBjare ErikBjare merged commit cfce2bc into master Aug 19, 2025
10 checks passed