Conversation
- BUG-20251103-019: Update Live Session tab label to '🚧 Live Session (Coming Soon)' for clarity
- BUG-20251103-017: Filter campaign dashboard sessions by campaign_id
- BUG-20251103-007: Add auto-scroll to results section after processing completes

Details:
- Live Session: Changed tab title to clearly indicate WIP status
- Campaign Dashboard: Updated _check_processed_sessions() to filter by campaign_id from session metadata
- Process Session: Added elem_id to the results section and a JavaScript scroll trigger for better UX
Summary of Changes

This pull request addresses several user interface bugs to improve the stability and usability of the application's dashboard and session processing features. It improves data presentation by ensuring only relevant sessions are displayed, streamlines user interaction with automated scrolling, and sets clearer expectations for upcoming features.
Code Review
This pull request successfully addresses several UI bugs, including filtering sessions by campaign, marking the live session tab as 'coming soon', and implementing auto-scrolling on the processing page. The code changes are logical and correct. I've provided a couple of suggestions to improve the robustness of the implementation. One suggestion is for src/campaign_dashboard.py to better handle cases with ambiguous session data files. The other, for src/ui/process_session_tab_modern.py, is to make the auto-scroll JavaScript more resilient to timing issues. Overall, this is a good set of fixes.
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
💡 Codex Review
Here are some automated review suggestions for this pull request.
```python
def _check_processed_sessions(self, campaign_id: Optional[str] = None) -> ComponentStatus:
    try:
        session_dirs = [d for d in Config.OUTPUT_DIR.iterdir() if d.is_dir()] if Config.OUTPUT_DIR.exists() else []
        if session_dirs:
            details = f"{StatusIndicators.SUCCESS} **Status**: {len(session_dirs)} session(s) found\n\n"
            recent = sorted(session_dirs, key=lambda d: d.stat().st_mtime, reverse=True)[:5]

        # Filter sessions by campaign_id if provided
        filtered_sessions = []
        for session_dir in session_dirs:
            # Find the *_data.json file
            data_files = list(session_dir.glob('*_data.json'))
            if not data_files:
                continue  # Skip incomplete sessions

            # Read metadata to check campaign_id
            try:
                data_file = data_files[0]
                with open(data_file, 'r', encoding='utf-8') as f:
                    data = json.load(f)
                session_campaign_id = data.get('metadata', {}).get('campaign_id')

                # Include session if:
                # 1. No campaign_id filter was specified (show all)
                # 2. The session matches the campaign_id
                # 3. Legacy session (no campaign_id) and no filter specified
                if campaign_id is None or session_campaign_id == campaign_id:
                    filtered_sessions.append(session_dir)
            except (json.JSONDecodeError, IOError):
                # Skip sessions with corrupted data files
                continue
```
Dropping incomplete sessions hides processing errors

The refactor now skips any session directory whose `_data.json` cannot be opened or parsed. Those directories are no longer listed, and the method returns "No sessions processed yet" even when output folders exist but contain incomplete or corrupted results. Previously the dashboard surfaced these sessions marked as "(incomplete)", which let users spot failed runs, and the current unit test (tests/test_campaign_dashboard.py::test_check_processed_sessions) relies on that behaviour. With the new logic, a single unreadable metadata file causes all visible progress to disappear and the dashboard incorrectly reports that nothing has been processed, making failures much harder to diagnose. Consider still reporting these directories and indicating that their metadata is missing or invalid instead of silently dropping them.
BUG-20251103-017 follow-up: The previous fix silently dropped incomplete and corrupted sessions, making it impossible to spot processing failures.

Changes:
- Incomplete sessions (no *_data.json): shown in all campaigns as '(incomplete)'
- Corrupted sessions (unreadable metadata): shown in all campaigns as '(corrupted metadata)'
- Complete sessions: filtered by campaign_id as intended, marked with ✓

This restores diagnostic visibility while maintaining campaign isolation for successfully processed sessions.
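The classification rules described in this follow-up can be sketched as a small helper. This is an illustrative sketch, not the project's actual code: the function name `classify_session` and the label strings are assumptions based on the commit message.

```python
import json
from pathlib import Path


def classify_session(session_dir: Path, campaign_id=None):
    """Classify a session directory for dashboard display.

    Returns a display label, or None when a complete session belongs
    to a different campaign and should be filtered out. Illustrative
    helper, not the project's real API.
    """
    data_files = list(session_dir.glob('*_data.json'))
    if not data_files:
        # Incomplete sessions stay visible in every campaign view
        return f"{session_dir.name} (incomplete)"
    try:
        with open(data_files[0], 'r', encoding='utf-8') as f:
            data = json.load(f)
    except (json.JSONDecodeError, IOError):
        # Corrupted metadata is surfaced rather than silently dropped
        return f"{session_dir.name} (corrupted metadata)"
    session_campaign_id = data.get('metadata', {}).get('campaign_id')
    if campaign_id is None or session_campaign_id == campaign_id:
        return f"{session_dir.name} ✓"
    return None  # Complete session from another campaign: hide it
```

Keeping the incomplete/corrupted branches before the campaign filter is the key point: a broken session is shown everywhere, so a failed run can never be hidden by campaign isolation.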
- Create OllamaConfig dataclass for centralized configuration
- Implement OllamaClientFactory with retry logic and exponential backoff
- Add connection testing and model availability checking
- Add client caching support for improved performance
- Add comprehensive test suite with 25+ test cases
- Centralize duplicate Ollama client initialization code

This factory consolidates client initialization logic that was previously duplicated between the OllamaClassifier and LlmClient classes.

Related to refactor #9 - Consolidate LLM Client Initialization
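The retry-with-exponential-backoff behaviour mentioned above can be sketched in a few lines. This is a generic sketch, not the factory's real implementation: `create_client_with_retry`, `connect`, and the parameter defaults are all assumptions.

```python
import time


def create_client_with_retry(connect, max_retries=3, base_delay=0.5):
    """Call `connect` until it succeeds, sleeping base_delay * 2**attempt
    between failures. `connect` stands in for real client construction
    plus a connection test; names here are illustrative."""
    last_error = None
    for attempt in range(max_retries):
        try:
            return connect()
        except ConnectionError as exc:
            last_error = exc
            if attempt < max_retries - 1:
                # Exponential backoff: 0.5s, 1s, 2s, ...
                time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError(
        f"Ollama server unreachable after {max_retries} attempts"
    ) from last_error
```

Raising `RuntimeError` after exhausting retries matches the backward-compatibility note in the later commits, which say the existing `RuntimeError` contract is preserved.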
- Replace inline client initialization with the factory pattern
- Add model availability checking at initialization
- Use the factory's retry logic and connection testing
- Update tests to mock the factory instead of ollama.Client
- Maintain backward compatibility with the existing RuntimeError

Benefits:
- Consistent error handling across the application
- Automatic retry with exponential backoff
- Better logging of connection attempts
- Reduced code duplication

Related to refactor #9 - Consolidate LLM Client Initialization
- Replace duplicate initialization code with the factory pattern
- Enable client caching for improved performance
- Add structured logging via logger integration
- Use the factory's connection testing and retry logic
- Maintain backward compatibility with the existing RuntimeError

Benefits:
- Eliminates ~15 lines of duplicate code
- Consistent behavior with OllamaClassifier
- Better error messages and retry handling
- Improved performance through client caching

Related to refactor #9 - Consolidate LLM Client Initialization
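The client caching that this commit enables can be sketched as a factory closure keyed by connection parameters. This is a hedged sketch of the idea only; `make_cached_factory` and the `(host, model)` cache key are assumptions, not the project's actual API.

```python
def make_cached_factory(create):
    """Return a factory that caches one client per (host, model) pair,
    so repeated requests reuse an existing connection. `create` stands
    in for real client construction; names are illustrative."""
    cache = {}

    def get_client(host, model):
        key = (host, model)
        if key not in cache:
            # Construct (and implicitly connection-test) only on first use
            cache[key] = create(host, model)
        return cache[key]

    return get_client
```

With this shape, both call sites (the classifier and the LLM client) share a single cached connection instead of each building their own, which is where the performance claim comes from.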
* feat(llm): add OllamaClientFactory (full message above)
* refactor(classifier): use OllamaClientFactory (full message above)
* refactor(llm): use OllamaClientFactory (full message above)
* fix(llm): address code review feedback for OllamaClientFactory

Addresses feedback from code review:

1. Remove unused verify_ssl parameter
   - OllamaConfig no longer includes a verify_ssl field
   - The Ollama Client doesn't support this parameter
   - Prevents confusion for future developers

2. Eliminate redundant API calls
   - _test_connection now returns the list of available models
   - Added model_to_check parameter to create_client()
   - Model availability checking reuses already-fetched models
   - Reduces API calls from 2 to 1 during initialization
   - OllamaClassifier updated to use the model_to_check parameter

3. Make the ollama import lazy (not a hard dependency)
   - Moved `import ollama` inside the create_client() method
   - Groq-only deployments can now run without ollama installed
   - Better error message when ollama is not available
   - Maintains backward compatibility

Additional improvements:
- Added _fetch_available_models() helper method
- Added _extract_model_names() helper method
- Updated test_model_available() to use the new helpers
- All tests updated to mock the lazy import correctly
- Added new tests for the model_to_check parameter
- Added a test for ImportError when ollama is not installed

All tests updated and syntax validated.

Fixes issues raised in code review by @gemini-code-assist and @chatgpt-codex-connector

Co-authored-by: Claude <noreply@anthropic.com>
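The lazy-import pattern from point 3 can be sketched generically. This is not the factory's real code: the helper name `load_optional_backend` and its error message are illustrative, and the real method imports `ollama` directly inside `create_client()`.

```python
import importlib


def load_optional_backend(module_name):
    """Import an optional dependency only when it is first needed,
    re-raising with a clearer error message if it is missing.
    Illustrative helper, not the project's actual API."""
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f"The '{module_name}' package is required for this backend. "
            f"Install it, or choose a backend that does not need it."
        ) from exc
```

Because the import happens inside the function rather than at module load time, a deployment that never calls this backend (e.g. a Groq-only setup, per the commit message) never pays the dependency cost.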
Summary of Changes:
- Updated MASTER_PLAN.md: moved 6 completed bugs to the Done section
- Updated TEST_PLANS.md: marked P0-2, P2-1, P2-2 as completed with summaries
- Updated refactoring/README.md: marked #5, #6, #8, #9 as completed
- Created CONCURRENT_ORCHESTRATION_SPRINT_2025-11-14.md: comprehensive sprint report

Sprint Results:
- 15 concurrent tasks completed across 3 batches
- 220+ tests added, 99% coverage for party_config.py
- ~5,000 lines of production code
- Zero merge conflicts maintained
- 100% task independence validated

Documentation Updates:
- 6 bugs moved from "To Do" to "Done" in MASTER_PLAN.md
- 3 test plans marked complete with implementation summaries
- 4 refactoring tasks marked complete with actual effort tracking
- Added a comprehensive sprint report with metrics and lessons learned

Files Modified:
- docs/MASTER_PLAN.md
- docs/TEST_PLANS.md
- docs/refactoring/README.md
- docs/CONCURRENT_ORCHESTRATION_SPRINT_2025-11-14.md (new)
Co-authored-by: Claude <noreply@anthropic.com>