
Claude/fix multiple bugs UI dashboard 011 c us q7 ghfrj4 une hae eo tm #9

Merged
Gambitnl merged 7 commits into main from claude/fix-multiple-bugs-ui-dashboard-011CUsQ7GHFRJ4UneHaeEoTM
Nov 6, 2025

Conversation

@Gambitnl (Owner) commented Nov 6, 2025

No description provided.

- BUG-20251103-019: Update Live Session tab label to '🚧 Live Session (Coming Soon)' for clarity
- BUG-20251103-017: Filter campaign dashboard sessions by campaign_id
- BUG-20251103-007: Add auto-scroll to results section after processing completes

Details:
- Live Session: Changed tab title to clearly indicate WIP status
- Campaign Dashboard: Updated _check_processed_sessions() to filter by campaign_id from session metadata
- Process Session: Added elem_id to results section and JavaScript scroll trigger for better UX
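The scroll trigger described above could look something like the following hedged sketch. The helper name `make_scroll_js` and the element id `"results-section"` are assumptions for illustration, not the actual values in src/ui/process_session_tab_modern.py; the existence check guards against the element not having rendered yet.

```python
# Hypothetical sketch of a JavaScript scroll-trigger snippet like the one the
# PR describes; helper name and element id are assumptions, not project code.

def make_scroll_js(elem_id: str) -> str:
    """Build a JS arrow function that scrolls the element into view.

    Guarding with an existence check keeps the snippet from throwing
    if the target element has not rendered yet.
    """
    return (
        "() => { "
        f"const el = document.getElementById('{elem_id}'); "
        "if (el) { el.scrollIntoView({ behavior: 'smooth', block: 'start' }); } "
        "}"
    )

scroll_js = make_scroll_js("results-section")
```

In Gradio, a string like this could be attached to the event listener that renders the results (via its JavaScript argument) alongside the component's `elem_id`; the exact parameter name varies between Gradio versions, so check the version in use.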
@gemini-code-assist (Contributor) commented:

Summary of Changes

Hello @Gambitnl, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses several user interface bugs to enhance the overall stability and usability of the application's dashboard and session processing features. It improves data presentation by ensuring relevant sessions are displayed, streamlines user interaction with automated scrolling, and provides clearer expectations for upcoming features.

Highlights

  • Campaign Dashboard Session Filtering: The campaign dashboard now correctly filters processed sessions by campaign ID, preventing sessions from other campaigns from appearing. This was achieved by modifying the _check_processed_sessions method to read campaign metadata from session data files.
  • Process Session Auto-Scroll: The 'Process Session' tab now automatically scrolls to the results section once processing is complete and the results are displayed, improving user experience by ensuring the user sees the output without manual scrolling.
  • Live Session Tab Status Update: The 'Live Session' tab has been updated to clearly indicate that it is 'Coming Soon' by changing its title and adding a prominent '🚧 Coming Soon' markdown message.

@gemini-code-assist (Bot) left a comment:


Code Review

This pull request successfully addresses several UI bugs, including filtering sessions by campaign, marking the live session tab as 'coming soon', and implementing auto-scrolling on the processing page. The code changes are logical and correct. I've provided a couple of suggestions to improve the robustness of the implementation. One suggestion is for src/campaign_dashboard.py to better handle cases with ambiguous session data files. The other, for src/ui/process_session_tab_modern.py, is to make the auto-scroll JavaScript more resilient to timing issues. Overall, this is a good set of fixes.

Comment thread src/campaign_dashboard.py Outdated
Comment thread src/ui/process_session_tab_modern.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@chatgpt-codex-connector (Bot) left a comment:


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment thread src/campaign_dashboard.py Outdated
Comment on lines +119 to +146
```python
def _check_processed_sessions(self, campaign_id: Optional[str] = None) -> ComponentStatus:
    try:
        session_dirs = [d for d in Config.OUTPUT_DIR.iterdir() if d.is_dir()] if Config.OUTPUT_DIR.exists() else []
        if session_dirs:
            details = f"{StatusIndicators.SUCCESS} **Status**: {len(session_dirs)} session(s) found\n\n"
            recent = sorted(session_dirs, key=lambda d: d.stat().st_mtime, reverse=True)[:5]

            # Filter sessions by campaign_id if provided
            filtered_sessions = []
            for session_dir in session_dirs:
                # Find the *_data.json file
                data_files = list(session_dir.glob('*_data.json'))
                if not data_files:
                    continue  # Skip incomplete sessions

                # Read metadata to check campaign_id
                try:
                    data_file = data_files[0]
                    with open(data_file, 'r', encoding='utf-8') as f:
                        data = json.load(f)
                    session_campaign_id = data.get('metadata', {}).get('campaign_id')

                    # Include session if:
                    # 1. No campaign_id filter specified (show all)
                    # 2. Session matches the campaign_id
                    # 3. Legacy session (no campaign_id) and no filter specified
                    if campaign_id is None or session_campaign_id == campaign_id:
                        filtered_sessions.append(session_dir)
                except (json.JSONDecodeError, IOError):
                    # Skip sessions with corrupted data files
                    continue
```

P1: Dropping incomplete sessions hides processing errors

The refactor now skips any session directory whose _data.json cannot be opened or parsed. Those directories are no longer listed and the method returns “No sessions processed yet” even when output folders exist but contain incomplete or corrupted results. Previously the dashboard surfaced these sessions and marked them (incomplete), which let users spot failed runs and the current unit test relies on that behaviour. With the new logic, a single unreadable metadata file causes all visible progress to disappear and the dashboard incorrectly reports that nothing has been processed, making failures much harder to diagnose (tests/test_campaign_dashboard.py::test_check_processed_sessions). Consider still reporting these directories and indicating that their metadata is missing or invalid instead of silently dropping them.

Useful? React with 👍 / 👎.

@Gambitnl (Owner, Author) left a comment:


Approved

claude and others added 2 commits November 6, 2025 22:36
BUG-20251103-017 follow-up: The previous fix silently dropped incomplete
and corrupted sessions, making it impossible to spot processing failures.

Changes:
- Incomplete sessions (no *_data.json): shown in all campaigns as '(incomplete)'
- Corrupted sessions (unreadable metadata): shown in all campaigns as '(corrupted metadata)'
- Complete sessions: filtered by campaign_id as intended, marked with ✓

This restores diagnostic visibility while maintaining campaign isolation for
successfully processed sessions.
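The follow-up behaviour described above can be sketched as a small classification function. This is an illustrative sketch only; the name `classify_session` and the exact labels are modelled on the commit message, not copied from campaign_dashboard.py.

```python
# Illustrative sketch of the follow-up fix: keep incomplete and corrupted
# sessions visible in every campaign, and only filter complete sessions by
# campaign_id. Names here are hypothetical, not the actual project code.
import json
from pathlib import Path
from typing import Optional

def classify_session(session_dir: Path, campaign_id: Optional[str]) -> Optional[str]:
    """Return a display label for the session, or None to hide it."""
    data_files = list(session_dir.glob("*_data.json"))
    if not data_files:
        # No data file at all: surface it so failed runs stay diagnosable
        return f"{session_dir.name} (incomplete)"
    try:
        data = json.loads(data_files[0].read_text(encoding="utf-8"))
    except (json.JSONDecodeError, OSError):
        # Unreadable metadata: surface it rather than silently dropping it
        return f"{session_dir.name} (corrupted metadata)"
    session_campaign_id = data.get("metadata", {}).get("campaign_id")
    if campaign_id is None or session_campaign_id == campaign_id:
        return f"{session_dir.name} \u2713"
    return None  # Complete session from a different campaign: hide it
```

This keeps campaign isolation for healthy sessions while preserving the diagnostic visibility the Codex review asked for.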
@Gambitnl Gambitnl merged commit 575b9a6 into main Nov 6, 2025
@Gambitnl Gambitnl deleted the claude/fix-multiple-bugs-ui-dashboard-011CUsQ7GHFRJ4UneHaeEoTM branch November 6, 2025 22:52
Gambitnl pushed a commit that referenced this pull request Nov 12, 2025
- Create OllamaConfig dataclass for centralized configuration
- Implement OllamaClientFactory with retry logic and exponential backoff
- Add connection testing and model availability checking
- Add client caching support for improved performance
- Add comprehensive test suite with 25+ test cases
- Centralizes duplicate Ollama client initialization code

This factory consolidates client initialization logic that was previously
duplicated between OllamaClassifier and LlmClient classes.

Related to refactor #9 - Consolidate LLM Client Initialization
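The retry-with-exponential-backoff behaviour this commit describes can be sketched as follows. This is a hedged sketch under assumptions: the real OllamaClientFactory API, attempt counts, and delays may differ; `create_with_retry` is a hypothetical name.

```python
# Hedged sketch of retry logic with exponential backoff, loosely following
# the commit message above. The function name, attempt count, and delays are
# assumptions, not the factory's actual API.
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def create_with_retry(
    connect: Callable[[], T],
    max_attempts: int = 3,
    base_delay: float = 0.5,
    sleep: Callable[[float], None] = time.sleep,
) -> T:
    """Call connect(), retrying on ConnectionError with delays 0.5s, 1s, 2s, ..."""
    for attempt in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # Out of attempts: propagate the last error
            sleep(base_delay * (2 ** attempt))  # Exponential backoff
    raise RuntimeError("unreachable")
```

Injecting `sleep` as a parameter keeps the backoff testable without real delays, which is one way the "comprehensive test suite" mentioned above could exercise this path.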
Gambitnl pushed a commit that referenced this pull request Nov 12, 2025
- Replace inline client initialization with factory pattern
- Add model availability checking at initialization
- Use factory's retry logic and connection testing
- Update tests to mock factory instead of ollama.Client
- Maintain backward compatibility with existing RuntimeError

Benefits:
- Consistent error handling across application
- Automatic retry with exponential backoff
- Better logging of connection attempts
- Reduced code duplication

Related to refactor #9 - Consolidate LLM Client Initialization
Gambitnl pushed a commit that referenced this pull request Nov 12, 2025
- Replace duplicate initialization code with factory pattern
- Enable client caching for improved performance
- Add structured logging via logger integration
- Use factory's connection testing and retry logic
- Maintain backward compatibility with existing RuntimeError

Benefits:
- Eliminates ~15 lines of duplicate code
- Consistent behavior with OllamaClassifier
- Better error messages and retry handling
- Improved performance through client caching

Related to refactor #9 - Consolidate LLM Client Initialization
Gambitnl added a commit that referenced this pull request Nov 12, 2025
* feat(llm): add OllamaClientFactory

- Create OllamaConfig dataclass for centralized configuration
- Implement OllamaClientFactory with retry logic and exponential backoff
- Add connection testing and model availability checking
- Add client caching support for improved performance
- Add comprehensive test suite with 25+ test cases
- Centralizes duplicate Ollama client initialization code

This factory consolidates client initialization logic that was previously
duplicated between OllamaClassifier and LlmClient classes.

Related to refactor #9 - Consolidate LLM Client Initialization

* refactor(classifier): use OllamaClientFactory

- Replace inline client initialization with factory pattern
- Add model availability checking at initialization
- Use factory's retry logic and connection testing
- Update tests to mock factory instead of ollama.Client
- Maintain backward compatibility with existing RuntimeError

Benefits:
- Consistent error handling across application
- Automatic retry with exponential backoff
- Better logging of connection attempts
- Reduced code duplication

Related to refactor #9 - Consolidate LLM Client Initialization

* refactor(llm): use OllamaClientFactory

- Replace duplicate initialization code with factory pattern
- Enable client caching for improved performance
- Add structured logging via logger integration
- Use factory's connection testing and retry logic
- Maintain backward compatibility with existing RuntimeError

Benefits:
- Eliminates ~15 lines of duplicate code
- Consistent behavior with OllamaClassifier
- Better error messages and retry handling
- Improved performance through client caching

Related to refactor #9 - Consolidate LLM Client Initialization

* fix(llm): address code review feedback for OllamaClientFactory

Addresses feedback from code review:

1. Remove unused verify_ssl parameter
   - OllamaConfig no longer includes verify_ssl field
   - Ollama Client doesn't support this parameter
   - Prevents confusion for future developers

2. Eliminate redundant API calls
   - _test_connection now returns list of available models
   - Added model_to_check parameter to create_client()
   - Model availability checking reuses already-fetched models
   - Reduces API calls from 2 to 1 during initialization
   - OllamaClassifier updated to use model_to_check parameter

3. Make ollama import lazy (not a hard dependency)
   - Moved import ollama inside create_client() method
   - Groq-only deployments can now run without ollama installed
   - Better error message when ollama is not available
   - Maintains backward compatibility

Additional improvements:
   - Added _fetch_available_models() helper method
   - Added _extract_model_names() helper method
   - Updated test_model_available() to use new helpers
   - All tests updated to mock lazy import correctly
   - Added new tests for model_to_check parameter
   - Added test for ImportError when ollama not installed

All tests updated and syntax validated.

Fixes issues raised in code review by @gemini-code-assist and @chatgpt-codex-connector

---------

Co-authored-by: Claude <noreply@anthropic.com>
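The lazy-import change described in point 3 of the merge commit above follows a common pattern: defer the `import ollama` until client creation so Groq-only deployments never need it installed. A minimal sketch, assuming hypothetical names (`load_optional`, `create_client`) rather than the project's actual functions:

```python
# Sketch of the lazy-import pattern: the optional dependency is imported
# inside the function that needs it, not at module top level. Names here
# are illustrative, not the actual code in OllamaClientFactory.
import importlib
from typing import Any

def load_optional(module_name: str, hint: str) -> Any:
    """Import module_name on first use, with a friendlier error if missing."""
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(f"{module_name} is not installed. {hint}") from exc

def create_client(host: str) -> Any:
    # The import happens here, so merely loading this module never
    # requires ollama to be installed.
    ollama = load_optional("ollama", "Install it with: pip install ollama")
    return ollama.Client(host=host)
```

Deployments that never call `create_client` pay no import cost and need no extra dependency, while callers that do hit the path get an actionable error message instead of a bare ImportError.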
Gambitnl pushed a commit that referenced this pull request Nov 14, 2025
Summary of Changes:
- Updated MASTER_PLAN.md: Moved 6 completed bugs to Done section
- Updated TEST_PLANS.md: Marked P0-2, P2-1, P2-2 as completed with summaries
- Updated refactoring/README.md: Marked #5, #6, #8, #9 as completed
- Created CONCURRENT_ORCHESTRATION_SPRINT_2025-11-14.md: Comprehensive sprint report

Sprint Results:
- 15 concurrent tasks completed across 3 batches
- 220+ tests added, 99% coverage for party_config.py
- ~5,000 lines of production code
- Zero merge conflicts maintained
- 100% task independence validated

Documentation Updates:
- 6 bugs moved from "To Do" to "Done" in MASTER_PLAN.md
- 3 test plans marked complete with implementation summaries
- 4 refactoring tasks marked complete with actual effort tracking
- Added comprehensive sprint report with metrics and lessons learned

Files Modified:
- docs/MASTER_PLAN.md
- docs/TEST_PLANS.md
- docs/refactoring/README.md
- docs/CONCURRENT_ORCHESTRATION_SPRINT_2025-11-14.md (new)
Gambitnl added a commit that referenced this pull request Nov 14, 2025
Summary of Changes:
- Updated MASTER_PLAN.md: Moved 6 completed bugs to Done section
- Updated TEST_PLANS.md: Marked P0-2, P2-1, P2-2 as completed with summaries
- Updated refactoring/README.md: Marked #5, #6, #8, #9 as completed
- Created CONCURRENT_ORCHESTRATION_SPRINT_2025-11-14.md: Comprehensive sprint report

Sprint Results:
- 15 concurrent tasks completed across 3 batches
- 220+ tests added, 99% coverage for party_config.py
- ~5,000 lines of production code
- Zero merge conflicts maintained
- 100% task independence validated

Documentation Updates:
- 6 bugs moved from "To Do" to "Done" in MASTER_PLAN.md
- 3 test plans marked complete with implementation summaries
- 4 refactoring tasks marked complete with actual effort tracking
- Added comprehensive sprint report with metrics and lessons learned

Files Modified:
- docs/MASTER_PLAN.md
- docs/TEST_PLANS.md
- docs/refactoring/README.md
- docs/CONCURRENT_ORCHESTRATION_SPRINT_2025-11-14.md (new)

Co-authored-by: Claude <noreply@anthropic.com>