
Release/v1.0.0 #1

Merged

NotYuSheng merged 13 commits into main from release/v1.0.0 on Feb 1, 2026

Conversation

@NotYuSheng
Owner

No description provided.

NotYuSheng and others added 8 commits February 1, 2026 18:24
- Change page title from 'frontend' to 'TracePcap'
- Remove Vite default favicon

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Upload Configuration:
- Fix upload size display from 476.837158203125MB to exactly 512MB
- Centralize configuration using MAX_UPLOAD_SIZE_BYTES in root .env
- Update all services (Spring, MinIO, nginx) to use single source of truth
- Add nginx template system with dynamic config generation at runtime
- Update file size from 500MB to 512MB (536870912 bytes)

Filter Generator Feature:
- Add AI-powered filter generator page for natural language to BPF/display filter conversion
- Implement backend FilterController and FilterService
- Add filter execution with packet matching
- Include Wireshark cheat sheet reference

Story Page Enhancements:
- Add traffic timeline visualization to story page
- Display traffic statistics (total packets, data, averages, peaks)
- Reorganize story layout with traffic insights
- Replace timeline tab with filter generator tab

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Add detailed README based on MeetMemo template including:
- Project overview and features
- Quick start installation guide
- Usage workflow and examples
- Technology stack documentation
- Common tasks and deployment guide
- Architecture highlights and security notes

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Add centered demo GIF section for visual preview of TracePcap features.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Frontend:
- Add NetworkDiagram page with 3D network topology visualization
- Add network graph components (NetworkGraph, NetworkControls, NodeDetails)
- Add network data hooks and services for fetching/processing network data
- Update upload components with improved styling and user experience
- Add lucide-react icon library for enhanced UI
- Add favicon for better branding
- Update routing to include network diagram page
- Refine CSS styling across upload zone, file list, and layout components

Backend:
- Enhance FilterService with improved filter generation capabilities

Other:
- Update package dependencies (lucide-react)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Configuration:
- Add .env.example with NGINX_PORT=80 as default
- Update docker-compose.yml to use ${NGINX_PORT:-80} from .env
- Document environment configuration in README

Nginx:
- Add Swagger UI proxy configuration to nginx.conf
- Add Swagger UI proxy configuration to nginx.conf.template
- Enable access to /swagger-ui, /v3/api-docs, /swagger-resources, /webjars

Documentation:
- Update README installation instructions to use .env.example
- Document NGINX_PORT configuration
- Clarify Swagger UI access at configured port

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
@gemini-code-assist

Summary of Changes

Hello @NotYuSheng, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This release marks a significant milestone for the TracePcap project, introducing a suite of intelligent features designed to simplify and deepen network traffic analysis. The core focus is on leveraging AI to transform complex packet data into actionable insights and user-friendly visualizations. This includes the ability to generate precise BPF filters from plain English, visualize network interactions in an interactive graph, and receive a narrative summary of network events. Alongside these new capabilities, the project has undergone a complete rebranding and received substantial infrastructure and UI enhancements to improve usability and performance.

Highlights

  • Rebranding to TracePcap: The entire project, including backend packages, frontend components, configuration files, and documentation, has been renamed from 'TraceCap' to 'TracePcap' to establish a new brand identity.
  • AI-Powered Filter Generation: A new feature allows users to generate Berkeley Packet Filter (BPF) expressions from natural language queries using a Large Language Model (LLM). This includes validation of generated filters and a user interface to execute them against PCAP files.
  • Interactive Network Diagram: An interactive network topology visualization has been introduced, displaying hosts as nodes and conversations as edges. Users can filter by protocol, view detailed node information, and switch between force-directed and hierarchical layouts.
  • AI-Driven Network Storytelling: A 'Story Mode' has been added, leveraging an LLM to generate narrative summaries, identify key highlights (anomalies, insights), and create chronological event timelines from PCAP analysis data.
  • Enhanced Analysis & Infrastructure: Improvements include a more robust analysis status polling mechanism, increased maximum upload size to 512MB, containerization of the backend with a dedicated Dockerfile, and dynamic Nginx configuration for better deployment and scalability.


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature              | Command             | Description                                                                                     |
| -------------------- | ------------------- | ----------------------------------------------------------------------------------------------- |
| Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.                        |
| Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.                             |
| Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.  |
| Help                 | /gemini help        | Displays a list of available commands.                                                           |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.


@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces the v1.0.0 release of TracePcap, featuring a project-wide rename from TraceCap, major new features such as AI-powered Story Generation and a Natural Language to BPF Filter Generator, and significant UI/UX and backend improvements. However, it introduces critical security vulnerabilities, primarily a complete lack of authentication and authorization in the backend API, allowing unauthorized access to sensitive network capture data. The integration with Large Language Models (LLMs) is also vulnerable to prompt injection through user queries and file metadata, and there are hardcoded credentials and insecure default configurations. Immediate remediation of these access control flaws and sanitization of LLM inputs are required. Additionally, minor areas for improvement include addressing other hardcoded values and documentation links.

Comment on lines +24 to +27
@RestController
@RequestMapping("/api/analysis")
@RequiredArgsConstructor
public class AnalysisController {


Severity: critical (security)

The application appears to be missing authentication and authorization mechanisms. All API endpoints are publicly accessible, and there are no ownership checks (IDOR) to ensure that users can only access their own PCAP files and analysis results. Given that the frontend code references an authToken, the backend should enforce authentication and verify resource ownership for every request.
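A minimal sketch of the ownership check the review is asking for, assuming a per-file owner ID is stored at upload time — the class and method names here (PcapFileRecord, requireOwnership) are hypothetical, not taken from the PR:

```java
// Hypothetical IDOR guard: verify the authenticated user owns the requested resource.
// Names here are illustrative; the PR's actual entities are not shown in this thread.
public class OwnershipGuard {

    public static class PcapFileRecord {
        final String fileId;
        final String ownerId;

        public PcapFileRecord(String fileId, String ownerId) {
            this.fileId = fileId;
            this.ownerId = ownerId;
        }
    }

    /** Throws unless the authenticated user owns the file; call before serving any analysis data. */
    public static PcapFileRecord requireOwnership(PcapFileRecord file, String authenticatedUserId) {
        if (file == null || !file.ownerId.equals(authenticatedUserId)) {
            // Same error for "not found" and "not yours", so valid IDs are not enumerable.
            throw new SecurityException("File not found or access denied");
        }
        return file;
    }
}
```

In a Spring setup this check would typically live in the service layer, with authentication itself enforced globally (e.g. via a security filter chain) rather than repeated per endpoint.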

Include suggestions only if there are alternative approaches or refinements.
""";

String userPrompt = String.format("Create a BPF filter for: %s", naturalLanguageQuery);


Severity: high (security)

User-supplied natural language queries are directly concatenated into the LLM prompt. This makes the service vulnerable to prompt injection attacks. An attacker could craft a query that manipulates the LLM's instructions to return malicious BPF filters or misleading explanations.

Suggested change:
- String userPrompt = String.format("Create a BPF filter for: %s", naturalLanguageQuery);
+ String userPrompt = String.format("Create a BPF filter for the following query. The query is delimited by triple quotes.\n\n\"\"\"\n%s\n\"\"\"", naturalLanguageQuery.replace("\"\"\"", ""));
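The delimiter technique in the suggested change generalizes to a small helper; the sketch below assumes triple quotes are the only delimiter the surrounding prompt relies on (PromptSanitizer is a hypothetical name, not code from the PR):

```java
// Sketch: wrap untrusted user text in triple-quote delimiters that the system prompt
// can reference, stripping the delimiter sequence itself so the input cannot break out.
public class PromptSanitizer {

    public static String wrapUntrusted(String instruction, String userInput) {
        String cleaned = userInput.replace("\"\"\"", "");
        return String.format(
                "%s The query is delimited by triple quotes.%n%n\"\"\"%n%s%n\"\"\"",
                instruction, cleaned);
    }
}
```

Delimiting reduces, but does not eliminate, prompt-injection risk; validating the generated BPF filter before execution remains the stronger control.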

Comment thread: setup-database.sql

  -- Create user
- CREATE USER tracecap_user WITH PASSWORD 'tracecap_pass';
+ CREATE USER tracepcap_user WITH PASSWORD 'tracepcap_pass';


Severity: high (security)

The database setup script contains a hardcoded password for the application user. Hardcoded credentials should be avoided as they are easily discovered if the script is committed to version control.

prompt.append("Analyze this network traffic capture and create a comprehensive story:\n\n");

prompt.append("## File Information\n");
prompt.append(String.format("- Filename: %s\n", file.getFileName()));


Severity: medium (security)

The original filename of the uploaded PCAP file is included in the LLM prompt without sanitization. This allows for prompt injection via file metadata. An attacker could upload a file with a name containing malicious instructions to manipulate the generated narrative.

Suggested change:
- prompt.append(String.format("- Filename: %s\n", file.getFileName()));
+ prompt.append(String.format("- Filename: %s\n", file.getFileName().replaceAll("[\\r\\n]", " ")));

- password: ${DATABASE_PASSWORD:tracecap_pass}
  url: ${DATABASE_URL:jdbc:postgresql://localhost:5432/tracepcap}
  username: ${DATABASE_USERNAME:tracepcap_user}
+ password: ${DATABASE_PASSWORD:tracepcap_pass}


Severity: medium (security)

The application configuration provides insecure default credentials for the database and MinIO. If these defaults are not overridden in production, the system is vulnerable to unauthorized access using well-known credentials. This also applies to the LLM API key placeholder on line 136.

Comment thread: backend/README.md
Comment on lines +91 to +93
setError('🔴 LLM Service Unavailable: The AI service at http://100.64.0.1:1234 is not responding. Please start LM Studio or another OpenAI-compatible LLM server, then try again.')
} else if (errorMsg.includes('timeout') || errorMsg.includes('ECONNREFUSED')) {
setError('🔴 Connection Failed: Cannot reach the LLM service. Make sure it\'s running on http://100.64.0.1:1234')


Severity: medium

The error messages for LLM service connection issues contain a hardcoded IP address (http://100.64.0.1:1234). This is very specific and might confuse users with different local setups. It would be more robust to either make this URL configurable via an environment variable or provide a more generic error message that prompts the user to check their LLM service configuration as defined in the project's .env file.

NotYuSheng and others added 5 commits February 1, 2026 22:57
This commit fixes critical bugs in the story generation feature and UI:

**Story Generation Fixes:**
- Increased LLM_MAX_TOKENS from 2000 to 8000 to prevent JSON truncation
- Added automatic model capability detection via /v1/models endpoint
- Implemented safe enum parsing with case-insensitive matching and fallbacks
- Added comprehensive null safety checks for all parsed fields
- Improved JSON extraction to handle markdown code blocks
- Added graceful degradation with default values when parsing fails
- Enhanced error logging with LLM response content for debugging

**Frontend Fixes:**
- Fixed z-index issue where key events card overlapped navigation bar
- Set sticky card z-index to 10 (navbar is 100) in StoryPage

**Additional Improvements:**
- Added file cleanup service for automatic MinIO file deletion
- Updated environment configuration with cleanup settings
- Improved error handling throughout the story generation pipeline

These changes ensure story generation works reliably with various LLM
responses and prevent UI rendering issues.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
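The "improved JSON extraction to handle markdown code blocks" item above could work along these lines — a sketch only, since the actual implementation is not visible in this PR; class and method names are assumptions, and the backtick fence characters in the pattern are written as \u0060 unicode escapes so the snippet itself stays well-formed:

```java
// Sketch: LLM replies often wrap JSON in fenced code blocks; strip the fence before
// parsing. Class and method names are assumptions, not the project's actual code.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JsonExtractor {
    // the three fence characters are written as unicode escapes for the backtick
    private static final Pattern FENCED =
            Pattern.compile("\u0060\u0060\u0060(?:json)?\\s*(.*?)\\s*\u0060\u0060\u0060", Pattern.DOTALL);

    /** Returns the payload inside a fenced block, or the trimmed input when no fence exists. */
    public static String extract(String llmResponse) {
        Matcher m = FENCED.matcher(llmResponse);
        return m.find() ? m.group(1) : llmResponse.trim();
    }
}
```

Pairing extraction like this with default values on parse failure matches the "graceful degradation" approach the commit message describes.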
- Implement pagination for conversations list (25 items per page, max 100)
- Add pagination for filter generator packet results
- Create reusable SGDS-styled Pagination component
- Update backend controllers to return PagedResponse with metadata
- Fix Analysis Summary overview to display correct values
- Update frontend services to handle paginated responses
- Add pagination state management to ConversationPage and FilterGeneratorPage

Backend changes:
- Add PagedResponse<T> generic DTO for consistent pagination
- Update ConversationsController with page/pageSize query params
- Update FilterController.executeFilter() with pagination support
- Modify FilterService to collect up to 10,000 packets and paginate
- Fix AnalysisSummaryResponse field mappings (totalPackets, fileSize, uploadTime)

Frontend changes:
- Create Pagination component with Previous/Next buttons and page numbers
- Update conversationService.getConversations() to return PaginatedResponse
- Update filterService.executeFilter() with pagination parameters
- Fix useNetworkData hook to extract data from paginated response
- Fix analysisService field mappings to match backend structure

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
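A PagedResponse<T> DTO with the slice behavior this commit describes (25 items per page, page size capped at 100, over an in-memory list of up to 10,000 matched packets) might look like the following sketch; field names and the of(...) factory are assumptions:

```java
import java.util.List;

// Sketch of a generic pagination DTO plus the page-slicing the commit describes.
// Field names and the of(...) factory are assumptions, not the project's actual code.
public class PagedResponse<T> {
    public final List<T> items;
    public final int page;          // 1-based page index
    public final int pageSize;
    public final long totalItems;
    public final int totalPages;

    public PagedResponse(List<T> items, int page, int pageSize, long totalItems) {
        this.items = items;
        this.page = page;
        this.pageSize = pageSize;
        this.totalItems = totalItems;
        this.totalPages = (int) Math.ceil((double) totalItems / pageSize);
    }

    /** Slice an already-collected list (e.g. up to 10,000 matched packets) into one page. */
    public static <T> PagedResponse<T> of(List<T> all, int page, int pageSize) {
        int size = Math.min(Math.max(pageSize, 1), 100); // enforce the 100-item cap
        int from = Math.min((Math.max(page, 1) - 1) * size, all.size());
        int to = Math.min(from + size, all.size());
        return new PagedResponse<>(all.subList(from, to), page, size, all.size());
    }
}
```

Returning the metadata (page, pageSize, totalItems, totalPages) alongside the items is what lets the frontend Pagination component render page numbers without a second request.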
Add important notice that the project is in early development and currently
suboptimal for larger PCAP files (>100MB). Recommend small to medium-sized
files for best performance.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Add detailed comments explaining LLM_MAX_TOKENS behavior:
- Auto-detects model context length from /v1/models endpoint
- Uses minimum of configured value or 80% of model's context
- Serves as fallback if auto-detection fails
- Acts as cost control upper limit
- Recommend 8000-16000 for most models

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
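Under the semantics described above, the effective token budget reduces to a small calculation — a sketch assuming exactly the rules the commit message states:

```java
// Sketch of the documented LLM_MAX_TOKENS rule: take the smaller of the configured
// value and 80% of the model's auto-detected context length; if detection failed
// (non-positive context length), fall back to the configured value alone.
public class TokenBudget {

    public static int effectiveMaxTokens(int configuredMaxTokens, int detectedContextLength) {
        if (detectedContextLength <= 0) {
            return configuredMaxTokens; // auto-detection failed: configured value is the fallback
        }
        int eightyPercent = (int) (detectedContextLength * 0.8);
        return Math.min(configuredMaxTokens, eightyPercent);
    }
}
```

For example, a model reporting a 4096-token context with LLM_MAX_TOKENS=8000 would be capped at 3276 tokens, while a 32k-context model would use the full configured 8000.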
@NotYuSheng
Owner Author

Will come back to these later

@NotYuSheng NotYuSheng merged commit c4330fb into main Feb 1, 2026
@NotYuSheng NotYuSheng deleted the release/v1.0.0 branch February 1, 2026 15:46