Conversation
- Change page title from 'frontend' to 'TracePcap'
- Remove Vite default favicon

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Upload Configuration:
- Fix upload size display from 476.837158203125MB to exactly 512MB
- Centralize configuration using MAX_UPLOAD_SIZE_BYTES in root .env
- Update all services (Spring, MinIO, nginx) to use single source of truth
- Add nginx template system with dynamic config generation at runtime
- Update file size from 500MB to 512MB (536870912 bytes)

Filter Generator Feature:
- Add AI-powered filter generator page for natural language to BPF/display filter conversion
- Implement backend FilterController and FilterService
- Add filter execution with packet matching
- Include Wireshark cheat sheet reference

Story Page Enhancements:
- Add traffic timeline visualization to story page
- Display traffic statistics (total packets, data, averages, peaks)
- Reorganize story layout with traffic insights
- Replace timeline tab with filter generator tab

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
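The odd 476.837158203125MB figure mentioned above is the classic decimal-vs-binary megabyte mismatch: a decimal "500 MB" limit (500,000,000 bytes) divided by a binary mebibyte (1024 × 1024) does not come out even, while the new 512 MiB limit does. A minimal sketch of the arithmetic (class and field names here are illustrative, not from the codebase):

```java
// Illustrates the size-display bug: the old decimal 500 MB limit rendered
// with a binary divisor produces 476.837158203125, while the new
// power-of-two limit (512 * 1024 * 1024 = 536870912 bytes) divides evenly.
public class UploadSizeMath {
    static final long OLD_LIMIT_BYTES = 500_000_000L;       // decimal "500 MB"
    static final long NEW_LIMIT_BYTES = 512L * 1024 * 1024; // 536870912 bytes

    // Convert a byte count to binary mebibytes (MiB).
    static double asMiB(long bytes) {
        return bytes / (1024.0 * 1024.0);
    }
}
```

Picking a power-of-two limit keeps the displayed value, the nginx `client_max_body_size`, and the Spring multipart limit all expressible as the same round number.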
Add detailed README based on MeetMemo template including:
- Project overview and features
- Quick start installation guide
- Usage workflow and examples
- Technology stack documentation
- Common tasks and deployment guide
- Architecture highlights and security notes

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Add centered demo GIF section for visual preview of TracePcap features. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Frontend:
- Add NetworkDiagram page with 3D network topology visualization
- Add network graph components (NetworkGraph, NetworkControls, NodeDetails)
- Add network data hooks and services for fetching/processing network data
- Update upload components with improved styling and user experience
- Add lucide-react icon library for enhanced UI
- Add favicon for better branding
- Update routing to include network diagram page
- Refine CSS styling across upload zone, file list, and layout components

Backend:
- Enhance FilterService with improved filter generation capabilities

Other:
- Update package dependencies (lucide-react)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Configuration:
- Add .env.example with NGINX_PORT=80 as default
- Update docker-compose.yml to use ${NGINX_PORT:-80} from .env
- Document environment configuration in README
Nginx:
- Add Swagger UI proxy configuration to nginx.conf
- Add Swagger UI proxy configuration to nginx.conf.template
- Enable access to /swagger-ui, /v3/api-docs, /swagger-resources, /webjars
Documentation:
- Update README installation instructions to use .env.example
- Document NGINX_PORT configuration
- Clarify Swagger UI access at configured port
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
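The `${NGINX_PORT:-80}` pattern described above uses Compose's variable interpolation with a default. A minimal sketch of how that might look in docker-compose.yml (service and image names here are illustrative, not the project's actual file):

```yaml
# Hypothetical excerpt; falls back to port 80 when NGINX_PORT is unset in .env
services:
  nginx:
    image: nginx:alpine
    ports:
      - "${NGINX_PORT:-80}:80"
```

With this in place, `cp .env.example .env` gives a working default, and users who need a different host port only edit one variable.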
Summary of Changes

Hello @NotYuSheng, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This release marks a significant milestone for the TracePcap project, introducing a suite of intelligent features designed to simplify and deepen network traffic analysis. The core focus is on leveraging AI to transform complex packet data into actionable insights and user-friendly visualizations. This includes the ability to generate precise BPF filters from plain English, visualize network interactions in an interactive graph, and receive a narrative summary of network events. Alongside these new capabilities, the project has undergone a complete rebranding and received substantial infrastructure and UI enhancements to improve usability and performance.
Code Review
This pull request introduces the v1.0.0 release of TracePcap, featuring a project-wide rename from TraceCap, major new features such as AI-powered Story Generation and a Natural Language to BPF Filter Generator, and significant UI/UX and backend improvements. However, it introduces critical security vulnerabilities, primarily a complete lack of authentication and authorization in the backend API, allowing unauthorized access to sensitive network capture data. The integration with Large Language Models (LLMs) is also vulnerable to prompt injection through user queries and file metadata, and there are hardcoded credentials and insecure default configurations. Immediate remediation of the access control flaws and sanitization of LLM inputs are required. Additionally, minor areas for improvement include addressing other hardcoded values and documentation links.
```java
@RestController
@RequestMapping("/api/analysis")
@RequiredArgsConstructor
public class AnalysisController {
```
The application appears to be missing authentication and authorization mechanisms. All API endpoints are publicly accessible, and there are no ownership checks (IDOR) to ensure that users can only access their own PCAP files and analysis results. Given that the frontend code references an authToken, the backend should enforce authentication and verify resource ownership for every request.
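One hedged way to address the IDOR concern above is an explicit per-request ownership check before any analysis resource is served. The sketch below is ours, not code from the PR; `PcapFile`, `ownerId`, and `requireOwnership` are hypothetical names, and a real Spring app would combine this with an authentication filter and map the exception to HTTP 403/404:

```java
import java.util.Objects;

// Minimal sketch of a per-request ownership check (IDOR guard).
// All names here (PcapFile, ownerId) are hypothetical, not from the PR.
public class OwnershipGuard {

    public record PcapFile(String id, String ownerId) {}

    // In a real Spring app this would be mapped to an HTTP 403/404 response.
    public static class ForbiddenException extends RuntimeException {
        public ForbiddenException(String msg) { super(msg); }
    }

    // Returns the file only if the authenticated user owns it; treating
    // "not yours" the same as "not found" avoids leaking valid IDs.
    public static PcapFile requireOwnership(PcapFile file, String currentUserId) {
        if (file == null || !Objects.equals(file.ownerId(), currentUserId)) {
            throw new ForbiddenException("PCAP file not accessible");
        }
        return file;
    }
}
```

Every controller method that loads a file or analysis by ID would call such a guard with the authenticated principal before doing any work.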
```java
    Include suggestions only if there are alternative approaches or refinements.
    """;

// ...

String userPrompt = String.format("Create a BPF filter for: %s", naturalLanguageQuery);
```
User-supplied natural language queries are directly concatenated into the LLM prompt. This makes the service vulnerable to prompt injection attacks. An attacker could craft a query that manipulates the LLM's instructions to return malicious BPF filters or misleading explanations.
Suggested change:

```java
// Before:
String userPrompt = String.format("Create a BPF filter for: %s", naturalLanguageQuery);

// After:
String userPrompt = String.format(
    "Create a BPF filter for the following query. The query is delimited by triple quotes.\n\n\"\"\"\n%s\n\"\"\"",
    naturalLanguageQuery.replace("\"\"\"", ""));
```
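The delimiter approach can be factored into a small helper so every prompt-building call site applies it consistently. A sketch, assuming only that the query is plain user text (`PromptBuilder` and `buildBpfPrompt` are our hypothetical names):

```java
// Sketch of the delimiting defense: strip the delimiter sequence from
// user input, then fence the query inside triple quotes in the prompt.
public class PromptBuilder {

    static String buildBpfPrompt(String naturalLanguageQuery) {
        // Remove any embedded delimiter so the user cannot break out of the fence.
        String safe = naturalLanguageQuery.replace("\"\"\"", "");
        return String.format(
            "Create a BPF filter for the following query. "
            + "The query is delimited by triple quotes.\n\n\"\"\"\n%s\n\"\"\"",
            safe);
    }
}
```

Delimiting reduces, but does not eliminate, prompt-injection risk; validating the returned BPF expression against a parser before executing it would add a second layer of defense.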
```diff
 -- Create user
-CREATE USER tracecap_user WITH PASSWORD 'tracecap_pass';
+CREATE USER tracepcap_user WITH PASSWORD 'tracepcap_pass';
```
```java
prompt.append("Analyze this network traffic capture and create a comprehensive story:\n\n");

// ...

prompt.append("## File Information\n");
prompt.append(String.format("- Filename: %s\n", file.getFileName()));
```
The original filename of the uploaded PCAP file is included in the LLM prompt without sanitization. This allows for prompt injection via file metadata. An attacker could upload a file with a name containing malicious instructions to manipulate the generated narrative.
Suggested change:

```java
// Before:
prompt.append(String.format("- Filename: %s\n", file.getFileName()));

// After:
prompt.append(String.format("- Filename: %s\n", file.getFileName().replaceAll("[\\r\\n]", " ")));
```
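The suggestion above strips CR/LF so a filename cannot start a new line in the prompt. A slightly broader sketch that also drops other control characters (that extra step is our addition, not part of the suggestion; `FilenameSanitizer` is a hypothetical name):

```java
// Sketch: neutralize attacker-controlled filenames before embedding
// them in an LLM prompt. The control-character stripping goes beyond
// the review suggestion and is an assumption of ours.
public class FilenameSanitizer {
    static String sanitize(String fileName) {
        return fileName
            .replaceAll("[\\r\\n]", " ")      // no new prompt lines
            .replaceAll("\\p{Cntrl}", "");    // drop remaining control chars
    }
}
```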
```diff
-    password: ${DATABASE_PASSWORD:tracecap_pass}
+    url: ${DATABASE_URL:jdbc:postgresql://localhost:5432/tracepcap}
+    username: ${DATABASE_USERNAME:tracepcap_user}
+    password: ${DATABASE_PASSWORD:tracepcap_pass}
```
```javascript
setError('🔴 LLM Service Unavailable: The AI service at http://100.64.0.1:1234 is not responding. Please start LM Studio or another OpenAI-compatible LLM server, then try again.')
} else if (errorMsg.includes('timeout') || errorMsg.includes('ECONNREFUSED')) {
  setError('🔴 Connection Failed: Cannot reach the LLM service. Make sure it\'s running on http://100.64.0.1:1234')
```
The error messages for LLM service connection issues contain a hardcoded IP address (http://100.64.0.1:1234). This is very specific and might confuse users with different local setups. It would be more robust to either make this URL configurable via an environment variable or provide a more generic error message that prompts the user to check their LLM service configuration as defined in the project's .env file.
This commit fixes critical bugs in the story generation feature and UI:

**Story Generation Fixes:**
- Increased LLM_MAX_TOKENS from 2000 to 8000 to prevent JSON truncation
- Added automatic model capability detection via /v1/models endpoint
- Implemented safe enum parsing with case-insensitive matching and fallbacks
- Added comprehensive null safety checks for all parsed fields
- Improved JSON extraction to handle markdown code blocks
- Added graceful degradation with default values when parsing fails
- Enhanced error logging with LLM response content for debugging

**Frontend Fixes:**
- Fixed z-index issue where key events card overlapped navigation bar
- Set sticky card z-index to 10 (navbar is 100) in StoryPage

**Additional Improvements:**
- Added file cleanup service for automatic MinIO file deletion
- Updated environment configuration with cleanup settings
- Improved error handling throughout the story generation pipeline

These changes ensure story generation works reliably with various LLM responses and prevent UI rendering issues.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
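The "safe enum parsing" mentioned above can be sketched as a case-insensitive lookup with a fallback, so malformed LLM output never throws like `Enum.valueOf` would. The `Severity` enum and method names below are hypothetical, not taken from the codebase:

```java
// Sketch of safe enum parsing for LLM output: case-insensitive match
// with a fallback value instead of an IllegalArgumentException.
// The Severity enum here is hypothetical.
public class SafeEnum {
    public enum Severity { LOW, MEDIUM, HIGH }

    static Severity parseSeverity(String raw, Severity fallback) {
        if (raw == null) return fallback;          // null safety for missing fields
        for (Severity s : Severity.values()) {
            if (s.name().equalsIgnoreCase(raw.trim())) return s;
        }
        return fallback;                           // graceful degradation
    }
}
```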
- Implement pagination for conversations list (25 items per page, max 100)
- Add pagination for filter generator packet results
- Create reusable SGDS-styled Pagination component
- Update backend controllers to return PagedResponse with metadata
- Fix Analysis Summary overview to display correct values
- Update frontend services to handle paginated responses
- Add pagination state management to ConversationPage and FilterGeneratorPage

Backend changes:
- Add PagedResponse<T> generic DTO for consistent pagination
- Update ConversationsController with page/pageSize query params
- Update FilterController.executeFilter() with pagination support
- Modify FilterService to collect up to 10,000 packets and paginate
- Fix AnalysisSummaryResponse field mappings (totalPackets, fileSize, uploadTime)

Frontend changes:
- Create Pagination component with Previous/Next buttons and page numbers
- Update conversationService.getConversations() to return PaginatedResponse
- Update filterService.executeFilter() with pagination parameters
- Fix useNetworkData hook to extract data from paginated response
- Fix analysisService field mappings to match backend structure

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
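A generic paged-response DTO like the `PagedResponse<T>` described above might look as follows; the field names and the clamping helper are our guesses based on the stated 25-per-page default and 100 maximum, not the actual DTO:

```java
import java.util.List;

// Sketch of a generic paged-response DTO; field names are assumptions
// based on the commit message (25-item default, 100-item max).
public record PagedResponse<T>(List<T> items, int page, int pageSize, long totalItems) {

    // Clamp a requested page size to the documented default/max.
    public static int clampPageSize(Integer requested) {
        if (requested == null || requested < 1) return 25;
        return Math.min(requested, 100);
    }

    public long totalPages() {
        return (totalItems + pageSize - 1) / pageSize; // ceiling division
    }
}
```

Returning the total alongside the items lets the frontend Pagination component render page numbers without a second counting request.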
Add important notice that the project is in early development and currently suboptimal for larger PCAP files (>100MB). Recommend small to medium-sized files for best performance. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Add detailed comments explaining LLM_MAX_TOKENS behavior:
- Auto-detects model context length from /v1/models endpoint
- Uses minimum of configured value or 80% of model's context
- Serves as fallback if auto-detection fails
- Acts as cost control upper limit
- Recommend 8000-16000 for most models

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
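The rule described above reduces to a small pure function: take the smaller of the configured cap and 80% of the detected context length, falling back to the configured value when detection fails. A sketch (method and parameter names are ours, not the service's):

```java
// Sketch of the LLM_MAX_TOKENS selection rule from the commit message.
public class TokenBudget {
    static int effectiveMaxTokens(int configured, Integer detectedContextLength) {
        if (detectedContextLength == null) {
            return configured; // /v1/models detection failed: use the fallback
        }
        // Cap at 80% of the model's context, never exceeding the configured limit.
        return Math.min(configured, (int) (detectedContextLength * 0.8));
    }
}
```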
Will come back to these later.