From 286c0a93fd37115eb182092703332206c607765c Mon Sep 17 00:00:00 2001
From: Gage Krumbach
Date: Thu, 16 Oct 2025 23:02:09 -0500
Subject: [PATCH 01/30] Normalize message type handling in transformToClaudeFormat function

- Updated the transformToClaudeFormat function to normalize message types by replacing periods with underscores and converting to lowercase, improving message processing accuracy.
- Enhanced logging to include both original and normalized message types for better traceability during message transformation.

Enhance session service handling and frontend display for temporary pods

- Updated backend session handlers to prioritize temporary content services for completed sessions, improving resource management and service selection logic.
- Enhanced logging to provide clearer insights into the service being used for session operations.
- Refactored the OverviewTab component to improve the display of temporary pods, including expandable views for pod containers, enhancing user interaction and visibility of Kubernetes resources.

Enhance session continuation logic for interactive sessions

- Updated ProjectSessionsListPage to conditionally allow continuation of sessions based on interactivity.
- Modified ProjectSessionDetailPage to display the continue button only for completed interactive sessions, improving user clarity and experience.

Add session continuation functionality and Claude format support

- Introduced a new endpoint to retrieve session messages in Claude SDK format, enhancing session continuation capabilities.
- Updated session creation logic to support parent session IDs, allowing users to continue from previous sessions.
- Enhanced frontend components to include session continuation options, improving user experience.
- Refactored backend handlers to manage session state and PVC reuse for continued sessions.
- Added the hooks and types needed to support the new functionality across frontend and backend components.

Update session timeout to 4 hours for safety in session handling

- Increased ActiveDeadlineSeconds from 30 minutes to 4 hours in the session event handler to allow longer-running jobs to finish without premature termination.

Add Kubernetes resource management for agentic sessions

- Introduced new endpoints for managing temporary content pods, including spawning, checking status, and deleting pods.
- Added functionality to retrieve the Kubernetes resources (jobs, pods, PVC) associated with agentic sessions.
- Enhanced frontend components to automatically spawn content pods for completed sessions, improving user experience.
- Implemented a new K8sResourceTree component to visualize job and pod statuses in the session detail view.
- Updated backend handlers to manage the lifecycle of temporary content pods and ensure proper cleanup of resources.
- Added hooks and API services to interact with the new Kubernetes resource management features.

Remove unused DialogTrigger import from K8sResourceTree component for cleaner code.

Add the 'use client' directive to the not-found component for correct client-side rendering.

Enhance Git diff summary to include counts of added and removed files

- Updated the DiffSummary struct to track the number of files added and removed.
- Modified the DiffRepo function to calculate and log these new metrics.
- Adjusted the ContentGitDiff handler to return the file counts in the JSON response.
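To make the last change above concrete, this is the summary shape the diff below extends (taken from components/backend/git/operations.go), annotated with the counting rules the new code applies; the trailing comment reflects the JSON that ContentGitDiff now returns:

    package git

    // DiffSummary reports line totals plus how many files were added or removed.
    // Counting rules in DiffRepo (see the hunks below): a numstat entry with zero
    // additions and non-zero removals counts as a removed file; every untracked
    // file counts as an added file, with all of its lines added to TotalAdded.
    type DiffSummary struct {
        TotalAdded   int `json:"total_added"`
        TotalRemoved int `json:"total_removed"`
        FilesAdded   int `json:"files_added"`
        FilesRemoved int `json:"files_removed"`
    }

    // ContentGitDiff now reports both views to the frontend:
    //   {"files": {"added": N, "removed": N}, "total_added": N, "total_removed": N}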
Refactor session handling and improve frontend interactions

- Updated backend session handlers to use UpdateStatus for resource updates and added checks for terminal states before updating session status.
- Enhanced frontend components to streamline project session management, including removing the restart functionality and improving user feedback during session interactions.
- Introduced loading indicators for chat actions in the MessagesTab component to enhance user experience.
- Simplified project retrieval logic in session-related components to improve code clarity and maintainability.

Add session continuation and Kubernetes resource display in session detail

- Implemented functionality to allow users to continue completed or failed sessions, enhancing user experience.
- Introduced a dropdown menu for session actions, including cloning and deleting sessions.
- Enhanced the OverviewTab and WorkspaceTab components to display the Kubernetes resources (jobs, pods, PVC) associated with sessions.
- Updated MessagesTab to include a button for continuing sessions directly from the chat interface.
- Improved UI elements for better clarity and interaction during session management.

Enhance logging and error handling in content management handlers

- Added detailed logging for JSON binding, path validation, directory creation, and file writing in the ContentWrite, ContentRead, and ContentList functions to improve traceability and debugging.
- Updated error messages to provide clearer feedback for invalid paths and failed operations.
- Improved frontend session components by renaming references from "Workspace service" to "Workspace viewer" for consistency and clarity in user interactions.

Enhance session management and deployment configurations

- Added support for self-subject access reviews in Kubernetes role provisioning to improve authorization handling.
- Updated the session detail page to streamline the delete action in the dropdown menu for better user experience.
- Modified session query hooks to ensure all related queries are invalidated and refetched upon session actions, enhancing data consistency.
- Introduced new environment variables for the content service image and image pull policy in the backend deployment configuration for improved deployment flexibility.
- Updated the deployment script to reflect changes in content service image handling during updates.

Enhance session continuation and logging for interactive sessions

- Added support for parent session IDs in session metadata to facilitate continuation of sessions.
- Improved logging for session handling, including detailed messages for session state and history restoration.
- Updated session status messages to clarify session restart requests.
- Enhanced error handling for session metadata updates and message history fetching, providing clearer feedback on failures.

Add OpenShift console URL generation for Kubernetes resources in OverviewTab

- Introduced a utility function to generate OpenShift console URLs for Jobs, Pods, and PVCs based on the current window location and project namespace.
- Updated the OverviewTab component to display these URLs as clickable links for Kubernetes resources, enhancing user navigation to the OpenShift console.
- Improved the display logic for resource names, ensuring users can easily reach the relevant console pages directly from the session overview.
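One pattern worth calling out from the first item above: session phase changes now go through the status subresource. Because the AgenticSession CRD exposes a status subresource, a plain Update silently drops status changes, so the handlers in the diff below call UpdateStatus instead. A minimal sketch of that pattern with illustrative names (not the real handler code):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
    )

    // updatePhase mutates status.phase/status.message on an unstructured
    // AgenticSession and persists it through the status subresource.
    func updatePhase(ctx context.Context, dyn dynamic.Interface, gvr schema.GroupVersionResource,
        ns string, obj *unstructured.Unstructured, phase, message string) error {
        status, _ := obj.Object["status"].(map[string]interface{})
        if status == nil {
            status = map[string]interface{}{}
            obj.Object["status"] = status
        }
        status["phase"] = phase
        status["message"] = message
        // UpdateStatus, not Update: only the status subresource accepts these fields.
        _, err := dyn.Resource(gvr).Namespace(ns).UpdateStatus(ctx, obj, metav1.UpdateOptions{})
        return err
    }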
Enhance session cleanup and job management in backend handlers

- Improved the StopSession function to derive job names when not present in status and added detailed logging for job and pod deletion.
- Updated the GetSessionK8sResources function to handle job-not-found scenarios, providing clearer status feedback.
- Enhanced the handleAgenticSessionEvent function to clean up running jobs and associated pods when a session is stopped, ensuring proper resource management.
- Added the necessary RBAC permissions for job and pod management to the backend cluster role configuration.

Implement content service mode and enhance role permission management

- Added support for a new CONTENT_SERVICE_MODE in main.go, allowing minimal initialization without K8s access for content services.
- Enhanced session handling in sessions.go by introducing the ensureRunnerRolePermissions function to update runner roles with required permissions, specifically adding selfsubjectaccessreviews for existing sessions.
- Improved logging for role updates and permission checks to facilitate better debugging and traceability.

Enhance content service mode and session metadata handling

- Initialized configuration for CONTENT_SERVICE_MODE in main.go to set StateBaseDir from the environment, improving content service startup.
- Updated the session metadata update logic in sessions.go to retrieve the updated object after the metadata update, ensuring accurate status handling.
- Improved annotation retrieval in operator session handling for better clarity and reliability in session continuation logic.

Enhance session handling and message formatting in backend and frontend

- Updated session handling in sessions.go to conditionally set parent session annotations based on completion time, improving session continuation logic.
- Enhanced the GetSessionK8sResources function to provide clearer job and pod status feedback, including job existence checks and pod termination states.
- Improved message retrieval in websocket handlers by filtering to only include conversational messages, with detailed logging for better traceability.
- Updated the OverviewTab component to visually represent additional session states, including 'terminating' and 'terminated', enhancing user experience.

Enhance message filtering and session resource display in backend and frontend

- Improved message filtering in GetSessionMessagesClaudeFormat to exclude non-conversational messages and validate payloads, enhancing message retrieval accuracy.
- Updated the OverviewTab component to display additional Kubernetes resources, including Persistent Volume Claims (PVCs) and temporary pods, with improved visual representation and status indicators.
- Enhanced logging for message filtering and resource management to provide clearer insights into session handling and state.

Refactor message handling and improve session interactions

- Updated backend websocket handlers to enhance message filtering, ensuring only valid user messages are processed for the Claude SDK connect() format.
- Modified the transformToClaudeFormat function to clarify the expected message structure, replacing 'role' with 'type' for user messages and skipping assistant messages entirely.
- Cleaned up frontend components by removing unused props related to session state management, streamlining the user interface.
- Improved logging for message validation and session history restoration in the Claude code runner, enhancing traceability and debugging capabilities.
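The CONTENT_SERVICE_MODE start-up path described above also appears in the main.go hunk near the end of this excerpt. A self-contained sketch of the gate, simplified for illustration (the real service registers the gin /content/list, /content/file, and /content/github/* handlers rather than a bare file server, and these helper names are stand-ins):

    package main

    import (
        "log"
        "net/http"
        "os"
    )

    func main() {
        if os.Getenv("CONTENT_SERVICE_MODE") == "true" {
            // Content-only mode: no Kubernetes clients, just filesystem access
            // rooted at STATE_BASE_DIR (the temp pods mount the workspace PVC here).
            baseDir := os.Getenv("STATE_BASE_DIR")
            if baseDir == "" {
                baseDir = "/workspace"
            }
            log.Printf("content service mode: serving %s without K8s access", baseDir)

            mux := http.NewServeMux()
            mux.HandleFunc("/health", func(w http.ResponseWriter, _ *http.Request) {
                w.WriteHeader(http.StatusOK) // readiness probe target
            })
            mux.Handle("/content/", http.StripPrefix("/content/", http.FileServer(http.Dir(baseDir))))
            log.Fatal(http.ListenAndServe(":8080", mux))
        }
        // Full backend start-up (K8s clients, session routes) continues here
        // in the real main.go.
    }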
Enhance message transformation for Claude SDK integration

- Updated the transformToClaudeFormat function to align with the standard Anthropic Messages API format, replacing 'type' with 'role' for user and assistant messages.
- Improved content extraction for user and assistant messages, supporting various content block types.
- Enhanced logging for message validation and processing, providing clearer insights into the transformation process.
- Removed outdated comments and added references to the API documentation for better clarity.

Enhance content pod error handling and user feedback in session management

- Introduced a contentPodError state in ProjectSessionDetailPage to track errors during content pod spawning.
- Updated logic to prevent auto-retry of content pod spawning if an error has occurred, requiring explicit user action to retry.
- Enhanced the WorkspaceTab component to display error messages with a retry button when content pod spawning fails, improving user experience.
- Improved logging and error handling in the spawnContentPodAsync function to provide clearer feedback on failures.

Refactor message transformation for Claude SDK control protocol

- Updated the transformToClaudeFormat function to align with the Claude SDK control protocol, wrapping user messages in an envelope and introducing parent_tool_use_id for tool result chaining.
- Enhanced logging to provide clearer insights into the transformation process and message validation.
- Removed support for assistant messages, since the SDK only accepts user messages, improving message filtering accuracy.
- Updated comments to clarify the expected message structure and requirements for the SDK integration.

Enhance session token management and logging in session handlers

- Updated the provisionRunnerTokenForSession function to refresh existing tokens by updating the Kubernetes Secret if it already exists, improving token management.
- Added detailed logging for both token creation and updates, providing clearer insights into the session token lifecycle.
- Enhanced the StartSession function to regenerate the runner token for session continuation, ensuring that sessions can proceed even if the old token has expired.

Enhance session continuation logic in StartSession function

- Updated the StartSession function to check for terminal session phases (Completed, Failed, Cancelled) instead of relying on completion time, improving the accuracy of session continuation detection.
- Added logging to indicate when a continuation is detected based on the session phase, enhancing traceability in session management.
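Of the two transformToClaudeFormat revisions above, the later control-protocol shape supersedes the Messages-API one. The runner's transform code is not included in this excerpt, so the following is a hedged Go sketch of the envelope as the commit notes describe it, not the exact wire format: only user messages are forwarded, each wrapped with a top-level type, a Messages-API-style payload under message, and parent_tool_use_id carried for tool-result chaining.

    package main

    import "encoding/json"

    // claudeUserEnvelope is an illustrative approximation of the envelope
    // described above; field set inferred from the commit notes, not copied
    // from the runner code.
    type claudeUserEnvelope struct {
        Type            string        `json:"type"`                         // always "user"
        Message         claudeMessage `json:"message"`
        ParentToolUseID *string       `json:"parent_tool_use_id,omitempty"` // set when chaining off a tool result
    }

    type claudeMessage struct {
        Role    string      `json:"role"`    // "user"
        Content interface{} `json:"content"` // plain text or content blocks
    }

    // wrapUserMessage marshals one user message into the envelope form.
    func wrapUserMessage(content interface{}, parentToolUseID *string) (string, error) {
        b, err := json.Marshal(claudeUserEnvelope{
            Type:            "user",
            Message:         claudeMessage{Role: "user", Content: content},
            ParentToolUseID: parentToolUseID,
        })
        return string(b), err
    }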
--- components/backend/git/operations.go | 34 +- components/backend/handlers/content.go | 40 +- components/backend/handlers/sessions.go | 874 ++++++++++++++++-- components/backend/main.go | 32 +- components/backend/routes.go | 5 + components/backend/types/common.go | 13 + components/backend/types/session.go | 13 +- components/backend/websocket/handlers.go | 310 +++++++ .../[sessionName]/content-pod-status/route.ts | 17 + .../[sessionName]/content-pod/route.ts | 17 + .../[sessionName]/k8s-resources/route.ts | 17 + .../[sessionName]/spawn-content-pod/route.ts | 17 + .../[sessionName]/start/route.ts | 17 + .../components/k8s-resource-tree.tsx | 217 +++++ .../sessions/[sessionName]/not-found.tsx | 2 + .../[name]/sessions/[sessionName]/page.tsx | 223 ++++- .../src/app/projects/[name]/sessions/page.tsx | 53 +- .../src/components/session/MessagesTab.tsx | 120 ++- .../src/components/session/OverviewTab.tsx | 275 +++++- .../src/components/session/WorkspaceTab.tsx | 48 +- .../frontend/src/services/api/sessions.ts | 56 ++ .../src/services/queries/use-sessions.ts | 67 +- .../frontend/src/types/agentic-session.ts | 1 + components/frontend/src/types/api/sessions.ts | 1 + components/manifests/backend-deployment.yaml | 4 + .../manifests/crds/agenticsessions-crd.yaml | 36 +- components/manifests/deploy.sh | 7 + .../manifests/rbac/backend-clusterrole.yaml | 27 + .../operator/internal/handlers/sessions.go | 413 +++++++-- components/operator/main.go | 3 + .../runners/claude-code-runner/wrapper.py | 144 ++- 31 files changed, 2854 insertions(+), 249 deletions(-) create mode 100644 components/frontend/src/app/api/projects/[name]/agentic-sessions/[sessionName]/content-pod-status/route.ts create mode 100644 components/frontend/src/app/api/projects/[name]/agentic-sessions/[sessionName]/content-pod/route.ts create mode 100644 components/frontend/src/app/api/projects/[name]/agentic-sessions/[sessionName]/k8s-resources/route.ts create mode 100644 components/frontend/src/app/api/projects/[name]/agentic-sessions/[sessionName]/spawn-content-pod/route.ts create mode 100644 components/frontend/src/app/api/projects/[name]/agentic-sessions/[sessionName]/start/route.ts create mode 100644 components/frontend/src/app/projects/[name]/sessions/[sessionName]/components/k8s-resource-tree.tsx diff --git a/components/backend/git/operations.go b/components/backend/git/operations.go index c484fbdea..825d6dedd 100644 --- a/components/backend/git/operations.go +++ b/components/backend/git/operations.go @@ -40,6 +40,8 @@ type ProjectSettings struct { type DiffSummary struct { TotalAdded int `json:"total_added"` TotalRemoved int `json:"total_removed"` + FilesAdded int `json:"files_added"` + FilesRemoved int `json:"files_removed"` } // getProjectSettings retrieves the ProjectSettings CR for a project using the provided dynamic client @@ -871,7 +873,7 @@ func DiffRepo(ctx context.Context, repoDir string) (*DiffSummary, error) { summary := &DiffSummary{} - // Get numstat for modified files (working tree vs HEAD) + // Get numstat for modified tracked files (working tree vs HEAD) numstatOut, err := run("git", "diff", "--numstat", "HEAD") if err == nil && strings.TrimSpace(numstatOut) != "" { lines := strings.Split(strings.TrimSpace(numstatOut), "\n") @@ -896,11 +898,37 @@ func DiffRepo(ctx context.Context, repoDir string) (*DiffSummary, error) { fmt.Sscanf(removed, "%d", &n) summary.TotalRemoved += n } + // If file was deleted (0 added, all removed), count as removed file + if added == "0" && removed != "0" { + summary.FilesRemoved++ + } + } + 
} + + // Get untracked files (new files not yet added to git) + untrackedOut, err := run("git", "ls-files", "--others", "--exclude-standard") + if err == nil && strings.TrimSpace(untrackedOut) != "" { + untrackedFiles := strings.Split(strings.TrimSpace(untrackedOut), "\n") + for _, filePath := range untrackedFiles { + if filePath == "" { + continue + } + // Count lines in the untracked file + fullPath := filepath.Join(repoDir, filePath) + if data, err := os.ReadFile(fullPath); err == nil { + // Count lines (all lines in a new file are "added") + lineCount := strings.Count(string(data), "\n") + if len(data) > 0 && !strings.HasSuffix(string(data), "\n") { + lineCount++ // Count last line if it doesn't end with newline + } + summary.TotalAdded += lineCount + summary.FilesAdded++ + } } } - log.Printf("gitDiffRepo: total_added=%d total_removed=%d", - summary.TotalAdded, summary.TotalRemoved) + log.Printf("gitDiffRepo: files_added=%d files_removed=%d total_added=%d total_removed=%d", + summary.FilesAdded, summary.FilesRemoved, summary.TotalAdded, summary.TotalRemoved) return summary, nil } diff --git a/components/backend/handlers/content.go b/components/backend/handlers/content.go index eea730ee6..6dae788df 100644 --- a/components/backend/handlers/content.go +++ b/components/backend/handlers/content.go @@ -128,11 +128,22 @@ func ContentGitDiff(c *gin.Context) { summary, err := GitDiffRepo(c.Request.Context(), repoDir) if err != nil { - c.JSON(http.StatusOK, gin.H{"total_added": 0, "total_removed": 0}) + c.JSON(http.StatusOK, gin.H{ + "files": gin.H{ + "added": 0, + "removed": 0, + }, + "total_added": 0, + "total_removed": 0, + }) return } c.JSON(http.StatusOK, gin.H{ + "files": gin.H{ + "added": summary.FilesAdded, + "removed": summary.FilesRemoved, + }, "total_added": summary.TotalAdded, "total_removed": summary.TotalRemoved, }) @@ -146,16 +157,23 @@ func ContentWrite(c *gin.Context) { Encoding string `json:"encoding"` } if err := c.ShouldBindJSON(&req); err != nil { + log.Printf("ContentWrite: bind JSON failed: %v", err) c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()}) return } + log.Printf("ContentWrite: path=%q contentLen=%d encoding=%q StateBaseDir=%q", req.Path, len(req.Content), req.Encoding, StateBaseDir) + path := filepath.Clean("/" + strings.TrimSpace(req.Path)) if path == "/" || strings.Contains(path, "..") { + log.Printf("ContentWrite: invalid path rejected: path=%q", path) c.JSON(http.StatusBadRequest, gin.H{"error": "invalid path"}) return } abs := filepath.Join(StateBaseDir, path) + log.Printf("ContentWrite: absolute path=%q", abs) + if err := os.MkdirAll(filepath.Dir(abs), 0755); err != nil { + log.Printf("ContentWrite: mkdir failed for %q: %v", filepath.Dir(abs), err) c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create directory"}) return } @@ -163,6 +181,7 @@ func ContentWrite(c *gin.Context) { if strings.EqualFold(req.Encoding, "base64") { b, err := base64.StdEncoding.DecodeString(req.Content) if err != nil { + log.Printf("ContentWrite: base64 decode failed: %v", err) c.JSON(http.StatusBadRequest, gin.H{"error": "invalid base64 content"}) return } @@ -171,22 +190,31 @@ func ContentWrite(c *gin.Context) { data = []byte(req.Content) } if err := os.WriteFile(abs, data, 0644); err != nil { + log.Printf("ContentWrite: write failed for %q: %v", abs, err) c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to write file"}) return } + log.Printf("ContentWrite: successfully wrote %d bytes to %q", len(data), abs) c.JSON(http.StatusOK, 
gin.H{"message": "ok"}) } // ContentRead handles GET /content/file?path= func ContentRead(c *gin.Context) { path := filepath.Clean("/" + strings.TrimSpace(c.Query("path"))) + log.Printf("ContentRead: requested path=%q StateBaseDir=%q", c.Query("path"), StateBaseDir) + log.Printf("ContentRead: cleaned path=%q", path) + if path == "/" || strings.Contains(path, "..") { + log.Printf("ContentRead: invalid path rejected: path=%q", path) c.JSON(http.StatusBadRequest, gin.H{"error": "invalid path"}) return } abs := filepath.Join(StateBaseDir, path) + log.Printf("ContentRead: absolute path=%q", abs) + b, err := os.ReadFile(abs) if err != nil { + log.Printf("ContentRead: read failed for %q: %v", abs, err) if os.IsNotExist(err) { c.JSON(http.StatusNotFound, gin.H{"error": "not found"}) } else { @@ -194,19 +222,28 @@ func ContentRead(c *gin.Context) { } return } + log.Printf("ContentRead: successfully read %d bytes from %q", len(b), abs) c.Data(http.StatusOK, "application/octet-stream", b) } // ContentList handles GET /content/list?path= func ContentList(c *gin.Context) { path := filepath.Clean("/" + strings.TrimSpace(c.Query("path"))) + log.Printf("ContentList: requested path=%q", c.Query("path")) + log.Printf("ContentList: cleaned path=%q", path) + log.Printf("ContentList: StateBaseDir=%q", StateBaseDir) + if path == "/" || strings.Contains(path, "..") { + log.Printf("ContentList: invalid path rejected: path=%q", path) c.JSON(http.StatusBadRequest, gin.H{"error": "invalid path"}) return } abs := filepath.Join(StateBaseDir, path) + log.Printf("ContentList: absolute path=%q", abs) + info, err := os.Stat(abs) if err != nil { + log.Printf("ContentList: stat failed for %q: %v", abs, err) if os.IsNotExist(err) { c.JSON(http.StatusNotFound, gin.H{"error": "not found"}) } else { @@ -241,5 +278,6 @@ func ContentList(c *gin.Context) { "modifiedAt": info.ModTime().UTC().Format(time.RFC3339), }) } + log.Printf("ContentList: returning %d items for path=%q", len(items), path) c.JSON(http.StatusOK, gin.H{"items": items}) } diff --git a/components/backend/handlers/sessions.go b/components/backend/handlers/sessions.go index 9949ade03..8361ea1e8 100644 --- a/components/backend/handlers/sessions.go +++ b/components/backend/handlers/sessions.go @@ -19,10 +19,12 @@ import ( corev1 "k8s.io/api/core/v1" rbacv1 "k8s.io/api/rbac/v1" "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/api/resource" v1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" "k8s.io/apimachinery/pkg/runtime/schema" ktypes "k8s.io/apimachinery/pkg/types" + intstr "k8s.io/apimachinery/pkg/util/intstr" "k8s.io/client-go/dynamic" "k8s.io/client-go/kubernetes" ) @@ -151,7 +153,7 @@ func parseSpec(spec map[string]interface{}) types.AgenticSessionSpec { ng.URL = s } if s, ok := in["branch"].(string); ok && strings.TrimSpace(s) != "" { - ng.Branch = StringPtr(s) + ng.Branch = types.StringPtr(s) } r.Input = ng } @@ -161,13 +163,13 @@ func parseSpec(spec map[string]interface{}) types.AgenticSessionSpec { og.URL = s } if s, ok := out["branch"].(string); ok && strings.TrimSpace(s) != "" { - og.Branch = StringPtr(s) + og.Branch = types.StringPtr(s) } r.Output = og } // Include per-repo status if present if st, ok := m["status"].(string); ok { - r.Status = StringPtr(st) + r.Status = types.StringPtr(st) } if strings.TrimSpace(r.Input.URL) != "" { repos = append(repos, r) @@ -359,9 +361,40 @@ func CreateSession(c *gin.Context) { } // Optional environment variables passthrough (always, independent of git config 
presence) - if len(req.EnvironmentVariables) > 0 { + envVars := make(map[string]string) + for k, v := range req.EnvironmentVariables { + envVars[k] = v + } + + // Handle session continuation + if req.ParentSessionID != "" { + envVars["PARENT_SESSION_ID"] = req.ParentSessionID + // Add annotation to track continuation lineage + if metadata["annotations"] == nil { + metadata["annotations"] = make(map[string]interface{}) + } + annotations := metadata["annotations"].(map[string]interface{}) + annotations["vteam.ambient-code/parent-session-id"] = req.ParentSessionID + log.Printf("Creating continuation session from parent %s", req.ParentSessionID) + + // Clean up temp-content pod from parent session to free the PVC + // This prevents Multi-Attach errors when the new session tries to mount the same workspace + reqK8s, _ := GetK8sClientsForRequest(c) + if reqK8s != nil { + tempPodName := fmt.Sprintf("temp-content-%s", req.ParentSessionID) + if err := reqK8s.CoreV1().Pods(project).Delete(c.Request.Context(), tempPodName, v1.DeleteOptions{}); err != nil { + if !errors.IsNotFound(err) { + log.Printf("CreateSession: failed to delete temp-content pod %s (non-fatal): %v", tempPodName, err) + } + } else { + log.Printf("CreateSession: deleted temp-content pod %s to free PVC for continuation", tempPodName) + } + } + } + + if len(envVars) > 0 { spec := session["spec"].(map[string]interface{}) - spec["environmentVariables"] = req.EnvironmentVariables + spec["environmentVariables"] = envVars } // Interactive flag @@ -574,7 +607,7 @@ func provisionRunnerTokenForSession(c *gin.Context, reqK8s *kubernetes.Clientset Kind: obj.GetKind(), Name: obj.GetName(), UID: obj.GetUID(), - Controller: BoolPtr(true), + Controller: types.BoolPtr(true), } // Create ServiceAccount @@ -612,6 +645,11 @@ func provisionRunnerTokenForSession(c *gin.Context, reqK8s *kubernetes.Clientset Resources: []string{"agenticsessions"}, Verbs: []string{"get", "list", "watch"}, }, + { + APIGroups: []string{"authorization.k8s.io"}, + Resources: []string{"selfsubjectaccessreviews"}, + Verbs: []string{"create"}, + }, }, } if _, err := reqK8s.RbacV1().Roles(project).Create(c.Request.Context(), role, v1.CreateOptions{}); err != nil { @@ -653,7 +691,7 @@ func provisionRunnerTokenForSession(c *gin.Context, reqK8s *kubernetes.Clientset "k8s-token": k8sToken, } - // Store both tokens in a Secret + // Store token in a Secret (update if exists to refresh token) secretName := fmt.Sprintf("ambient-runner-token-%s", sessionName) sec := &corev1.Secret{ ObjectMeta: v1.ObjectMeta{ @@ -665,8 +703,17 @@ func provisionRunnerTokenForSession(c *gin.Context, reqK8s *kubernetes.Clientset Type: corev1.SecretTypeOpaque, StringData: secretData, } + + // Try to create the secret if _, err := reqK8s.CoreV1().Secrets(project).Create(c.Request.Context(), sec, v1.CreateOptions{}); err != nil { - if !errors.IsAlreadyExists(err) { + if errors.IsAlreadyExists(err) { + // Secret exists - update it with fresh token + log.Printf("Updating existing secret %s with fresh token", secretName) + if _, err := reqK8s.CoreV1().Secrets(project).Update(c.Request.Context(), sec, v1.UpdateOptions{}); err != nil { + return fmt.Errorf("update Secret: %w", err) + } + log.Printf("Successfully updated secret %s with fresh token", secretName) + } else { return fmt.Errorf("create Secret: %w", err) } } @@ -1117,11 +1164,62 @@ func CloneSession(c *gin.Context) { c.JSON(http.StatusCreated, session) } +// ensureRunnerRolePermissions updates the runner role to ensure it has all required permissions +// This 
is useful for existing sessions that were created before we added new permissions +func ensureRunnerRolePermissions(c *gin.Context, reqK8s *kubernetes.Clientset, project string, sessionName string) error { + roleName := fmt.Sprintf("ambient-session-%s-role", sessionName) + + // Get existing role + existingRole, err := reqK8s.RbacV1().Roles(project).Get(c.Request.Context(), roleName, v1.GetOptions{}) + if err != nil { + if errors.IsNotFound(err) { + log.Printf("Role %s not found for session %s - will be created by operator", roleName, sessionName) + return nil + } + return fmt.Errorf("get role: %w", err) + } + + // Check if role has selfsubjectaccessreviews permission + hasSelfSubjectAccessReview := false + for _, rule := range existingRole.Rules { + for _, apiGroup := range rule.APIGroups { + if apiGroup == "authorization.k8s.io" { + for _, resource := range rule.Resources { + if resource == "selfsubjectaccessreviews" { + hasSelfSubjectAccessReview = true + break + } + } + } + } + } + + if hasSelfSubjectAccessReview { + log.Printf("Role %s already has selfsubjectaccessreviews permission", roleName) + return nil + } + + // Add missing permission + log.Printf("Updating role %s to add selfsubjectaccessreviews permission", roleName) + existingRole.Rules = append(existingRole.Rules, rbacv1.PolicyRule{ + APIGroups: []string{"authorization.k8s.io"}, + Resources: []string{"selfsubjectaccessreviews"}, + Verbs: []string{"create"}, + }) + + _, err = reqK8s.RbacV1().Roles(project).Update(c.Request.Context(), existingRole, v1.UpdateOptions{}) + if err != nil { + return fmt.Errorf("update role: %w", err) + } + + log.Printf("Successfully updated role %s with selfsubjectaccessreviews permission", roleName) + return nil +} + func StartSession(c *gin.Context) { project := c.GetString("project") sessionName := c.Param("sessionName") reqK8s, reqDyn := GetK8sClientsForRequest(c) - _ = reqK8s gvr := GetAgenticSessionV1Alpha1Resource() // Get current resource @@ -1136,18 +1234,88 @@ func StartSession(c *gin.Context) { return } - // Update status to trigger start + // Ensure runner role has required permissions (update if needed for existing sessions) + if err := ensureRunnerRolePermissions(c, reqK8s, project, sessionName); err != nil { + log.Printf("Warning: failed to ensure runner role permissions for %s: %v", sessionName, err) + // Non-fatal - continue with restart + } + + // Clean up temp-content pod if it exists to free the PVC + // This prevents Multi-Attach errors when the session job tries to mount the workspace + if reqK8s != nil { + tempPodName := fmt.Sprintf("temp-content-%s", sessionName) + if err := reqK8s.CoreV1().Pods(project).Delete(c.Request.Context(), tempPodName, v1.DeleteOptions{}); err != nil { + if !errors.IsNotFound(err) { + log.Printf("StartSession: failed to delete temp-content pod %s (non-fatal): %v", tempPodName, err) + } + } else { + log.Printf("StartSession: deleted temp-content pod %s to free PVC", tempPodName) + } + } + + // Check if this is a continuation (session is in a terminal phase) + // Terminal phases: Completed, Failed, Cancelled + isActualContinuation := false + if currentStatus, ok := item.Object["status"].(map[string]interface{}); ok { + if phase, ok := currentStatus["phase"].(string); ok { + terminalPhases := []string{"Completed", "Failed", "Cancelled"} + for _, terminalPhase := range terminalPhases { + if phase == terminalPhase { + isActualContinuation = true + log.Printf("StartSession: Detected continuation - session is in terminal phase: %s", phase) + break + } + } + 
} + } + + // Only set parent session annotation if this is an actual continuation + // Don't set it on first start, even though StartSession can be called for initial creation + if isActualContinuation { + annotations := item.GetAnnotations() + if annotations == nil { + annotations = make(map[string]string) + } + annotations["vteam.ambient-code/parent-session-id"] = sessionName + item.SetAnnotations(annotations) + log.Printf("StartSession: Set parent-session-id annotation to %s for continuation (has completion time)", sessionName) + + // Update the metadata to persist the annotation + item, err = reqDyn.Resource(gvr).Namespace(project).Update(context.TODO(), item, v1.UpdateOptions{}) + if err != nil { + log.Printf("Failed to update agentic session metadata %s in project %s: %v", sessionName, project, err) + c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to update session metadata"}) + return + } + + // Regenerate runner token for continuation (old token may have expired) + log.Printf("StartSession: Regenerating runner token for session continuation") + if err := provisionRunnerTokenForSession(c, reqK8s, reqDyn, project, sessionName); err != nil { + log.Printf("Warning: failed to regenerate runner token for session %s/%s: %v", project, sessionName, err) + // Non-fatal: continue anyway, operator may retry + } else { + log.Printf("StartSession: Successfully regenerated runner token for continuation") + } + } else { + log.Printf("StartSession: Not setting parent-session-id (first run, no completion time)") + } + + // Now update status to trigger start (using the fresh object from Update) if item.Object["status"] == nil { item.Object["status"] = make(map[string]interface{}) } status := item.Object["status"].(map[string]interface{}) - status["phase"] = "Creating" - status["message"] = "Session start requested" + // Set to Pending so operator will process it (operator only acts on Pending phase) + status["phase"] = "Pending" + status["message"] = "Session restart requested" + // Clear completion time from previous run + delete(status, "completionTime") + // Update start time for this run status["startTime"] = time.Now().Format(time.RFC3339) - // Update the resource - updated, err := reqDyn.Resource(gvr).Namespace(project).Update(context.TODO(), item, v1.UpdateOptions{}) + // Update the status subresource (must use UpdateStatus, not Update) + updated, err := reqDyn.Resource(gvr).Namespace(project).UpdateStatus(context.TODO(), item, v1.UpdateOptions{}) if err != nil { log.Printf("Failed to start agentic session %s in project %s: %v", sessionName, project, err) c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to start agentic session"}) @@ -1207,19 +1375,54 @@ func StopSession(c *gin.Context) { // Get job name from status jobName, jobExists := status["jobName"].(string) - if jobExists && jobName != "" { - // Delete the job - err := reqK8s.BatchV1().Jobs(project).Delete(context.TODO(), jobName, v1.DeleteOptions{}) - if err != nil && !errors.IsNotFound(err) { + if !jobExists || jobName == "" { + // Try to derive job name if not in status + jobName = fmt.Sprintf("%s-job", sessionName) + log.Printf("Job name not in status, trying derived name: %s", jobName) + } + + // Delete the job and its pods + log.Printf("Attempting to delete job %s for session %s", jobName, sessionName) + + // First, explicitly delete all pods for this job (by job-name label) + podSelector := fmt.Sprintf("job-name=%s", jobName) + log.Printf("Deleting pods with job-name selector: %s", podSelector) + err = 
reqK8s.CoreV1().Pods(project).DeleteCollection(context.TODO(), v1.DeleteOptions{}, v1.ListOptions{ + LabelSelector: podSelector, + }) + if err != nil && !errors.IsNotFound(err) { + log.Printf("Failed to delete pods for job %s: %v (continuing anyway)", jobName, err) + } else { + log.Printf("Successfully deleted pods for job %s", jobName) + } + + // Also delete any pods labeled with this session (in case owner refs are lost) + sessionPodSelector := fmt.Sprintf("agentic-session=%s", sessionName) + log.Printf("Deleting pods with agentic-session selector: %s", sessionPodSelector) + err = reqK8s.CoreV1().Pods(project).DeleteCollection(context.TODO(), v1.DeleteOptions{}, v1.ListOptions{ + LabelSelector: sessionPodSelector, + }) + if err != nil && !errors.IsNotFound(err) { + log.Printf("Failed to delete session pods: %v (continuing anyway)", err) + } else { + log.Printf("Successfully deleted session-labeled pods") + } + + // Then delete the job itself with foreground propagation + deletePolicy := v1.DeletePropagationForeground + err = reqK8s.BatchV1().Jobs(project).Delete(context.TODO(), jobName, v1.DeleteOptions{ + PropagationPolicy: &deletePolicy, + }) + if err != nil { + if errors.IsNotFound(err) { + log.Printf("Job %s not found (may have already completed or been deleted)", jobName) + } else { log.Printf("Failed to delete job %s: %v", jobName, err) // Don't fail the request if job deletion fails - continue with status update log.Printf("Continuing with status update despite job deletion failure") - } else { - log.Printf("Deleted job %s for agentic session %s", jobName, sessionName) } } else { - // Handle case where job was never created or jobName is missing - log.Printf("No job found to delete for agentic session %s", sessionName) + log.Printf("Successfully deleted job %s for agentic session %s", jobName, sessionName) } // Update status to Stopped @@ -1227,8 +1430,8 @@ func StopSession(c *gin.Context) { status["message"] = "Session stopped by user" status["completionTime"] = time.Now().Format(time.RFC3339) - // Update the resource - updated, err := reqDyn.Resource(gvr).Namespace(project).Update(context.TODO(), item, v1.UpdateOptions{}) + // Update the resource using UpdateStatus for status subresource + updated, err := reqDyn.Resource(gvr).Namespace(project).UpdateStatus(context.TODO(), item, v1.UpdateOptions{}) if err != nil { if errors.IsNotFound(err) { // Session was deleted while we were trying to update it @@ -1320,44 +1523,497 @@ func UpdateSessionStatus(c *gin.Context) { c.JSON(http.StatusOK, gin.H{"message": "agentic session status updated"}) } -// setRepoStatus updates spec.repos[idx].status to a new value +// SpawnContentPod creates a temporary pod for workspace access on completed sessions +// POST /api/projects/:projectName/agentic-sessions/:sessionName/spawn-content-pod +func SpawnContentPod(c *gin.Context) { + // Get project from context (set by middleware) or param + project := c.GetString("project") + if project == "" { + project = c.Param("projectName") + } + sessionName := c.Param("sessionName") + + reqK8s, _ := GetK8sClientsForRequest(c) + if reqK8s == nil { + c.JSON(http.StatusUnauthorized, gin.H{"error": "unauthorized"}) + return + } + + podName := fmt.Sprintf("temp-content-%s", sessionName) + + // Check if already exists + if existing, err := reqK8s.CoreV1().Pods(project).Get(c.Request.Context(), podName, v1.GetOptions{}); err == nil { + ready := false + for _, cond := range existing.Status.Conditions { + if cond.Type == corev1.PodReady && cond.Status == 
corev1.ConditionTrue { + ready = true + break + } + } + c.JSON(http.StatusOK, gin.H{"status": "exists", "podName": podName, "ready": ready}) + return + } + + // Verify PVC exists + pvcName := fmt.Sprintf("ambient-workspace-%s", sessionName) + if _, err := reqK8s.CoreV1().PersistentVolumeClaims(project).Get(c.Request.Context(), pvcName, v1.GetOptions{}); err != nil { + c.JSON(http.StatusNotFound, gin.H{"error": "workspace PVC not found"}) + return + } + + // Get content service image from env + contentImage := os.Getenv("CONTENT_SERVICE_IMAGE") + if contentImage == "" { + contentImage = "quay.io/ambient_code/vteam_backend:latest" + } + imagePullPolicy := corev1.PullIfNotPresent + if os.Getenv("IMAGE_PULL_POLICY") == "Always" { + imagePullPolicy = corev1.PullAlways + } + + // Create temporary pod + pod := &corev1.Pod{ + ObjectMeta: v1.ObjectMeta{ + Name: podName, + Namespace: project, + Labels: map[string]string{ + "app": "temp-content-service", + "temp-content-for-session": sessionName, + }, + Annotations: map[string]string{ + "vteam.ambient-code/ttl": "900", + "vteam.ambient-code/created-at": time.Now().Format(time.RFC3339), + }, + }, + Spec: corev1.PodSpec{ + RestartPolicy: corev1.RestartPolicyNever, + Containers: []corev1.Container{ + { + Name: "content", + Image: contentImage, + ImagePullPolicy: imagePullPolicy, + Env: []corev1.EnvVar{ + {Name: "CONTENT_SERVICE_MODE", Value: "true"}, + {Name: "STATE_BASE_DIR", Value: "/workspace"}, + }, + Ports: []corev1.ContainerPort{{ContainerPort: 8080, Name: "http"}}, + ReadinessProbe: &corev1.Probe{ + ProbeHandler: corev1.ProbeHandler{ + HTTPGet: &corev1.HTTPGetAction{ + Path: "/health", + Port: intstr.FromString("http"), + }, + }, + InitialDelaySeconds: 2, + PeriodSeconds: 2, + }, + VolumeMounts: []corev1.VolumeMount{ + { + Name: "workspace", + MountPath: "/workspace", + ReadOnly: false, + }, + }, + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("100m"), + corev1.ResourceMemory: resource.MustParse("128Mi"), + }, + Limits: corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("500m"), + corev1.ResourceMemory: resource.MustParse("512Mi"), + }, + }, + }, + }, + Volumes: []corev1.Volume{ + { + Name: "workspace", + VolumeSource: corev1.VolumeSource{ + PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ + ClaimName: pvcName, + }, + }, + }, + }, + }, + } + + created, err := reqK8s.CoreV1().Pods(project).Create(c.Request.Context(), pod, v1.CreateOptions{}) + if err != nil { + log.Printf("Failed to create temp content pod: %v", err) + c.JSON(http.StatusInternalServerError, gin.H{"error": fmt.Sprintf("failed to create pod: %v", err)}) + return + } + + // Create service + svc := &corev1.Service{ + ObjectMeta: v1.ObjectMeta{ + Name: fmt.Sprintf("temp-content-%s", sessionName), + Namespace: project, + Labels: map[string]string{ + "app": "temp-content-service", + "temp-content-for-session": sessionName, + }, + OwnerReferences: []v1.OwnerReference{ + { + APIVersion: "v1", + Kind: "Pod", + Name: podName, + UID: created.UID, + Controller: types.BoolPtr(true), + }, + }, + }, + Spec: corev1.ServiceSpec{ + Selector: map[string]string{ + "temp-content-for-session": sessionName, + }, + Ports: []corev1.ServicePort{ + {Port: 8080, TargetPort: intstr.FromString("http")}, + }, + }, + } + + if _, err := reqK8s.CoreV1().Services(project).Create(c.Request.Context(), svc, v1.CreateOptions{}); err != nil && !errors.IsAlreadyExists(err) { + log.Printf("Failed to create temp service: %v", 
err) + } + + c.JSON(http.StatusOK, gin.H{ + "status": "creating", + "podName": podName, + }) +} + +// GetContentPodStatus checks if temporary content pod is ready +// GET /api/projects/:projectName/agentic-sessions/:sessionName/content-pod-status +func GetContentPodStatus(c *gin.Context) { + // Get project from context (set by middleware) or param + project := c.GetString("project") + if project == "" { + project = c.Param("projectName") + } + sessionName := c.Param("sessionName") + + reqK8s, _ := GetK8sClientsForRequest(c) + if reqK8s == nil { + c.JSON(http.StatusUnauthorized, gin.H{"error": "unauthorized"}) + return + } + + podName := fmt.Sprintf("temp-content-%s", sessionName) + pod, err := reqK8s.CoreV1().Pods(project).Get(c.Request.Context(), podName, v1.GetOptions{}) + if err != nil { + if errors.IsNotFound(err) { + c.JSON(http.StatusNotFound, gin.H{"status": "not_found"}) + return + } + c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get pod"}) + return + } + + ready := false + for _, cond := range pod.Status.Conditions { + if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue { + ready = true + break + } + } + + c.JSON(http.StatusOK, gin.H{ + "status": string(pod.Status.Phase), + "ready": ready, + "podName": podName, + "createdAt": pod.CreationTimestamp.Format(time.RFC3339), + }) +} + +// DeleteContentPod removes temporary content pod +// DELETE /api/projects/:projectName/agentic-sessions/:sessionName/content-pod +func DeleteContentPod(c *gin.Context) { + // Get project from context (set by middleware) or param + project := c.GetString("project") + if project == "" { + project = c.Param("projectName") + } + sessionName := c.Param("sessionName") + + reqK8s, _ := GetK8sClientsForRequest(c) + if reqK8s == nil { + c.JSON(http.StatusUnauthorized, gin.H{"error": "unauthorized"}) + return + } + + podName := fmt.Sprintf("temp-content-%s", sessionName) + err := reqK8s.CoreV1().Pods(project).Delete(c.Request.Context(), podName, v1.DeleteOptions{}) + if err != nil && !errors.IsNotFound(err) { + c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to delete pod"}) + return + } + + c.JSON(http.StatusOK, gin.H{"message": "content pod deleted"}) +} + +// GetSessionK8sResources returns job, pod, and PVC information for a session +// GET /api/projects/:projectName/agentic-sessions/:sessionName/k8s-resources +func GetSessionK8sResources(c *gin.Context) { + // Get project from context (set by middleware) or param + project := c.GetString("project") + if project == "" { + project = c.Param("projectName") + } + sessionName := c.Param("sessionName") + + reqK8s, reqDyn := GetK8sClientsForRequest(c) + if reqK8s == nil { + c.JSON(http.StatusUnauthorized, gin.H{"error": "unauthorized"}) + return + } + + // Get session to find job name + gvr := GetAgenticSessionV1Alpha1Resource() + session, err := reqDyn.Resource(gvr).Namespace(project).Get(c.Request.Context(), sessionName, v1.GetOptions{}) + if err != nil { + c.JSON(http.StatusNotFound, gin.H{"error": "session not found"}) + return + } + + status, _ := session.Object["status"].(map[string]interface{}) + jobName, _ := status["jobName"].(string) + if jobName == "" { + jobName = fmt.Sprintf("%s-job", sessionName) + } + + result := map[string]interface{}{} + + // Get Job status + job, err := reqK8s.BatchV1().Jobs(project).Get(c.Request.Context(), jobName, v1.GetOptions{}) + jobExists := err == nil + + if jobExists { + result["jobName"] = jobName + jobStatus := "Unknown" + if job.Status.Active > 0 { + jobStatus = 
"Active" + } else if job.Status.Succeeded > 0 { + jobStatus = "Succeeded" + } else if job.Status.Failed > 0 { + jobStatus = "Failed" + } + result["jobStatus"] = jobStatus + result["jobConditions"] = job.Status.Conditions + } else if errors.IsNotFound(err) { + // Job not found - don't return job info at all + log.Printf("GetSessionK8sResources: Job %s not found, omitting from response", jobName) + // Don't include jobName or jobStatus in result + } else { + // Other error - still show job name but with error status + result["jobName"] = jobName + result["jobStatus"] = "Error" + log.Printf("GetSessionK8sResources: Error getting job %s: %v", jobName, err) + } + + // Get Pods for this job (only if job exists) + podInfos := []map[string]interface{}{} + if jobExists { + pods, err := reqK8s.CoreV1().Pods(project).List(c.Request.Context(), v1.ListOptions{ + LabelSelector: fmt.Sprintf("job-name=%s", jobName), + }) + if err == nil { + for _, pod := range pods.Items { + // Check if pod is terminating (has DeletionTimestamp) + podPhase := string(pod.Status.Phase) + if pod.DeletionTimestamp != nil { + podPhase = "Terminating" + } + + containerInfos := []map[string]interface{}{} + for _, cs := range pod.Status.ContainerStatuses { + state := "Unknown" + var exitCode *int32 + var reason string + if cs.State.Running != nil { + state = "Running" + // If pod is terminating but container still shows running, mark it as terminating + if pod.DeletionTimestamp != nil { + state = "Terminating" + } + } else if cs.State.Terminated != nil { + state = "Terminated" + exitCode = &cs.State.Terminated.ExitCode + reason = cs.State.Terminated.Reason + } else if cs.State.Waiting != nil { + state = "Waiting" + reason = cs.State.Waiting.Reason + } + containerInfos = append(containerInfos, map[string]interface{}{ + "name": cs.Name, + "state": state, + "exitCode": exitCode, + "reason": reason, + }) + } + podInfos = append(podInfos, map[string]interface{}{ + "name": pod.Name, + "phase": podPhase, + "containers": containerInfos, + }) + } + } + } + + // Check for temp-content pod + tempPodName := fmt.Sprintf("temp-content-%s", sessionName) + tempPod, err := reqK8s.CoreV1().Pods(project).Get(c.Request.Context(), tempPodName, v1.GetOptions{}) + if err == nil { + tempPodPhase := string(tempPod.Status.Phase) + if tempPod.DeletionTimestamp != nil { + tempPodPhase = "Terminating" + } + + containerInfos := []map[string]interface{}{} + for _, cs := range tempPod.Status.ContainerStatuses { + state := "Unknown" + var exitCode *int32 + var reason string + if cs.State.Running != nil { + state = "Running" + // If pod is terminating but container still shows running, mark as terminating + if tempPod.DeletionTimestamp != nil { + state = "Terminating" + } + } else if cs.State.Terminated != nil { + state = "Terminated" + exitCode = &cs.State.Terminated.ExitCode + reason = cs.State.Terminated.Reason + } else if cs.State.Waiting != nil { + state = "Waiting" + reason = cs.State.Waiting.Reason + } + containerInfos = append(containerInfos, map[string]interface{}{ + "name": cs.Name, + "state": state, + "exitCode": exitCode, + "reason": reason, + }) + } + podInfos = append(podInfos, map[string]interface{}{ + "name": tempPod.Name, + "phase": tempPodPhase, + "containers": containerInfos, + "isTempPod": true, + }) + } + + result["pods"] = podInfos + + // Get PVC info - always use session's own PVC name + // Note: If session was created with parent_session_id (via API), the operator handles PVC reuse + pvcName := fmt.Sprintf("ambient-workspace-%s", 
sessionName) + pvc, err := reqK8s.CoreV1().PersistentVolumeClaims(project).Get(c.Request.Context(), pvcName, v1.GetOptions{}) + result["pvcName"] = pvcName + if err == nil { + result["pvcExists"] = true + if storage, ok := pvc.Status.Capacity[corev1.ResourceStorage]; ok { + result["pvcSize"] = storage.String() + } + } else { + result["pvcExists"] = false + } + + c.JSON(http.StatusOK, result) +} + +// setRepoStatus updates status.repos[idx] with status and diff info func setRepoStatus(dyn dynamic.Interface, project, sessionName string, repoIndex int, newStatus string) error { gvr := GetAgenticSessionV1Alpha1Resource() item, err := dyn.Resource(gvr).Namespace(project).Get(context.TODO(), sessionName, v1.GetOptions{}) if err != nil { return err } + + // Get repo name from spec.repos[repoIndex] spec, _ := item.Object["spec"].(map[string]interface{}) - if spec == nil { - spec = map[string]interface{}{} - } - repos, _ := spec["repos"].([]interface{}) - if repoIndex < 0 || repoIndex >= len(repos) { + specRepos, _ := spec["repos"].([]interface{}) + if repoIndex < 0 || repoIndex >= len(specRepos) { return fmt.Errorf("repo index out of range") } - rm, _ := repos[repoIndex].(map[string]interface{}) - if rm == nil { - rm = map[string]interface{}{} + specRepo, _ := specRepos[repoIndex].(map[string]interface{}) + repoName := "" + if name, ok := specRepo["name"].(string); ok { + repoName = name + } else if input, ok := specRepo["input"].(map[string]interface{}); ok { + if url, ok := input["url"].(string); ok { + repoName = DeriveRepoFolderFromURL(url) + } } - rm["status"] = newStatus - repos[repoIndex] = rm - spec["repos"] = repos - item.Object["spec"] = spec - updated, err := dyn.Resource(gvr).Namespace(project).Update(context.TODO(), item, v1.UpdateOptions{}) + if repoName == "" { + repoName = fmt.Sprintf("repo-%d", repoIndex) + } + + // Ensure status.repos exists + if item.Object["status"] == nil { + item.Object["status"] = make(map[string]interface{}) + } + status := item.Object["status"].(map[string]interface{}) + statusRepos, _ := status["repos"].([]interface{}) + if statusRepos == nil { + statusRepos = []interface{}{} + } + + // Find or create status entry for this repo + repoStatus := map[string]interface{}{ + "name": repoName, + "status": newStatus, + "last_updated": time.Now().Format(time.RFC3339), + } + + // Update existing or append new + found := false + for i, r := range statusRepos { + if rm, ok := r.(map[string]interface{}); ok { + if n, ok := rm["name"].(string); ok && n == repoName { + rm["status"] = newStatus + rm["last_updated"] = time.Now().Format(time.RFC3339) + statusRepos[i] = rm + found = true + break + } + } + } + if !found { + statusRepos = append(statusRepos, repoStatus) + } + + status["repos"] = statusRepos + item.Object["status"] = status + + updated, err := dyn.Resource(gvr).Namespace(project).UpdateStatus(context.TODO(), item, v1.UpdateOptions{}) if err != nil { log.Printf("setRepoStatus: update failed project=%s session=%s repoIndex=%d status=%s err=%v", project, sessionName, repoIndex, newStatus, err) return err } if updated != nil { - log.Printf("setRepoStatus: update ok project=%s session=%s repoIndex=%d status=%s", project, sessionName, repoIndex, newStatus) + log.Printf("setRepoStatus: update ok project=%s session=%s repo=%s status=%s", project, sessionName, repoName, newStatus) } return nil } // listSessionWorkspace proxies to per-job content service for directory listing func ListSessionWorkspace(c *gin.Context) { - project := c.Param("projectName") + // Get 
project from context (set by middleware) or param + project := c.GetString("project") + if project == "" { + project = c.Param("projectName") + } session := c.Param("sessionName") + + if project == "" { + log.Printf("ListSessionWorkspace: project is empty, session=%s", session) + c.JSON(http.StatusBadRequest, gin.H{"error": "Project namespace required"}) + return + } + rel := strings.TrimSpace(c.Query("path")) // Build absolute workspace path using plain session (no url.PathEscape to match FS paths) absPath := "/sessions/" + session + "/workspace" @@ -1365,18 +2021,27 @@ func ListSessionWorkspace(c *gin.Context) { absPath += "/" + rel } - // Call per-job service directly to avoid any default base that targets per-namespace service + // Call per-job service or temp service for completed sessions token := c.GetHeader("Authorization") if strings.TrimSpace(token) == "" { token = c.GetHeader("X-Forwarded-Access-Token") } - base := os.Getenv("SESSION_CONTENT_SERVICE_BASE") - if base == "" { - // Per-job Service name created by operator: ambient-content- in project namespace - base = "http://ambient-content-%s.%s.svc:8080" + + // Try temp service first (for completed sessions), then regular service + serviceName := fmt.Sprintf("temp-content-%s", session) + reqK8s, _ := GetK8sClientsForRequest(c) + if reqK8s != nil { + if _, err := reqK8s.CoreV1().Services(project).Get(c.Request.Context(), serviceName, v1.GetOptions{}); err != nil { + // Temp service doesn't exist, use regular service + serviceName = fmt.Sprintf("ambient-content-%s", session) + } + } else { + serviceName = fmt.Sprintf("ambient-content-%s", session) } - endpoint := fmt.Sprintf(base, session, project) + + endpoint := fmt.Sprintf("http://%s.%s.svc:8080", serviceName, project) u := fmt.Sprintf("%s/content/list?path=%s", endpoint, url.QueryEscape(absPath)) + log.Printf("ListSessionWorkspace: project=%s session=%s endpoint=%s", project, session, endpoint) req, _ := http.NewRequestWithContext(c.Request.Context(), http.MethodGet, u, nil) if strings.TrimSpace(token) != "" { req.Header.Set("Authorization", token) @@ -1384,30 +2049,59 @@ func ListSessionWorkspace(c *gin.Context) { client := &http.Client{Timeout: 4 * time.Second} resp, err := client.Do(req) if err != nil { + log.Printf("ListSessionWorkspace: content service request failed: %v", err) // Soften error to 200 with empty list so UI doesn't spam c.JSON(http.StatusOK, gin.H{"items": []any{}}) return } defer resp.Body.Close() b, _ := io.ReadAll(resp.Body) + + // If content service returns 404, check if it's because workspace doesn't exist yet + if resp.StatusCode == http.StatusNotFound { + log.Printf("ListSessionWorkspace: workspace not found (may not be created yet by runner)") + // Return empty list instead of error for better UX during session startup + c.JSON(http.StatusOK, gin.H{"items": []any{}}) + return + } + c.Data(resp.StatusCode, resp.Header.Get("Content-Type"), b) } // getSessionWorkspaceFile reads a file via content service func GetSessionWorkspaceFile(c *gin.Context) { - project := c.Param("projectName") + // Get project from context (set by middleware) or param + project := c.GetString("project") + if project == "" { + project = c.Param("projectName") + } session := c.Param("sessionName") + + if project == "" { + log.Printf("GetSessionWorkspaceFile: project is empty, session=%s", session) + c.JSON(http.StatusBadRequest, gin.H{"error": "Project namespace required"}) + return + } + sub := strings.TrimPrefix(c.Param("path"), "/") absPath := "/sessions/" + session + 
"/workspace/" + sub token := c.GetHeader("Authorization") if strings.TrimSpace(token) == "" { token = c.GetHeader("X-Forwarded-Access-Token") } - base := os.Getenv("SESSION_CONTENT_SERVICE_BASE") - if base == "" { - base = "http://ambient-content-%s.%s.svc:8080" + + // Try temp service first (for completed sessions), then regular service + serviceName := fmt.Sprintf("temp-content-%s", session) + reqK8s, _ := GetK8sClientsForRequest(c) + if reqK8s != nil { + if _, err := reqK8s.CoreV1().Services(project).Get(c.Request.Context(), serviceName, v1.GetOptions{}); err != nil { + serviceName = fmt.Sprintf("ambient-content-%s", session) + } + } else { + serviceName = fmt.Sprintf("ambient-content-%s", session) } - endpoint := fmt.Sprintf(base, session, project) + + endpoint := fmt.Sprintf("http://%s.%s.svc:8080", serviceName, project) u := fmt.Sprintf("%s/content/file?path=%s", endpoint, url.QueryEscape(absPath)) req, _ := http.NewRequestWithContext(c.Request.Context(), http.MethodGet, u, nil) if strings.TrimSpace(token) != "" { @@ -1426,19 +2120,39 @@ func GetSessionWorkspaceFile(c *gin.Context) { // putSessionWorkspaceFile writes a file via content service func PutSessionWorkspaceFile(c *gin.Context) { - project := c.Param("projectName") + // Get project from context (set by middleware) or param + project := c.GetString("project") + if project == "" { + project = c.Param("projectName") + } session := c.Param("sessionName") + + if project == "" { + log.Printf("PutSessionWorkspaceFile: project is empty, session=%s", session) + c.JSON(http.StatusBadRequest, gin.H{"error": "Project namespace required"}) + return + } sub := strings.TrimPrefix(c.Param("path"), "/") absPath := "/sessions/" + session + "/workspace/" + sub token := c.GetHeader("Authorization") if strings.TrimSpace(token) == "" { token = c.GetHeader("X-Forwarded-Access-Token") } - base := os.Getenv("SESSION_CONTENT_SERVICE_BASE") - if base == "" { - base = "http://ambient-content-%s.%s.svc:8080" + + // Try temp service first (for completed sessions), then regular service + serviceName := fmt.Sprintf("temp-content-%s", session) + reqK8s, _ := GetK8sClientsForRequest(c) + if reqK8s != nil { + if _, err := reqK8s.CoreV1().Services(project).Get(c.Request.Context(), serviceName, v1.GetOptions{}); err != nil { + // Temp service doesn't exist, use regular service + serviceName = fmt.Sprintf("ambient-content-%s", session) + } + } else { + serviceName = fmt.Sprintf("ambient-content-%s", session) } - endpoint := fmt.Sprintf(base, session, project) + + endpoint := fmt.Sprintf("http://%s.%s.svc:8080", serviceName, project) + log.Printf("PutSessionWorkspaceFile: using service %s for session %s", serviceName, session) payload, _ := io.ReadAll(c.Request.Body) wreq := struct { Path string `json:"path"` @@ -1479,12 +2193,18 @@ func PushSessionRepo(c *gin.Context) { } log.Printf("pushSessionRepo: request project=%s session=%s repoIndex=%d commitLen=%d", project, session, body.RepoIndex, len(strings.TrimSpace(body.CommitMessage))) - base := os.Getenv("SESSION_CONTENT_SERVICE_BASE") - if base == "" { - // Default: per-job service name ambient-content- in the project namespace - base = "http://ambient-content-%s.%s.svc:8080" + // Try temp service first (for completed sessions), then regular service + serviceName := fmt.Sprintf("temp-content-%s", session) + reqK8s, _ := GetK8sClientsForRequest(c) + if reqK8s != nil { + if _, err := reqK8s.CoreV1().Services(project).Get(c.Request.Context(), serviceName, v1.GetOptions{}); err != nil { + serviceName = 
fmt.Sprintf("ambient-content-%s", session) + } + } else { + serviceName = fmt.Sprintf("ambient-content-%s", session) } - endpoint := fmt.Sprintf(base, session, project) + endpoint := fmt.Sprintf("http://%s.%s.svc:8080", serviceName, project) + log.Printf("pushSessionRepo: using service %s", serviceName) // Simplified: 1) get session; 2) compute repoPath from INPUT repo folder; 3) get output url/branch; 4) proxy resolvedRepoPath := "" @@ -1631,11 +2351,19 @@ func AbandonSessionRepo(c *gin.Context) { c.JSON(http.StatusBadRequest, gin.H{"error": "invalid JSON body"}) return } - base := os.Getenv("SESSION_CONTENT_SERVICE_BASE") - if base == "" { - base = "http://ambient-content-%s.%s.svc:8080" + + // Try temp service first (for completed sessions), then regular service + serviceName := fmt.Sprintf("temp-content-%s", session) + reqK8s, _ := GetK8sClientsForRequest(c) + if reqK8s != nil { + if _, err := reqK8s.CoreV1().Services(project).Get(c.Request.Context(), serviceName, v1.GetOptions{}); err != nil { + serviceName = fmt.Sprintf("ambient-content-%s", session) + } + } else { + serviceName = fmt.Sprintf("ambient-content-%s", session) } - endpoint := fmt.Sprintf(base, session, project) + endpoint := fmt.Sprintf("http://%s.%s.svc:8080", serviceName, project) + log.Printf("AbandonSessionRepo: using service %s", serviceName) repoPath := strings.TrimSpace(body.RepoPath) if repoPath == "" { if body.RepoIndex >= 0 { @@ -1693,11 +2421,19 @@ func DiffSessionRepo(c *gin.Context) { c.JSON(http.StatusBadRequest, gin.H{"error": "missing repoPath/repoIndex"}) return } - base := os.Getenv("SESSION_CONTENT_SERVICE_BASE") - if base == "" { - base = "http://ambient-content-%s.%s.svc:8080" + + // Try temp service first (for completed sessions), then regular service + serviceName := fmt.Sprintf("temp-content-%s", session) + reqK8s, _ := GetK8sClientsForRequest(c) + if reqK8s != nil { + if _, err := reqK8s.CoreV1().Services(project).Get(c.Request.Context(), serviceName, v1.GetOptions{}); err != nil { + serviceName = fmt.Sprintf("ambient-content-%s", session) + } + } else { + serviceName = fmt.Sprintf("ambient-content-%s", session) } - endpoint := fmt.Sprintf(base, session, project) + endpoint := fmt.Sprintf("http://%s.%s.svc:8080", serviceName, project) + log.Printf("DiffSessionRepo: using service %s", serviceName) url := fmt.Sprintf("%s/content/github/diff?repoPath=%s", endpoint, url.QueryEscape(repoPath)) req, _ := http.NewRequestWithContext(c.Request.Context(), http.MethodGet, url, nil) if v := c.GetHeader("Authorization"); v != "" { diff --git a/components/backend/main.go b/components/backend/main.go index cd43f357c..1d6168a7d 100644 --- a/components/backend/main.go +++ b/components/backend/main.go @@ -24,6 +24,30 @@ func main() { _ = godotenv.Overload(".env.local") _ = godotenv.Overload(".env") + // Content service mode - minimal initialization, no K8s access needed + if os.Getenv("CONTENT_SERVICE_MODE") == "true" { + log.Println("Starting in CONTENT_SERVICE_MODE (no K8s client initialization)") + + // Initialize config to set StateBaseDir from environment + server.InitConfig() + + // Only initialize what content service needs + handlers.StateBaseDir = server.StateBaseDir + handlers.GitPushRepo = git.PushRepo + handlers.GitAbandonRepo = git.AbandonRepo + handlers.GitDiffRepo = git.DiffRepo + + log.Printf("Content service using StateBaseDir: %s", server.StateBaseDir) + + if err := server.RunContentService(registerContentRoutes); err != nil { + log.Fatalf("Content service error: %v", err) + } + return + } + + // 
Normal server mode - full initialization + log.Println("Starting in normal server mode with K8s client initialization") + // Initialize components github.InitializeTokenManager() @@ -91,14 +115,6 @@ func main() { // Initialize websocket package websocket.StateBaseDir = server.StateBaseDir - // Content service mode - if os.Getenv("CONTENT_SERVICE_MODE") == "true" { - if err := server.RunContentService(registerContentRoutes); err != nil { - log.Fatalf("Content service error: %v", err) - } - return - } - // Normal server mode - create closure to capture jiraHandler registerRoutesWithJira := func(r *gin.Engine) { registerRoutes(r, jiraHandler) diff --git a/components/backend/routes.go b/components/backend/routes.go index d0894de56..17d85e477 100644 --- a/components/backend/routes.go +++ b/components/backend/routes.go @@ -47,6 +47,10 @@ func registerRoutes(r *gin.Engine, jiraHandler *jira.Handler) { projectGroup.POST("/agentic-sessions/:sessionName/github/push", handlers.PushSessionRepo) projectGroup.POST("/agentic-sessions/:sessionName/github/abandon", handlers.AbandonSessionRepo) projectGroup.GET("/agentic-sessions/:sessionName/github/diff", handlers.DiffSessionRepo) + projectGroup.GET("/agentic-sessions/:sessionName/k8s-resources", handlers.GetSessionK8sResources) + projectGroup.POST("/agentic-sessions/:sessionName/spawn-content-pod", handlers.SpawnContentPod) + projectGroup.GET("/agentic-sessions/:sessionName/content-pod-status", handlers.GetContentPodStatus) + projectGroup.DELETE("/agentic-sessions/:sessionName/content-pod", handlers.DeleteContentPod) projectGroup.GET("/rfe-workflows", handlers.ListProjectRFEWorkflows) projectGroup.POST("/rfe-workflows", handlers.CreateProjectRFEWorkflow) @@ -60,6 +64,7 @@ func registerRoutes(r *gin.Engine, jiraHandler *jira.Handler) { projectGroup.GET("/sessions/:sessionId/ws", websocket.HandleSessionWebSocket) projectGroup.GET("/sessions/:sessionId/messages", websocket.GetSessionMessagesWS) + projectGroup.GET("/sessions/:sessionId/messages/claude-format", websocket.GetSessionMessagesClaudeFormat) projectGroup.POST("/sessions/:sessionId/messages", websocket.PostSessionMessageWS) projectGroup.POST("/rfe-workflows/:id/jira", jiraHandler.PublishWorkflowFileToJira) projectGroup.GET("/rfe-workflows/:id/jira", handlers.GetWorkflowJira) diff --git a/components/backend/types/common.go b/components/backend/types/common.go index ab81280e4..ea1ca771b 100644 --- a/components/backend/types/common.go +++ b/components/backend/types/common.go @@ -39,3 +39,16 @@ type Paths struct { Messages string `json:"messages,omitempty"` Inbox string `json:"inbox,omitempty"` } + +// Helper functions for pointer types +func BoolPtr(b bool) *bool { + return &b +} + +func StringPtr(s string) *string { + return &s +} + +func IntPtr(i int) *int { + return &i +} diff --git a/components/backend/types/session.go b/components/backend/types/session.go index e7a5de5b6..be275ce7a 100644 --- a/components/backend/types/session.go +++ b/components/backend/types/session.go @@ -61,12 +61,13 @@ type AgenticSessionStatus struct { } type CreateAgenticSessionRequest struct { - Prompt string `json:"prompt" binding:"required"` - DisplayName string `json:"displayName,omitempty"` - LLMSettings *LLMSettings `json:"llmSettings,omitempty"` - Timeout *int `json:"timeout,omitempty"` - Interactive *bool `json:"interactive,omitempty"` - WorkspacePath string `json:"workspacePath,omitempty"` + Prompt string `json:"prompt" binding:"required"` + DisplayName string `json:"displayName,omitempty"` + LLMSettings 
*LLMSettings `json:"llmSettings,omitempty"` + Timeout *int `json:"timeout,omitempty"` + Interactive *bool `json:"interactive,omitempty"` + WorkspacePath string `json:"workspacePath,omitempty"` + ParentSessionID string `json:"parent_session_id,omitempty"` // Multi-repo support (unified mapping) Repos []SessionRepoMapping `json:"repos,omitempty"` MainRepoIndex *int `json:"mainRepoIndex,omitempty"` diff --git a/components/backend/websocket/handlers.go b/components/backend/websocket/handlers.go index b504bdcd3..d8b00d211 100644 --- a/components/backend/websocket/handlers.go +++ b/components/backend/websocket/handlers.go @@ -217,3 +217,313 @@ func PostSessionMessageWS(c *gin.Context) { c.JSON(http.StatusAccepted, gin.H{"status": "queued"}) } + +// GetSessionMessagesClaudeFormat handles GET /projects/:projectName/sessions/:sessionId/messages/claude-format +// Transforms stored messages into Claude SDK format for session continuation +func GetSessionMessagesClaudeFormat(c *gin.Context) { + sessionID := c.Param("sessionId") + + messages, err := retrieveMessagesFromS3(sessionID) + if err != nil { + log.Printf("GetSessionMessagesClaudeFormat: retrieve failed: %v", err) + c.JSON(http.StatusInternalServerError, gin.H{ + "error": fmt.Sprintf("failed to retrieve messages: %v", err), + }) + return + } + + log.Printf("GetSessionMessagesClaudeFormat: retrieved %d messages for session %s", len(messages), sessionID) + + // Filter to only conversational messages (user and agent) + // Exclude: system.message, agent.waiting, agent.running, etc. + conversationalMessages := []SessionMessage{} + for _, msg := range messages { + msgType := strings.ToLower(strings.TrimSpace(msg.Type)) + // Normalize dots to underscores for comparison (stored as "agent.message" but we check "agent_message") + normalizedType := strings.ReplaceAll(msgType, ".", "_") + + // Only include actual conversation messages + if normalizedType == "user_message" || normalizedType == "agent_message" { + // Additional validation - ensure payload is not empty + if len(msg.Payload) == 0 { + log.Printf("GetSessionMessagesClaudeFormat: filtering out %s with empty payload", msg.Type) + continue + } + conversationalMessages = append(conversationalMessages, msg) + log.Printf("GetSessionMessagesClaudeFormat: keeping message type=%s", msg.Type) + } else { + log.Printf("GetSessionMessagesClaudeFormat: filtering out non-conversational message type=%s", msg.Type) + } + } + + log.Printf("GetSessionMessagesClaudeFormat: filtered to %d conversational messages", len(conversationalMessages)) + + claudeMessages := transformToClaudeFormat(conversationalMessages) + + c.JSON(http.StatusOK, gin.H{ + "session_id": sessionID, + "messages": claudeMessages, + }) +} + +// transformToClaudeFormat converts SessionMessage to Claude SDK control protocol format +// The SDK's connect() uses a control protocol that wraps API messages in an envelope +// +// Required format (from client.py lines 184-190): +// +// { +// "type": "user", +// "message": {"role": "user", "content": "..."}, +// "parent_tool_use_id": "tool_abc123" // optional: links message to tool call chain +// } +// +// For tool results, there are TWO levels of tool linking: +// 1. Message-level: "parent_tool_use_id" - links this message to the tool call chain +// 2. 
Content-level: Inside tool_result blocks - {"type": "tool_result", "tool_use_id": "...", "content": "..."} +// +// Only user messages are accepted - the SDK replays the conversation from user inputs +// This is different from the standard Messages API which uses simpler {"role": "user", "content": "..."} +func transformToClaudeFormat(messages []SessionMessage) []map[string]interface{} { + result := []map[string]interface{}{} + + for _, msg := range messages { + log.Printf("transformToClaudeFormat: processing message type=%s", msg.Type) + + // Normalize message type (stored as "agent.message" but we check "agent_message") + msgType := strings.ToLower(strings.TrimSpace(msg.Type)) + normalizedType := strings.ReplaceAll(msgType, ".", "_") + + switch normalizedType { + case "user_message": + // Extract user content - can be text or tool_result blocks + content := extractUserMessageContent(msg.Payload) + if content != nil { + // Extract parent_tool_use_id if present (for tool result chaining) + parentToolUseID := extractParentToolUseID(msg.Payload) + + // SDK control protocol format: wraps API messages in envelope + message := map[string]interface{}{ + "type": "user", + "message": map[string]interface{}{ + "role": "user", + "content": content, + }, + } + // Only include parent_tool_use_id if it exists + if parentToolUseID != "" { + message["parent_tool_use_id"] = parentToolUseID + } + + result = append(result, message) + log.Printf("transformToClaudeFormat: added user message (parent_tool_use_id=%s)", parentToolUseID) + } else { + log.Printf("transformToClaudeFormat: skipping user_message with empty content") + } + + case "agent_message": + // Skip assistant messages - SDK connect() only accepts user messages + // The SDK will replay the conversation and regenerate assistant responses + log.Printf("transformToClaudeFormat: skipping agent_message (SDK only accepts user messages)") + + default: + log.Printf("transformToClaudeFormat: skipping message with unknown type=%s (normalized=%s)", msg.Type, normalizedType) + } + } + + // Validate all messages have proper structure + // SDK connect() only accepts type: "user" or "control" + validated := []map[string]interface{}{} + for i, msg := range result { + msgType, hasType := msg["type"].(string) + if !hasType || (msgType != "user" && msgType != "control") { + log.Printf("transformToClaudeFormat: INVALID message at index %d - type must be 'user' or 'control', got: %v", i, msgType) + continue + } + if msg["content"] == nil { + log.Printf("transformToClaudeFormat: INVALID message at index %d - missing content", i) + continue + } + validated = append(validated, msg) + } + + log.Printf("transformToClaudeFormat: returning %d validated user messages (SDK will replay conversation)", len(validated)) + return validated +} + +// extractUserMessageContent extracts content from user message payload +// Returns content as string (simple text) or []interface{} (content blocks with tool_result) +func extractUserMessageContent(payload map[string]interface{}) interface{} { + // Check if payload already has properly formatted content + if content, ok := payload["content"]; ok { + // Content is already in correct format (string or array of blocks) + switch v := content.(type) { + case string: + if v != "" { + return v + } + case []interface{}: + if len(v) > 0 { + return v + } + } + } + + // Check for tool_result block - must be in array format + if toolResult := extractToolResult(payload); toolResult != nil { + return []interface{}{toolResult} + } + + // Try to extract 
simple text content + if text, ok := payload["text"].(string); ok && text != "" { + return text + } + + // Check for text_block format from runner + if msgType, ok := payload["type"].(string); ok && msgType == "text_block" { + if text, ok := payload["text"].(string); ok && text != "" { + return text + } + } + + return nil +} + +// extractAssistantMessageContent extracts content from assistant message payload +// Returns content as array of content blocks (text, thinking, tool_use, etc.) +// Assistant messages must have content as an array, not a simple string +// Supports SDK types: TextBlock, ThinkingBlock, ToolUseBlock +func extractAssistantMessageContent(payload map[string]interface{}) interface{} { + // Check if payload already has properly formatted content array + if content, ok := payload["content"].([]interface{}); ok && len(content) > 0 { + return content + } + + // Build content blocks array from various payload formats + var contentBlocks []map[string]interface{} + + // Check for thinking block (extended thinking) + if thinking, signature := extractThinkingBlock(payload); thinking != "" { + block := map[string]interface{}{ + "type": "thinking", + "thinking": thinking, + } + if signature != "" { + block["signature"] = signature + } + contentBlocks = append(contentBlocks, block) + } + + // Check for text block + if text := extractTextBlock(payload); text != "" { + contentBlocks = append(contentBlocks, map[string]interface{}{ + "type": "text", + "text": text, + }) + } + + // Check for tool_use block + if tool, input, id := extractToolUse(payload); tool != "" { + block := map[string]interface{}{ + "type": "tool_use", + "name": tool, + "input": input, + } + if id != "" { + block["id"] = id + } + contentBlocks = append(contentBlocks, block) + } + + if len(contentBlocks) > 0 { + // Convert to []interface{} for JSON marshaling + result := make([]interface{}, len(contentBlocks)) + for i, block := range contentBlocks { + result[i] = block + } + return result + } + + return nil +} + +func extractParentToolUseID(payload map[string]interface{}) string { + // Check for parent_tool_use_id at top level + if parentID, ok := payload["parent_tool_use_id"].(string); ok && parentID != "" { + return parentID + } + + // Check if this is a tool_result and extract the tool_use_id as parent + if toolResult, ok := payload["tool_result"].(map[string]interface{}); ok { + if toolUseID, ok := toolResult["tool_use_id"].(string); ok { + return toolUseID + } + } + + return "" +} + +func extractThinkingBlock(payload map[string]interface{}) (thinking string, signature string) { + // Check if this is a thinking block + if msgType, ok := payload["type"].(string); ok && msgType == "thinking" { + thinking, _ = payload["thinking"].(string) + signature, _ = payload["signature"].(string) + return thinking, signature + } + + // Check nested content for thinking block + if content, ok := payload["content"].(map[string]interface{}); ok { + if thinking, ok := content["thinking"].(string); ok { + signature, _ := content["signature"].(string) + return thinking, signature + } + } + + return "", "" +} + +func extractTextBlock(payload map[string]interface{}) string { + if content, ok := payload["content"].(map[string]interface{}); ok { + if text, ok := content["text"].(string); ok { + return text + } + } + if text, ok := payload["text"].(string); ok { + return text + } + if msgType, ok := payload["type"].(string); ok && msgType == "text_block" { + if text, ok := payload["text"].(string); ok { + return text + } + } + return "" +} 
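For reference, a minimal standalone sketch of the envelope these helpers produce for a stored "user.message" whose payload carries a tool_result block; the field names follow the code in this hunk, and the literal values are only illustrative. It also shows that the API message is nested under "message", so the validation pass above, which tests msg["content"], would reject every envelope it just built; it presumably means to test msg["message"].

// Illustration only (not part of this commit): the envelope shape
// transformToClaudeFormat emits for a stored user tool-result message.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Stored payload (simplified):
	// {"tool_result": {"tool_use_id": "tool_abc123", "content": "file written", "is_error": false}}
	envelope := map[string]interface{}{
		"type": "user",
		"message": map[string]interface{}{
			"role": "user",
			// extractToolResult wraps the block in a one-element content array
			"content": []interface{}{
				map[string]interface{}{
					"type":        "tool_result",
					"tool_use_id": "tool_abc123",
					"content":     "file written",
					"is_error":    false,
				},
			},
		},
		// extractParentToolUseID falls back to the tool_result's tool_use_id
		"parent_tool_use_id": "tool_abc123",
	}
	out, _ := json.MarshalIndent(envelope, "", "  ")
	fmt.Println(string(out))
}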
+ +func extractToolUse(payload map[string]interface{}) (tool string, input map[string]interface{}, id string) { + toolName, hasTool := payload["tool"].(string) + toolInput, hasInput := payload["input"].(map[string]interface{}) + toolID, _ := payload["id"].(string) + + if hasTool && hasInput { + return toolName, toolInput, toolID + } + return "", nil, "" +} + +func extractToolResult(payload map[string]interface{}) map[string]interface{} { + if toolResult, ok := payload["tool_result"].(map[string]interface{}); ok { + result := map[string]interface{}{ + "type": "tool_result", + } + if toolUseID, ok := toolResult["tool_use_id"].(string); ok { + result["tool_use_id"] = toolUseID + } + if content := toolResult["content"]; content != nil { + result["content"] = content + } + if isError, ok := toolResult["is_error"].(bool); ok { + result["is_error"] = isError + } + return result + } + return nil +} diff --git a/components/frontend/src/app/api/projects/[name]/agentic-sessions/[sessionName]/content-pod-status/route.ts b/components/frontend/src/app/api/projects/[name]/agentic-sessions/[sessionName]/content-pod-status/route.ts new file mode 100644 index 000000000..4515e631b --- /dev/null +++ b/components/frontend/src/app/api/projects/[name]/agentic-sessions/[sessionName]/content-pod-status/route.ts @@ -0,0 +1,17 @@ +import { BACKEND_URL } from '@/lib/config'; +import { buildForwardHeadersAsync } from '@/lib/auth'; + +export async function GET( + request: Request, + { params }: { params: Promise<{ name: string; sessionName: string }> }, +) { + const { name, sessionName } = await params; + const headers = await buildForwardHeadersAsync(request); + const resp = await fetch( + `${BACKEND_URL}/projects/${encodeURIComponent(name)}/agentic-sessions/${encodeURIComponent(sessionName)}/content-pod-status`, + { headers } + ); + const data = await resp.text(); + return new Response(data, { status: resp.status, headers: { 'Content-Type': 'application/json' } }); +} + diff --git a/components/frontend/src/app/api/projects/[name]/agentic-sessions/[sessionName]/content-pod/route.ts b/components/frontend/src/app/api/projects/[name]/agentic-sessions/[sessionName]/content-pod/route.ts new file mode 100644 index 000000000..111aa2ba4 --- /dev/null +++ b/components/frontend/src/app/api/projects/[name]/agentic-sessions/[sessionName]/content-pod/route.ts @@ -0,0 +1,17 @@ +import { BACKEND_URL } from '@/lib/config'; +import { buildForwardHeadersAsync } from '@/lib/auth'; + +export async function DELETE( + request: Request, + { params }: { params: Promise<{ name: string; sessionName: string }> }, +) { + const { name, sessionName } = await params; + const headers = await buildForwardHeadersAsync(request); + const resp = await fetch( + `${BACKEND_URL}/projects/${encodeURIComponent(name)}/agentic-sessions/${encodeURIComponent(sessionName)}/content-pod`, + { method: 'DELETE', headers } + ); + const data = await resp.text(); + return new Response(data, { status: resp.status, headers: { 'Content-Type': 'application/json' } }); +} + diff --git a/components/frontend/src/app/api/projects/[name]/agentic-sessions/[sessionName]/k8s-resources/route.ts b/components/frontend/src/app/api/projects/[name]/agentic-sessions/[sessionName]/k8s-resources/route.ts new file mode 100644 index 000000000..ff61c5e5b --- /dev/null +++ b/components/frontend/src/app/api/projects/[name]/agentic-sessions/[sessionName]/k8s-resources/route.ts @@ -0,0 +1,17 @@ +import { BACKEND_URL } from '@/lib/config'; +import { buildForwardHeadersAsync } from '@/lib/auth'; + 
+export async function GET( + request: Request, + { params }: { params: Promise<{ name: string; sessionName: string }> }, +) { + const { name, sessionName } = await params; + const headers = await buildForwardHeadersAsync(request); + const resp = await fetch( + `${BACKEND_URL}/projects/${encodeURIComponent(name)}/agentic-sessions/${encodeURIComponent(sessionName)}/k8s-resources`, + { headers } + ); + const data = await resp.text(); + return new Response(data, { status: resp.status, headers: { 'Content-Type': 'application/json' } }); +} + diff --git a/components/frontend/src/app/api/projects/[name]/agentic-sessions/[sessionName]/spawn-content-pod/route.ts b/components/frontend/src/app/api/projects/[name]/agentic-sessions/[sessionName]/spawn-content-pod/route.ts new file mode 100644 index 000000000..61d141272 --- /dev/null +++ b/components/frontend/src/app/api/projects/[name]/agentic-sessions/[sessionName]/spawn-content-pod/route.ts @@ -0,0 +1,17 @@ +import { BACKEND_URL } from '@/lib/config'; +import { buildForwardHeadersAsync } from '@/lib/auth'; + +export async function POST( + request: Request, + { params }: { params: Promise<{ name: string; sessionName: string }> }, +) { + const { name, sessionName } = await params; + const headers = await buildForwardHeadersAsync(request); + const resp = await fetch( + `${BACKEND_URL}/projects/${encodeURIComponent(name)}/agentic-sessions/${encodeURIComponent(sessionName)}/spawn-content-pod`, + { method: 'POST', headers } + ); + const data = await resp.text(); + return new Response(data, { status: resp.status, headers: { 'Content-Type': 'application/json' } }); +} + diff --git a/components/frontend/src/app/api/projects/[name]/agentic-sessions/[sessionName]/start/route.ts b/components/frontend/src/app/api/projects/[name]/agentic-sessions/[sessionName]/start/route.ts new file mode 100644 index 000000000..f5a0bdb44 --- /dev/null +++ b/components/frontend/src/app/api/projects/[name]/agentic-sessions/[sessionName]/start/route.ts @@ -0,0 +1,17 @@ +import { BACKEND_URL } from '@/lib/config'; +import { buildForwardHeadersAsync } from '@/lib/auth'; + +export async function POST( + request: Request, + { params }: { params: Promise<{ name: string; sessionName: string }> }, +) { + const { name, sessionName } = await params; + const headers = await buildForwardHeadersAsync(request); + const resp = await fetch( + `${BACKEND_URL}/projects/${encodeURIComponent(name)}/agentic-sessions/${encodeURIComponent(sessionName)}/start`, + { method: 'POST', headers } + ); + const data = await resp.text(); + return new Response(data, { status: resp.status, headers: { 'Content-Type': 'application/json' } }); +} + diff --git a/components/frontend/src/app/projects/[name]/sessions/[sessionName]/components/k8s-resource-tree.tsx b/components/frontend/src/app/projects/[name]/sessions/[sessionName]/components/k8s-resource-tree.tsx new file mode 100644 index 000000000..c48585a61 --- /dev/null +++ b/components/frontend/src/app/projects/[name]/sessions/[sessionName]/components/k8s-resource-tree.tsx @@ -0,0 +1,217 @@ +'use client'; + +import { useState } from 'react'; +import { ChevronRight, ChevronDown, Box, Container, HardDrive, AlertCircle, CheckCircle2, Clock } from 'lucide-react'; +import { Button } from '@/components/ui/button'; +import { Badge } from '@/components/ui/badge'; +import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/card'; +import { + Dialog, + DialogContent, + DialogDescription, + DialogHeader, + DialogTitle, +} from '@/components/ui/dialog'; + +type 
K8sResourceTreeProps = { + jobName?: string; + jobStatus?: string; + pods?: Array<{ + name: string; + phase: string; + containers: Array<{ + name: string; + state: string; + exitCode?: number; + reason?: string; + }>; + events?: string[]; + }>; + pvcName?: string; + pvcExists?: boolean; + pvcSize?: string; + events?: string[]; +}; + +export function K8sResourceTree({ + jobName, + jobStatus = 'Unknown', + pods = [], + pvcName, + pvcExists, + pvcSize, + events = [], +}: K8sResourceTreeProps) { + const [expandedJob, setExpandedJob] = useState(true); + const [expandedPods, setExpandedPods] = useState>({}); + + const getStatusColor = (status: string) => { + const lower = status.toLowerCase(); + if (lower.includes('running') || lower.includes('active')) return 'bg-blue-100 text-blue-800 border-blue-300'; + if (lower.includes('succeeded') || lower.includes('completed')) return 'bg-green-100 text-green-800 border-green-300'; + if (lower.includes('failed') || lower.includes('error')) return 'bg-red-100 text-red-800 border-red-300'; + if (lower.includes('waiting') || lower.includes('pending')) return 'bg-yellow-100 text-yellow-800 border-yellow-300'; + return 'bg-gray-100 text-gray-800 border-gray-300'; + }; + + const getStatusIcon = (status: string) => { + const lower = status.toLowerCase(); + if (lower.includes('running') || lower.includes('active')) return ; + if (lower.includes('succeeded') || lower.includes('completed')) return ; + if (lower.includes('failed') || lower.includes('error')) return ; + return ; + }; + + const EventsDialog = ({ events, title }: { events: string[]; title: string }) => { + const [open, setOpen] = useState(false); + return ( + + + + + {title} + Kubernetes events for this resource + +
+ {events.length === 0 ? ( +

No events

+ ) : ( + events.map((event, idx) => ( +
+ {event} +
+ )) + )} +
+
+
+ ); + }; + + if (!jobName) { + return ( + + + Kubernetes Resources + + +

No job information available

+
+
+ ); + } + + return ( + + + Kubernetes Resources + + + {/* Job */} +
+
+ + + + Job + + {jobName} + + {getStatusIcon(jobStatus)} + {jobStatus} + + {events.length > 0 && } +
+ + {expandedJob && ( +
+ {/* Pods */} + {pods.map((pod) => ( +
+
+ + + + Pod + + + {pod.name} + + + {getStatusIcon(pod.phase)} + {pod.phase} + + {pod.events && pod.events.length > 0 && ( + + )} +
+ + {expandedPods[pod.name] && ( +
+ {/* Containers */} + {pod.containers.map((container) => ( +
+ + + Container + + {container.name} + + {getStatusIcon(container.state)} + {container.state} + + {container.exitCode !== undefined && ( + Exit: {container.exitCode} + )} + {container.reason && ( + ({container.reason}) + )} +
+ ))} +
+ )} +
+ ))} + + {/* PVC */} + {pvcName && ( +
+ + + PVC + + {pvcName} + + {pvcExists ? 'Exists' : 'Not Found'} + + {pvcSize && {pvcSize}} +
+ )} +
+ )} +
+
+
+ ); +} + diff --git a/components/frontend/src/app/projects/[name]/sessions/[sessionName]/not-found.tsx b/components/frontend/src/app/projects/[name]/sessions/[sessionName]/not-found.tsx index 9d5195449..48d756470 100644 --- a/components/frontend/src/app/projects/[name]/sessions/[sessionName]/not-found.tsx +++ b/components/frontend/src/app/projects/[name]/sessions/[sessionName]/not-found.tsx @@ -1,3 +1,5 @@ +'use client'; + import { Button } from '@/components/ui/button'; import { Card, CardContent, CardDescription, CardHeader, CardTitle } from '@/components/ui/card'; import { FileQuestion } from 'lucide-react'; diff --git a/components/frontend/src/app/projects/[name]/sessions/[sessionName]/page.tsx b/components/frontend/src/app/projects/[name]/sessions/[sessionName]/page.tsx index 461c9e9eb..757b85de3 100644 --- a/components/frontend/src/app/projects/[name]/sessions/[sessionName]/page.tsx +++ b/components/frontend/src/app/projects/[name]/sessions/[sessionName]/page.tsx @@ -3,7 +3,7 @@ import { useState, useEffect, useMemo, useCallback } from "react"; import Link from "next/link"; import { formatDistanceToNow } from "date-fns"; -import { ArrowLeft, Square, Trash2, Copy } from "lucide-react"; +import { ArrowLeft, Square, Trash2, Copy, Play, MoreVertical } from "lucide-react"; import { useRouter } from "next/navigation"; // Custom components @@ -18,6 +18,7 @@ import { Badge } from "@/components/ui/badge"; import { Tabs, TabsContent, TabsList, TabsTrigger } from "@/components/ui/tabs"; import { CloneSessionDialog } from "@/components/clone-session-dialog"; import { Breadcrumbs } from "@/components/breadcrumbs"; +import { DropdownMenu, DropdownMenuContent, DropdownMenuItem, DropdownMenuTrigger, DropdownMenuSeparator } from "@/components/ui/dropdown-menu"; import type { FileTreeNode } from "@/components/file-tree"; import type { SessionMessage } from "@/types"; @@ -37,6 +38,8 @@ import { useWorkspaceList, useWriteWorkspaceFile, useAllSessionGitHubDiffs, + useSessionK8sResources, + useContinueSession, workspaceKeys, } from "@/services/queries"; import { successToast, errorToast } from "@/hooks/use-toast"; @@ -57,6 +60,9 @@ export default function ProjectSessionDetailPage({ const [chatInput, setChatInput] = useState(""); const [backHref, setBackHref] = useState(null); const [backLabel, setBackLabel] = useState(null); + const [contentPodSpawning, setContentPodSpawning] = useState(false); + const [contentPodReady, setContentPodReady] = useState(false); + const [contentPodError, setContentPodError] = useState(null); // Extract params useEffect(() => { @@ -73,9 +79,11 @@ export default function ProjectSessionDetailPage({ // React Query hooks const { data: session, isLoading, error, refetch: refetchSession } = useSession(projectName, sessionName); - const { data: messages = [] } = useSessionMessages(projectName, sessionName); + const { data: messages = [] } = useSessionMessages(projectName, sessionName, session?.status?.phase); + const { data: k8sResources } = useSessionK8sResources(projectName, sessionName); const stopMutation = useStopSession(); const deleteMutation = useDeleteSession(); + const continueMutation = useContinueSession(); const sendChatMutation = useSendChatMessage(); const sendControlMutation = useSendControlMessage(); const pushToGitHubMutation = usePushSessionToGitHub(); @@ -158,7 +166,7 @@ export default function ProjectSessionDetailPage({ session?.spec?.repos as Array<{ input: { url: string; branch: string }; output?: { url: string; branch: string } }> | undefined, 
deriveRepoFolderFromUrl, { - enabled: !!session?.spec?.repos && activeTab === 'overview', + enabled: !!session?.spec?.repos, sessionPhase: session?.status?.phase } ); @@ -395,6 +403,18 @@ export default function ProjectSessionDetailPage({ ); }; + const handleContinue = () => { + continueMutation.mutate( + { projectName, parentSessionName: sessionName }, + { + onSuccess: () => { + successToast("Session restarted successfully"); + }, + onError: (err) => errorToast(err instanceof Error ? err.message : "Failed to restart session"), + } + ); + }; + const sendChat = () => { if (!chatInput.trim()) return; @@ -430,6 +450,88 @@ export default function ProjectSessionDetailPage({ ); }; + // Check if session is completed + const sessionCompleted = ( + session?.status?.phase === 'Completed' || + session?.status?.phase === 'Failed' || + session?.status?.phase === 'Stopped' + ); + + // Auto-spawn content pod when workspace tab clicked on completed session + // Don't auto-retry if we already encountered an error - user must explicitly retry + useEffect(() => { + if (activeTab === 'workspace' && sessionCompleted && !contentPodReady && !contentPodSpawning && !contentPodError) { + spawnContentPodAsync(); + } + // eslint-disable-next-line react-hooks/exhaustive-deps + }, [activeTab, sessionCompleted, contentPodReady, contentPodSpawning, contentPodError]); + + const spawnContentPodAsync = async () => { + if (!projectName || !sessionName) return; + + setContentPodSpawning(true); + setContentPodError(null); // Clear any previous errors + + try { + // Import API function + const { spawnContentPod, getContentPodStatus } = await import('@/services/api/sessions'); + + // Spawn pod + const spawnResult = await spawnContentPod(projectName, sessionName); + + // If already exists and ready, we're done + if (spawnResult.status === 'exists' && spawnResult.ready) { + setContentPodReady(true); + setContentPodSpawning(false); + setContentPodError(null); + return; + } + + // Poll for readiness + let attempts = 0; + const maxAttempts = 30; // 30 seconds + + const pollInterval = setInterval(async () => { + attempts++; + + try { + const status = await getContentPodStatus(projectName, sessionName); + + if (status.ready) { + clearInterval(pollInterval); + setContentPodReady(true); + setContentPodSpawning(false); + setContentPodError(null); + successToast('Workspace viewer ready'); + } + + if (attempts >= maxAttempts) { + clearInterval(pollInterval); + setContentPodSpawning(false); + const errorMsg = 'Workspace viewer failed to start within 30 seconds'; + setContentPodError(errorMsg); + errorToast(errorMsg); + } + } catch { + // Not found yet, keep polling + if (attempts >= maxAttempts) { + clearInterval(pollInterval); + setContentPodSpawning(false); + const errorMsg = 'Workspace viewer failed to start'; + setContentPodError(errorMsg); + errorToast(errorMsg); + } + } + }, 1000); + + } catch (error) { + setContentPodSpawning(false); + const errorMsg = error instanceof Error ? error.message : 'Failed to spawn workspace viewer'; + setContentPodError(errorMsg); + errorToast(errorMsg); + } + }; + // Workspace operations - using React Query with queryClient for imperative fetching const onWsToggle = useCallback(async (node: FileTreeNode) => { if (node.type !== "folder") return; @@ -600,38 +702,58 @@ export default function ProjectSessionDetailPage({
- refetchSession()} - trigger={ - - } - /> - - {session.status?.phase !== "Running" && session.status?.phase !== "Creating" && ( + {/* Continue button for completed interactive sessions only */} + {session.spec?.interactive && (session.status?.phase === "Completed" || session.status?.phase === "Failed" || session.status?.phase === "Stopped") && ( )} + {/* Stop button for active sessions */} {(session.status?.phase === "Pending" || session.status?.phase === "Creating" || session.status?.phase === "Running") && ( + )} + + {/* Actions dropdown menu */} + + + + + + refetchSession()} + trigger={ + e.preventDefault()}> + + Clone + + } + /> + + + + {deleteMutation.isPending ? "Deleting..." : "Delete"} + + +
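The Continue button and continueMutation above create a fresh session that points back at this one. On the wire that relies on the parent_session_id field added to CreateAgenticSessionRequest elsewhere in this patch; a minimal sketch of the request body, assuming nothing beyond those struct tags (how the hook assembles the rest of the spec is not shown in this hunk):

// Sketch of the continuation payload implied by the new ParentSessionID field.
// Field names mirror the JSON tags in types/session.go; values are illustrative.
package main

import (
	"encoding/json"
	"fmt"
)

type createAgenticSessionRequest struct {
	Prompt          string `json:"prompt"`
	Interactive     *bool  `json:"interactive,omitempty"`
	ParentSessionID string `json:"parent_session_id,omitempty"`
}

func main() {
	interactive := true
	body := createAgenticSessionRequest{
		Prompt:          "Continue where the previous session left off",
		Interactive:     &interactive,
		ParentSessionID: "my-parent-session", // name of the completed session being continued
	}
	out, _ := json.MarshalIndent(body, "", "  ")
	fmt.Println(string(out))
}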
@@ -673,6 +795,7 @@ export default function ProjectSessionDetailPage({ setPromptExpanded={setPromptExpanded} latestLiveMessage={latestLiveMessage as SessionMessage | null} diffTotals={diffTotals} + k8sResources={k8sResources} onPush={async (idx) => { const repo = session.spec.repos?.[idx]; if (!repo) return; @@ -729,24 +852,52 @@ export default function ProjectSessionDetailPage({ onInterrupt={() => Promise.resolve(handleInterrupt())} onEndSession={() => Promise.resolve(handleEndSession())} onGoToResults={() => setActiveTab('results')} - isEndingSession={sendControlMutation.isPending} + onContinue={handleContinue} /> - + {sessionCompleted && !contentPodReady ? ( + +
+ {contentPodSpawning ? ( + <> +
+
+
+

Starting workspace viewer...

+

This may take up to 30 seconds

+ + ) : ( + <> +

+ Session has completed. To view and edit your workspace files, please start a workspace viewer. +

+ + + )} +
+ + ) : ( + + )} diff --git a/components/frontend/src/app/projects/[name]/sessions/page.tsx b/components/frontend/src/app/projects/[name]/sessions/page.tsx index fd5393e4a..2b946fbd0 100644 --- a/components/frontend/src/app/projects/[name]/sessions/page.tsx +++ b/components/frontend/src/app/projects/[name]/sessions/page.tsx @@ -3,7 +3,7 @@ import Link from 'next/link'; import { useParams } from 'next/navigation'; import { formatDistanceToNow } from 'date-fns'; -import { Plus, RefreshCw, MoreVertical, Square, RefreshCcw, Trash2, Play } from 'lucide-react'; +import { Plus, RefreshCw, MoreVertical, Square, Trash2, ArrowRight, Brain } from 'lucide-react'; import { Button } from '@/components/ui/button'; import { Card, CardContent, CardDescription, CardHeader, CardTitle } from '@/components/ui/card'; @@ -15,7 +15,7 @@ import { ErrorMessage } from '@/components/error-message'; import { SessionPhaseBadge } from '@/components/status-badge'; import { Breadcrumbs } from '@/components/breadcrumbs'; -import { useSessions, useStopSession, useStartSession, useDeleteSession } from '@/services/queries'; +import { useSessions, useStopSession, useDeleteSession, useContinueSession } from '@/services/queries'; import { successToast, errorToast } from '@/hooks/use-toast'; export default function ProjectSessionsListPage() { @@ -25,8 +25,8 @@ export default function ProjectSessionsListPage() { // React Query hooks replace all manual state management const { data: sessions = [], isLoading, error, refetch } = useSessions(projectName); const stopSessionMutation = useStopSession(); - const startSessionMutation = useStartSession(); const deleteSessionMutation = useDeleteSession(); + const continueSessionMutation = useContinueSession(); const handleStop = async (sessionName: string) => { stopSessionMutation.mutate( @@ -42,30 +42,31 @@ export default function ProjectSessionsListPage() { ); }; - const handleRestart = async (sessionName: string) => { - startSessionMutation.mutate( + + const handleDelete = async (sessionName: string) => { + if (!confirm(`Delete agentic session "${sessionName}"? This action cannot be undone.`)) return; + deleteSessionMutation.mutate( { projectName, sessionName }, { onSuccess: () => { - successToast(`Session "${sessionName}" restarted successfully`); + successToast(`Session "${sessionName}" deleted successfully`); }, onError: (error) => { - errorToast(error instanceof Error ? error.message : 'Failed to restart session'); + errorToast(error instanceof Error ? error.message : 'Failed to delete session'); }, } ); }; - const handleDelete = async (sessionName: string) => { - if (!confirm(`Delete agentic session "${sessionName}"? This action cannot be undone.`)) return; - deleteSessionMutation.mutate( - { projectName, sessionName }, + const handleContinue = async (sessionName: string) => { + continueSessionMutation.mutate( + { projectName, parentSessionName: sessionName }, { onSuccess: () => { - successToast(`Session "${sessionName}" deleted successfully`); + successToast(`Session "${sessionName}" restarted successfully`); }, onError: (error) => { - errorToast(error instanceof Error ? error.message : 'Failed to delete session'); + errorToast(error instanceof Error ? error.message : 'Failed to restart session'); }, } ); @@ -130,7 +131,7 @@ export default function ProjectSessionsListPage() { {sessions.length === 0 ? 
( )} @@ -231,12 +233,13 @@ export default function ProjectSessionsListPage() { type SessionActionsProps = { sessionName: string; phase: string; + interactive: boolean; onStop: (sessionName: string) => void; - onRestart: (sessionName: string) => void; + onContinue: (sessionName: string) => void; onDelete: (sessionName: string) => void; }; -function SessionActions({ sessionName, phase, onStop, onRestart, onDelete }: SessionActionsProps) { +function SessionActions({ sessionName, phase, interactive, onStop, onContinue, onDelete }: SessionActionsProps) { type RowAction = { key: string; label: string; @@ -257,17 +260,19 @@ function SessionActions({ sessionName, phase, onStop, onRestart, onDelete }: Ses }); } - if (phase === 'Completed' || phase === 'Failed' || phase === 'Stopped' || phase === 'Error') { + // Only allow continue for interactive sessions + if ((phase === 'Completed' || phase === 'Failed' || phase === 'Stopped' || phase === 'Error') && interactive) { actions.push({ - key: 'restart', - label: 'Restart', - onClick: () => onRestart(sessionName), - icon: , - className: 'text-blue-600', + key: 'continue', + label: 'Continue', + onClick: () => onContinue(sessionName), + icon: , + className: 'text-green-600', }); } - if (phase !== 'Running' && phase !== 'Creating') { + // Delete is always available except when Creating + if (phase !== 'Creating') { actions.push({ key: 'delete', label: 'Delete', diff --git a/components/frontend/src/components/session/MessagesTab.tsx b/components/frontend/src/components/session/MessagesTab.tsx index c4afaac9a..1f933b8ce 100644 --- a/components/frontend/src/components/session/MessagesTab.tsx +++ b/components/frontend/src/components/session/MessagesTab.tsx @@ -1,6 +1,6 @@ "use client"; -import React from "react"; +import React, { useState } from "react"; import { Button } from "@/components/ui/button"; import { Brain, Loader2 } from "lucide-react"; import { StreamMessage } from "@/components/ui/stream-message"; @@ -15,10 +15,52 @@ export type MessagesTabProps = { onInterrupt: () => Promise; onEndSession: () => Promise; onGoToResults?: () => void; - isEndingSession?: boolean; + onContinue: () => void; }; -const MessagesTab: React.FC = ({ session, streamMessages, chatInput, setChatInput, onSendChat, onInterrupt, onEndSession, onGoToResults, isEndingSession }) => { + +const MessagesTab: React.FC = ({ session, streamMessages, chatInput, setChatInput, onSendChat, onInterrupt, onEndSession, onGoToResults, onContinue}) => { + const [sendingChat, setSendingChat] = useState(false); + const [interrupting, setInterrupting] = useState(false); + const [ending, setEnding] = useState(false); + + const phase = session?.status?.phase || ""; + const isInteractive = session?.spec?.interactive; + + // Only show chat interface when session is interactive AND in Running state + const showChatInterface = isInteractive && phase === "Running"; + + // Determine if session is in a terminal state + const isTerminalState = ["Completed", "Failed", "Stopped"].includes(phase); + const isCreating = ["Creating", "Pending"].includes(phase); + + const handleSendChat = async () => { + setSendingChat(true); + try { + await onSendChat(); + } finally { + setSendingChat(false); + } + }; + + const handleInterrupt = async () => { + setInterrupting(true); + try { + await onInterrupt(); + } finally { + setInterrupting(false); + } + }; + + const handleEndSession = async () => { + setEnding(true); + try { + await onEndSession(); + } finally { + setEnding(false); + } + }; + return (
{streamMessages.map((m, idx) => ( @@ -30,12 +72,18 @@ const MessagesTab: React.FC = ({ session, streamMessages, chat

No messages yet

- {session.spec?.interactive ? "Start by sending a message below." : "This session is not interactive."} + {isInteractive + ? isCreating + ? "Session is starting..." + : isTerminalState + ? `Session has ${phase.toLowerCase()}.` + : "Start by sending a message below." + : "This session is not interactive."}

)} - {session.spec?.interactive && ( + {showChatInterface && (
@@ -47,28 +95,76 @@ const MessagesTab: React.FC = ({ session, streamMessages, chat onKeyDown={(e) => { if (e.key === "Enter" && !e.shiftKey) { e.preventDefault(); - if (chatInput.trim()) { - onSendChat(); + if (chatInput.trim() && !sendingChat) { + handleSendChat(); } } }} rows={3} + disabled={sendingChat} />
Interactive session
- - + + -
)} + + {isInteractive && !showChatInterface && streamMessages.length > 0 && ( +
+
+

+ {isCreating && "Chat will be available once the session is running..."} + {isTerminalState && ( + <> + This session has {phase.toLowerCase()}. Chat is no longer available. + {onContinue && ( + <> + {" "} + + {" "}to restart it. + + )} + + )} +

+
+
+ )}
); }; diff --git a/components/frontend/src/components/session/OverviewTab.tsx b/components/frontend/src/components/session/OverviewTab.tsx index 4b630bd84..b6ecb2a83 100644 --- a/components/frontend/src/components/session/OverviewTab.tsx +++ b/components/frontend/src/components/session/OverviewTab.tsx @@ -4,7 +4,7 @@ import React from "react"; import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card"; import { Badge } from "@/components/ui/badge"; import { Button } from "@/components/ui/button"; -import { Brain, Clock, RefreshCw, Sparkle, ExternalLink } from "lucide-react"; +import { Brain, Clock, RefreshCw, Sparkle, ExternalLink, ChevronRight, ChevronDown, Box, Container, HardDrive } from "lucide-react"; import { format } from "date-fns"; import { cn } from "@/lib/utils"; import type { AgenticSession } from "@/types/agentic-session"; @@ -21,10 +21,68 @@ type Props = { busyRepo: Record; buildGithubCompareUrl: (inUrl: string, inBranch?: string, outUrl?: string, outBranch?: string) => string | null; onRefreshDiff: () => Promise; + k8sResources?: { + jobName?: string; + jobStatus?: string; + pods?: Array<{ + name: string; + phase: string; + containers: Array<{ + name: string; + state: string; + exitCode?: number; + reason?: string; + }>; + isTempPod?: boolean; + }>; + pvcName?: string; + pvcExists?: boolean; + pvcSize?: string; + }; }; -export const OverviewTab: React.FC = ({ session, promptExpanded, setPromptExpanded, latestLiveMessage, diffTotals, onPush, onAbandon, busyRepo, buildGithubCompareUrl, onRefreshDiff }) => { +// Utility to generate OpenShift console URLs +const getOpenShiftConsoleUrl = (namespace: string, resourceType: 'Job' | 'Pod' | 'PVC', resourceName: string): string | null => { + // Try to derive console URL from current window location + // OpenShift console is typically at console-openshift-console.apps. 
+ const hostname = window.location.hostname; + + // Check if we're on an OpenShift route (apps.*) + if (hostname.includes('.apps.')) { + const clusterDomain = hostname.split('.apps.')[1]; + const consoleUrl = `https://console-openshift-console.apps.${clusterDomain}`; + + const resourceMap = { + 'Job': 'batch~v1~Job', + 'Pod': 'core~v1~Pod', + 'PVC': 'core~v1~PersistentVolumeClaim', + }; + + return `${consoleUrl}/k8s/ns/${namespace}/${resourceMap[resourceType]}/${resourceName}`; + } + + // Fallback: For local development or non-standard setups, return null + return null; +}; + +export const OverviewTab: React.FC = ({ session, promptExpanded, setPromptExpanded, latestLiveMessage, diffTotals, onPush, onAbandon, busyRepo, buildGithubCompareUrl, onRefreshDiff, k8sResources }) => { const [refreshingDiff, setRefreshingDiff] = React.useState(false); + const [expandedPods, setExpandedPods] = React.useState>({}); + + const projectNamespace = session.metadata?.namespace || ''; + + const getStatusColor = (status: string) => { + const lower = status.toLowerCase(); + if (lower.includes('running') || lower.includes('active')) return 'bg-blue-100 text-blue-800 border-blue-300'; + if (lower.includes('succeeded') || lower.includes('completed')) return 'bg-green-100 text-green-800 border-green-300'; + if (lower.includes('failed') || lower.includes('error')) return 'bg-red-100 text-red-800 border-red-300'; + if (lower.includes('waiting') || lower.includes('pending')) return 'bg-yellow-100 text-yellow-800 border-yellow-300'; + if (lower.includes('terminating')) return 'bg-purple-100 text-purple-800 border-purple-300'; + if (lower.includes('notfound') || lower.includes('not found')) return 'bg-orange-100 text-orange-800 border-orange-300'; + if (lower.includes('terminated')) return 'bg-gray-100 text-gray-800 border-gray-300'; + return 'bg-gray-100 text-gray-800 border-gray-300'; + }; + return (
@@ -155,6 +213,219 @@ export const OverviewTab: React.FC = ({ session, promptExpanded, setPromp
+ {k8sResources && ( +
+
Kubernetes Resources
+
+ {/* PVC - Always shown at root level (owned by AgenticSession CR) */} + {k8sResources.pvcName && ( +
+ + + PVC + + {(() => { + const consoleUrl = getOpenShiftConsoleUrl(projectNamespace, 'PVC', k8sResources.pvcName); + return consoleUrl ? ( + + {k8sResources.pvcName} + + + ) : ( + {k8sResources.pvcName} + ); + })()} + + {k8sResources.pvcExists ? 'Exists' : 'Not Found'} + + {k8sResources.pvcSize && {k8sResources.pvcSize}} +
+ )} + + {/* Temp Content Pods - Always at root level (for completed sessions) */} + {k8sResources.pods && k8sResources.pods.filter(p => p.isTempPod).map((pod) => ( +
+
+ + + + Temp Pod + + {(() => { + const consoleUrl = getOpenShiftConsoleUrl(projectNamespace, 'Pod', pod.name); + return consoleUrl ? ( + + {pod.name} + + + ) : ( + + {pod.name} + + ); + })()} + + {pod.phase} + + + Workspace viewer + +
+ + {/* Temp pod containers */} + {expandedPods[pod.name] && pod.containers && pod.containers.length > 0 && ( +
+ {pod.containers.map((container) => ( +
+ + + {container.name} + + + {container.state} + + {container.exitCode !== undefined && ( + Exit: {container.exitCode} + )} + {container.reason && ( + ({container.reason}) + )} +
+ ))} +
+ )} +
+ ))} + + {/* Job - Only shown when job exists */} + {k8sResources.jobName && ( +
+
+ + + Job + + {(() => { + const consoleUrl = getOpenShiftConsoleUrl(projectNamespace, 'Job', k8sResources.jobName); + return consoleUrl ? ( + + {k8sResources.jobName} + + + ) : ( + {k8sResources.jobName} + ); + })()} + + {k8sResources.jobStatus || 'Unknown'} + +
+ + {/* Job Pods - Only non-temp pods */} + {k8sResources.pods && k8sResources.pods.filter(p => !p.isTempPod).length > 0 && ( +
+ {k8sResources.pods.filter(p => !p.isTempPod).map((pod) => ( +
+
+ + + + Pod + + {(() => { + const consoleUrl = getOpenShiftConsoleUrl(projectNamespace, 'Pod', pod.name); + return consoleUrl ? ( + + {pod.name} + + + ) : ( + + {pod.name} + + ); + })()} + + {pod.phase} + + {pod.isTempPod && ( + + Workspace viewer + + )} +
+ + {expandedPods[pod.name] && pod.containers && ( +
+ {pod.containers.map((container) => ( +
+ + + {container.name} + + + {container.state} + + {container.exitCode !== undefined && ( + Exit: {container.exitCode} + )} + {container.reason && ( + ({container.reason}) + )} +
+ ))} +
+ )} +
+ ))} +
+ )} +
+ )} +
+
+ )} +
Repositories
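The console links in the Kubernetes Resources card are derived entirely client-side by the getOpenShiftConsoleUrl helper above. The same derivation, sketched as a standalone snippet for reference (hostnames and resource names are illustrative; a host without a *.apps.* segment yields no link, matching the fallback above):

// Sketch of the console-URL derivation used by getOpenShiftConsoleUrl.
// Assumes the app is served from a standard OpenShift route (*.apps.<domain>).
package main

import (
	"fmt"
	"strings"
)

var consolePathSegments = map[string]string{
	"Job": "batch~v1~Job",
	"Pod": "core~v1~Pod",
	"PVC": "core~v1~PersistentVolumeClaim",
}

func consoleURL(hostname, namespace, kind, name string) string {
	parts := strings.SplitN(hostname, ".apps.", 2)
	if len(parts) != 2 {
		return "" // local development or non-standard setup: no link
	}
	return fmt.Sprintf("https://console-openshift-console.apps.%s/k8s/ns/%s/%s/%s",
		parts[1], namespace, consolePathSegments[kind], name)
}

func main() {
	fmt.Println(consoleURL("frontend.apps.cluster.example.com", "my-project", "Pod", "temp-content-my-session"))
	// => https://console-openshift-console.apps.cluster.example.com/k8s/ns/my-project/core~v1~Pod/temp-content-my-session
}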
diff --git a/components/frontend/src/components/session/WorkspaceTab.tsx b/components/frontend/src/components/session/WorkspaceTab.tsx index 426ad3eb5..dbefb3e02 100644 --- a/components/frontend/src/components/session/WorkspaceTab.tsx +++ b/components/frontend/src/components/session/WorkspaceTab.tsx @@ -2,7 +2,7 @@ import React from "react"; import { Button } from "@/components/ui/button"; -import { RefreshCw, FolderOpen, FileText } from "lucide-react"; +import { RefreshCw, FolderOpen, FileText, HardDrive } from "lucide-react"; import { Card, CardContent } from "@/components/ui/card"; import { Badge } from "@/components/ui/badge"; import { FileTree, type FileTreeNode } from "@/components/file-tree"; @@ -21,9 +21,16 @@ export type WorkspaceTabProps = { onToggle: (node: FileTreeNode) => void; onSave: (path: string, content: string) => Promise; setWsFileContent: (v: string) => void; + k8sResources?: { + pvcName?: string; + pvcExists?: boolean; + pvcSize?: string; + }; + contentPodError?: string | null; + onRetrySpawn?: () => void; }; -const WorkspaceTab: React.FC = ({ session, wsLoading, wsUnavailable, wsTree, wsSelectedPath, wsFileContent, onRefresh, onSelect, onToggle, onSave, setWsFileContent }) => { +const WorkspaceTab: React.FC = ({ session, wsLoading, wsUnavailable, wsTree, wsSelectedPath, wsFileContent, onRefresh, onSelect, onToggle, onSave, setWsFileContent, k8sResources, contentPodError, onRetrySpawn }) => { if (wsLoading) { return (
@@ -31,6 +38,22 @@ const WorkspaceTab: React.FC = ({ session, wsLoading, wsUnava
); } + + // Show error with retry button if content pod failed to spawn + if (contentPodError) { + return ( +
+
Workspace Viewer Error
+
{contentPodError}
+ {onRetrySpawn && ( + + )} +
+ ); + } + if (wsUnavailable) { return (
@@ -52,9 +75,24 @@ const WorkspaceTab: React.FC = ({ session, wsLoading, wsUnava
-
-

Files

-

{wsTree.length} items

+
+ {k8sResources?.pvcName ? ( +
+ + + PVC + + {k8sResources.pvcName} + + {k8sResources.pvcExists ? 'Exists' : 'Not Found'} + + {k8sResources.pvcSize && ( + {k8sResources.pvcSize} + )} +
+ ) : ( +

{wsTree.length} items

+ )}
+ + + + Show debug messages + + + +
+ +
+ {filteredMessages.map((m, idx) => ( + + ))} + + {filteredMessages.length === 0 && ( +
+ +

No messages yet

+

+ {isInteractive + ? isCreating + ? "Session is starting..." + : isTerminalState + ? `Session has ${phase.toLowerCase()}.` + : "Start by sending a message below." + : "This session is not interactive."} +

+
+ )} +
{showChatInterface && (
diff --git a/components/frontend/src/components/session/OverviewTab.tsx b/components/frontend/src/components/session/OverviewTab.tsx index b6ecb2a83..2e6306947 100644 --- a/components/frontend/src/components/session/OverviewTab.tsx +++ b/components/frontend/src/components/session/OverviewTab.tsx @@ -455,8 +455,13 @@ export const OverviewTab: React.FC = ({ session, promptExpanded, setPromp ? repo.output.branch : `sessions/${session.metadata.name}`; const compareUrl = buildGithubCompareUrl(repo.input.url, repo.input.branch || 'main', repo.output?.url, outBranch); + + // Check if temp pod is running and ready + const tempPod = k8sResources?.pods?.find(p => p.isTempPod); + const tempPodReady = tempPod?.phase === 'Running'; + const br = diffTotals[idx] || { total_added: 0, total_removed: 0 }; - const hasChanges = br.total_added > 0 || br.total_removed > 0; + const hasChanges = tempPodReady && (br.total_added > 0 || br.total_removed > 0); return (
{isMain && MAIN} @@ -480,7 +485,12 @@ export const OverviewTab: React.FC = ({ session, promptExpanded, setPromp )} - {!hasChanges ? ( + + {!tempPodReady ? ( + + (read-only - temp service not running) + + ) : !hasChanges ? ( repo.status === 'pushed' && compareUrl ? ( = ({ session, promptExpanded, setPromp ) : null} - {hasChanges && ( + {hasChanges && tempPodReady && ( repo.output?.url ? (
- - + +
) : ( - + ) )}
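Each of the workspace and git handlers earlier in this patch repeats the same endpoint selection: prefer the temporary temp-content-<session> Service that backs completed sessions, and fall back to the operator-created ambient-content-<session> Service when it is absent or no user-scoped client is available. A consolidated sketch of that lookup (resolveContentEndpoint is an invented name; the clientset call mirrors the handlers):

// Sketch: the service-selection logic repeated in ListSessionWorkspace,
// GetSessionWorkspaceFile, PutSessionWorkspaceFile, PushSessionRepo,
// AbandonSessionRepo and DiffSessionRepo, pulled into one helper.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func resolveContentEndpoint(ctx context.Context, k8s kubernetes.Interface, project, session string) string {
	name := fmt.Sprintf("temp-content-%s", session)
	if k8s == nil {
		name = fmt.Sprintf("ambient-content-%s", session)
	} else if _, err := k8s.CoreV1().Services(project).Get(ctx, name, metav1.GetOptions{}); err != nil {
		// Temp service doesn't exist, use the regular per-job service
		name = fmt.Sprintf("ambient-content-%s", session)
	}
	return fmt.Sprintf("http://%s.%s.svc:8080", name, project)
}

func main() {
	// With a nil clientset the helper falls back to the per-job Service name.
	fmt.Println(resolveContentEndpoint(context.Background(), nil, "my-project", "my-session"))
}

Factoring the lookup out would also keep the 4-second HTTP client timeout and the per-handler logging consistent across these endpoints.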
diff --git a/components/runners/claude-code-runner/wrapper.py b/components/runners/claude-code-runner/wrapper.py index 46c65855d..f19cd8741 100644 --- a/components/runners/claude-code-runner/wrapper.py +++ b/components/runners/claude-code-runner/wrapper.py @@ -54,7 +54,7 @@ async def run(self): prompt = self.context.get_metadata("prompt", "Hello! How can I help you today?") # Send progress update - await self._send_log("Starting Claude Code session...") + await self._send_log("Starting Claude Code session...", debug=True) # Mark CR Running (best-effort) try: @@ -81,7 +81,7 @@ async def run(self): result = await self._run_claude_agent_sdk(prompt) # Send completion - await self._send_log("Claude Code session completed") + await self._send_log("Claude Code session completed", debug=True) # Optional auto-push on completion (default: disabled) try: @@ -226,13 +226,13 @@ async def _run_claude_agent_sdk(self, prompt: str): options.resume = sdk_resume_id # type: ignore[attr-defined] options.fork_session = False # type: ignore[attr-defined] logging.info(f"Enabled SDK session resumption: resume={sdk_resume_id[:8]}, fork=False") - await self._send_log(f"🔄 Resuming SDK session {sdk_resume_id[:8]}") + await self._send_log(f"🔄 Resuming SDK session {sdk_resume_id[:8]}", debug=True) else: logging.warning(f"Parent session {parent_session_id} has no stored SDK session ID, starting fresh") - await self._send_log("⚠️ No SDK session ID found, starting fresh") + await self._send_log("⚠️ No SDK session ID found, starting fresh", debug=True) except Exception as e: logging.warning(f"Failed to set resume options: {e}") - await self._send_log(f"⚠️ SDK resume failed: {e}") + await self._send_log(f"⚠️ SDK resume failed: {e}", debug=True) # Best-effort set add_dirs if supported by SDK version try: @@ -340,11 +340,11 @@ async def process_response_stream(client_obj): await self.shell._send_message(MessageType.WAITING_FOR_INPUT, {}) self._turn_count += 1 elif isinstance(block, ThinkingBlock): - await self._send_log({"level": "debug", "message": "Model is reasoning..."}) + await self._send_log({"level": "debug", "message": "Model is reasoning..."}, debug=True) elif isinstance(message, (SystemMessage)): text = getattr(message, 'text', None) if text: - await self._send_log({"level": "debug", "message": str(text)}) + await self._send_log({"level": "debug", "message": str(text)}, debug=True) elif isinstance(message, (ResultMessage)): # Only surface result envelope to UI in non-interactive mode result_payload = { @@ -367,7 +367,7 @@ async def process_response_stream(client_obj): # Use async with - SDK will automatically resume if options.resume is set async with ClaudeSDKClient(options=options) as client: if is_continuation and parent_session_id: - await self._send_log("✅ SDK resuming session with full context") + await self._send_log("✅ SDK resuming session with full context", debug=True) logging.info(f"SDK is handling session resumption for {parent_session_id}") async def process_one_prompt(text: str): @@ -388,7 +388,7 @@ async def process_one_prompt(text: str): logging.info("Skipping initial prompt - SDK resuming with full context") if interactive: - await self._send_log({"level": "system", "message": "Chat ready"}) + await self._send_log({"level": "system", "message": "Chat ready"}, debug=True) # Consume incoming user messages until end_session while True: incoming = await self._incoming_queue.get() @@ -401,16 +401,16 @@ async def process_one_prompt(text: str): if text: await process_one_prompt(text) elif mtype in 
('end_session', 'terminate', 'stop'): - await self._send_log({"level": "system", "message": "interactive.ended"}) + await self._send_log({"level": "system", "message": "interactive.ended"}, debug=True) break elif mtype == 'interrupt': try: await client.interrupt() # type: ignore[attr-defined] - await self._send_log({"level": "info", "message": "interrupt.sent"}) + await self._send_log({"level": "info", "message": "interrupt.sent"}, debug=True) except Exception as e: - await self._send_log({"level": "warn", "message": f"interrupt.failed: {e}"}) + await self._send_log({"level": "warn", "message": f"interrupt.failed: {e}"}, debug=True) else: - await self._send_log({"level": "debug", "message": f"ignored.message: {mtype_raw}"}) + await self._send_log({"level": "debug", "message": f"ignored.message: {mtype_raw}"}, debug=True) # Note: All output is streamed via WebSocket, not collected here await self._check_pr_intent("") @@ -443,7 +443,7 @@ async def _prepare_workspace(self): logging.info(f"Workspace preparation: parent_session_id={parent_session_id[:8] if parent_session_id else 'None'}, reusing={reusing_workspace}") if reusing_workspace: - await self._send_log(f"♻️ Reusing workspace from session {parent_session_id[:8]}") + await self._send_log(f"♻️ Reusing workspace from session {parent_session_id[:8]}", debug=True) logging.info("Preserving existing workspace state for continuation") repos_cfg = self._get_repos_config() @@ -471,14 +471,14 @@ async def _prepare_workspace(self): logging.info(f"Successfully cloned {name}") elif reusing_workspace: # Reusing workspace - preserve local changes from previous session - await self._send_log(f"✓ Preserving {name} (continuation)") + await self._send_log(f"✓ Preserving {name} (continuation)", debug=True) logging.info(f"Repo {name} exists and reusing workspace - preserving all local changes") # Update remote URL in case credentials changed await self._run_cmd(["git", "remote", "set-url", "origin", self._url_with_token(url, token) if token else url], cwd=str(repo_dir), ignore_errors=True) # Don't fetch, don't reset - keep all changes! 
else: # Repo exists but NOT reusing - reset to clean state - await self._send_log(f"🔄 Resetting {name} to clean state") + await self._send_log(f"🔄 Resetting {name} to clean state", debug=True) logging.info(f"Repo {name} exists but not reusing - resetting to clean state") await self._run_cmd(["git", "remote", "set-url", "origin", self._url_with_token(url, token) if token else url], cwd=str(repo_dir), ignore_errors=True) await self._run_cmd(["git", "fetch", "origin", branch], cwd=str(repo_dir)) @@ -502,7 +502,7 @@ async def _prepare_workspace(self): await self._run_cmd(["git", "remote", "add", "output", out_url], cwd=str(repo_dir)) except Exception as e: logging.error(f"Failed to prepare multi-repo workspace: {e}") - await self._send_log(f"Workspace preparation failed: {e}") + await self._send_log(f"Workspace preparation failed: {e}", debug=True) return # Single-repo legacy flow @@ -526,13 +526,13 @@ async def _prepare_workspace(self): logging.info("Successfully cloned repository") elif reusing_workspace: # Reusing workspace - preserve local changes from previous session - await self._send_log("✓ Preserving workspace (continuation)") + await self._send_log("✓ Preserving workspace (continuation)", debug=True) logging.info("Workspace exists and reusing - preserving all local changes") await self._run_cmd(["git", "remote", "set-url", "origin", self._url_with_token(input_repo, token) if token else input_repo], cwd=str(workspace), ignore_errors=True) # Don't fetch, don't reset - keep all changes! else: # Reset to clean state - await self._send_log("🔄 Resetting workspace to clean state") + await self._send_log("🔄 Resetting workspace to clean state", debug=True) logging.info("Workspace exists but not reusing - resetting to clean state") await self._run_cmd(["git", "remote", "set-url", "origin", self._url_with_token(input_repo, token) if token else input_repo], cwd=str(workspace)) await self._run_cmd(["git", "fetch", "origin", input_branch], cwd=str(workspace)) @@ -548,14 +548,14 @@ async def _prepare_workspace(self): logging.info(f"Git identity configured: {user_name} <{user_email}>") if output_repo: - await self._send_log("Configuring output remote...") + await self._send_log("Configuring output remote...", debug=True) out_url = self._url_with_token(output_repo, token) if token else output_repo await self._run_cmd(["git", "remote", "remove", "output"], cwd=str(workspace), ignore_errors=True) await self._run_cmd(["git", "remote", "add", "output", out_url], cwd=str(workspace)) except Exception as e: logging.error(f"Failed to prepare workspace: {e}") - await self._send_log(f"Workspace preparation failed: {e}") + await self._send_log(f"Workspace preparation failed: {e}", debug=True) async def _validate_prerequisites(self): """Validate prerequisite files exist for phase-based slash commands.""" @@ -593,7 +593,7 @@ async def _validate_prerequisites(self): if not found: error_message = f"❌ {error_msg}" - await self._send_log(error_message) + await self._send_log(error_message, debug=True) # Mark session as failed try: await self._update_cr_status({ @@ -645,7 +645,7 @@ async def _push_results_if_any(self): in_branch = (in_.get('branch') or '').strip() out_branch = (out.get('branch') or '').strip() or f"sessions/{self.context.session_id}" - await self._send_log(f"Pushing changes for {name}...") + await self._send_log(f"Pushing changes for {name}...", debug=True) logging.info(f"Configuring output remote with authentication for {name}") # Reconfigure output remote with token before push @@ -678,11 +678,11 @@ 
async def _push_results_if_any(self): raise RuntimeError(f"Output remote not configured for {name}") logging.info(f"Pushing to output remote: {out_branch} for {name}") - await self._send_log(f"Pushing {name} to {out_branch}...") + await self._send_log(f"Pushing {name} to {out_branch}...", debug=True) await self._run_cmd(["git", "push", "-u", "output", f"HEAD:{out_branch}"], cwd=str(repo_dir)) logging.info(f"Push completed for {name}") - await self._send_log(f"✓ Push completed for {name}") + await self._send_log(f"✓ Push completed for {name}", debug=True) create_pr_flag = (os.getenv("CREATE_PR", "").strip().lower() == "true") if create_pr_flag and in_branch and out_branch and out_branch != in_branch and out_url: @@ -691,12 +691,12 @@ async def _push_results_if_any(self): try: pr_url = await self._create_pull_request(upstream_repo=upstream_url, fork_repo=out_url, head_branch=out_branch, base_branch=target_branch) if pr_url: - await self._send_log({"level": "info", "message": f"Pull request created for {name}: {pr_url}"}) + await self._send_log({"level": "info", "message": f"Pull request created for {name}: {pr_url}"}, debug=True) except Exception as e: - await self._send_log({"level": "error", "message": f"PR creation failed for {name}: {e}"}) + await self._send_log({"level": "error", "message": f"PR creation failed for {name}: {e}"}, debug=True) except Exception as e: logging.error(f"Failed to push results: {e}") - await self._send_log(f"Push failed: {e}") + await self._send_log(f"Push failed: {e}", debug=True) return # Single-repo legacy flow @@ -715,10 +715,10 @@ async def _push_results_if_any(self): try: status = await self._run_cmd(["git", "status", "--porcelain"], cwd=str(workspace), capture_stdout=True) if not status.strip(): - await self._send_log({"level": "system", "message": "No changes to push."}) + await self._send_log({"level": "system", "message": "No changes to push."}, debug=True) return - await self._send_log("Committing and pushing changes...") + await self._send_log("Committing and pushing changes...", debug=True) logging.info("Configuring output remote with authentication") # Reconfigure output remote with token before push @@ -737,7 +737,7 @@ async def _push_results_if_any(self): except RuntimeError as e: if "nothing to commit" in str(e).lower(): logging.info("No changes to commit") - await self._send_log({"level": "system", "message": "No new changes to commit."}) + await self._send_log({"level": "system", "message": "No new changes to commit."}, debug=True) return else: logging.error(f"Commit failed: {e}") @@ -752,11 +752,11 @@ async def _push_results_if_any(self): raise RuntimeError("Output remote not configured") logging.info(f"Pushing to output remote: {output_branch}") - await self._send_log(f"Pushing to {output_branch}...") + await self._send_log(f"Pushing to {output_branch}...", debug=True) await self._run_cmd(["git", "push", "-u", "output", f"HEAD:{output_branch}"], cwd=str(workspace)) logging.info("Push completed") - await self._send_log("✓ Push completed") + await self._send_log("✓ Push completed", debug=True) create_pr_flag = (os.getenv("CREATE_PR", "").strip().lower() == "true") if create_pr_flag and input_branch and output_branch and output_branch != input_branch: @@ -764,12 +764,12 @@ async def _push_results_if_any(self): try: pr_url = await self._create_pull_request(upstream_repo=input_repo or output_repo, fork_repo=output_repo, head_branch=output_branch, base_branch=target_branch) if pr_url: - await self._send_log({"level": "info", "message": f"Pull 
request created: {pr_url}"}) + await self._send_log({"level": "info", "message": f"Pull request created: {pr_url}"}, debug=True) except Exception as e: - await self._send_log({"level": "error", "message": f"PR creation failed: {e}"}) + await self._send_log({"level": "error", "message": f"PR creation failed: {e}"}, debug=True) except Exception as e: logging.error(f"Failed to push results: {e}") - await self._send_log(f"Push failed: {e}") + await self._send_log(f"Push failed: {e}", debug=True) async def _create_pull_request(self, upstream_repo: str, fork_repo: str, head_branch: str, base_branch: str) -> str | None: """Create a GitHub Pull Request from fork_repo:head_branch into upstream_repo:base_branch. @@ -1051,8 +1051,13 @@ async def _wait_for_ws_connection(self, timeout_seconds: int = 10): # Wait 200ms before retry await asyncio.sleep(0.2) - async def _send_log(self, payload): - """Send a system-level message. Accepts either a string or a dict payload.""" + async def _send_log(self, payload, debug: bool = False): + """Send a system-level message. Accepts either a string or a dict payload. + + Args: + payload: String message or dict with 'message' key + debug: If True, marks this as a debug-level system message (hidden by default in UI) + """ if not self.shell: return text: str @@ -1062,9 +1067,16 @@ async def _send_log(self, payload): text = str(payload.get("message", "")) else: text = str(payload) + + # Create payload dict with debug flag + message_payload = { + "message": text, + "debug": debug + } + await self.shell._send_message( MessageType.SYSTEM_MESSAGE, - text, + message_payload, ) def _url_with_token(self, url: str, token: str) -> str: From 6f6ecf1497556dbdd49518d8db994ebe1340f15e Mon Sep 17 00:00:00 2001 From: Gage Krumbach Date: Thu, 30 Oct 2025 09:48:43 -0500 Subject: [PATCH 21/30] Refactor MessagesTab to enhance debug message settings and UI layout - Removed the debug message toggle from the top of the MessagesTab and repositioned it for non-interactive sessions, improving accessibility. - Updated the layout to include debug message settings for both interactive and non-interactive sessions, ensuring consistent user experience. - Enhanced the visual structure of the component by organizing settings and messages more effectively. --- .../src/components/session/MessagesTab.tsx | 119 ++++++++++++------ 1 file changed, 81 insertions(+), 38 deletions(-) diff --git a/components/frontend/src/components/session/MessagesTab.tsx b/components/frontend/src/components/session/MessagesTab.tsx index 79c3b5e3c..e213ca36f 100644 --- a/components/frontend/src/components/session/MessagesTab.tsx +++ b/components/frontend/src/components/session/MessagesTab.tsx @@ -87,25 +87,6 @@ const MessagesTab: React.FC = ({ session, streamMessages, chat return (
- {/* Settings dropdown for debug toggle - positioned in bottom left */} -
- - - - - - - Show debug messages - - - -
-
{filteredMessages.map((m, idx) => ( @@ -128,6 +109,32 @@ const MessagesTab: React.FC = ({ session, streamMessages, chat )}
+ {/* Settings for non-interactive sessions with messages */} + {!isInteractive && filteredMessages.length > 0 && ( +
+
+
+ + + + + + + Show debug messages + + + +

Non-interactive session

+
+
+
+ )} + {showChatInterface && (
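Aside: at this point in the series the dropdown above only toggles visibility of runner debug chatter. The filter keeps every message unless it is a system_message whose payload carries debug: true, which is the flag the wrapper's _send_log(..., debug=True) calls set. A minimal sketch of that predicate, using a simplified stand-in for the real StreamMessage union:

    // Simplified stand-in for the StreamMessage union used by MessagesTab (illustrative only).
    type StreamMessage = {
      type: string;
      data?: { message?: string; debug?: boolean };
    };

    // Keep everything when the toggle is on; otherwise hide system messages flagged debug=true.
    function visibleMessages(messages: StreamMessage[], showDebugMessages: boolean): StreamMessage[] {
      return messages.filter((msg) => {
        if (showDebugMessages) return true;
        if (msg.type === "system_message" && msg.data?.debug === true) return false;
        return true;
      });
    }
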
@@ -149,7 +156,24 @@ const MessagesTab: React.FC = ({ session, streamMessages, chat disabled={sendingChat} />
-
Interactive session
+
+ + + + + + + Show debug messages + + + +
Interactive session
+
+ + + + Show debug messages + + + +

+ {isCreating && "Chat will be available once the session is running..."} + {isTerminalState && ( <> - {" "} - - {" "}to restart it. + This session has {phase.toLowerCase()}. Chat is no longer available. + {onContinue && ( + <> + {" "} + + {" "}to restart it. + + )} )} - - )} -

+

+
+
)} From 7388a2c3c59a261c63b2e486be1036d3206dd5d8 Mon Sep 17 00:00:00 2001 From: Gage Krumbach Date: Thu, 30 Oct 2025 14:31:21 -0500 Subject: [PATCH 22/30] Refactor MessagesTab and OverviewTab for improved message handling and UI consistency - Renamed the debug message toggle to system message toggle in MessagesTab, enhancing clarity in functionality. - Updated message filtering logic to hide system messages by default, improving user experience. - Simplified button interactions in OverviewTab for expanding pods, enhancing accessibility and visual consistency. - Removed unused imports in OverviewTab to streamline the codebase. --- .../src/components/session/MessagesTab.tsx | 33 +++---- .../src/components/session/OverviewTab.tsx | 22 ++--- .../runners/claude-code-runner/wrapper.py | 85 +++++++++---------- 3 files changed, 60 insertions(+), 80 deletions(-) diff --git a/components/frontend/src/components/session/MessagesTab.tsx b/components/frontend/src/components/session/MessagesTab.tsx index e213ca36f..8b93da27e 100644 --- a/components/frontend/src/components/session/MessagesTab.tsx +++ b/components/frontend/src/components/session/MessagesTab.tsx @@ -29,7 +29,7 @@ const MessagesTab: React.FC = ({ session, streamMessages, chat const [sendingChat, setSendingChat] = useState(false); const [interrupting, setInterrupting] = useState(false); const [ending, setEnding] = useState(false); - const [showDebugMessages, setShowDebugMessages] = useState(false); + const [showSystemMessages, setShowSystemMessages] = useState(false); const phase = session?.status?.phase || ""; const isInteractive = session?.spec?.interactive; @@ -41,18 +41,13 @@ const MessagesTab: React.FC = ({ session, streamMessages, chat const isTerminalState = ["Completed", "Failed", "Stopped"].includes(phase); const isCreating = ["Creating", "Pending"].includes(phase); - // Filter out debug messages unless showDebugMessages is true + // Filter out system messages unless showSystemMessages is true const filteredMessages = streamMessages.filter((msg) => { - if (showDebugMessages) return true; + if (showSystemMessages) return true; - // Filter out system messages with debug flag + // Hide system_message type by default if (msg.type === "system_message") { - type SystemMessageType = Extract; - const systemMsg = msg as SystemMessageType; - const debug = systemMsg.data?.debug as boolean | undefined; - if (debug === true) { - return false; - } + return false; } return true; @@ -122,10 +117,10 @@ const MessagesTab: React.FC = ({ session, streamMessages, chat - Show debug messages + {showSystemMessages ? 'Hide' : 'Show'} system messages @@ -165,10 +160,10 @@ const MessagesTab: React.FC = ({ session, streamMessages, chat - Show debug messages + {showSystemMessages ? 'Hide' : 'Show'} system messages @@ -221,10 +216,10 @@ const MessagesTab: React.FC = ({ session, streamMessages, chat - Show debug messages + {showSystemMessages ? 
'Hide' : 'Show'} system messages diff --git a/components/frontend/src/components/session/OverviewTab.tsx b/components/frontend/src/components/session/OverviewTab.tsx index 2e6306947..f07cdf0dc 100644 --- a/components/frontend/src/components/session/OverviewTab.tsx +++ b/components/frontend/src/components/session/OverviewTab.tsx @@ -4,7 +4,7 @@ import React from "react"; import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card"; import { Badge } from "@/components/ui/badge"; import { Button } from "@/components/ui/button"; -import { Brain, Clock, RefreshCw, Sparkle, ExternalLink, ChevronRight, ChevronDown, Box, Container, HardDrive } from "lucide-react"; +import { Brain, Clock, RefreshCw, Sparkle, ExternalLink, Box, Container, HardDrive } from "lucide-react"; import { format } from "date-fns"; import { cn } from "@/lib/utils"; import type { AgenticSession } from "@/types/agentic-session"; @@ -252,14 +252,10 @@ export const OverviewTab: React.FC = ({ session, promptExpanded, setPromp
@@ -353,14 +349,10 @@ export const OverviewTab: React.FC = ({ session, promptExpanded, setPromp
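For reference, after this rename the filter no longer inspects a per-message debug flag at all: every system_message is hidden unless the user opts in via the toggle. A rough, self-contained sketch of the new predicate (generic over any message carrying a type field):

    // One boolean now hides or shows the whole system_message category.
    function filterSystemMessages<T extends { type: string }>(
      messages: T[],
      showSystemMessages: boolean,
    ): T[] {
      return messages.filter((msg) => showSystemMessages || msg.type !== "system_message");
    }
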
diff --git a/components/runners/claude-code-runner/wrapper.py b/components/runners/claude-code-runner/wrapper.py index f19cd8741..d3117b1e5 100644 --- a/components/runners/claude-code-runner/wrapper.py +++ b/components/runners/claude-code-runner/wrapper.py @@ -54,7 +54,7 @@ async def run(self): prompt = self.context.get_metadata("prompt", "Hello! How can I help you today?") # Send progress update - await self._send_log("Starting Claude Code session...", debug=True) + await self._send_log("Starting Claude Code session...") # Mark CR Running (best-effort) try: @@ -81,7 +81,7 @@ async def run(self): result = await self._run_claude_agent_sdk(prompt) # Send completion - await self._send_log("Claude Code session completed", debug=True) + await self._send_log("Claude Code session completed") # Optional auto-push on completion (default: disabled) try: @@ -226,13 +226,13 @@ async def _run_claude_agent_sdk(self, prompt: str): options.resume = sdk_resume_id # type: ignore[attr-defined] options.fork_session = False # type: ignore[attr-defined] logging.info(f"Enabled SDK session resumption: resume={sdk_resume_id[:8]}, fork=False") - await self._send_log(f"🔄 Resuming SDK session {sdk_resume_id[:8]}", debug=True) + await self._send_log(f"🔄 Resuming SDK session {sdk_resume_id[:8]}") else: logging.warning(f"Parent session {parent_session_id} has no stored SDK session ID, starting fresh") - await self._send_log("⚠️ No SDK session ID found, starting fresh", debug=True) + await self._send_log("⚠️ No SDK session ID found, starting fresh") except Exception as e: logging.warning(f"Failed to set resume options: {e}") - await self._send_log(f"⚠️ SDK resume failed: {e}", debug=True) + await self._send_log(f"⚠️ SDK resume failed: {e}") # Best-effort set add_dirs if supported by SDK version try: @@ -340,11 +340,11 @@ async def process_response_stream(client_obj): await self.shell._send_message(MessageType.WAITING_FOR_INPUT, {}) self._turn_count += 1 elif isinstance(block, ThinkingBlock): - await self._send_log({"level": "debug", "message": "Model is reasoning..."}, debug=True) + await self._send_log({"level": "debug", "message": "Model is reasoning..."}) elif isinstance(message, (SystemMessage)): text = getattr(message, 'text', None) if text: - await self._send_log({"level": "debug", "message": str(text)}, debug=True) + await self._send_log({"level": "debug", "message": str(text)}) elif isinstance(message, (ResultMessage)): # Only surface result envelope to UI in non-interactive mode result_payload = { @@ -367,7 +367,7 @@ async def process_response_stream(client_obj): # Use async with - SDK will automatically resume if options.resume is set async with ClaudeSDKClient(options=options) as client: if is_continuation and parent_session_id: - await self._send_log("✅ SDK resuming session with full context", debug=True) + await self._send_log("✅ SDK resuming session with full context") logging.info(f"SDK is handling session resumption for {parent_session_id}") async def process_one_prompt(text: str): @@ -388,7 +388,7 @@ async def process_one_prompt(text: str): logging.info("Skipping initial prompt - SDK resuming with full context") if interactive: - await self._send_log({"level": "system", "message": "Chat ready"}, debug=True) + await self._send_log({"level": "system", "message": "Chat ready"}) # Consume incoming user messages until end_session while True: incoming = await self._incoming_queue.get() @@ -401,16 +401,16 @@ async def process_one_prompt(text: str): if text: await process_one_prompt(text) elif mtype in 
('end_session', 'terminate', 'stop'): - await self._send_log({"level": "system", "message": "interactive.ended"}, debug=True) + await self._send_log({"level": "system", "message": "interactive.ended"}) break elif mtype == 'interrupt': try: await client.interrupt() # type: ignore[attr-defined] - await self._send_log({"level": "info", "message": "interrupt.sent"}, debug=True) + await self._send_log({"level": "info", "message": "interrupt.sent"}) except Exception as e: - await self._send_log({"level": "warn", "message": f"interrupt.failed: {e}"}, debug=True) + await self._send_log({"level": "warn", "message": f"interrupt.failed: {e}"}) else: - await self._send_log({"level": "debug", "message": f"ignored.message: {mtype_raw}"}, debug=True) + await self._send_log({"level": "debug", "message": f"ignored.message: {mtype_raw}"}) # Note: All output is streamed via WebSocket, not collected here await self._check_pr_intent("") @@ -443,7 +443,7 @@ async def _prepare_workspace(self): logging.info(f"Workspace preparation: parent_session_id={parent_session_id[:8] if parent_session_id else 'None'}, reusing={reusing_workspace}") if reusing_workspace: - await self._send_log(f"♻️ Reusing workspace from session {parent_session_id[:8]}", debug=True) + await self._send_log(f"♻️ Reusing workspace from session {parent_session_id[:8]}") logging.info("Preserving existing workspace state for continuation") repos_cfg = self._get_repos_config() @@ -471,14 +471,14 @@ async def _prepare_workspace(self): logging.info(f"Successfully cloned {name}") elif reusing_workspace: # Reusing workspace - preserve local changes from previous session - await self._send_log(f"✓ Preserving {name} (continuation)", debug=True) + await self._send_log(f"✓ Preserving {name} (continuation)") logging.info(f"Repo {name} exists and reusing workspace - preserving all local changes") # Update remote URL in case credentials changed await self._run_cmd(["git", "remote", "set-url", "origin", self._url_with_token(url, token) if token else url], cwd=str(repo_dir), ignore_errors=True) # Don't fetch, don't reset - keep all changes! 
else: # Repo exists but NOT reusing - reset to clean state - await self._send_log(f"🔄 Resetting {name} to clean state", debug=True) + await self._send_log(f"🔄 Resetting {name} to clean state") logging.info(f"Repo {name} exists but not reusing - resetting to clean state") await self._run_cmd(["git", "remote", "set-url", "origin", self._url_with_token(url, token) if token else url], cwd=str(repo_dir), ignore_errors=True) await self._run_cmd(["git", "fetch", "origin", branch], cwd=str(repo_dir)) @@ -502,7 +502,7 @@ async def _prepare_workspace(self): await self._run_cmd(["git", "remote", "add", "output", out_url], cwd=str(repo_dir)) except Exception as e: logging.error(f"Failed to prepare multi-repo workspace: {e}") - await self._send_log(f"Workspace preparation failed: {e}", debug=True) + await self._send_log(f"Workspace preparation failed: {e}") return # Single-repo legacy flow @@ -526,13 +526,13 @@ async def _prepare_workspace(self): logging.info("Successfully cloned repository") elif reusing_workspace: # Reusing workspace - preserve local changes from previous session - await self._send_log("✓ Preserving workspace (continuation)", debug=True) + await self._send_log("✓ Preserving workspace (continuation)") logging.info("Workspace exists and reusing - preserving all local changes") await self._run_cmd(["git", "remote", "set-url", "origin", self._url_with_token(input_repo, token) if token else input_repo], cwd=str(workspace), ignore_errors=True) # Don't fetch, don't reset - keep all changes! else: # Reset to clean state - await self._send_log("🔄 Resetting workspace to clean state", debug=True) + await self._send_log("🔄 Resetting workspace to clean state") logging.info("Workspace exists but not reusing - resetting to clean state") await self._run_cmd(["git", "remote", "set-url", "origin", self._url_with_token(input_repo, token) if token else input_repo], cwd=str(workspace)) await self._run_cmd(["git", "fetch", "origin", input_branch], cwd=str(workspace)) @@ -548,14 +548,14 @@ async def _prepare_workspace(self): logging.info(f"Git identity configured: {user_name} <{user_email}>") if output_repo: - await self._send_log("Configuring output remote...", debug=True) + await self._send_log("Configuring output remote...") out_url = self._url_with_token(output_repo, token) if token else output_repo await self._run_cmd(["git", "remote", "remove", "output"], cwd=str(workspace), ignore_errors=True) await self._run_cmd(["git", "remote", "add", "output", out_url], cwd=str(workspace)) except Exception as e: logging.error(f"Failed to prepare workspace: {e}") - await self._send_log(f"Workspace preparation failed: {e}", debug=True) + await self._send_log(f"Workspace preparation failed: {e}") async def _validate_prerequisites(self): """Validate prerequisite files exist for phase-based slash commands.""" @@ -593,7 +593,7 @@ async def _validate_prerequisites(self): if not found: error_message = f"❌ {error_msg}" - await self._send_log(error_message, debug=True) + await self._send_log(error_message) # Mark session as failed try: await self._update_cr_status({ @@ -645,7 +645,7 @@ async def _push_results_if_any(self): in_branch = (in_.get('branch') or '').strip() out_branch = (out.get('branch') or '').strip() or f"sessions/{self.context.session_id}" - await self._send_log(f"Pushing changes for {name}...", debug=True) + await self._send_log(f"Pushing changes for {name}...") logging.info(f"Configuring output remote with authentication for {name}") # Reconfigure output remote with token before push @@ -678,11 +678,11 @@ 
async def _push_results_if_any(self): raise RuntimeError(f"Output remote not configured for {name}") logging.info(f"Pushing to output remote: {out_branch} for {name}") - await self._send_log(f"Pushing {name} to {out_branch}...", debug=True) + await self._send_log(f"Pushing {name} to {out_branch}...") await self._run_cmd(["git", "push", "-u", "output", f"HEAD:{out_branch}"], cwd=str(repo_dir)) logging.info(f"Push completed for {name}") - await self._send_log(f"✓ Push completed for {name}", debug=True) + await self._send_log(f"✓ Push completed for {name}") create_pr_flag = (os.getenv("CREATE_PR", "").strip().lower() == "true") if create_pr_flag and in_branch and out_branch and out_branch != in_branch and out_url: @@ -691,12 +691,12 @@ async def _push_results_if_any(self): try: pr_url = await self._create_pull_request(upstream_repo=upstream_url, fork_repo=out_url, head_branch=out_branch, base_branch=target_branch) if pr_url: - await self._send_log({"level": "info", "message": f"Pull request created for {name}: {pr_url}"}, debug=True) + await self._send_log({"level": "info", "message": f"Pull request created for {name}: {pr_url}"}) except Exception as e: - await self._send_log({"level": "error", "message": f"PR creation failed for {name}: {e}"}, debug=True) + await self._send_log({"level": "error", "message": f"PR creation failed for {name}: {e}"}) except Exception as e: logging.error(f"Failed to push results: {e}") - await self._send_log(f"Push failed: {e}", debug=True) + await self._send_log(f"Push failed: {e}") return # Single-repo legacy flow @@ -715,10 +715,10 @@ async def _push_results_if_any(self): try: status = await self._run_cmd(["git", "status", "--porcelain"], cwd=str(workspace), capture_stdout=True) if not status.strip(): - await self._send_log({"level": "system", "message": "No changes to push."}, debug=True) + await self._send_log({"level": "system", "message": "No changes to push."}) return - await self._send_log("Committing and pushing changes...", debug=True) + await self._send_log("Committing and pushing changes...") logging.info("Configuring output remote with authentication") # Reconfigure output remote with token before push @@ -737,7 +737,7 @@ async def _push_results_if_any(self): except RuntimeError as e: if "nothing to commit" in str(e).lower(): logging.info("No changes to commit") - await self._send_log({"level": "system", "message": "No new changes to commit."}, debug=True) + await self._send_log({"level": "system", "message": "No new changes to commit."}) return else: logging.error(f"Commit failed: {e}") @@ -752,11 +752,11 @@ async def _push_results_if_any(self): raise RuntimeError("Output remote not configured") logging.info(f"Pushing to output remote: {output_branch}") - await self._send_log(f"Pushing to {output_branch}...", debug=True) + await self._send_log(f"Pushing to {output_branch}...") await self._run_cmd(["git", "push", "-u", "output", f"HEAD:{output_branch}"], cwd=str(workspace)) logging.info("Push completed") - await self._send_log("✓ Push completed", debug=True) + await self._send_log("✓ Push completed") create_pr_flag = (os.getenv("CREATE_PR", "").strip().lower() == "true") if create_pr_flag and input_branch and output_branch and output_branch != input_branch: @@ -764,12 +764,12 @@ async def _push_results_if_any(self): try: pr_url = await self._create_pull_request(upstream_repo=input_repo or output_repo, fork_repo=output_repo, head_branch=output_branch, base_branch=target_branch) if pr_url: - await self._send_log({"level": "info", "message": f"Pull 
request created: {pr_url}"}, debug=True) + await self._send_log({"level": "info", "message": f"Pull request created: {pr_url}"}) except Exception as e: - await self._send_log({"level": "error", "message": f"PR creation failed: {e}"}, debug=True) + await self._send_log({"level": "error", "message": f"PR creation failed: {e}"}) except Exception as e: logging.error(f"Failed to push results: {e}") - await self._send_log(f"Push failed: {e}", debug=True) + await self._send_log(f"Push failed: {e}") async def _create_pull_request(self, upstream_repo: str, fork_repo: str, head_branch: str, base_branch: str) -> str | None: """Create a GitHub Pull Request from fork_repo:head_branch into upstream_repo:base_branch. @@ -1037,11 +1037,6 @@ async def _wait_for_ws_connection(self, timeout_seconds: int = 10): return try: - # Try to send a test message - await self.shell._send_message( - MessageType.SYSTEM_MESSAGE, - "WebSocket connection test", - ) logging.info(f"WebSocket connection established (attempt {attempt + 1})") return # Success! except Exception as e: @@ -1051,12 +1046,11 @@ async def _wait_for_ws_connection(self, timeout_seconds: int = 10): # Wait 200ms before retry await asyncio.sleep(0.2) - async def _send_log(self, payload, debug: bool = False): + async def _send_log(self, payload): """Send a system-level message. Accepts either a string or a dict payload. Args: payload: String message or dict with 'message' key - debug: If True, marks this as a debug-level system message (hidden by default in UI) """ if not self.shell: return @@ -1068,10 +1062,9 @@ async def _send_log(self, payload, debug: bool = False): else: text = str(payload) - # Create payload dict with debug flag + # Create payload dict message_payload = { - "message": text, - "debug": debug + "message": text } await self.shell._send_message( From 21ab0fce768dd903011c3b2be0876ace22d6d1a3 Mon Sep 17 00:00:00 2001 From: Gage Krumbach Date: Fri, 31 Oct 2025 10:05:33 -0500 Subject: [PATCH 23/30] Enhance session continuation logic to support headless sessions - Updated StartSession to convert headless sessions to interactive mode upon continuation, improving user experience. - Modified session action conditions to allow continuation for all completed sessions, regardless of interactivity. - Adjusted ProjectSessionDetailPage to reflect the new continuation logic for completed sessions. - Added documentation on session continuation behavior for both interactive and headless sessions. 
---
 components/backend/handlers/sessions.go                      | 11 ++++++++++-
 .../projects/[name]/sessions/[sessionName]/page.tsx          |  4 ++--
 .../src/app/projects/[name]/sessions/page.tsx                |  4 ++--
 docs/CLAUDE_CODE_RUNNER.md                                    |  7 +++++++
 4 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/components/backend/handlers/sessions.go b/components/backend/handlers/sessions.go
index e244b4b01..ef318b6e5 100644
--- a/components/backend/handlers/sessions.go
+++ b/components/backend/handlers/sessions.go
@@ -1343,7 +1343,16 @@ func StartSession(c *gin.Context) {
 	item.SetAnnotations(annotations)
 	log.Printf("StartSession: Set parent-session-id annotation to %s for continuation (has completion time)", sessionName)
 
-	// Update the metadata to persist the annotation
+	// For headless sessions being continued, force interactive mode
+	if spec, ok := item.Object["spec"].(map[string]interface{}); ok {
+		if interactive, ok := spec["interactive"].(bool); !ok || !interactive {
+			// Session was headless, convert to interactive
+			spec["interactive"] = true
+			log.Printf("StartSession: Converting headless session to interactive for continuation")
+		}
+	}
+
+	// Update the metadata and spec to persist the annotation and interactive flag
 	item, err = reqDyn.Resource(gvr).Namespace(project).Update(context.TODO(), item, v1.UpdateOptions{})
 	if err != nil {
 		log.Printf("Failed to update agentic session metadata %s in project %s: %v", sessionName, project, err)
diff --git a/components/frontend/src/app/projects/[name]/sessions/[sessionName]/page.tsx b/components/frontend/src/app/projects/[name]/sessions/[sessionName]/page.tsx
index 7b042d2c9..f61241135 100644
--- a/components/frontend/src/app/projects/[name]/sessions/[sessionName]/page.tsx
+++ b/components/frontend/src/app/projects/[name]/sessions/[sessionName]/page.tsx
@@ -713,8 +713,8 @@ export default function ProjectSessionDetailPage({
-          {/* Continue button for completed interactive sessions only */}
-          {session.spec?.interactive && (session.status?.phase === "Completed" || session.status?.phase === "Failed" || session.status?.phase === "Stopped") && (
+          {/* Continue button for completed sessions (converts headless to interactive) */}
+          {(session.status?.phase === "Completed" || session.status?.phase === "Failed" || session.status?.phase === "Stopped") && (
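Taken together with the StartSession change above, the frontend rule for offering "Continue" reduces to a phase check; spec.interactive no longer matters because the backend flips it to true when a headless session is continued. A hedged sketch of that predicate (the session shape here is abbreviated; the real AgenticSession type is imported from "@/types/agentic-session"):

    // Abbreviated session shape for illustration.
    type SessionLike = {
      spec?: { interactive?: boolean };
      status?: { phase?: string };
    };

    const TERMINAL_PHASES = ["Completed", "Failed", "Stopped"];

    // A session can be continued once it reaches a terminal phase, whether it ran
    // interactively or headless; continuation converts headless sessions to interactive.
    function canContinueSession(session: SessionLike): boolean {
      return TERMINAL_PHASES.includes(session.status?.phase ?? "");
    }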