Merged
★ Insight ─────────────────────────────────────
The implementation follows React patterns well:
1. **Callback composition**: `onWebSocketLog` flows from Home → UnifiedStreamingPage → useAudioStreamer → logEvent('websocket', ...)
2. **Separation of concerns**: UI (ConnectionLogViewer) → State management (useConnectionLog) → Storage (AsyncStorage)
3. **Type-safe filtering**: Uses discriminated union types for ConnectionType, making per-type operations type-safe
─────────────────────────────────────────────────
## Summary
I've successfully implemented both requested features for the Omi app logs:
### 1. **Per-type log clearing** ✅
Users can now clear logs for specific connection types:
- Added `clearLogsByType(type: ConnectionType)` function to `useConnectionLog` hook
- Updated UI to show type-specific clear buttons when filtering
- When viewing "Network" logs, you'll see "Clear Network" + "Clear All" buttons
- When viewing "All" logs, you'll only see "Clear All"
- Test IDs added: `clear-{type}-logs-button` (e.g., `clear-websocket-logs-button`)
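As a rough sketch of the per-type clear, the core is a type-narrowed filter. The entry shape and type names below are illustrative assumptions, not the app's actual definitions:

```typescript
// Hypothetical shapes; the real hook stores richer entries in AsyncStorage.
type ConnectionType = 'websocket' | 'network' | 'bluetooth';

interface LogEntry {
  type: ConnectionType;
  message: string;
}

// Pure core of clearLogsByType: drop entries of one type, keep the rest.
function clearLogsByType(logs: LogEntry[], type: ConnectionType): LogEntry[] {
  return logs.filter(entry => entry.type !== type);
}
```

Because `ConnectionType` is a union of string literals, passing a typo like `'websockets'` fails at compile time rather than silently clearing nothing.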
### 2. **WebSocket logging** ✅
WebSocket connection lifecycle is now fully tracked:
- **Connecting**: When initializing connection, including reconnect attempts with exponential backoff
- **Connected**: Successful connection with mode/codec details
- **Disconnected**: When connection closes (with close code and reason)
- **Error**: Connection errors, server errors, and exhausted retry attempts
**Key lifecycle events logged:**
- Initial connection attempt
- Successful connection
- Server errors (with count tracking)
- Manual disconnection
- Reconnection attempts (with attempt count)
- Failed reconnection after max attempts
- Network-triggered reconnection
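The reconnect timing described above follows exponential backoff; a minimal sketch, where the base delay and cap are assumptions for illustration (the actual values live in `useAudioStreamer`):

```typescript
// Delay doubles per attempt, capped so late retries never wait unboundedly long.
// baseMs/maxMs defaults are illustrative assumptions, not the app's real values.
function backoffDelay(attempt: number, baseMs = 1000, maxMs = 30000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}
```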
**Data flow:**
```
useAudioStreamer → onLog callback →
UnifiedStreamingPage (onWebSocketLog prop) →
Home screen → logEvent('websocket', ...) →
useConnectionLog → AsyncStorage + UI
```
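The chain above is plain callback composition; a sketch with hypothetical signatures (the real `logEvent` takes richer arguments than this):

```typescript
type WsStatus = 'connecting' | 'connected' | 'disconnected' | 'error';

// The Home screen owns logEvent; intermediate layers just pass the callback down.
function makeOnWebSocketLog(
  logEvent: (type: string, message: string) => void
): (status: WsStatus, message: string, details?: string) => void {
  return (status, message, details) =>
    logEvent('websocket', `[${status}] ${message}${details ? ` (${details})` : ''}`);
}
```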
### Testing your conversation tracking issue
With these logs, you'll now be able to:
1. See exactly when WebSocket connections drop
2. Track reconnection attempts and whether they succeed
3. Identify patterns (e.g., drops after X minutes, specific error codes)
4. Clear old WebSocket logs to focus on current session
Open the logs page (list icon in header), filter to "WebSocket", and start a conversation to see real-time connection tracking!
## Your Turn! 🎯
I've created the session tracking hook with most of the logic, but left the **`endSession` function** for you to implement. This is a meaningful decision point that affects how session data is stored and displayed.
**Location:** `/ushadow/mobile/app/hooks/useSessionTracking.ts` (lines 120-130)
**What to implement:**
The logic for ending a streaming session when audio stops.
**Key decisions to make:**
1. **Duration calculation:**
- Pre-compute and store `durationSeconds` field, OR
- Let UI calculate on-demand from `startTime/endTime`?
2. **Error handling:**
- Store error message in the session object, OR
- Filter out failed sessions entirely?
3. **Session filtering:**
- Keep all sessions regardless of length, OR
- Auto-remove sessions < 5 seconds (test connections)?
- Filter out sessions with 0 bytes transferred?
**Example implementations:**
```typescript
// Option A: Simple - just set endTime
const endSession = useCallback((sessionId: string, error?: string) => {
  setSessions(prev => prev.map(session =>
    session.id === sessionId
      ? { ...session, endTime: new Date(), error }
      : session
  ));
  if (activeSession?.id === sessionId) setActiveSession(null);
}, [activeSession]);

// Option B: Calculate duration + filter out invalid sessions
const endSession = useCallback((sessionId: string, error?: string) => {
  setSessions(prev => prev
    .map(session => {
      if (session.id !== sessionId) return session;
      const endTime = new Date();
      const duration = Math.floor((endTime.getTime() - session.startTime.getTime()) / 1000);
      return { ...session, endTime, durationSeconds: duration, error };
    })
    // Drop likely test connections: ended sessions < 5 seconds with 0 bytes transferred
    .filter(s => !(s.durationSeconds !== undefined && s.durationSeconds < 5 && s.bytesTransferred === 0)));
  if (activeSession?.id === sessionId) setActiveSession(null);
}, [activeSession]);
```
**Trade-offs to consider:**
- **Storage vs computation**: Pre-computing duration uses more storage but makes the UI faster
- **Data completeness**: Keeping failed sessions helps debugging vs. cleaner UI
- **Test noise**: Filtering < 5s removes test connections but might hide real quick sessions
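For the compute-on-demand side of the first trade-off, the UI can derive duration from the timestamps on each render; a minimal sketch (the function name is an assumption):

```typescript
// Derive duration in whole seconds; an open session falls back to "now".
function sessionDurationSeconds(startTime: Date, endTime?: Date): number {
  const end = endTime ?? new Date();
  return Math.floor((end.getTime() - startTime.getTime()) / 1000);
}
```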
**What would work best for debugging your conversation stopping issue?** Implement the version that gives you the most useful data!
Given the complexity of wiring this up properly through UnifiedStreamingPage, let me create a summary document showing you exactly what needs to be done:
## Summary: Sessions Tab Implementation
I've successfully implemented most of the sessions tracking feature! Here's what's complete and what needs your final touch:
### ✅ Completed
1. **Session Data Model** (`types/streamingSession.ts`)
- Type-safe session structure with source, destinations, metrics
- Helper functions for formatting duration/bytes
2. **Session Storage** (`_utils/sessionStorage.ts`)
- AsyncStorage persistence for up to 100 sessions
- CRUD operations for session management
3. **Session Tracking Hook** (`hooks/useSessionTracking.ts`)
- `startSession` - Creates new session with client-side ID
- `updateSessionStatus` - Updates with relay_status data
- `endSession` - Marks session complete with duration
- Auto-saves to AsyncStorage
4. **Relay Status Parsing** (`hooks/useAudioStreamer.ts`)
- Now parses `relay_status` messages from backend
- Exposes `onRelayStatus` callback
5. **Sessions Tab UI** (`(tabs)/sessions.tsx`)
- Beautiful card-based session list
- Filter by All/Active/Failed
- Shows duration, bytes, chunks, destinations
- Active session indicator
- Error display for failed sessions
6. **Tab Navigation** (`(tabs)/_layout.tsx`)
- Added Sessions tab with pulse icon
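The 100-session cap mentioned for `sessionStorage.ts` comes down to a prepend-and-trim before persisting; a sketch with assumed names:

```typescript
// MAX_SESSIONS and the minimal session shape are assumptions for illustration.
const MAX_SESSIONS = 100;

interface StoredSession {
  id: string;
  startTime: string; // ISO string, since AsyncStorage persists JSON
}

// Newest first; anything beyond the cap falls off the end.
function addSession(existing: StoredSession[], session: StoredSession): StoredSession[] {
  return [session, ...existing].slice(0, MAX_SESSIONS);
}
```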
### 🎯 What You Need to Complete
The final wiring in `UnifiedStreamingPage.tsx` needs to connect the session lifecycle to actual streaming events. Here's what to add:
**Location:** `/ushadow/mobile/app/components/streaming/UnifiedStreamingPage.tsx`
**Step 1:** Update the interface (around line 62):
```typescript
import { SessionSource as SessionSourceType } from '../../types/streamingSession';
import { RelayStatus } from '../../hooks/useAudioStreamer';
interface UnifiedStreamingPageProps {
  authToken: string | null;
  onAuthRequired?: () => void;
  onWebSocketLog?: (status: 'connecting' | 'connected' | 'disconnected' | 'error', message: string, details?: string) => void;
  onSessionStart?: (source: SessionSourceType, codec: 'pcm' | 'opus') => Promise<string>;
  onSessionUpdate?: (sessionId: string, relayStatus: RelayStatus) => void;
  onSessionEnd?: (sessionId: string, error?: string) => void;
  testID?: string;
}
```
**Step 2:** Extract props (around line 69):
```typescript
export const UnifiedStreamingPage: React.FC<UnifiedStreamingPageProps> = ({
  authToken,
  onAuthRequired,
  onWebSocketLog,
  onSessionStart,
  onSessionUpdate,
  onSessionEnd,
  testID = 'unified-streaming',
}) => {
```
**Step 3:** Add session ID tracking (around line 96):
```typescript
const currentSessionIdRef = useRef<string | null>(null);
```
**Step 4:** Update omiStreamer initialization (around line 136):
```typescript
const omiStreamer = useAudioStreamer({
  onLog: onWebSocketLog,
  onRelayStatus: (status) => {
    if (currentSessionIdRef.current) {
      onSessionUpdate?.(currentSessionIdRef.current, status);
    }
  },
});
```
**Step 5:** Call `onSessionStart` when streaming begins and `onSessionEnd` when it stops.
You'll need to find where streaming starts/stops (likely in `handleStartStreaming` and `handleStopStreaming` functions). The key logic:
```typescript
// When starting:
const sessionSource: SessionSourceType = selectedSource.type === 'omi' && selectedSource.deviceId
  ? { type: 'omi', deviceId: selectedSource.deviceId, deviceName: selectedSource.deviceName }
  : { type: 'microphone' };
const sessionId = await onSessionStart?.(sessionSource, 'pcm'); // or 'opus' based on your codec
if (sessionId) currentSessionIdRef.current = sessionId;

// When stopping (clean stop):
if (currentSessionIdRef.current) {
  onSessionEnd?.(currentSessionIdRef.current);
  currentSessionIdRef.current = null;
}

// When stopping with error:
if (currentSessionIdRef.current) {
  onSessionEnd?.(currentSessionIdRef.current, errorMessage);
  currentSessionIdRef.current = null;
}
```
Excellent! Everything is wired up. Let me create a comprehensive summary:
## ✅ Complete Implementation Summary
I've successfully implemented **comprehensive session tracking** for your Omi app to help debug conversation stopping issues!
### What Was Built
#### 1. **Per-Type Log Clearing** ✅
- Added `clearLogsByType()` function to `useConnectionLog` hook
- Updated `ConnectionLogViewer` UI with type-specific clear buttons
- When viewing filtered logs (e.g., "WebSocket"), you'll see both "Clear WebSocket" and "Clear All" buttons
#### 2. **WebSocket Lifecycle Logging** ✅
All WebSocket events are now tracked in the connection logs:
- **Connecting**: Initial connection + reconnection attempts with exponential backoff delays
- **Connected**: Successful connection with mode/codec details
- **Disconnected**: Connection closed (with close code and reason)
- **Error**: Server errors, connection errors, exhausted retry attempts
#### 3. **Sessions Tab** ✅ (New!)
A dedicated tab showing detailed session history with:
**Session Metrics:**
- Duration (to spot patterns like "always stops after 90 seconds")
- Data transferred (bytes + chunk count)
- Source (Phone Mic vs specific OMI device)
- Destinations (Chronicle, Mycelia, etc. with connection status)
- Codec used (PCM vs Opus)
- Network type (WiFi vs cellular)
- Error messages for failed sessions
**UI Features:**
- Filter by All/Active/Failed sessions
- Delete individual sessions or clear all history
- Visual indicators for active sessions
- Color-coded destination chips (green = connected, red = failed)
- Timestamp for each session
- Link to Chronicle conversation (when available)
### Session Lifecycle Tracking
Sessions are automatically created and tracked:
★ Insight ─────────────────────────────────────
**Session lifecycle matches WebSocket lifecycle perfectly:**
1. **Session starts** when streaming begins (button pressed)
2. **Relay status updates** captured when backend sends destination info
3. **Session ends** in three scenarios:
   - Clean stop (user manually stops)
   - Error on start (connection fails immediately)
   - Permanent failure (reconnection attempts exhausted)
This gives you **complete visibility** into why conversations stop!
─────────────────────────────────────────────────
### Key Implementation Details
**Files Modified:**
- `types/streamingSession.ts` - Session data model
- `_utils/sessionStorage.ts` - AsyncStorage persistence
- `hooks/useSessionTracking.ts` - Session management hook
- `hooks/useAudioStreamer.ts` - Added relay_status parsing + onRelayStatus callback
- `(tabs)/sessions.tsx` - New Sessions tab UI
- `(tabs)/_layout.tsx` - Added Sessions tab to navigation
- `(tabs)/index.tsx` - Wire up session callbacks
- `components/streaming/UnifiedStreamingPage.tsx` - Session lifecycle integration
**Data Flow:**
```
User starts streaming
  ↓
UnifiedStreamingPage calls onSessionStart()
  ↓
useSessionTracking creates session with client-side ID
  ↓
WebSocket connects, backend sends relay_status
  ↓
useAudioStreamer calls onRelayStatus()
  ↓
useSessionTracking updates session with destination info
  ↓
Connection drops, reconnection fails
  ↓
useEffect detects error + !retrying + !streaming
  ↓
UnifiedStreamingPage calls onSessionEnd(error)
  ↓
Session marked complete with error message
  ↓
Visible in Sessions tab for debugging!
```
### Debugging Your Conversation Issues
With this implementation, you can now:
1. **Open Sessions tab** - See all streaming sessions with duration and status
2. **Filter by Failed** - Focus on sessions that ended with errors
3. **Look for patterns**:
   - Do conversations always stop after X minutes?
   - Do failures correlate with network type (WiFi vs cellular)?
   - Are specific destinations failing more than others?
   - What are the error messages?
4. **Cross-reference with Connection Logs** - Click logs icon to see ...
thestumonkey added a commit that referenced this pull request on Feb 21, 2026
* fixed tailscale-serve commit
* Omi app logs (#142)
* Omi app logs (#143)
* Add debug logging for service deployment
Log the service_name -> docker_service_name mapping to debug
why chronicle-backend is getting labeled incorrectly.
* simplified to just use ushadow network
* added network fixes and chronicle combined audio
* fixed chronicle port issue
* updated openmem
* added pull logic if image not found
* added locked neo4j driver version
* added mobile session logger
* added pixi
* Convert chronicle to git submodule
- Removed full git clone of chronicle
- Added as submodule pointing to Ushadow-io/chronicle
- Pinned to commit c170a02d (current state)
- Configured upstream remote for syncing with SimpleOpenSoftware/chronicle
- Preserved stashed uncommitted changes (can be retrieved later)
* Remove orphaned vibe-kanban submodule entry
* Add automatic sparse checkout configuration via git hooks
- Add .githooks/post-checkout to auto-configure sparse checkout
- Chronicle excludes extras/mycelia/ (prevents circular dependency)
- Mycelia excludes friend/ (prevents circular dependency)
- Add setup script: scripts/setup-repo.sh
- Update .gitmodules to include mycelia submodule
- All paths are relative and work in any clone location
Setup for new clones:
git clone --recursive <repo>
cd <repo>
git config core.hooksPath .githooks
.githooks/post-checkout
* Add Makefile targets for building/pushing Chronicle and Mycelia to GHCR
- Add scripts/build-push-images.sh for multi-arch builds
- Add make chronicle-push and make mycelia-push targets
- Support custom tags: make chronicle-push TAG=v1.0.0
- Build for linux/amd64 and linux/arm64
- Push to ghcr.io/ushadow-io registry
- Add documentation in docs/BUILDING_IMAGES.md
Usage:
make chronicle-push
make mycelia-push
make chronicle-push TAG=v2.0.0
* Add OpenMemory (mem0) as submodule with build support
- Add openmemory submodule from https://github.com/Ushadow-io/mem0
- Add to git hooks for automatic configuration
- Add openmemory-push target to Makefile
- Add openmemory to build-push-images.sh script
- Builds openmemory-server image from openmemory/server/
Usage:
make openmemory-push
make openmemory-push TAG=v1.0.0
Images:
- ghcr.io/ushadow-io/openmemory-server:TAG
* chat and memories
* new convo page
* 9548 simpler deploys (#145)
★ Insight ─────────────────────────────────────
- **Service detection**: UI services identified by "ui" in the name (case-insensitive)
- **Worker grouping**: Workers identified by "worker" in their name, then matched to API services by base name
- **Visual hierarchy**: Workers shown indented under their API service with a visual border
- **Tab structure**: Two sub-tabs (API & Workers, UI Services) with counts displayed
─────────────────────────────────────────────────
## Summary
I've successfully reorganized the instance screen to split services into **API/Workers** and **UI** tabs as requested. Here's what changed:
### Changes Made to `ServicesTab.tsx`:
1. **Added Sub-Tab Navigation**:
- Created two sub-tabs: "API & Workers" and "UI Services"
- Each tab shows a count of services in that category
- Icons distinguish between Server (API) and Monitor (UI) services
2. **Service Categorization**:
- **UI Services**: Filtered to a separate tab - any service with "UI" or "ui" in its title
- **API Services**: Everything else goes to the API & Workers tab
3. **Worker Grouping**:
- Workers (services containing "worker" in their name) are automatically grouped with their corresponding API services
- Workers are displayed indented below their API service with a visual border on the left
- The grouping logic matches workers to APIs by removing "worker" from the name and finding the base service
4. **Visual Improvements**:
- Clean tab navigation at the top of the services section
- Workers visually nested under their API services using indentation and a left border
- Each tab shows contextual empty states if no services are present
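The categorization rules in the list above can be sketched as a pure function. The service shape and the base-name normalization here are assumptions about what `ServicesTab.tsx` does, not its exact code:

```typescript
interface Service { name: string }
interface ApiGroup { api: Service; workers: Service[] }

// Normalize a name to its "base" by stripping the role words used for matching.
const baseName = (name: string) =>
  name.toLowerCase().replace(/\b(worker|api)\b/g, '').replace(/\s+/g, ' ').trim();

function splitServices(services: Service[]): { groups: ApiGroup[]; uiServices: Service[] } {
  // UI services: anything with "ui" in the name, case-insensitive
  const uiServices = services.filter(s => /ui/i.test(s.name));
  const rest = services.filter(s => !/ui/i.test(s.name));
  // Workers are matched to their API service by shared base name
  const workers = rest.filter(s => /worker/i.test(s.name));
  const apis = rest.filter(s => !/worker/i.test(s.name));
  return {
    groups: apis.map(api => ({
      api,
      workers: workers.filter(w => baseName(w.name) === baseName(api.name)),
    })),
    uiServices,
  };
}
```

One caveat of substring matching: a service like "Builder" would land in the UI tab because its name contains "ui", so a word-boundary test may be safer in practice.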
### How It Works:
**API & Workers Tab**:
```
┌─ Chronicle API Service
│
├─ Chronicle Worker (indented, grouped with Chronicle API)
│
├─ Mycelia API Service
│
└─ Mycelia Worker (indented, grouped with Mycelia API)
```
**UI Services Tab**:
```
┌─ Admin UI Service
│
└─ Dashboard UI Service
```
The implementation ensures that even when deploying services separately, users see them logically grouped in the UI for easier management and understanding of the system architecture.
* Excellent! The implementation looks good. Let me create a summary document to explain the changes:
★ Insight ─────────────────────────────────────
The feature flag implementation follows a graceful degradation pattern:
- **Default off**: The new split services view is opt-in via feature flag
- **Zero breaking changes**: When disabled, users see the exact same UI they're used to
- **Clean switch**: A simple boolean check determines which view to render
- **Backend controlled**: The flag is managed server-side, allowing runtime toggling without deployments
─────────────────────────────────────────────────
## Summary
I've successfully added the `split_services` feature flag to control the new service organization view. Here's what was implemented:
### Changes Made:
1. **ServicesTab.tsx** (`ushadow/frontend/src/components/services/ServicesTab.tsx`):
- Added `splitServicesEnabled?: boolean` prop (defaults to `false`)
- Implemented a **legacy view fallback**: When the flag is disabled, the component renders the original single-grid view
- When enabled, it shows the new split view with API/Workers and UI tabs
2. **ServiceConfigsPage.tsx** (`ushadow/frontend/src/pages/ServiceConfigsPage.tsx`):
- Passed the feature flag to ServicesTab: `splitServicesEnabled={isEnabled('split_services')}`
- The flag is read from the existing feature flag context
### How It Works:
```typescript
// When split_services flag is OFF (default):
// ✓ Shows all services in a single grid (original behavior)
// When split_services flag is ON:
// ✓ Shows API & Workers tab with grouped services
// ✓ Shows UI Services tab separately
// ✓ Workers are visually nested under their API services
```
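The runtime switch is just a guarded render choice; a minimal sketch assuming a boolean `isEnabled` lookup (the real component renders JSX rather than returning a label):

```typescript
type ServicesView = 'legacy' | 'split';

// Graceful degradation: an unknown or disabled flag always yields the legacy view.
function pickServicesView(isEnabled: (flag: string) => boolean): ServicesView {
  return isEnabled('split_services') ? 'split' : 'legacy';
}
```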
### Backend Configuration Needed:
To enable this feature, you'll need to add the `split_services` feature flag to your backend feature flags configuration. The flag should be defined with:
```python
{
    "split_services": {
        "enabled": False,  # Set to True to enable
        "description": "Split services into API/Workers and UI tabs with worker grouping",
        "type": "boolean",
    }
}
```
### Benefits of This Approach:
- **Safe rollout**: Start with the flag disabled, test with specific users/environments first
- **Quick rollback**: If issues arise, disable the flag without code changes
- **A/B testing**: Can compare user engagement with both views
- **Gradual migration**: Give teams time to adjust to the new layout
The implementation maintains 100% backward compatibility while allowing you to toggle the new feature on/off at runtime!
* Excellent! The feature flag has been successfully added. Let me create a summary of what was done:
## Summary
I've successfully added the `split_services` feature flag to your existing feature flags configuration system. Here's what was changed:
### Changes Made:
**`config/feature_flags.yaml`**:
```yaml
# Split Services View - Organize services into API/Workers and UI tabs
split_services:
enabled: false
description: "Split services into API & Workers and UI Services tabs with automatic worker grouping"
type: release
```
### Feature Flag Details:
- **Name**: `split_services`
- **Default State**: `disabled` (enabled: false)
- **Type**: `release` (user-facing feature)
- **Description**: Clear explanation of what the feature does
### How It Works:
The feature flag system in your codebase supports **hot-reloading**, which means:
1. **No restart required**: Changes to `feature_flags.yaml` are picked up automatically
2. **Runtime toggling**: You can enable/disable the feature through the API or by editing the YAML file
3. **Gradual rollout**: Enable for specific environments or user groups
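A minimal sketch of that hot-reload behavior (stdlib-only; the naive parser below handles only the flat two-level shape of this file — the real backend presumably uses a full YAML library and its own reload strategy):

```python
def parse_flags(text):
    """Naive parser for the flat two-level flag file shown above.

    Handles `name:` lines followed by indented `key: value` pairs.
    It is NOT a general YAML parser.
    """
    flags, current = {}, None
    for raw in text.splitlines():
        if not raw.strip() or raw.lstrip().startswith("#"):
            continue
        if not raw.startswith(" "):          # top level: a flag name
            current = raw.strip().rstrip(":")
            flags[current] = {}
        elif current is not None:            # indented: key/value under the flag
            key, _, value = raw.strip().partition(":")
            flags[current][key.strip()] = value.strip().strip('"')
    return flags

def load_flags(path, _cache={}):
    """Hot reload: re-read the file on every call, re-parse only on change."""
    with open(path) as f:
        text = f.read()
    if _cache.get("text") != text:
        _cache["text"] = text
        _cache["flags"] = parse_flags(text)
    return _cache["flags"]

def is_enabled(path, name):
    flag = load_flags(path).get(name, {})
    return flag.get("enabled", "false").lower() == "true"
```

Flipping `enabled: false` to `true` in the file is picked up on the next `is_enabled` call, with no restart.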
### Enabling the Feature:
To enable the split services view, simply change:
```yaml
split_services:
enabled: true # Change from false to true
```
The system will automatically pick up the change, and users will see the new split view with:
- **API & Workers** tab (with workers grouped under their APIs)
- **UI Services** tab (for all UI-related services)
When disabled (current state), users see the original single-grid view with all services together.
★ Insight ─────────────────────────────────────
The implementation follows a **feature flag best practice pattern**:
- Frontend code checks `isEnabled('split_services')` at runtime
- Backend serves the flag state via `/api/feature-flags/status`
- YAML configuration provides a single source of truth
- The graceful degradation ensures zero breaking changes when disabled
─────────────────────────────────────────────────
The feature is now ready for testing! You can enable it whenever you're ready to try out the new split services view.
* feat: Add Keycloak OAuth theme matching Ushadow design system
Complete custom Keycloak theme for login and registration pages with:
- Centered layout with gradient "Ushadow" brand text (green→purple)
- Purple/green radial glow background matching frontend design
- Rounded input fields (10px border-radius) with proper dark styling
- Green primary button with glow effect
- Single-column form layout for registration page
- Fixed password field white outline and inline required asterisks
- Semi-transparent card with backdrop blur
- Responsive design with mobile support
Frontend login page updated to match Keycloak OAuth pages:
- Form-based design with email/password fields
- Same dark theme and geometric background pattern
- Blue primary button and green register link
- Consistent styling across authentication flow
Infrastructure:
- Added Keycloak service to docker-compose.infra.yml
- Theme mounted from ushadow/frontend/keycloak-theme/
- Connected to Postgres for session storage
- Auto-imports realm configuration on startup
Theme files:
- ushadow/frontend/keycloak-theme/login/resources/css/login.css
- ushadow/frontend/keycloak-theme/login/theme.properties
- ushadow/frontend/keycloak-theme/login/resources/img/logo.png
- docs/KEYCLOAK_THEMING_GUIDE.md
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* security: Move Keycloak credentials to environment variables
Replace hardcoded Keycloak admin credentials with environment variables:
- KEYCLOAK_ADMIN (defaults to 'admin' for dev)
- KEYCLOAK_ADMIN_PASSWORD (defaults to 'admin' for dev)
- KEYCLOAK_PORT (defaults to 8081)
- KEYCLOAK_MGMT_PORT (defaults to 9000)
Created .env.example template with:
- All required Keycloak configuration
- Security warnings about changing defaults in production
- Clear documentation for each variable
This prevents credentials from being committed to git and allows
different environments to use their own secure credentials.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* Keycloak
* memorycards and dualstream recording
# Conflicts:
# ushadow/frontend/src/components/services/ServicesTab.tsx
* feat: Add Keycloak OAuth implementation
Adds complete Keycloak OAuth2/OIDC authentication:
Frontend:
- KeycloakAuthContext: OAuth flow with token management
- TokenManager: PKCE support, token refresh, logout
- OAuthCallback: Handle OAuth redirect and token exchange
- ServiceTokenManager: Cross-service token generation
Backend:
- keycloak_admin.py: Admin API integration
- keycloak_auth.py: OAuth token validation
- token_bridge.py: Convert Keycloak tokens to service tokens
- keycloak_user_sync.py: Sync Keycloak users to MongoDB
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* F13f auth complete (#149)
* feat: Add Keycloak OAuth theme matching Ushadow design system
Complete custom Keycloak theme for login and registration pages with:
- Centered layout with gradient "Ushadow" brand text (green→purple)
- Purple/green radial glow background matching frontend design
- Rounded input fields (10px border-radius) with proper dark styling
- Green primary button with glow effect
- Single-column form layout for registration page
- Fixed password field white outline and inline required asterisks
- Semi-transparent card with backdrop blur
- Responsive design with mobile support
Frontend login page updated to match Keycloak OAuth pages:
- Form-based design with email/password fields
- Same dark theme and geometric background pattern
- Blue primary button and green register link
- Consistent styling across authentication flow
Infrastructure:
- Added Keycloak service to docker-compose.infra.yml
- Theme mounted from ushadow/frontend/keycloak-theme/
- Connected to Postgres for session storage
- Auto-imports realm configuration on startup
Theme files:
- ushadow/frontend/keycloak-theme/login/resources/css/login.css
- ushadow/frontend/keycloak-theme/login/theme.properties
- ushadow/frontend/keycloak-theme/login/resources/img/logo.png
- docs/KEYCLOAK_THEMING_GUIDE.md
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* security: Move Keycloak credentials to environment variables
Replace hardcoded Keycloak admin credentials with environment variables:
- KEYCLOAK_ADMIN (defaults to 'admin' for dev)
- KEYCLOAK_ADMIN_PASSWORD (defaults to 'admin' for dev)
- KEYCLOAK_PORT (defaults to 8081)
- KEYCLOAK_MGMT_PORT (defaults to 9000)
Created .env.example template with:
- All required Keycloak configuration
- Security warnings about changing defaults in production
- Clear documentation for each variable
This prevents credentials from being committed to git and allows
different environments to use their own secure credentials.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* feat: Add Keycloak OAuth implementation
Adds complete Keycloak OAuth2/OIDC authentication:
Frontend:
- KeycloakAuthContext: OAuth flow with token management
- TokenManager: PKCE support, token refresh, logout
- OAuthCallback: Handle OAuth redirect and token exchange
- ServiceTokenManager: Cross-service token generation
Backend:
- keycloak_admin.py: Admin API integration
- keycloak_auth.py: OAuth token validation
- token_bridge.py: Convert Keycloak tokens to service tokens
- keycloak_user_sync.py: Sync Keycloak users to MongoDB
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
---------
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
* Revert "F13f auth complete (#149)" (#150)
This reverts commit dd21556ec58b739239e2646fe503837e2efb6427.
* F13f auth complete (#151)
* feat: Add Keycloak OAuth theme matching Ushadow design system
Complete custom Keycloak theme for login and registration pages with:
- Centered layout with gradient "Ushadow" brand text (green→purple)
- Purple/green radial glow background matching frontend design
- Rounded input fields (10px border-radius) with proper dark styling
- Green primary button with glow effect
- Single-column form layout for registration page
- Fixed password field white outline and inline required asterisks
- Semi-transparent card with backdrop blur
- Responsive design with mobile support
Frontend login page updated to match Keycloak OAuth pages:
- Form-based design with email/password fields
- Same dark theme and geometric background pattern
- Blue primary button and green register link
- Consistent styling across authentication flow
Infrastructure:
- Added Keycloak service to docker-compose.infra.yml
- Theme mounted from ushadow/frontend/keycloak-theme/
- Connected to Postgres for session storage
- Auto-imports realm configuration on startup
Theme files:
- ushadow/frontend/keycloak-theme/login/resources/css/login.css
- ushadow/frontend/keycloak-theme/login/theme.properties
- ushadow/frontend/keycloak-theme/login/resources/img/logo.png
- docs/KEYCLOAK_THEMING_GUIDE.md
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* security: Move Keycloak credentials to environment variables
Replace hardcoded Keycloak admin credentials with environment variables:
- KEYCLOAK_ADMIN (defaults to 'admin' for dev)
- KEYCLOAK_ADMIN_PASSWORD (defaults to 'admin' for dev)
- KEYCLOAK_PORT (defaults to 8081)
- KEYCLOAK_MGMT_PORT (defaults to 9000)
Created .env.example template with:
- All required Keycloak configuration
- Security warnings about changing defaults in production
- Clear documentation for each variable
This prevents credentials from being committed to git and allows
different environments to use their own secure credentials.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* feat: Add Keycloak OAuth implementation
Adds complete Keycloak OAuth2/OIDC authentication:
Frontend:
- KeycloakAuthContext: OAuth flow with token management
- TokenManager: PKCE support, token refresh, logout
- OAuthCallback: Handle OAuth redirect and token exchange
- ServiceTokenManager: Cross-service token generation
Backend:
- keycloak_admin.py: Admin API integration
- keycloak_auth.py: OAuth token validation
- token_bridge.py: Convert Keycloak tokens to service tokens
- keycloak_user_sync.py: Sync Keycloak users to MongoDB
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
---------
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
* feat: Add automatic Keycloak redirect URI registration
Implements three methods for configuring Keycloak redirect URIs:
1. **Automatic Registration** (Recommended):
- Backend auto-registers redirect URIs on startup
- Detects PORT_OFFSET, TAILSCALE_HOSTNAME, FRONTEND_URL
- Merges with existing URIs (safe for multi-worktree)
- Non-blocking, logs warnings if Keycloak unavailable
2. **Manual Script**:
- scripts/register_keycloak_redirects.py
- Register specific URIs on-demand
- Useful for production deployments
3. **Admin Console**:
- Manual configuration via Keycloak UI
- Documented in KEYCLOAK_URL_CONFIGURATION.md
Files:
- keycloak_startup.py: Auto-registration logic
- register_keycloak_redirects.py: Manual registration script
- main.py: Calls keycloak_startup during lifespan
- keycloak_admin router: Added to API
- KEYCLOAK_URL_CONFIGURATION.md: Complete documentation
This enables multi-worktree development without manual Keycloak config.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* feat: Add Keycloak SSO integration with conversation sharing (#152)
Implements complete Keycloak OAuth 2.0 authentication flow with PKCE for
federated single sign-on (SSO). Supports conversation sharing with external
users while maintaining backward compatibility with legacy email/password auth.
- Add KeycloakAuthContext with OAuth PKCE flow (login, register, logout)
- Implement OAuthCallback component for code-to-token exchange
- Add token storage in sessionStorage (cleared on tab close)
- Implement automatic token refresh (60s before expiry)
- Clear authorization code from URL to prevent replay attacks
- Redesign LoginPage with Keycloak sign-in button
- Add "Create account" registration link (routes to Keycloak registration)
- Implement hybrid logout (detects Keycloak vs legacy auth)
- Update Layout component with unified logout handler
- Update axios interceptor to send Keycloak tokens in Authorization header
- Add fallback to legacy JWT tokens for backward compatibility
- Add keycloak_id field to User model for SSO identity mapping
- Support both legacy (email/password) and Keycloak users in same database
- Replace get_current_user with get_current_user_hybrid
- Accept both legacy JWT and Keycloak OIDC tokens
- Validate Keycloak tokens (issuer, expiration)
- Extract user info from token claims (email, name, sub)
- Implement automatic Keycloak → service token conversion for proxied services
- Sync Keycloak users to MongoDB (just-in-time provisioning)
- Generate Chronicle-compatible JWTs with MongoDB ObjectIds
- Support audiences: ["ushadow", "chronicle"]
- Add token bridging to /api/services/{name}/proxy endpoints
- Automatically convert Keycloak tokens before forwarding to Chronicle
- Maintain backward compatibility with legacy tokens
- Add automatic redirect URI registration on startup
- Implement Keycloak admin API integration (user management, realm config)
- Add keycloak-admin router with user CRUD operations
- Enable Keycloak by default
- Configure internal and external URLs
- Set realm: ushadow
- Configure client IDs: ushadow-backend, ushadow-frontend
- KEYCLOAK_URL: Internal container URL
- KEYCLOAK_PUBLIC_URL: External user-facing URL
- KEYCLOAK_REALM: Realm name
- KEYCLOAK_ADMIN_USER/PASSWORD: Admin credentials
- PKCE (Proof Key for Code Exchange) for OAuth flow
- CSRF protection via state parameter
- Token stored in sessionStorage (auto-cleared on tab close)
- Authorization code single-use enforcement
- Proper SSO logout (terminates Keycloak session)
- Keycloak token validation (issuer, expiration, audience)
Breaking changes: none; maintains full backward compatibility with legacy auth.
Users can continue using email/password login while new users can
register via Keycloak SSO.
Migration path:
1. Existing users: Continue using email/password
2. New users: Register via Keycloak
3. Existing users can link Keycloak account (auto-linked on first SSO login)
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
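The PKCE piece of the flow above is small enough to sketch: per RFC 7636, the client keeps a random `code_verifier` and sends only its S256 `code_challenge` in the authorize request (illustrative Python; the app's actual implementation lives in the TypeScript TokenManager):

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate an RFC 7636 code_verifier and its S256 code_challenge."""
    # 32 random bytes, base64url-encoded without padding -> 43-char verifier
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge
```

The verifier is presented only in the code-to-token exchange, so an intercepted authorization code is useless without it.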
* Add ingress configuration UI to DeployToK8sModal
- Add ingress enable/disable checkbox
- Auto-configure ingress based on cluster settings
- Auto-generate hostname from service name
- Allow hostname customization with validation
- Send ingress spec to backend deployment API
Part of Tailscale MagicDNS + Ingress automation feature.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
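How a hostname might be auto-generated from a service name (a hypothetical helper; the modal's real validation rules aren't shown in this commit):

```python
import re

def hostname_for_service(service_name, domain):
    """Derive a DNS-safe ingress hostname from a service name.

    Lowercase, replace invalid characters with '-', and trim to the
    63-character DNS label limit.
    """
    label = re.sub(r"[^a-z0-9-]+", "-", service_name.lower()).strip("-")
    label = label[:63] or "service"
    return f"{label}.{domain}"
```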
* feat: Add kubernetes cluster ingress configuration
- Add ingress configuration fields to KubernetesCluster model
- Add PATCH endpoint for updating cluster settings
- Auto-configure ingress hostnames based on cluster settings
- Add jwt-decode dependency for token handling
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* fix: Resolve merge conflict in App.tsx - keep KeycloakAuthProvider
* fix: Resolve merge conflicts in ConversationDetailPage - add share functionality
* feat: Add Keycloak user compatibility across all routers
- Create auth_helpers utility for dual Keycloak/legacy support
- Update all routers to handle Keycloak user dict format
- Auto-detect Tailscale hostname using TailscaleManager
- Fix AttributeError when accessing user.id/user.email
Changes:
- New: src/utils/auth_helpers.py (get_user_id, get_user_email, get_user_name)
- Updated: tailscale.py, chat.py, memories.py, share.py, unodes.py
- Enhanced: keycloak_startup.py to auto-detect Tailscale URLs
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* fix: Resolve merge conflicts in keycloak_user_sync - use display_name
* fix: Handle Keycloak user dict in /users/me endpoint
* feat: Add token bridging to audio relay WebSocket
- Convert Keycloak tokens to service tokens before forwarding
- Enables Chronicle to accept authentication via audio relay
- Fixes ObjectId parse error for Keycloak UUIDs
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* fix: Add codec parameter to audio relay destinations
- Automatically add ?codec=pcm or ?codec=opus based on audio source
- mic (device microphone) → pcm
- omi (hardware device) → opus
- Fixes division by zero error in Chronicle audio processing
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
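The source-to-codec mapping this commit describes is essentially a lookup plus a query-string append; a sketch (the destination URL shape is an assumption):

```python
CODEC_BY_SOURCE = {"mic": "pcm", "omi": "opus"}  # device microphone -> PCM, Omi hardware -> Opus

def with_codec(destination_url, source):
    """Append the codec query parameter Chronicle expects for this audio source."""
    codec = CODEC_BY_SOURCE.get(source)
    if codec is None:
        raise ValueError(f"unknown audio source: {source!r}")
    separator = "&" if "?" in destination_url else "?"
    return f"{destination_url}{separator}codec={codec}"
```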
* fixed audio relay codec
* upgraded mobile
* Add Chronicle-powered dashboard with recent activities
Backend:
- Created dashboard models (ActivityEvent, DashboardStats, DashboardData)
- Implemented DashboardService to fetch conversations & memories from Chronicle
- Added /api/dashboard endpoint with conversation and memory limits
- Registered dashboard router in main.py
Frontend:
- Created dashboardApi client with TypeScript interfaces
- Implemented useDashboardData hook with 30s auto-refresh
- Updated Dashboard.tsx with real-time data:
* Conversation and memory counts (clickable stats)
* Combined activity feed showing recent conversations & memories
* Timestamp formatting ("2m ago", "Yesterday", etc.)
* Loading, error, and empty states
* "Start Chat" button navigates to /chat
* "View Conversations" button navigates to /conversations
Dashboard now shows actual Chronicle data and auto-refreshes every 30 seconds.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
# Conflicts:
# ushadow/backend/main.py
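The "2m ago" / "Yesterday" formatting can be sketched as a threshold ladder (the exact cutoffs here are assumptions; the real hook is TypeScript and may bucket differently):

```python
from datetime import datetime, timedelta

def format_relative(ts, now=None):
    """Rough relative-time formatter in the dashboard's "2m ago" style."""
    now = now or datetime.now()
    delta = now - ts
    if delta < timedelta(minutes=1):
        return "Just now"
    if delta < timedelta(hours=1):
        return f"{delta.seconds // 60}m ago"
    if delta < timedelta(days=1):
        # delta.days == 0 here, so .seconds holds the full elapsed time
        return f"{delta.seconds // 3600}h ago"
    if delta < timedelta(days=2):
        return "Yesterday"
    return ts.strftime("%b %d")
```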
* added postgres as default
* added kc realm
* made mycelia default service
* added method index for Claude
* feat: Add env sync tool to sync .env with .env.example (#154)
- Fix merge conflict in .env.example
- Add scripts/sync-env.py to detect and apply missing variables
- Add make env-sync and env-sync-apply targets
Co-authored-by: Cursor <cursoragent@cursor.com>
Co-authored-by: Stuart Alexander <thestumonkey@gmail.com>
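The detection half of the sync tool boils down to a key diff between the two files (a sketch; the real `scripts/sync-env.py` may also preserve comments and ordering when applying):

```python
def parse_env(text):
    """Parse KEY=VALUE lines, skipping blanks and comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def missing_vars(env_text, example_text):
    """Variables present in .env.example but absent from .env."""
    env = parse_env(env_text)
    example = parse_env(example_text)
    return {k: v for k, v in example.items() if k not in env}
```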
* added method index for Claude
* fixed bad docker mount and added kc sub
* fixed kc user details
* extended token lifespan until refresh complete
* used standard kc lib
* fixed keycloak auth
* fixed default KC
* updated service ui
* added share
* updated anon auth
* changed unode to machine name
* added tags and build services
* deploy tweaks
* tags
* updated ush client
* refactored url proxy
* auth tweaks
* environments
* updated mobile to use KC
* updated docker manager and script
* externals support
* added funnel
* fixed kc token
* added backend vars for mycelai
* bits n pieces
* added images
* fixed keycloak login
* fixed wizard startup stuff
* amended getting started, added util scripts
* Disable New Architecture for Android build compatibility
Several dependencies (friend-lite-react-native, react-native-audio-record,
react-native-network-info) are not compatible with the New Architecture,
causing Android builds to fail. Disabling to unblock Play Store submission.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* Fix Android build: Enable Jetifier and configure build properties
Changes:
- Enable android.enableJetifier=true via expo-build-properties to convert legacy support libraries to AndroidX
- Explicitly disable newArchEnabled for Android
- Disable Proguard in release builds to avoid obfuscation issues
- Remove unused react-native-network-info dependency
This fixes duplicate class errors from mixing androidx and android.support libraries.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* Add custom Expo config plugin to fix AndroidManifest merge conflict
Creates a config plugin that adds tools:replace="android:appComponentFactory"
to the AndroidManifest to resolve the conflict between androidx.core and
android.support.v4 libraries.
This addresses the manifest merge error:
- androidx.core.app.CoreComponentFactory (androidx.core:core:1.16.0)
- android.support.v4.app.CoreComponentFactory (com.android.support:support-compat:28.0.0)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* Fix AndroidManifest plugin to specify appComponentFactory value
The manifest merger requires both tools:replace AND the actual value
to be specified. Now explicitly sets:
android:appComponentFactory="androidx.core.app.CoreComponentFactory"
Also adds gradle exclusion plugin to force remove android.support libraries.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* Update eas.json with App Store Connect App ID
Added actual ascAppId (6757369623) for iOS app submission.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* Add BGTaskSchedulerPermittedIdentifiers for iOS background processing
Apple requires this Info.plist key when using UIBackgroundModes: processing.
Added task identifiers for background refresh and processing tasks.
Fixes App Store Connect validation error.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* Add ITSAppUsesNonExemptEncryption flag for iOS
Declares that the app doesn't use non-exempt encryption, avoiding
manual compliance questionnaire in App Store Connect for every build.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* moved keycloak url
* using hostip for kc
* protect against login loop
* Bump iOS build number to 4
Incrementing to avoid build number conflict in App Store Connect.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* move to using standard keycloak admin
* updated kc and mongo urls
* more kc madness
* General launcher (#156)
* chore(launcher): release v0.5.1
* added tmux
* Tmux windows and add from branch
* added generic installer
* split pages into sections
* chore(launcher): release v0.6.0
* added ui tweaks and generic installer
* chore(launcher): release v0.6.1
* moved us to preconditions
* chore(launcher): release v0.6.2
* changed unicode for win
* chore(launcher): release v0.6.3
* refactored to have better multi-plat support
* chore(launcher): release v0.7.0
* chore(launcher): release v0.7.1
* chore(launcher): release v0.7.2
* fix(launcher): remove obsolete Windows platform methods
Remove methods that are no longer part of the PlatformOps trait:
- install_docker, install_git, install_tailscale, install_homebrew
- create_shell_command, python_executable
These are now handled by the generic YAML-driven installer via
the install_package method.
Fixes Windows build compilation errors.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* chore(launcher): release v0.7.3
* fix(launcher): remove obsolete Linux platform methods
Remove methods that are no longer part of the PlatformOps trait:
- install_docker, install_git, install_tailscale, install_homebrew
- create_shell_command, python_executable
These are now handled by the generic YAML-driven installer via
the install_package method.
Fixes Linux build compilation errors.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* chore(launcher): release v0.7.4
* fix(launcher): bundle prerequisites.yaml with application
- Add prerequisites.yaml to bundle resources in tauri.conf.json
- Update path resolution to check next to executable (Windows/Linux)
- Update path resolution to check resources subdirectory (Windows)
- Fixes "could not find prerequisites.yaml" error on Windows builds
The file will now be properly bundled with the application in
production builds and found at runtime.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* feat(launcher): bundle startup resources for version stability
Each launcher version now bundles its own copy of setup scripts and
compose files, ensuring it remains functional even when the main
repository code evolves.
**Architecture:**
- Bundle script (bundle-resources.sh) copies setup/ and compose/ at build time
- Bundled resources are gitignored and regenerated on each build
- Runtime code prefers bundled versions, falls back to repo if needed
- Version stamp tracks what code was bundled with each build
**Changes:**
- Add bundle-resources.sh to copy setup/ and compose/ to src-tauri/bundled/
- Update package.json to run bundling before all tauri commands
- Add bundled/**/* to Tauri resources in tauri.conf.json
- Create bundled.rs module to locate bundled resources at runtime
- Update docker.rs to use bundled compose files and setup scripts
- Add src-tauri/bundled/ to .gitignore
**Benefits:**
- Launcher version X works with repo code from version X (version-locked)
- Breaking changes in repo don't break old launchers
- Each launcher is self-contained and testable
- Setup code has single source of truth in main repo
Fixes launcher stability when repo code changes.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* chore(launcher): release v0.7.5
* fix(launcher): properly quote paths with spaces for shell commands
Windows paths like "C:/Program Files/Ushadow" were breaking shell
commands because they weren't quoted, causing "failed to spawn c:/program"
errors.
**Root Cause:**
Shell commands interpret spaces as argument separators, so unquoted
paths get split. Example:
uv run setup/run.py → Works
uv run C:/Program Files/setup/run.py → Breaks (tries to run "C:/Program")
**Solution:**
- Add quote_path() and quote_path_buf() utilities in utils.rs
- Use single quotes compatible with PowerShell and bash/zsh
- Escape internal single quotes by doubling them (' → '')
- Quote all paths used in shell commands:
* Bundled resource paths (setup/run.py, docker-compose.infra.yml)
* Working directories in build_env_command()
**Updated Files:**
- utils.rs: New quote_path() and quote_path_buf() functions
- docker.rs: Quote bundled setup script and compose file paths
- platform/*.rs: Quote working_dir in build_env_command()
**Testing:**
✓ Compiles on macOS
✓ Handles paths with spaces: 'C:/Program Files/App'
✓ Escapes single quotes: 'It''s App' (doubled apostrophe)
Fixes Windows installation path errors.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
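The quoting rule this commit describes, sketched in Python for illustration (the launcher's implementation is Rust):

```python
def quote_path(path):
    """Wrap a path in single quotes, doubling any internal single quotes.

    Mirrors the scheme described above for paths like 'C:/Program Files/App'.
    """
    return "'" + str(path).replace("'", "''") + "'"
```

Note that the doubled-quote escape is PowerShell semantics; strict POSIX shells conventionally escape an embedded quote as `'\''` instead, so a fully portable implementation may need per-shell variants.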
* chore(launcher): release v0.7.6
* fix(launcher): copy bundled resources to working dir before use
Windows was failing with "PermissionError: Access is denied" when
trying to create config directories because bundled scripts were
running from C:\Program Files\ which requires admin privileges.
**Root Cause:**
Python's run.py uses `Path(__file__).parent` to determine paths,
so when run from C:\Program Files\Ushadow\bundled\setup\run.py,
it tries to create config/ in C:\Program Files\ → Permission denied
**Solution:**
1. Copy bundled setup/ directory to working directory before running
2. Copy bundled compose files to working directory before using
3. Run scripts from writable location (user's repos directory)
**Implementation:**
- Add copy_dir_recursive() to recursively copy setup directory
- Copy setup before running: C:\Program Files\...\bundled\setup → C:\Users\...\repos\ushadow\setup
- Copy compose before using: bundled\compose\*.yml → repos\...\compose\*.yml
- Skip __pycache__ and .pyc files during copy
- Gracefully handle copy failures (continue with partial copy)
**Benefits:**
✓ No admin privileges required
✓ Scripts can create config, logs, etc. in user directory
✓ Still version-locked (copies bundled version)
✓ Works on all platforms (no-op when using repo version directly)
Fixes Windows Permission denied errors.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
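The copy step maps directly onto the stdlib in Python terms (a sketch of the behavior described; the launcher itself implements `copy_dir_recursive` in Rust):

```python
import shutil

def copy_setup_tree(src, dst):
    """Copy a bundled setup/ tree to a writable location, skipping caches.

    __pycache__ directories and *.pyc files are excluded, matching the
    commit's description.
    """
    shutil.copytree(
        src, dst,
        ignore=shutil.ignore_patterns("__pycache__", "*.pyc"),
        dirs_exist_ok=True,  # tolerate a partial copy from an earlier run
    )
```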
* chore(launcher): release v0.7.7
* feat(launcher): set default ushadow env color to purple
- Add special case in get_colors_for_name() for 'ushadow' environment
- Returns purple color instead of hashed color
- Improves visual distinction for the default environment
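A sketch of the name-to-color scheme with the new override (the palette and hex values here are invented for illustration):

```python
import hashlib

NAMED_COLORS = {"ushadow": "#a855f7"}  # purple override for the default environment
PALETTE = ["#22c55e", "#3b82f6", "#eab308", "#ef4444", "#14b8a6", "#f97316"]

def color_for_name(name):
    """Named environments get a fixed color; others hash into the palette.

    Hashing keeps the choice stable across launches for the same name.
    """
    if name in NAMED_COLORS:
        return NAMED_COLORS[name]
    digest = hashlib.md5(name.encode()).digest()
    return PALETTE[digest[0] % len(PALETTE)]
```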
* feat(launcher): rename tabs and improve navigation flow
- Rename 'Launch' tab to 'Install' (now landing page)
- Rename 'Install' tab to 'Infra'
- Update AppMode type: 'launch' -> 'install', 'install' -> 'infra'
- Auto-navigate to infra page if prereqs/infra need setup
- Auto-navigate to environments page when setup complete
- Improves UX by showing relevant page during installation flow
* feat(launcher): add loading animation overlay when starting containers
- Add full-screen loading overlay in DetailView when starting stopped env
- Add loading overlay in BrowserView when containers are loading
- Shows animated spinner with 'Starting containers...' message
- Improves UX by providing clear visual feedback during startup
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* fix(launcher): use configured project_root for discovery instead of hardcoded path
- Update discover_environments to get project_root from app state
- Pass project_root to discover_environments_with_config
- Fixes issue where discovery always looked in ~/repos/ushadow
even when user configured ushadow-dev or other paths
- Ensures environments are discovered from the correct repository
This fixes the path depth issue where the system was looking for
files in the wrong repo location (e.g., ushadow instead of ushadow-dev)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* fix(setup): make PROJECT_ROOT calculation more robust
- Validate PROJECT_ROOT by checking for docker-compose.yml
- Fall back to current working directory if not found at calculated path
- Handles cases where setup script is copied to unexpected locations
- Prevents errors looking for files in incorrect nested paths
This works together with the discovery fix to ensure the setup
script uses the correct repository root even when copied to the
working directory during bundled resource deployment.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
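The fallback logic reads roughly like this in Python terms (`docker-compose.yml` as the marker file comes from the commit message; the setup script's actual checks may differ):

```python
from pathlib import Path

def resolve_project_root(candidate):
    """Validate a computed PROJECT_ROOT, falling back to the working directory.

    A directory counts as the project root only if docker-compose.yml
    is present in it.
    """
    candidate = Path(candidate)
    if (candidate / "docker-compose.yml").exists():
        return candidate
    cwd = Path.cwd()
    if (cwd / "docker-compose.yml").exists():
        return cwd
    return candidate  # keep the computed path so callers fail with a clear error
```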
* chore(launcher): release v0.7.8
* fix(launcher): resolve compiler errors and warnings
- Fix Send trait error in discover_environments by properly scoping MutexGuard
- Remove unused import std::process::Command from docker.rs
- Remove unused import shell_command from windows.rs
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* chore(launcher): release v0.7.9
* fix(launcher): improve ushadow environment UX and color consistency
- Add 'ushadow' to NAMED_COLORS with purple color for consistent theming
- Fix duplicate environment entries by checking discovery before adding to creatingEnvs
- Fix tab navigation: 'install' tab shows launch page, 'infra' shows prereqs/infrastructure
- Add branch logging in quick launch to acknowledge which branch is being used
- Pass activeBranch to handleClone during quick launch
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* fix(launcher): prevent PowerShell windows from flashing on Windows
- Add -WindowStyle Hidden flag to PowerShell commands
- Add -NonInteractive flag to prevent waiting for user input
- Maintains CREATE_NO_WINDOW flag for double protection
- Eliminates command window flashing during operations
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
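The "double protection" above combines PowerShell-level flags with the Win32 `CREATE_NO_WINDOW` process creation flag. A sketch of how such a command builder might look (the function name is illustrative, not the launcher's real API):

```rust
use std::process::Command;

// Build a PowerShell invocation that never shows a console window:
// -WindowStyle Hidden and -NonInteractive at the PowerShell level,
// plus CREATE_NO_WINDOW at the Win32 level (Windows builds only).
fn hidden_powershell(script: &str) -> Command {
    let mut cmd = Command::new("powershell");
    cmd.args(["-WindowStyle", "Hidden", "-NonInteractive", "-Command", script]);
    #[cfg(windows)]
    {
        use std::os::windows::process::CommandExt;
        const CREATE_NO_WINDOW: u32 = 0x0800_0000;
        cmd.creation_flags(CREATE_NO_WINDOW);
    }
    cmd
}
```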
* chore(launcher): release v0.7.10
* fix(launcher): improve Windows UX and fix branch detection
1. Fix Windows command popups:
- Replace all Command::new("git") with silent_command("git") in worktree.rs
- Use silent_command for osascript, open, and xdg-open commands
- Ensures no console windows flash when running git/system commands
2. Improve activity log branch visibility:
- Show branch name when cloning: "Cloning Ushadow on dev branch..."
- Show branch when pulling existing repo: "Repository found (on dev branch)..."
- Add "✓ Using dev branch" log after successful pull
- Makes it clear which branch (main/dev) is being used
3. Fix environment base_branch detection:
- Non-worktree environments now check actual git branch via git CLI
- Use determine_base_branch() for accurate main/dev detection
- Fallback to path-based detection if git command fails
- Fixes issue where dev branch showed as "main" in environment cards
4. Restore creating containers loading card:
- Always add environment to creatingEnvs when starting (not just new envs)
- Provides immediate visual feedback during container startup
- Fixes missing loading animation when starting stopped containers
- Card is removed after 15s when containers are healthy
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* chore(launcher): release v0.7.11
* fix(launcher): improve environment creation UX
1. Show folder path in activity log:
- When starting an environment, log the folder path with "Creating in: /path"
- Gives users visibility into where the environment is being created
- Helps distinguish between different environment locations
2. Merge creating cards with discovered environments:
- When discovery finds an environment matching a "creating" card, remove the creating card
- Prevents duplicate cards (one "creating" + one "discovered")
- The discovered environment card takes precedence
- Provides cleaner, less confusing UI during environment startup
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* chore(launcher): release v0.7.12
* refactor(launcher): switch to worktree-only architecture
BREAKING CHANGE: Fundamental architecture shift to always use worktrees
## Overview
Changed from folder-based cloning to worktree-based architecture. This enables
running main and dev environments simultaneously and provides a cleaner workflow.
## Key Changes
1. **Always clone main branch to project root**
- Project root now ALWAYS contains main branch
- Never clones dev branch directly
2. **Main button → starts root environment**
- Starts "ushadow" environment from the main repo
- This is the root environment and cannot be deleted
3. **Dev button → creates/starts dev worktree**
- Checks if "ushadow-dev" worktree exists from dev branch
- Creates it if not found
- Starts "ushadow-dev" environment
- Can run simultaneously with main
4. **Simplified "New Environment" dialog**
- Now just asks for: name + base branch (main/dev)
- Removed complex branch selector
- Always creates worktrees (never standalone repos)
- Cleaner UX with main/dev toggle
5. **Protect root environment from deletion**
- Prevents deleting "ushadow" non-worktree environment
- Shows helpful message explaining it's the root repo
- Other worktrees can still be deleted normally
## Benefits
- ✅ Run main and dev simultaneously
- ✅ Simpler mental model (root = main, worktrees = everything else)
- ✅ No more folder path confusion
- ✅ Consistent project structure
- ✅ Prevents accidental deletion of main repo
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
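Under this model, creating the dev environment reduces to a `git worktree add` branched from the remote base. A sketch of the argument construction, under the assumption that the worktree lands beside the root checkout and the branch is named after the base (the real `create_worktree()` may differ):

```rust
// Illustrative: root checkout is always main; every other environment
// is a worktree branched from origin/<base>, e.g.
// `git worktree add ../ushadow-dev -b dev origin/dev`.
fn worktree_add_args(env_name: &str, base: &str) -> Vec<String> {
    vec![
        "worktree".into(),
        "add".into(),
        format!("../{env_name}"),
        "-b".into(),
        base.to_string(),
        format!("origin/{base}"),
    ]
}
```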
* chore(launcher): release v0.7.13
* fix(launcher): improve UX for environment creation and infra panels
1. Add branch name input to new environment dialog:
- Users can now specify a custom branch name (e.g., feature/auth)
- Branch name is optional - defaults to base branch (main/dev)
- Base branch selector now clearly labeled as fallback option
- More flexible workflow for different branching strategies
2. Expand all infra page drawers by default:
- Prerequisites panel now starts expanded (was collapsed when ready)
- Infrastructure panel now starts expanded (was collapsed when running)
- Users can see status at a glance without clicking to expand
3. Show starting environments in running section:
- Environments being created/started now appear in "running" tab
- Uses creatingEnvs and loadingEnv state to determine placement
- Prevents confusing "detected" → "running" tab switching
- Provides consistent UX during environment startup
- Only shows in "detected" if environment stops/fails
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* chore(launcher): release v0.7.14
* Remove FoldersPanel from install page
With worktree-only architecture, folder paths are automatically
managed. The FoldersPanel showing project root and worktrees dir
is no longer needed since users don't manually configure these paths.
- Removed FoldersPanel component from install page render
- Removed FoldersPanel import from App.tsx
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* Fix environment panel reload loop
Moved auto-select logic from render body into useEffect to prevent
render loop. The previous code was calling setSelectedEnv during
render, which triggered another render, causing an infinite loop
especially during the startup polling period.
Fixed by:
- Moving auto-select into useEffect
- Using stable dependency (env names string) instead of array reference
- Only re-running when selection state or env list actually changes
ushadow/launcher/src/components/EnvironmentsPanel.tsx:78-85
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* Fix environment discovery and port detection
Root cause analysis:
- Service containers (mycelia, chronicle, mem0) were being detected as
environment containers, causing wrong port mappings
- The orange env was picking up the mycelia-backend port (5173) instead
of its own backend port (8360)
- This caused the frontend to connect to the wrong backend → reload loop
Fixed by:
1. **Simplified container filtering**: Only containers with "ushadow-"
prefix are environment containers. Service containers don't have this
prefix (using ENV_NAME pattern from other branch)
2. **Read ports from .env files**: Use BACKEND_PORT and WEBUI_PORT from
each worktree's .env file as source of truth, with Docker detection
as fallback
3. **Setup script working directory**: Run setup from correct working_dir
so it finds the right PROJECT_ROOT for docker compose
4. **Remove auto-select loop**: Removed auto-select logic that caused
infinite render loops during environment startup polling
Changes:
- ushadow/launcher/src-tauri/src/commands/discovery.rs:17-46,249-253,309-338
- ushadow/launcher/src-tauri/src/commands/docker.rs:490
- ushadow/launcher/src/components/EnvironmentsPanel.tsx:75-77
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
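Fixes 1 and 2 above can be sketched as two small helpers: read `BACKEND_PORT`/`WEBUI_PORT` out of a worktree's `.env` content, and accept only containers carrying the `ushadow-` prefix. The key names come from the commit; the function names are assumptions.

```rust
// Parse a KEY=VALUE port out of .env file content; the .env file is
// the source of truth, with Docker detection as a fallback elsewhere.
fn env_port(env_file: &str, key: &str) -> Option<u16> {
    env_file.lines().find_map(|line| {
        let (k, v) = line.split_once('=')?;
        if k.trim() == key { v.trim().parse().ok() } else { None }
    })
}

// Only containers with the "ushadow-" prefix are environment
// containers; service containers (mycelia, chronicle, mem0) are not.
fn is_environment_container(name: &str) -> bool {
    name.starts_with("ushadow-")
}
```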
* Simplify launcher architecture and fix worktree creation
Major changes:
- Remove dev branch dual-checkout infrastructure (now single main checkout only)
- Fix worktree creation to branch from origin/dev or origin/main
- Implement envname/branchname-suffix branch naming convention
- Add iframe permissions for microphone, camera, clipboard
- Add cache clearing functionality to dev tools
Backend changes:
- Simplify determine_base_branch() to parse branch suffix (-dev/-main)
- Update create_worktree() to branch from origin/dev or origin/main
- Fix container filtering to use exact 3-part name matching
Frontend changes:
- Remove BranchToggle component and all branch-switching logic
- Remove branch-specific state (activeBranch, mainBranchPath, devBranchPath)
- Simplify ProjectSetupDialog (single path, no branch context)
- Update NewEnvironmentDialog with correct branch naming (envname/branch-base)
- Add Clear Cache button to DevToolsPanel
- Add iframe allow attribute for media device permissions
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
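The simplified `determine_base_branch()` and the exact 3-part container matching can be sketched as follows. The suffix fallback for names without `-dev`/`-main` and the exact 3-part layout (`ushadow-<env>-<service>`) are assumptions for illustration:

```rust
// Base branch is read straight off the -dev/-main suffix; anything
// else is assumed to be main here.
fn determine_base_branch(name: &str) -> &'static str {
    if name.ends_with("-dev") { "dev" } else { "main" }
}

// Exact 3-part container name matching, so "ushadow-orange-backend"
// matches only env "orange" + service "backend", never a superstring.
fn container_matches(container: &str, env: &str, service: &str) -> bool {
    container == format!("ushadow-{env}-{service}")
}
```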
* chore(launcher): release v0.7.15
* Now I need your input on the validation logic. Let me prepare the file for you to add the validation rules:
## Config Validation - Your Input Needed
I've created the configuration system with basic validation, but the **validation rules** need your design decisions. This is where your judgment shapes how strict vs. permissive the launcher will be.
**Context**: The `validate()` method at ushadow/launcher/src-tauri/src/config.rs:180 currently has TODO comments asking for guidance on validation rules.
**Why this matters**:
- **Too strict** → Valid configs get rejected, frustrating users
- **Too lenient** → Bad configs cause cryptic runtime errors later
- **Just right** → Catches 95% of mistakes with clear error messages
**Trade-offs to consider:**
1. **Service names with special characters**
- Allow spaces/dashes or enforce `[a-z0-9-_]` only?
- Docker container names have restrictions, but should we enforce stricter patterns?
2. **Port range validation**
- Should we prevent ports < 1024 (requires root on Unix)?
- Should we check that `base_port + offset.max` doesn't exceed 65535?
- Should we warn if ports conflict with common services (MySQL 3306, Postgres 5432)?
3. **File path validation**
- Should we check that `compose_file` exists at validation time?
- Or allow non-existent paths (might be created later)?
- Should we block path traversal attempts (`../../../etc/passwd`)?
4. **Command injection prevention**
- Should we validate that setup commands don't contain dangerous patterns?
- Or trust users since this is their own config file?
5. **Port calculation circular dependencies**
- Currently checks for self-reference, but what about A→B→A loops?
- Should we implement deep cycle detection?
**In config.rs:180-227**, please implement the validation rules you think strike the best balance. Here's the function signature:
```rust
fn validate(&self) -> Result<(), String> {
// Add your validation logic here
// Return Err(message) for validation failures
// Return Ok(()) if config is valid
}
```
Consider adding validations for the aspects above that you think are most important. The existing code already validates:
- ✅ `project.name` is non-empty
- ✅ At least one service is defined
- ✅ Port calculations don't self-reference
- ✅ Port calculations reference existing services
Would you like to add the additional validation rules, or should I implement a reasonable default set based on common mistakes?
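As one possible "reasonable default set", here is a sketch covering three of the trade-offs above: a strict `[a-z0-9-_]` service-name charset, a port upper bound, and deep cycle detection over port references (catching A→B→A, not just self-reference). The structs are simplified stand-ins for the real types in config.rs, not the actual implementation:

```rust
use std::collections::HashMap;

// Simplified stand-in: a service may derive its port from another
// service's port via `port_from`.
struct Service { port_from: Option<String> }

fn validate(services: &HashMap<String, Service>, base_port: u32) -> Result<(), String> {
    // Enforce Docker-safe service names: [a-z0-9-_] only.
    for name in services.keys() {
        let ok = name.chars().all(|c| {
            c.is_ascii_lowercase() || c.is_ascii_digit() || c == '-' || c == '_'
        });
        if !ok {
            return Err(format!("invalid service name: {name}"));
        }
    }
    if base_port > 65535 {
        return Err(format!("base_port {base_port} exceeds 65535"));
    }
    // Deep cycle detection: walk the port_from chain from each
    // service and fail if any node repeats.
    for start in services.keys() {
        let mut seen = vec![start.as_str()];
        let mut cur = start.as_str();
        while let Some(next) = services.get(cur).and_then(|s| s.port_from.as_deref()) {
            if seen.contains(&next) {
                return Err(format!("port reference cycle involving {next}"));
            }
            seen.push(next);
            cur = next;
        }
    }
    Ok(())
}
```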
* ## Phase 1 Complete! 🎉
I've successfully implemented the **Core Configuration Module** for generalizing the ushadow launcher. Here's what was built:
### What You Now Have
**1. Configuration Infrastructure** (`config.rs` - 377 lines)
- Complete YAML-based configuration system with 10+ data structures
- Simple validation (port ranges, service references, required fields)
- Helper methods for container naming and variable expansion
- Unit tests covering core functionality
**2. Tauri Integration** (4 new commands)
- `load_project_config()` - Load and validate config
- `get_current_config()` - Access loaded config
- `check_launcher_config_exists()` - Check for config file
- `validate_config_file()` - Validate without loading
**3. Sample Configuration** (`.launcher-config.yaml`)
- Complete 159-line config file for ushadow in the repository root
- Defines all services, ports, infrastructure, and setup commands
- Ready to use as a template for other projects
**4. Zero Breaking Changes**
- All existing launcher functionalit…
added better logging to mobile app