Conversation
WalkthroughIntroduces a centralized LLMClient with streaming and thinking-token parsing, a ThinkingParsers framework, UI for real-time "thinking" tokens, migration of AI calls to LLMClient across services/views, added AppServices container, minor formatting and metadata updates, and version bump to 1.5.1-beta.3. Changes
Sequence Diagram(s)sequenceDiagram
participant UI as UI (ContentView / Command/Rewrite)
participant Settings as SettingsStore
participant LLM as LLMClient
participant Parser as ThinkingParser
participant API as LLM Endpoint
UI->>Settings: query isReasoningModel(model)
UI->>LLM: call(config: messages, model, streaming, extraParameters)
LLM->>API: send request (HTTP/SSE)
loop stream chunks
API-->>LLM: chunk
LLM->>Parser: processChunk(chunk)
Parser-->>LLM: (thinkingChunk, contentChunk)
LLM->>UI: onThinkingChunk / onContentChunk
UI->>UI: update streamingThinkingText / streamingText
end
API-->>LLM: stream end
LLM->>Parser: finalize()
Parser-->>LLM: (finalThinking, finalContent)
LLM->>UI: return Response(thinking, content, toolCalls)
UI->>Store: persist message (with thinking)
Estimated code review effort🎯 4 (Complex) | ⏱️ ~60 minutes Areas requiring extra attention:
Possibly related PRs
Poem
Pre-merge checks and finishing touches❌ Failed checks (2 warnings, 1 inconclusive)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
There was a problem hiding this comment.
Actionable comments posted: 16
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
Sources/Fluid/Services/TypingService.swift (1)
264-264: Remove debugging comment before release.Line 264 contains a debug-era comment:
"Why is it working now? And why is it not working now?"This should be removed or replaced with a proper TODO comment if there's a documented concern.Sources/Fluid/TalkingAnimations.swift (1)
25-30: Retain cycle prevents deallocation.The
Timer.scheduledTimerclosure capturesselfstrongly whileselfholds a strong reference to the timer. This creates a retain cycle, sodeinitwill never execute and the timer will run indefinitely.🔎 Apply this diff to break the retain cycle:
private func startTracking() { // Reduced frequency for app tracking - 2 FPS is sufficient for app name updates - self.timer = Timer.scheduledTimer(withTimeInterval: 2.0, repeats: true) { _ in - self.updateActiveApp() + self.timer = Timer.scheduledTimer(withTimeInterval: 2.0, repeats: true) { [weak self] _ in + self?.updateActiveApp() } }Sources/Fluid/UI/AISettingsView.swift (1)
1362-1385: Duplicate Cancel buttons in addProviderSection
addProviderSectioncurrently renders two “Cancel” buttons in the same HStack (one disabled based on input, one unconditional), which is confusing UX and not intended.You can safely keep a single enabled Cancel:
- HStack(spacing: 8) { - Button("Save Provider") { self.saveNewProvider() } - .buttonStyle(GlassButtonStyle()) - .disabled(self.newProviderName.trimmingCharacters(in: .whitespacesAndNewlines).isEmpty || self.newProviderBaseURL.trimmingCharacters(in: .whitespacesAndNewlines).isEmpty) - Button("Cancel") { self.showingSaveProvider = false; self.newProviderName = ""; self.newProviderBaseURL = ""; self.newProviderApiKey = ""; self.newProviderModels = "" } - .buttonStyle(GlassButtonStyle()) - .disabled(self.newProviderName.trimmingCharacters(in: .whitespacesAndNewlines).isEmpty || self - .newProviderBaseURL.trimmingCharacters(in: .whitespacesAndNewlines).isEmpty) - Button("Cancel") { - self.showingSaveProvider = false; self.newProviderName = ""; self.newProviderBaseURL = ""; self - .newProviderApiKey = ""; self.newProviderModels = "" - } - .buttonStyle(GlassButtonStyle()) - } + HStack(spacing: 8) { + Button("Save Provider") { self.saveNewProvider() } + .buttonStyle(GlassButtonStyle()) + .disabled(self.newProviderName.trimmingCharacters(in: .whitespacesAndNewlines).isEmpty || + self.newProviderBaseURL.trimmingCharacters(in: .whitespacesAndNewlines).isEmpty) + Button("Cancel") { + self.showingSaveProvider = false + self.newProviderName = "" + self.newProviderBaseURL = "" + self.newProviderApiKey = "" + self.newProviderModels = "" + } + .buttonStyle(GlassButtonStyle()) + }
🧹 Nitpick comments (20)
Sources/Fluid/Persistence/KeychainService.swift (1)
104-107: Excellent fix for inverted conditional logic.The corrected logic now properly:
- Removes specific legacy entries when
providerIDsare provided- Removes all legacy entries when called with the default empty array (as in lines 159 and 169)
The previous inverted logic would have removed nothing when called without arguments and removed all entries when specific IDs were provided—the opposite of the intended behavior.
Optional: Remove redundant parentheses on line 107
- targets = try Array((self.legacyProviderEntries()).keys) + targets = try Array(self.legacyProviderEntries().keys)Sources/Fluid/TalkingAnimations.swift (3)
128-128: Unused state variable.
animationTriggeris declared but never read or modified inSpokenlyWaveform. Consider removing it to reduce unnecessary state.🔎 Apply this diff:
@State private var animationPhases: [Double] = [] -@State private var animationTrigger: Int = 0 @State private var lastUpdateTime: TimeInterval = 0
170-172: Config FPS values are defined but not used.
TalkingAnimationConfigdefinesactiveFPS=60andidleFPS=30, butupdateBars()uses hardcoded values (30/20). Consider using the config properties for consistency.🔎 Apply this diff:
// Adaptive frame limiting - reduce rate during active processing to prevent CoreML conflicts -let targetFPS: Double = self.isActive ? 30.0 : 20.0 // Lower FPS to reduce state update conflicts +let targetFPS: Double = self.isActive ? self.config.activeFPS : self.config.idleFPS let frameTime = 1.0 / targetFPSNote: If the hardcoded values are intentional (e.g., for CoreML compatibility reasons as commented), consider removing the unused config properties to avoid confusion.
275-276: Redundant frame limiting.The timer fires every 0.033s (30 FPS), and
updateParticleAnimation()also checks if 0.033s has elapsed. The check will almost always pass since both use the same interval. Either remove the manual check or increase the timer interval for efficiency.🔎 Option A - Remove redundant check:
private func updateParticleAnimation() { - let currentTime = Date().timeIntervalSince1970 - - // Simple frame limiting for consistent 30 FPS - if currentTime - self.lastParticleUpdateTime < 0.033 { return } - self.lastParticleUpdateTime = currentTime - self.animationTrigger += 1🔎 Option B - Increase timer interval and keep the check for drift protection:
-private let particleTimer = Timer.publish(every: 0.033, on: .main, in: .common).autoconnect() // 30 FPS +private let particleTimer = Timer.publish(every: 0.016, on: .main, in: .common).autoconnect() // 60 FPS base, throttled to 30 FPSAlso applies to: 375-377
Sources/Fluid/UI/FeedbackView.swift (1)
86-87: Good clarity improvement with the variable rename.Renaming
urltogithubURLmakes the code more self-documenting.Sources/Fluid/Persistence/SettingsStore.swift (1)
583-593: Consider consolidating model detection logic to avoid duplication.The
isReasoningModel(_:)function duplicates the model pattern matching already present ingetReasoningConfig(forModel:provider:)(lines 526-546). This creates a maintenance risk where updates to one location may be missed in the other.🔎 Suggested approach
Consider having
isReasoningModelcallgetReasoningConfiginternally, or extract the pattern matching to a shared helper:func isReasoningModel(_ model: String) -> Bool { - let modelLower = model.lowercased() - return modelLower.hasPrefix("gpt-5") || - modelLower.contains("gpt-5.") || - modelLower.hasPrefix("o1") || - modelLower.hasPrefix("o3") || - modelLower.contains("gpt-oss") || - modelLower.hasPrefix("openai/") || - (modelLower.contains("deepseek") && modelLower.contains("reasoner")) + // Use getReasoningConfig with a dummy provider to check if this is a reasoning model + // The provider doesn't affect pattern matching for known model types + return getReasoningConfig(forModel: model, provider: "") != nil }Sources/Fluid/Services/ASRService.swift (1)
373-374: Remove duplicate sample count logging.Both debug statements log the sample count (
pcm.count) which is redundant. Consider consolidating into a single, more informative log statement.🔎 Suggested fix:
- DebugLogger.shared.debug("Starting transcription with \(pcm.count) samples (\(Float(pcm.count) / 16_000.0) seconds)", source: "ASRService") - DebugLogger.shared.debug("stop(): starting full transcription (samples: \(pcm.count)) using \(self.transcriptionProvider.name)", source: "ASRService") + DebugLogger.shared.debug("stop(): starting full transcription (\(pcm.count) samples, \(String(format: "%.2f", Float(pcm.count) / 16_000.0))s) using \(self.transcriptionProvider.name)", source: "ASRService")Sources/Fluid/Networking/FunctionCallingProvider.swift (1)
146-177: ExtractisLocalEndpointto a shared utility.This function is duplicated across 5 files:
AISettingsView.swift,FunctionCallingProvider.swift,LLMClient.swift,ContentView.swift, andAIProvider.swift. Consider extracting to a shared networking utility.// Example: Sources/Fluid/Networking/NetworkUtils.swift enum NetworkUtils { static func isLocalEndpoint(_ urlString: String) -> Bool { // shared implementation } }Sources/Fluid/Views/NotchContentViews.swift (1)
349-350: Waveform bar height and glow logic remain soundGlow shadows now use two explicit
.shadowmodifiers and bar height math is consolidated, but formulas and state usage are unchanged. Given the low bar count, the extra shadow per bar is acceptable from a perf standpoint.Also applies to: 427-432, 948-949
Sources/Fluid/Views/CommandModeView.swift (1)
669-723: ThinkingShimmerLabel animation is fine; consider trimming unused stateShimmer and sparkle animations are wired correctly and run on the main actor via
DispatchQueue.main.asyncAfter. Note thatsparkleOpacityhas 5 entries but only indices 0..<3 are animated; you can shrink the array to 3 elements to avoid confusion.Sources/Fluid/ContentView.swift (2)
1225-1233: Local endpoint and API key handling is consistentUsing
isLocalEndpoint(derivedBaseURL)to skip API key enforcement, and mirroring that check again instopAndProcessTranscription, makes local deployments usable without keys while still preventing remote calls with missing credentials. This matches the similar helpers in AIProvider/LLMClient.Also applies to: 1334-1368
1406-1414: Debug logging adds valuable observability; watch for sensitive contentNew debug/info logs around typing decisions, captured app context, command/rewrite triggers, and ASR preload significantly improve diagnosability. Note that some logs include full user text (
transcribedText, rewrite instructions, commands); if privacy is a concern, consider gating those onenableDebugLogsor redacting content.Also applies to: 1478-1481, 1692-1705, 1708-1733
Sources/Fluid/UI/AISettingsView.swift (1)
423-431: Reasoning model helpers are now centralized, but align openReasoningConfig with helper
hasReasoningConfigForCurrentModelnow usesSettingsStore.shared.isReasoningModel(_:)as a fallback, which is good. InopenReasoningConfig, the manualmodelLowerchecks could be replaced with the same helper (plus the DeepSeek special-case) to avoid drift between heuristics.Also applies to: 1174-1188
Sources/Fluid/Services/CommandModeService.swift (2)
60-99: Message.thinking is well-scoped as display-onlyExtending
Messagewith an optionalthinkingfield and defaulting it in the initializer keeps existing call sites working while letting new ones pass reasoning text. Conversion helpers intentionally ignore thinking when persisting toChatMessage, consistent with the “display-only” comment.
639-651: callLLM reasoning support and streaming callbacks are solidUsing
settings.isReasoningModel(model)to disable temperature, applying per-model reasoningConfig intoextraParams, and wiringonThinkingChunk/onContentChunkwith 60fps throttling gives you smooth UI without excessive updates. The 50ms post-stream delay before clearing streaming text is a reasonable buffer to ensure the final tokens render.Also applies to: 799-822
Sources/Fluid/Services/ThinkingParsers.swift (5)
43-67: Consider documenting expected model name patterns.The string-based model matching is pragmatic, but model names from LLM providers can evolve. Consider adding inline comments documenting the expected model name formats or centralizing the pattern matching strings as constants if these patterns are used elsewhere in the codebase.
Example refactor to centralize patterns:
+// Model name patterns for parser selection +private enum ModelPatterns { + static let nemotron = ["nemotron", "nemo"] + static let qwenThinking = ["qwen"] + static let qwenThinkingVariants = ["think", "qwq"] + static let deepseek = ["deepseek"] +} + static func createParser(for model: String) -> ThinkingParser { let modelLower = model.lowercased() - if modelLower.contains("nemotron") || modelLower.contains("nemo") { + if ModelPatterns.nemotron.contains(where: modelLower.contains) { // ... }
79-79: Clarify commented-out parameter.The
truncate_history_thinkingparameter is commented out with a note about exposing it in settings. Consider either removing this comment if it's not planned, converting it to a TODO, or implementing the settings integration.
91-93: Document incomplete model support.The comments mention Claude and OpenAI o1/o3 models but provide no implementation. If these are planned for future work, consider adding TODO comments or tracking issues. If they're not applicable, consider removing the comments to avoid confusion.
147-155: Extract magic number to named constant.The value
15is used to buffer partial tags (also at line 158). Consider extracting this to a named constant to improve readability and maintainability. The longest tag is</thinking>at 11 characters, so 15 provides a safe margin.Suggested refactor:
+private let partialTagBufferSize = 15 // Must be > max tag length (</thinking> = 11) + struct StandardThinkingParser: ThinkingParser { mutating func processChunk( // ... ) -> (ThinkingParserState, String, String) { // ... } else { // Still in thinking, emit what we have (minus potential partial tag) - // Keep last 15 chars in buffer to detect partial closing tags - let safeLength = max(0, tagBuffer.count - 15) + // Keep last N chars in buffer to detect partial closing tags + let safeLength = max(0, tagBuffer.count - partialTagBufferSize)
228-235: Same magic number as StandardThinkingParser.The value
15for partial tag buffering should also be extracted to a shared constant (as noted in StandardThinkingParser review). Consider defining it once at the file level or in a shared location.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (46)
.gitignore(1 hunks)Info.plist(1 hunks)Sources/Fluid/AppDelegate.swift(2 hunks)Sources/Fluid/ContentView.swift(28 hunks)Sources/Fluid/CustomAnimations.swift(2 hunks)Sources/Fluid/Networking/AIProvider.swift(2 hunks)Sources/Fluid/Networking/FunctionCallingProvider.swift(2 hunks)Sources/Fluid/Networking/ModelDownloader.swift(6 hunks)Sources/Fluid/Persistence/ChatHistoryStore.swift(3 hunks)Sources/Fluid/Persistence/KeychainService.swift(1 hunks)Sources/Fluid/Persistence/SettingsStore.swift(4 hunks)Sources/Fluid/Persistence/TranscriptionHistoryStore.swift(2 hunks)Sources/Fluid/Services/ASRService.swift(10 hunks)Sources/Fluid/Services/AppServices.swift(1 hunks)Sources/Fluid/Services/AudioDeviceService.swift(3 hunks)Sources/Fluid/Services/CommandModeService.swift(11 hunks)Sources/Fluid/Services/DebugLogger.swift(1 hunks)Sources/Fluid/Services/FileLogger.swift(1 hunks)Sources/Fluid/Services/GlobalHotkeyManager.swift(6 hunks)Sources/Fluid/Services/LLMClient.swift(1 hunks)Sources/Fluid/Services/MenuBarManager.swift(2 hunks)Sources/Fluid/Services/NotchOverlayManager.swift(2 hunks)Sources/Fluid/Services/RewriteModeService.swift(5 hunks)Sources/Fluid/Services/SimpleUpdater.swift(1 hunks)Sources/Fluid/Services/TextSelectionService.swift(2 hunks)Sources/Fluid/Services/ThinkingParsers.swift(1 hunks)Sources/Fluid/Services/TypingService.swift(5 hunks)Sources/Fluid/Services/WhisperProvider.swift(4 hunks)Sources/Fluid/TalkingAnimations.swift(5 hunks)Sources/Fluid/Theme/Components/SetupComponents.swift(3 hunks)Sources/Fluid/Theme/Components/ThemedCard.swift(1 hunks)Sources/Fluid/Theme/NativeButtonStyles.swift(5 hunks)Sources/Fluid/UI/AISettingsView.swift(27 hunks)Sources/Fluid/UI/FeedbackView.swift(3 hunks)Sources/Fluid/UI/GlossyEffects.swift(1 hunks)Sources/Fluid/UI/MeetingTranscriptionView.swift(1 hunks)Sources/Fluid/UI/MouseTracker.swift(1 hunks)Sources/Fluid/UI/RecordingView.swift(2 hunks)Sources/Fluid/UI/SettingsView.swift(9 hunks)Sources/Fluid/UI/StatsView.swift(2 
hunks)Sources/Fluid/UI/TranscriptionHistoryView.swift(4 hunks)Sources/Fluid/UI/WelcomeView.swift(8 hunks)Sources/Fluid/Views/CommandModeView.swift(10 hunks)Sources/Fluid/Views/NotchContentViews.swift(9 hunks)Sources/Fluid/Views/RewriteModeView.swift(4 hunks)Sources/Fluid/fluidApp.swift(1 hunks)
🧰 Additional context used
🧬 Code graph analysis (14)
Sources/Fluid/Persistence/TranscriptionHistoryStore.swift (1)
Sources/Fluid/Services/DebugLogger.swift (1)
debug(151-153)
Sources/Fluid/UI/WelcomeView.swift (5)
Sources/Fluid/Networking/FunctionCallingProvider.swift (1)
isLocalEndpoint(147-177)Sources/Fluid/ContentView.swift (1)
isLocalEndpoint(1135-1171)Sources/Fluid/Networking/AIProvider.swift (1)
isLocalEndpoint(26-62)Sources/Fluid/Services/LLMClient.swift (1)
isLocalEndpoint(551-589)Sources/Fluid/UI/AISettingsView.swift (1)
isLocalEndpoint(409-421)
Sources/Fluid/ContentView.swift (6)
Sources/Fluid/Services/AudioDeviceService.swift (2)
getDefaultInputDevice(78-81)getDefaultOutputDevice(83-86)Sources/Fluid/Services/DebugLogger.swift (3)
debug(151-153)info(139-141)error(147-149)Sources/Fluid/Networking/FunctionCallingProvider.swift (1)
isLocalEndpoint(147-177)Sources/Fluid/Networking/AIProvider.swift (1)
isLocalEndpoint(26-62)Sources/Fluid/Services/LLMClient.swift (2)
isLocalEndpoint(551-589)call(123-151)Sources/Fluid/Persistence/SettingsStore.swift (3)
isReasoningModel(584-593)getReasoningConfig(517-550)getAPIKey(193-201)
Sources/Fluid/Services/RewriteModeService.swift (3)
Sources/Fluid/Persistence/SettingsStore.swift (1)
getReasoningConfig(517-550)Sources/Fluid/Services/DebugLogger.swift (2)
debug(151-153)info(139-141)Sources/Fluid/Services/LLMClient.swift (1)
call(123-151)
Sources/Fluid/Services/GlobalHotkeyManager.swift (1)
Sources/Fluid/Services/DebugLogger.swift (3)
warning(143-145)error(147-149)info(139-141)
Sources/Fluid/Services/ASRService.swift (1)
Sources/Fluid/Services/DebugLogger.swift (2)
debug(151-153)info(139-141)
Sources/Fluid/Services/ThinkingParsers.swift (1)
Sources/Fluid/Services/DebugLogger.swift (1)
debug(151-153)
Sources/Fluid/UI/AISettingsView.swift (9)
Sources/Fluid/ContentView.swift (5)
defaultModels(935-941)providerKey(929-933)updateCurrentProvider(943-950)isLocalEndpoint(1135-1171)body(1880-1885)Sources/Fluid/Views/CommandModeView.swift (1)
defaultModels(627-633)Sources/Fluid/Views/RewriteModeView.swift (1)
defaultModels(316-323)Sources/Fluid/Persistence/SettingsStore.swift (1)
isReasoningModel(584-593)Sources/Fluid/Networking/FunctionCallingProvider.swift (1)
isLocalEndpoint(147-177)Sources/Fluid/Networking/AIProvider.swift (1)
isLocalEndpoint(26-62)Sources/Fluid/Services/LLMClient.swift (1)
isLocalEndpoint(551-589)Sources/Fluid/Theme/NativeButtonStyles.swift (1)
body(300-308)Sources/Fluid/UI/GlossyEffects.swift (1)
body(60-73)
Sources/Fluid/Services/WhisperProvider.swift (1)
Sources/Fluid/Services/DebugLogger.swift (1)
info(139-141)
Sources/Fluid/Persistence/SettingsStore.swift (1)
Sources/Fluid/Services/DebugLogger.swift (1)
info(139-141)
Sources/Fluid/Networking/AIProvider.swift (1)
Sources/Fluid/Networking/AppleIntelligenceProvider.swift (1)
process(47-66)
Sources/Fluid/AppDelegate.swift (1)
Sources/Fluid/Services/DebugLogger.swift (2)
debug(151-153)error(147-149)
Sources/Fluid/UI/SettingsView.swift (3)
Sources/Fluid/Services/DebugLogger.swift (2)
warning(143-145)debug(151-153)Sources/Fluid/Theme/NativeButtonStyles.swift (1)
body(300-308)Sources/Fluid/UI/GlossyEffects.swift (1)
body(60-73)
Sources/Fluid/Networking/FunctionCallingProvider.swift (1)
Sources/Fluid/Services/DebugLogger.swift (1)
info(139-141)
🪛 GitHub Actions: Build and Test
Sources/Fluid/UI/MeetingTranscriptionView.swift
[error] 175-175: Force Unwrapping Violation: Force unwrapping should be avoided (force_unwrapping)
[error] 176-176: Force Unwrapping Violation: Force unwrapping should be avoided (force_unwrapping)
[error] 177-177: Force Unwrapping Violation: Force unwrapping should be avoided (force_unwrapping)
Sources/Fluid/Services/MenuBarManager.swift
[error] 346-346: Force Unwrapping Violation: Force unwrapping should be avoided (force_unwrapping)
[error] 353-353: Force Unwrapping Violation: Force unwrapping should be avoided (force_unwrapping)
Sources/Fluid/Views/RewriteModeView.swift
[error] 462-462: Empty Count Violation: Prefer checking isEmpty over comparing count to zero (empty_count)
Sources/Fluid/Views/CommandModeView.swift
[error] 358-358: Empty Count Violation: Prefer checking isEmpty over comparing count to zero (empty_count)
Sources/Fluid/ContentView.swift
[error] 1250-1250: Discouraged Optional Collection Violation: Prefer empty collection over optional collection (discouraged_optional_collection)
Sources/Fluid/Services/RewriteModeService.swift
[error] 198-198: Discouraged Optional Collection Violation: Prefer empty collection over optional collection (discouraged_optional_collection)
Sources/Fluid/Services/SimpleUpdater.swift
[error] 221-221: Force Unwrapping Violation: Force unwrapping should be avoided (force_unwrapping)
[error] 221-221: Force Unwrapping Violation: Force unwrapping should be avoided (force_unwrapping)
Sources/Fluid/Services/LLMClient.swift
[error] 46-46: Discouraged Optional Collection Violation: Prefer empty collection over optional collection (discouraged_optional_collection)
[error] 74-74: Discouraged Optional Collection Violation: Prefer empty collection over optional collection (discouraged_optional_collection)
[error] 82-82: Discouraged Optional Collection Violation: Prefer empty collection over optional collection (discouraged_optional_collection)
[error] 101-101: Discouraged Optional Collection Violation: Prefer empty collection over optional collection (discouraged_optional_collection)
[error] 104-104: Discouraged Optional Collection Violation: Prefer empty collection over optional collection (discouraged_optional_collection)
[error] 416-416: Discouraged Optional Collection Violation: Prefer empty collection over optional collection (discouraged_optional_collection)
[error] 447-447: Discouraged Optional Collection Violation: Prefer empty collection over optional collection (discouraged_optional_collection)
Sources/Fluid/UI/AISettingsView.swift
[error] 1221-1221: Statement Position Violation: Else and catch should be on the same line, one space after the previous declaration (statement_position)
[error] 1222-1222: Statement Position Violation: Else and catch should be on the same line, one space after the previous declaration (statement_position)
Sources/Fluid/Services/WhisperProvider.swift
[error] 26-26: Force Unwrapping Violation: Force unwrapping should be avoided (force_unwrapping)
Sources/Fluid/Networking/ModelDownloader.swift
[error] 35-35: Force Unwrapping Violation: Force unwrapping should be avoided (force_unwrapping)
[error] 41-41: Force Unwrapping Violation: Force unwrapping should be avoided (force_unwrapping)
🔇 Additional comments (84)
Sources/Fluid/Services/TextSelectionService.swift (1)
44-44: Formatting consolidation looks good.The reflow of
AXUIElementCopyAttributeValuecalls onto single lines is a straightforward formatting improvement that aligns with the rest of the file (line 68 already uses the single-line pattern). Logic, error handling, and API usage remain unchanged and correct.Also applies to: 56-56
Sources/Fluid/Services/FileLogger.swift (1)
16-16: LGTM!The formatting change consolidating the multiline chained call to a single line preserves the initialization logic and aligns with the project's formatting standards (per the
swiftformatrun noted in PR testing). The fallback pattern remains correct: library directory or temporary directory.Sources/Fluid/Services/NotchOverlayManager.swift (1)
303-303: LGTM!The comment clarifies the intentional empty
hoverBehaviorarray for the expanded command output, explaining the design decision to allow users to close via X/Escape regardless of cursor position. This is appropriate documentation that aligns with the class's overall state management approach.Sources/Fluid/Services/TypingService.swift (1)
153-153: Formatting refactors look good.The single-line reformatting of AXUIElementCopyAttributeValue and AXUIElementSetAttributeValue calls is consistent with the broader style consolidation observed across this PR. No logic or behavioral changes introduced.
Also applies to: 212-212, 280-280, 299-299
Info.plist (1)
14-14: LGTM! Appropriate beta version for pre-release testing.The version bump to
1.5.1-beta.3correctly identifies this as a pre-release build, which aligns with the substantial refactoring and feature additions in this PR..gitignore (1)
205-207: LGTM! Appropriate ignore patterns for audio files and temporary directories.The additions correctly exclude:
- Audio files (
**.m4a) using a globstar pattern that matches across all subdirectories- A scratch directory for temporary work files
These are standard repository hygiene practices for a voice/audio application.
Sources/Fluid/UI/MouseTracker.swift (1)
49-87: LGTM! Solid defensive coding and input validation.The MouseTrackingModifier properly validates all inputs (finite coordinates, window frame initialization) before use, debounces updates to prevent performance issues during scrolling, and handles edge cases (off-screen geometry) gracefully. The restructuring of the
.backgroundmodifier preserves the existing logic while improving clarity.Sources/Fluid/Services/MenuBarManager.swift (1)
272-272: LGTM!The error handling is clear and appropriate, providing a descriptive error message when status item creation fails.
Sources/Fluid/TalkingAnimations.swift (6)
60-85: LGTM!The configuration struct is well-organized with appropriate clamping for the noise threshold and sensible defaults for animation parameters.
89-116: LGTM!The view correctly uses
@StateObjectfor the data model andonReceivefor reactive UserDefaults updates. The dynamic threshold pattern is clean.
211-226: LGTM!Clean configuration struct with appropriate clamping for the threshold value.
228-260: LGTM!Consistent pattern with
TalkingAudioVisualizationView. The frame width calculation correctly accounts for particle sizes and spacing.
302-354: LGTM!The gradient definitions are reformatted for improved readability. No functional changes.
399-413: LGTM!Simple, stateless view component with clean implementation.
Sources/Fluid/UI/FeedbackView.swift (1)
97-107: Nice UX improvement with the hover effect.The gradient formatting is cleaner, and adding
.buttonHoverEffect()improves interaction consistency with other buttons in the app.Sources/Fluid/Persistence/TranscriptionHistoryStore.swift (1)
118-118: LGTM! Logging call consolidated for consistency.The single-line logging format improves readability and aligns with the codebase-wide refactoring pattern described in the PR.
Sources/Fluid/fluidApp.swift (1)
1-7: LGTM! Header comment added.Metadata addition with no code logic changes.
Sources/Fluid/Services/DebugLogger.swift (1)
129-129: LGTM! Inline formatting improves readability.The single-line invocation is equivalent to the previous multi-line form with no behavioral changes.
Sources/Fluid/Persistence/ChatHistoryStore.swift (2)
44-51: LGTM! Init parameters consolidated.The single-line parameter list improves consistency with no changes to default values or behavior.
63-69: LGTM! Init parameters consolidated.Formatting improvement with no functional changes.
Sources/Fluid/UI/TranscriptionHistoryView.swift (2)
140-143: LGTM! SwiftUI modifier grouping standardized.The background modifier formatting improves code structure without affecting runtime behavior or visual appearance.
Also applies to: 163-166, 365-368
397-398: LGTM! Method calls consolidated.Single-line formatting improves readability.
Sources/Fluid/AppDelegate.swift (1)
118-118: LGTM! Error logging consolidated.The single-line debug call maintains the same functionality while improving consistency with the codebase-wide logging refactor.
Sources/Fluid/UI/StatsView.swift (1)
323-326: LGTM! SwiftUI modifier formatting standardized.The background modifier grouping improves code structure without affecting visual rendering or behavior.
Also applies to: 454-457
Sources/Fluid/Services/AudioDeviceService.swift (2)
60-61: LGTM! Property retrieval calls consolidated.Single-line formatting improves readability while maintaining the same selector and scope parameters.
79-79: LGTM! Guard statements consolidated.The single-line guard format is more concise while preserving the same nil-handling logic.
Also applies to: 84-84
Sources/Fluid/UI/GlossyEffects.swift (1)
26-38: LGTM! Clean formatting consolidation.The overlay and shadow restructuring improves readability while preserving all visual behavior and hover effects.
Sources/Fluid/CustomAnimations.swift (2)
41-49: LGTM! Improved expression formatting.The single-line scaleEffect and opacity expressions enhance readability while preserving the animation logic and thresholds.
84-88: LGTM! Consistent formatting.The wave animation scaleEffect follows the same clean single-line pattern applied to the pulse animation.
Sources/Fluid/Theme/Components/SetupComponents.swift (2)
39-42: LGTM! Consistent overlay formatting.The explicit overlay wrapper aligns with the broader PR pattern of consolidating SwiftUI modifier chains.
94-108: LGTM! Well-structured background composition.The nested fill/overlay/stroke pattern maintains visual behavior while improving code organization.
Sources/Fluid/UI/RecordingView.swift (1)
51-53: LGTM! Clean text formatting.The consolidated Text and foregroundStyle expressions improve readability while preserving the conditional status display logic.
Sources/Fluid/Services/WhisperProvider.swift (3)
38-38: LGTM! Cleaner log formatting.The single-line debug log improves readability while preserving the model change detection logic.
79-79: LGTM! Consolidated transcription assembly.The single-line text joining and trimming maintains the same behavior in a more concise form.
129-129: LGTM! Simplified error construction.The single-entry userInfo dictionary maintains the same error information in a cleaner format.
Sources/Fluid/Theme/Components/ThemedCard.swift (1)
37-56: LGTM! Excellent modifier consolidation.The unified background chain with nested shape, overlay, and shadow improves code organization while preserving all visual behavior and hover effects.
Sources/Fluid/Services/GlobalHotkeyManager.swift (3)
138-149: LGTM! Cleaner logging format.The single-line DebugLogger calls improve readability throughout the retry logic.
263-304: LGTM! Consistent shortcut handling format.The reformatted shortcut matching calls and logging maintain all conditional logic while improving code clarity across command mode and rewrite mode handling.
555-564: LGTM! Cleaner modifier intersection logic.The single-line modifier intersection expressions improve readability while preserving the exact shortcut matching behavior.
Sources/Fluid/Theme/NativeButtonStyles.swift (5)
27-47: LGTM! Improved button style structure.

The consolidated background/overlay chain with nested stroke and accent overlay enhances code organization while preserving all visual effects and hover animations for the glass button.

105-121: LGTM! Consistent premium button formatting.

The unified background structure with gradient fill and overlay maintains the recording state visuals and hover effects.

153-168: LGTM! Clean secondary button consolidation.

The background/overlay pattern is consistently applied, preserving all button styling and interaction behavior.

203-220: LGTM! Compact button formatting aligned.

The consolidated structure maintains the isReady state handling and hover effects while improving readability.

252-255: LGTM! Simplified inline button background.

The cleaner background pattern works well for the simpler capsule-shaped inline button style.
Sources/Fluid/Persistence/SettingsStore.swift (3)
60-60: LGTM!

The new `showThinkingTokens` key follows the existing naming convention and placement within the Keys enum.

595-606: LGTM!

The `showThinkingTokens` property follows the established pattern for boolean settings with UserDefaults persistence, including proper `objectWillChange.send()` notification.

1089-1089: LGTM!

The consolidated single-line log statement is clearer and aligns with the formatting consolidation across the codebase.
Sources/Fluid/Networking/ModelDownloader.swift (1)
139-139: LGTM!

The formatting consolidations (single-line method signatures, compact `createDirectory` calls, and streamlined delegate methods) align with the broader code style normalization in this PR.

Also applies to: 165-165, 170-170, 184-184, 210-210, 219-219
Sources/Fluid/Services/ASRService.swift (1)
193-193: LGTM!

The formatting consolidations (single-line log statements, condensed block signatures) are consistent with the broader style normalization across the codebase.
Also applies to: 480-483, 642-642, 701-701, 711-711, 768-768, 804-804, 831-831, 850-850, 856-856
Sources/Fluid/Networking/AIProvider.swift (2)
71-72: LGTM!

The condensed method signature with default parameter `stream: Bool = false` maintains backward compatibility while aligning with the protocol. The endpoint construction logic is streamlined without changing behavior.

4-4: Protocol conformance is correct; AppleIntelligenceProvider does not adopt AIProvider.

The AIProvider protocol requires 6 parameters. OpenAICompatibleProvider is the only conforming type and correctly implements this requirement with `stream: Bool = false` (default parameters in implementations are valid in Swift). AppleIntelligenceProvider has separate, unrelated methods with different signatures and does not declare conformance to AIProvider.

Likely an incorrect or invalid review comment.
Sources/Fluid/Services/RewriteModeService.swift (4)
9-9: LGTM!

The new `streamingThinkingText` property and `thinkingBuffer` provide real-time thinking token support for the UI. The separation of the buffer (private) from the published property (public) is a clean design.

Also applies to: 16-16

220-236: LGTM!

The streaming callbacks correctly use `[weak self]` to avoid retain cycles, and the `Task { @MainActor in ... }` pattern ensures UI updates happen on the main thread. The thinking and content chunks are properly separated.
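For readers unfamiliar with this pattern, a generic sketch of the weak-capture plus main-actor-hop shape looks like the following (hypothetical type names, not the RewriteModeService code):

```swift
import Foundation

// Stand-in for a streaming client; only the callback shape matters here.
final class ChunkStream {
    var onThinkingChunk: ((String) -> Void)?
    var onContentChunk: ((String) -> Void)?
}

@MainActor
final class StreamingState {
    var streamingThinkingText = ""
    var streamingText = ""

    func attach(to stream: ChunkStream) {
        // [weak self] prevents the stream from retaining the state object;
        // Task { @MainActor in ... } hops to the main thread before
        // mutating UI-facing properties.
        stream.onThinkingChunk = { [weak self] chunk in
            Task { @MainActor in self?.streamingThinkingText += chunk }
        }
        stream.onContentChunk = { [weak self] chunk in
            Task { @MainActor in self?.streamingText += chunk }
        }
    }
}
```

With the weak capture, a deallocated state object simply stops receiving updates instead of being kept alive by the stream's callbacks.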
240-251: LGTM!

Good practice to clear streaming buffers before starting a new request (lines 240-245) and after completion (lines 249-251). This ensures clean state between requests.

105-112: LGTM!

The `clearState()` method correctly resets the new `streamingThinkingText` and `thinkingBuffer` properties, ensuring clean state when the service is reset.

Sources/Fluid/Services/AppServices.swift (1)
18-48: Well-designed service container pattern.

Good approach to centralize heavy services and reduce SwiftUI view type signature complexity. The `objectWillChange` forwarding ensures UI updates propagate correctly while avoiding the type metadata crashes mentioned in the comments.

Sources/Fluid/Views/RewriteModeView.swift (1)
430-474: LGTM - Clean thinking view implementation.

The expandable/collapsible thinking view with shimmer effect and preview truncation is well-structured.
Sources/Fluid/Networking/FunctionCallingProvider.swift (1)
27-33: LGTM - Improved API flexibility.

Making `tool_calls` optional with nil default while normalizing to empty array internally is a clean approach that simplifies call sites.

Sources/Fluid/UI/WelcomeView.swift (2)

397-435: LGTM - Cleaner modifier composition.

The restructured background and overlay blocks improve readability while maintaining the same visual behavior.

466-476: The concern about bypassing state management is unfounded.

`micStatus` is a `@Published` property that uses Combine's publisher mechanism for state management—not property observers. Direct assignment to `@Published` properties is the standard and safe approach in SwiftUI. The pattern in WelcomeView matches exactly what's used in `ASRService.initialize()`, confirming this is the intended usage.

Sources/Fluid/Services/LLMClient.swift (2)
123-151: Solid retry logic with exponential backoff.

The retry implementation correctly handles transient network errors with exponential backoff. The pattern of catching specific `URLError` codes is appropriate.
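As a rough sketch of what retry-with-exponential-backoff means here (assumed helper names and constants — this is not the LLMClient source), the delay schedule and retry loop can look like:

```swift
import Foundation

// Pure delay schedule: 1s, 2s, 4s, ... capped (illustrative defaults).
func backoffDelay(attempt: Int, base: Double = 1.0, cap: Double = 30.0) -> Double {
    min(cap, base * pow(2.0, Double(attempt)))
}

// Retry only on transient URLError codes; anything else propagates immediately.
func withRetry<T>(maxAttempts: Int = 3, operation: () async throws -> T) async throws -> T {
    var lastError: Error = URLError(.unknown)
    for attempt in 0..<maxAttempts {
        do {
            return try await operation()
        } catch let error as URLError
            where error.code == .timedOut || error.code == .networkConnectionLost {
            lastError = error
            let nanos = UInt64(backoffDelay(attempt: attempt) * 1_000_000_000)
            try await Task.sleep(nanoseconds: nanos)
        }
    }
    throw lastError
}
```

Keeping the delay computation in a pure function makes the schedule itself trivially unit-testable, independent of the networking code.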
277-438: Well-structured streaming implementation.

The streaming handler correctly manages SSE parsing, state transitions for thinking/content, and tool call accumulation. The integration with `ThinkingParserFactory` for model-specific parsing is a good design choice.

Sources/Fluid/UI/SettingsView.swift (3)
36-40: Good fix for CoreAudio/SwiftUI race condition.

Caching the default device names in `@State` and populating them in `onAppear` rather than during view body evaluation is the correct approach to avoid the HALSystem/AttributeGraph race condition. The comments clearly document the rationale.

593-605: LGTM - Proper cache initialization.

Populating the cached device names after `AudioStartupGate` opens ensures the values are ready when the view renders, while avoiding CoreAudio calls during the view body.

483-504: Consistent cache update on refresh.

Good that the cache is also updated when the user manually refreshes devices, ensuring the displayed default names stay in sync.
Sources/Fluid/Views/NotchContentViews.swift (3)
199-211: ShimmerText gradient refactor looks correct

Gradient stops and start/end points are preserved; only the modifier style changed. No behavior or layout issues spotted.

541-544: Animation keyed on conversation history is appropriate

Animating the frame on `commandConversationHistory.count` (separate from the recording animation) cleanly ties expansion to content growth without extra state. No obvious animation conflicts with the other `.animation` modifiers.

590-595: Header button hover/processing styles are consistent

Hover fills and disabled foreground adjustments for New, Delete, and Dismiss buttons correctly respect `isCommandProcessing` and match the command red theme; no interaction logic issues spotted.

Also applies to: 654-659, 677-681
Sources/Fluid/Views/CommandModeView.swift (2)
523-526: Model picker binding default is reasonable

Using `settings.commandModeSelectedModel ?? availableModels.first ?? "gpt-4o"` keeps a nonempty selection even when settings are missing. Just be aware that if `availableModels` is empty, the selection value won't appear in the menu; that's acceptable but something to keep in mind when populating models upstream.

757-840: Per-message thinking section behavior is sound

Persisted `thinking` text is rendered behind a collapsible header with a short preview, and the per‑message `isThinkingExpanded` state ensures expansion doesn't leak across messages. This is a good UX trade‑off between detail and noise.

Sources/Fluid/ContentView.swift (3)
1177-1207: Provider/model derivation now correctly uses SettingsStore

Reading `selectedProviderID`, API keys, saved providers, and per‑provider selected models directly from `SettingsStore` inside `processTextWithAI` fixes the stale @State issue; the derivation branches for OpenAI, Groq, and custom providers look consistent with the rest of the app.

1280-1291: LLMClient usage for transcription is well-structured

Non‑streaming `LLMClient.Config` with a simple system+user message pair, temperature disabled for reasoning models, and logging of thinking-token length only (not contents) is a clean, low‑risk integration. Error handling returns user-visible error text while preserving log detail.

Also applies to: 1292-1304

1241-1243: Fix discouraged_optional_collection lint by using non-optional dictionary for extraParams

The optional collection `var extraParams: [String: Any]? = nil` is correct to flag—change it to a non-optional dictionary initialized as `[:]` and only pass it to `extraParameters` when nonempty using the ternary `extraParams.isEmpty ? nil : extraParams`. This removes the SwiftLint violation while preserving behavior.

However, do not replace the inline reasoning model check with `SettingsStore.shared.isReasoningModel()` at line 1242. The SettingsStore method detects a broader set of models (includes `gpt-5.*`, `gpt-oss`, `openai/*`, `deepseek-reasoner`), whereas the current inline check only detects `o1`, `o3`, and `gpt-5`. Using SettingsStore's method would unintentionally change behavior by treating additional models as reasoning models and skipping the temperature parameter for them.

Also applies to: 1245-1263, 1249-1251, 1279-1288

Likely an incorrect or invalid review comment.
Sources/Fluid/UI/AISettingsView.swift (4)
118-179: Settings loading order fix is correct

Setting `openAIBaseURL` and `availableModels` based on `selectedProviderID` before calling `updateCurrentProvider()` ensures `currentProvider` reflects the correct canonical provider, which fixes earlier misclassification when baseURLs were stale. The final debug log helps confirm state at load.

604-637: testAPIConnection reasoning parameters and max tokens look correct

Using `SettingsStore.shared.isReasoningModel(self.selectedModel)` to choose between `max_completion_tokens` and `max_tokens`, and applying `reasoningConfig` exactly as in runtime LLM calls, makes the test endpoint faithfully mirror real usage. The additional debug log (testAPIConnection: provider=…, fullURL=…) should help diagnose misconfigurations.

Also applies to: 615-629

736-743: showThinkingTokens toggle is wired correctly

The new "Show Thinking Tokens" toggle is only shown when AI processing is enabled and the provider is not Apple Intelligence, and it directly updates `SettingsStore.shared.showThinkingTokens`. This matches how CommandModeView/RewriteModeView gate their thinking UIs.

Also applies to: 812-834

1347-1355: Resetting connection status after API key change is a good touch

Clearing `connectionStatus`/`connectionErrorMessage` when the API key is updated avoids stale "failed"/"success" states and nudges users to re‑verify after key changes.

Sources/Fluid/Services/CommandModeService.swift (4)
11-27: New thinking streaming state is structured coherently

Adding `streamingThinkingText`, `thinkingBuffer`, and `lastThinkingUIUpdate` parallels the existing content streaming state and keeps thinking updates isolated from main content, which is ideal for separate UI sections.

388-465: Attaching thinking to assistant messages fits the agent loop

When a tool call is proposed, the assistant message now carries both user-facing content and the captured thinking string, and non-tool responses similarly record thinking. This integrates cleanly with the new UI without altering the control flow around pending commands or confirmation checks.

572-600: executeCommand integration unchanged apart from optional purpose

executeCommand now accepts an optional `purpose` and passes it into `EnhancedCommandResult`, preserving the agent loop (tool result → next LLM turn) with no behavioral regressions.

880-912: Finalization and LLMResponse mapping handle thinking correctly

Capturing `finalThinking` from `response.thinking` or the buffered stream before clearing UI state, logging its size, and then threading it into `LLMResponse` (both tool-call and text-only paths) ensures thinking is available for display but never sent back to the model. Notch streaming text is also cleared consistently at the end.

Also applies to: 916-941
Sources/Fluid/Services/ThinkingParsers.swift (3)
3-36: LGTM! Well-designed parser protocol.

The protocol design is clean and appropriate for streaming parsing scenarios. The `mutating` modifier on `processChunk` allows parsers to maintain internal state if needed, and the `inout tagBuffer` parameter enables efficient buffer management across chunks.
246-266: Excellent fallback logic for missing closing tags.

The finalization logic correctly handles the case where no `</think>` tag is found by moving all content to the content buffer. This gracefully handles scenarios where the server doesn't use thinking mode, preventing loss of response data.
269-286: LGTM! Clean implementation for non-thinking models.

The pass-through implementation is correct and appropriately minimal for models that don't use thinking tokens.
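To make the chunk-splitting idea concrete, here is a minimal illustrative parser (a toy sketch, not the ThinkingParsers implementation) that separates `<think>…</think>` text from content across arbitrary chunk boundaries and applies the same no-closing-tag fallback on finalize:

```swift
// Toy streaming splitter for <think>…</think> blocks. Unresolved text stays
// in `pending` until a complete tag (or finalize) decides where it belongs,
// so tags split across chunks are handled correctly.
struct SimpleThinkParser {
    private(set) var thinking = ""
    private(set) var content = ""
    private var pending = ""
    private var inThinking = false

    mutating func feed(_ chunk: String) {
        pending += chunk
        while true {
            if inThinking {
                guard let r = pending.range(of: "</think>") else { return }
                thinking += pending[..<r.lowerBound]
                pending.removeSubrange(..<r.upperBound)
                inThinking = false
            } else {
                guard let r = pending.range(of: "<think>") else { return }
                content += pending[..<r.lowerBound]
                pending.removeSubrange(..<r.upperBound)
                inThinking = true
            }
        }
    }

    // Mirrors the fallback praised above: if no closing tag ever arrives,
    // leftover buffered text is treated as content so nothing is lost.
    mutating func finalize() {
        content += pending
        pending = ""
        inThinking = false
    }
}
```

Feeding `"<thi"`, `"nk>plan</th"`, `"ink>answer"` and finalizing yields `thinking == "plan"` and `content == "answer"`, demonstrating the cross-chunk tag handling.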
```swift
// Reset streaming state
self.streamingText = ""
self.streamingThinkingText = ""
self.streamingBuffer = []
self.thinkingBuffer = []
self.lastUIUpdate = CFAbsoluteTimeGetCurrent()
self.lastThinkingUIUpdate = CFAbsoluteTimeGetCurrent()

// Build LLMClient configuration
var config = LLMClient.Config(
    messages: messages,
    model: model,
    baseURL: baseURL,
    apiKey: apiKey,
    streaming: enableStreaming,
    tools: [TerminalService.toolDefinition],
    temperature: isReasoningModel ? nil : 0.1,
    extraParameters: extraParams
)
```
Consider mirroring non-optional extraParams pattern used elsewhere
As in ContentView, `var extraParams: [String: Any]?` can trip the discouraged_optional_collection lint. If you want to align patterns, you can use a nonoptional dict and only attach it when nonempty:

```diff
-let reasoningConfig = SettingsStore.shared.getReasoningConfig(forModel: model, provider: providerID)
-var extraParams: [String: Any]? = nil
-if let rConfig = reasoningConfig, rConfig.isEnabled {
-    if rConfig.parameterName == "enable_thinking" {
-        extraParams = [rConfig.parameterName: rConfig.parameterValue == "true"]
-    } else {
-        extraParams = [rConfig.parameterName: rConfig.parameterValue]
-    }
-    DebugLogger.shared.debug("Added reasoning param: \(rConfig.parameterName)=\(rConfig.parameterValue)", source: "CommandModeService")
-}
+let reasoningConfig = SettingsStore.shared.getReasoningConfig(forModel: model, provider: providerID)
+var extraParams: [String: Any] = [:]
+if let rConfig = reasoningConfig, rConfig.isEnabled {
+    if rConfig.parameterName == "enable_thinking" {
+        extraParams[rConfig.parameterName] = (rConfig.parameterValue == "true")
+    } else {
+        extraParams[rConfig.parameterName] = rConfig.parameterValue
+    }
+    DebugLogger.shared.debug("Added reasoning param: \(rConfig.parameterName)=\(rConfig.parameterValue)", source: "CommandModeService")
+}
 …
-    extraParameters: extraParams
+    extraParameters: extraParams.isEmpty ? nil : extraParams
```

Also applies to: 837-840
```swift
.overlay(
    VStack {
        if self.feedbackText.isEmpty {
            Text("Share your thoughts, report bugs, or suggest features...")
                .font(.subheadline)
                .foregroundStyle(.secondary)
        }
    }
)
.allowsHitTesting(false)
```
🧩 Analysis chain
🏁 Script executed:
```shell
find . -type f -name "FeedbackView.swift" 2>/dev/null | head -5
```

Repository: altic-dev/FluidVoice
Length of output: 101
🏁 Script executed:
```shell
sed -n '135,155p' ./Sources/Fluid/UI/FeedbackView.swift | cat -n
```

Repository: altic-dev/FluidVoice
Length of output: 1461
Critical: .allowsHitTesting(false) breaks the TextEditor.
Line 149 applies .allowsHitTesting(false) to the entire TextEditor (including its overlay), making it unresponsive to user input. Users will not be able to type, tap, or interact with the feedback text field.
The modifier should be applied to the overlay content (the VStack) instead, so only the placeholder text remains non-interactive:
```diff
 .overlay(
     VStack {
         if self.feedbackText.isEmpty {
             Text("Share your thoughts, report bugs, or suggest features...")
                 .font(.subheadline)
                 .foregroundStyle(.secondary)
         }
     }
+    .allowsHitTesting(false)
 )
-.allowsHitTesting(false)
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```swift
.overlay(
    VStack {
        if self.feedbackText.isEmpty {
            Text("Share your thoughts, report bugs, or suggest features...")
                .font(.subheadline)
                .foregroundStyle(.secondary)
        }
    }
    .allowsHitTesting(false)
)
```
🤖 Prompt for AI Agents
In Sources/Fluid/UI/FeedbackView.swift around lines 140 to 149 the
.allowsHitTesting(false) modifier is currently applied to the entire TextEditor
(including its overlay), which prevents any user interaction; move the
.allowsHitTesting(false) modifier off the TextEditor and apply it to the
overlay’s VStack (the placeholder) so the placeholder is non-interactive while
the TextEditor remains tappable and editable, preserve the overlay structure and
styling, and run the UI to confirm the TextEditor accepts typing and taps.
```swift
guard newFrame.minY < NSScreen.main?.frame.height ?? 1000 else { return }
guard newFrame.maxY > 0 else { return }
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Check for NSScreen usage across the codebase
echo "=== NSScreen usage in codebase ==="
rg -C3 'NSScreen' --type swift
echo ""
echo "=== Platform-specific conditional compilation patterns ==="
rg -C2 '#if os\(macOS\)|@available|#if canImport\(AppKit\)' --type swift | head -100
```

Repository: altic-dev/FluidVoice
Length of output: 3995
Add platform-specific guards for macOS AppKit API usage.
The code uses NSScreen.main?.frame.height (macOS AppKit API) on line 80 without conditional compilation guards. While the codebase already demonstrates proper use of #if os(macOS) guards elsewhere (e.g., in SettingsStore.swift), the NSScreen usage across multiple files (MouseTracker.swift, NotchContentViews.swift, MenuBarManager.swift) lacks these guards. If the codebase targets multiple platforms, wrap these NSScreen calls with #if os(macOS) to prevent compilation failures on other platforms.
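A minimal sketch of the suggested conditional-compilation wrapper (the helper name is hypothetical; the fallback constant matches the one the original guard used):

```swift
#if os(macOS)
import AppKit
#endif
import CoreGraphics

// Wraps the AppKit-only NSScreen lookup so non-macOS targets still compile,
// falling back to the same 1000-point constant as the original guard.
func screenHeightLimit() -> CGFloat {
    #if os(macOS)
    return NSScreen.main?.frame.height ?? 1000
    #else
    return 1000
    #endif
}
```

Call sites such as `guard newFrame.minY < screenHeightLimit() else { return }` then stay platform-neutral without scattering `#if os(macOS)` through the tracking logic.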
🤖 Prompt for AI Agents
In Sources/Fluid/UI/MouseTracker.swift around lines 80 to 81, the code accesses
the macOS-only API NSScreen.main?.frame.height without conditional compilation;
wrap the NSScreen usage in a #if os(macOS) / #else / #endif block so non-macOS
platforms do not reference AppKit symbols. Concretely, move the guard that uses
NSScreen inside the #if os(macOS) branch and provide a safe fallback (e.g., skip
the NSScreen check or use a constant/default) in the #else branch so the
function still compiles on other platforms.
```swift
@State private var isThinkingExpanded = false
```
Thinking section integration is coherent; fix empty_count lint
The new thinkingView and isThinkingExpanded state integrate cleanly with streamingThinkingText and showThinkingTokens, and auto‑collapse on new requests is a nice touch. To satisfy SwiftLint’s empty_count rule and match the rest of the file, switch the preview guard to isEmpty:
```diff
-if self.service.streamingThinkingText.count > 0 {
+if !self.service.streamingThinkingText.isEmpty {
```

Also applies to: 285-299, 324-371
🤖 Prompt for AI Agents
In Sources/Fluid/Views/CommandModeView.swift around lines 274-275 (and similarly
at 285-299 and 324-371), the preview guard uses a count-based emptiness check
that triggers SwiftLint's empty_count rule; change those guards to use isEmpty
instead of comparing count to zero so they match file style and satisfy the
linter (replace any `something.count == 0` or `something.count != 0` checks used
for emptiness with `something.isEmpty` or `!something.isEmpty` as appropriate).
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
Sources/Fluid/UI/AISettingsView.swift (1)
1376-1388: Remove duplicate button code in add provider section.

Lines 1376-1388 contain duplicate "Save Provider" and "Cancel" buttons. This appears to be a merge artifact or incomplete refactoring. The UI will render duplicate buttons, causing a confusing user experience.
🔎 Proposed fix
```diff
 HStack(spacing: 8) {
     Button("Save Provider") { self.saveNewProvider() }
         .buttonStyle(GlassButtonStyle())
         .disabled(self.newProviderName.trimmingCharacters(in: .whitespacesAndNewlines).isEmpty || self.newProviderBaseURL.trimmingCharacters(in: .whitespacesAndNewlines).isEmpty)
-    Button("Cancel") { self.showingSaveProvider = false; self.newProviderName = ""; self.newProviderBaseURL = ""; self.newProviderApiKey = ""; self.newProviderModels = "" }
+    Button("Cancel") {
+        self.showingSaveProvider = false
+        self.newProviderName = ""
+        self.newProviderBaseURL = ""
+        self.newProviderApiKey = ""
+        self.newProviderModels = ""
+    }
     .buttonStyle(GlassButtonStyle())
-    .disabled(self.newProviderName.trimmingCharacters(in: .whitespacesAndNewlines).isEmpty || self
-        .newProviderBaseURL.trimmingCharacters(in: .whitespacesAndNewlines).isEmpty)
-    Button("Cancel") {
-        self.showingSaveProvider = false; self.newProviderName = ""; self.newProviderBaseURL = ""; self
-            .newProviderApiKey = ""; self.newProviderModels = ""
-    }
-    .buttonStyle(GlassButtonStyle())
 }
```
🧹 Nitpick comments (11)
Sources/Fluid/UI/MeetingTranscriptionView.swift (1)
171-178: Force unwrapping issue resolved—nice work!

The nil coalescing operator (`??`) safely handles the optional `UTType` results and eliminates the SwiftLint violation flagged in the previous review. The fallback to `.audio` is sensible.

One minor note: because
`.audio` appears at line 172 and is also used as a fallback for lines 175-177, you could end up with duplicate entries in the array if any of those file-extension lookups fail (unlikely, but possible). A cleaner approach that avoids this would be:

🔎 Alternative using compactMap to avoid potential duplicates
```diff
 .fileImporter(
     isPresented: self.$showingFilePicker,
-    allowedContentTypes: [
-        .audio,
-        .movie,
-        .mpeg4Movie,
-        UTType(filenameExtension: "wav") ?? .audio,
-        UTType(filenameExtension: "mp3") ?? .audio,
-        UTType(filenameExtension: "m4a") ?? .audio,
-    ],
+    allowedContentTypes: [
+        .audio,
+        .movie,
+        .mpeg4Movie,
+    ] + [
+        UTType(filenameExtension: "wav"),
+        UTType(filenameExtension: "mp3"),
+        UTType(filenameExtension: "m4a"),
+    ].compactMap { $0 },
     allowsMultipleSelection: false
```

Sources/Fluid/Services/MenuBarManager.swift (2)
344-348: Force unwrapping violation resolved.

The critical force unwrapping issue from the previous review has been successfully addressed using optional binding. The code is now safe and will not crash.
For improved clarity, consider the pattern suggested in the previous review: create a local constant, configure it, then assign and add in a single flow. This avoids the redundancy of assigning a non-optional value to an optional property and immediately unwrapping it.
🔎 Optional refinement for clarity
```diff
-self.statusMenuItem = NSMenuItem(title: "", action: nil, keyEquivalent: "")
-self.statusMenuItem?.isEnabled = false
-if let statusItem = statusMenuItem {
-    menu.addItem(statusItem)
-}
+let statusItem = NSMenuItem(title: "", action: nil, keyEquivalent: "")
+statusItem.isEnabled = false
+self.statusMenuItem = statusItem
+menu.addItem(statusItem)
```
353-357: Force unwrapping violation resolved.

The critical force unwrapping issue has been addressed. The code is now safe.
As with the status menu item above, the same optional refinement pattern could be applied here for consistency and clarity.
🔎 Optional refinement for clarity
```diff
-self.aiMenuItem = NSMenuItem(title: "", action: #selector(self.toggleAIProcessing), keyEquivalent: "")
-self.aiMenuItem?.target = self
-if let aiItem = aiMenuItem {
-    menu.addItem(aiItem)
-}
+let aiItem = NSMenuItem(title: "", action: #selector(self.toggleAIProcessing), keyEquivalent: "")
+aiItem.target = self
+self.aiMenuItem = aiItem
+menu.addItem(aiItem)
```

Sources/Fluid/Services/SimpleUpdater.swift (1)
146-146: Consider whether `PMKError.cancelled` semantically fits "up-to-date."

Using `PMKError.cancelled` to represent "already up-to-date" is unconventional, as cancellation errors typically signal user-initiated or system-level cancellation rather than a successful no-op state. Callers might reasonably interpret `cancelled` as meaning the user aborted the check, not that the app is current.

Consider defining a custom error case (e.g., `SimpleUpdateError.alreadyUpToDate`) or using a successful result type to distinguish "no update needed" from actual cancellation scenarios.

Sources/Fluid/Services/RewriteModeService.swift (1)
9-17: Thinking streaming integration is sound; consider minor extraParameters + Task timing tweaks

- The new `streamingThinkingText`/`thinkingBuffer` wiring, `clearState()` reset, and streaming callbacks are consistent with CommandMode and look correct from a UI/state perspective.
- `extraParams` is always passed (possibly empty) to `LLMClient.Config.extraParameters`. If you want to distinguish "no extra params" from "empty dict", you could pass `extraParams.isEmpty ? nil : extraParams` instead; this also mirrors patterns suggested elsewhere.
- The callbacks wrap updates in `Task { @MainActor in ... }`, while `callLLM` clears `streamingThinkingText`/`thinkingBuffer` after `await LLMClient.shared.call(config)`. There's a small chance of a late chunk Task running after the final clear; if this ever shows up as "ghost" thinking tokens, consider either:
  - Doing UI updates directly on the main actor (marking the class or `callLLM` appropriately) instead of spawning extra Tasks, or
  - Clearing state inside a final callback (or in `LLMClient` after all chunks have been delivered).

These are refinements; current behavior is acceptable.
Sources/Fluid/Services/CommandModeService.swift (3)
60-67: Thinking metadata is well‑integrated; note that it isn’t persisted
- Adding
thinking: String?toMessageandLLMResponse, plus surfacing it inprocessNextTurnandagentMessageView, gives a clear, display‑only view of reasoning tokens without affecting tool calls or control flow.- The conversion helpers
messageToChatMessage/chatMessageToMessagedo not map the newthinkingfield, so thinking content is lost when chats are saved and reloaded. If that’s intentional (thinking as ephemeral UI), this is fine; if you want thinking to survive app restarts, you’ll need to extendChatMessage+ persistence accordingly.Also applies to: 241-276, 639-642, 760-762, 783-840
803-835: LLMClient integration + streaming/thinking handling looks correct; tighten extraParameters and finalization
- Using
settings.isReasoningModel(model)andgetReasoningConfig(forModel:provider:)to drivetemperatureandextraParamsis consistent with the rest of the app.- Similar to RewriteModeService,
extraParamsis always passed (even when empty). If an empty dict has different semantics for a given provider thannil, considerextraParams.isEmpty ? nil : extraParamswhen constructingLLMClient.Config.- Streaming buffers (
streamingBuffer,thinkingBuffer) plus 60 fps throttling for both content and thinking are a good balance between smoothness and performance.- You capture
finalThinkingfromresponse.thinking ?? joinedBufferafter a 50 ms delay, then clear both streaming states. This should usually flush all pending chunk Tasks, but it’s still possible (though unlikely) that a late Task runs after the clear and briefly repopulatesstreamingThinkingText. If that ever shows up visually, you could:
- Avoid creating extra Tasks and instead update directly on the main actor when
callLLMitself is main‑actor‑isolated, or- Move the “final clear” into a final callback in
LLMClientonce it knows the stream is fully done.Overall the flow and tool‑call mapping (
execute_terminal_command) are solid.Also applies to: 817-883, 893-941
326-371: Thinking UI/shimmer components are coherent; minor cleanup possible
- The new
thinkingViewin the chat area, with header shimmer (ThinkingShimmerLabel) and preview/expanded modes, wires cleanly intostreamingThinkingTextandshowThinkingTokens. Auto‑collapsing on new requests (isThinkingExpanded = false) is a nice touch.- In
ThinkingShimmerLabel,sparkleOpacityis initialized with 5 values but you only ever index 0..<3; either reduce the array length or use all entries for clarity.These are cosmetic; behavior is fine.
Also applies to: 669-723
Sources/Fluid/Views/RewriteModeView.swift (1)
15-19: Thinking view wiring is correct; consider reset behavior
- The
thinkingViewis correctly shown only whenservice.isProcessing,settings.showThinkingTokens, andstreamingThinkingTextare all satisfied; this matches the service’s new streaming fields.- The expandable thinking UI (full scroll vs 100‑char preview) is consistent with CommandMode’s UX.
- If you want behavior parity with CommandMode, you might also collapse
isThinkingExpandedautomatically when a new rewrite request starts (e.g., onisProcessingtoggling from false→true), but that’s a UX choice rather than a correctness issue.Also applies to: 224-229, 428-474
Sources/Fluid/Views/CommandModeView.swift (1)
669-723: ThinkingShimmerLabel animation is fine; trim unused opacity entriesThe shimmer and sparkle effects are lightweight and visually aligned with the rest of the UI. You can optionally shrink
sparkleOpacityto 3 elements to match theForEach(0..<3, …)usage.Sources/Fluid/ContentView.swift (1)
1177-1288: processTextWithAI: good live‑settings integration; guard against empty modelStrengths:
- Reads provider, models, API keys, and saved providers directly from
SettingsStoreeach time, so AISettings changes take effect immediately.- Correctly routes Apple Intelligence, distinguishes local endpoints (no API key required), and constructs a clean system prompt with contextual app info.
- Applies basic reasoning handling via
isReasoningModelandgetReasoningConfig, buildingextraParamsforLLMClient.Config.Two refinements to consider:
Handle missing model more explicitly
IfderivedSelectedModelends up empty (e.g., misconfigured custom provider with no models), the request goes out with an emptymodelstring, leading to a server‑side error. Safer would be:
- Either default to a known model for built‑ins and bail for customs, e.g.:
guard !derivedSelectedModel.trimmingCharacters(in: .whitespacesAndNewlines).isEmpty else { return "Error: No model selected for provider \(derivedCurrentProvider)" }
- Or inject a sensible default when
currentSelectedProviderIDis a known provider.Align extraParameters handling with other services
As in the services, you might preferextraParameters: extraParams.isEmpty ? nil : extraParamswhen buildingLLMClient.Configto clearly signal “no extras” vs “empty dict”.Functionality is otherwise solid.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
.swiftlint.yml(1 hunks)Sources/Fluid/ContentView.swift(28 hunks)Sources/Fluid/Networking/FunctionCallingProvider.swift(2 hunks)Sources/Fluid/Networking/ModelDownloader.swift(5 hunks)Sources/Fluid/Services/CommandModeService.swift(11 hunks)Sources/Fluid/Services/LLMClient.swift(1 hunks)Sources/Fluid/Services/MenuBarManager.swift(2 hunks)Sources/Fluid/Services/RewriteModeService.swift(5 hunks)Sources/Fluid/Services/SimpleUpdater.swift(1 hunks)Sources/Fluid/Services/ThinkingParsers.swift(1 hunks)Sources/Fluid/Services/WhisperProvider.swift(4 hunks)Sources/Fluid/UI/AISettingsView.swift(27 hunks)Sources/Fluid/UI/MeetingTranscriptionView.swift(1 hunks)Sources/Fluid/Views/CommandModeView.swift(10 hunks)Sources/Fluid/Views/RewriteModeView.swift(4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- Sources/Fluid/Services/WhisperProvider.swift
🧰 Additional context used
🧬 Code graph analysis (6)
Sources/Fluid/ContentView.swift (4)
- Sources/Fluid/Services/DebugLogger.swift (3): debug (151-153), info (139-141), error (147-149)
- Sources/Fluid/Services/LLMClient.swift (2): isLocalEndpoint (546-584), call (123-151)
- Sources/Fluid/Networking/AIProvider.swift (1): isLocalEndpoint (26-62)
- Sources/Fluid/Persistence/SettingsStore.swift (2): isReasoningModel (584-593), getReasoningConfig (517-550)
Sources/Fluid/Networking/FunctionCallingProvider.swift (1)
- Sources/Fluid/Services/DebugLogger.swift (1): info (139-141)
Sources/Fluid/Services/ThinkingParsers.swift (1)
- Sources/Fluid/Services/DebugLogger.swift (1): debug (151-153)
Sources/Fluid/UI/AISettingsView.swift (5)
- Sources/Fluid/Views/CommandModeView.swift (1): defaultModels (627-633)
- Sources/Fluid/Views/RewriteModeView.swift (1): defaultModels (316-323)
- Sources/Fluid/Persistence/SettingsStore.swift (2): isReasoningModel (584-593), getReasoningConfig (517-550)
- Sources/Fluid/Services/LLMClient.swift (1): isLocalEndpoint (546-584)
- Sources/Fluid/Networking/AIProvider.swift (1): isLocalEndpoint (26-62)
Sources/Fluid/Services/LLMClient.swift (7)
- Sources/Fluid/ContentView.swift (2): isLocalEndpoint (1135-1171), body (1880-1885)
- Sources/Fluid/Networking/FunctionCallingProvider.swift (1): isLocalEndpoint (147-177)
- Sources/Fluid/UI/AISettingsView.swift (1): isLocalEndpoint (409-421)
- Sources/Fluid/Networking/AIProvider.swift (1): isLocalEndpoint (26-62)
- Sources/Fluid/UI/GlossyEffects.swift (1): body (60-73)
- Sources/Fluid/Services/ThinkingParsers.swift (2): getExtraParameters (71-95), createParser (43-67)
- Sources/Fluid/Persistence/SettingsStore.swift (1): isReasoningModel (584-593)
Sources/Fluid/Services/CommandModeService.swift (4)
- Sources/Fluid/Services/ThreadSafeAudioBuffer.swift (1): append (10-14)
- Sources/Fluid/Views/NotchContentViews.swift (3): addCommandMessage (134-137), setCommandProcessing (140-142), updateCommandStreamingText (129-131)
- Sources/Fluid/Persistence/SettingsStore.swift (2): isReasoningModel (584-593), getReasoningConfig (517-550)
- Sources/Fluid/Services/LLMClient.swift (1): call (123-151)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build FluidVoice
🔇 Additional comments (35)
.swiftlint.yml (1)
33-33: LGTM! The addition of the scratch exclusion is appropriate for preventing linting of temporary or experimental files, and aligns with the broader repository changes.
Sources/Fluid/Services/SimpleUpdater.swift (1)
221-224: Force unwrapping issue resolved. The closure-based guard-let binding successfully addresses the critical SwiftLint force_unwrapping violations flagged in the previous review. The logic is preserved and the implementation is now safe and idiomatic.
Sources/Fluid/Networking/FunctionCallingProvider.swift (2)
23-27: ChatMessage init & CodingKeys change are behavior-preserving. Collapsing CodingKeys and the initializer signature is purely stylistic; serialization and defaults remain unchanged and safe.
188-225: LLM request construction and logging look consistent. Endpoint normalization, local endpoint detection, reasoning-model temperature handling, and the new concise info log are coherent and match other services. No functional issues spotted here.
Sources/Fluid/Services/RewriteModeService.swift (1)
80-81: Follow-up rewrite prompt is explicit and consistent. The follow-up instruction message clearly scopes the update to the previous result and constrains the model to output only updated text, which should reduce drift.
Sources/Fluid/Services/CommandModeService.swift (1)
523-526: Shimmer gradient for CommandShimmerText is lightweight and appropriate. The inline linear gradient based on shimmerPhase is simple and runs at a reasonable 1.2 s loop; no obvious performance or correctness concerns.
Sources/Fluid/Services/ThinkingParsers.swift (1)
5-96: Thinking parser design is solid; buffer/window choices look sane
- The ThinkingParser protocol plus ThinkingParserState and ThinkingParserFactory give a clear, extensible abstraction for model-specific thinking patterns.
- Both StandardThinkingParser and NemoThinkingParser correctly limit tagBuffer to a small trailing window (~15 chars), preventing unbounded memory growth while still allowing partial tag detection.
- Nemo's finalize behavior (treat everything as content if no closing tag is seen and finalState == .inThinking) is a good fallback for "no-thinking" responses.
- NoThinkingParser correctly funnels all text into content.
I don't see correctness issues here; this should integrate cleanly with LLMClient's streaming path.
Also applies to: 100-181, 186-266, 271-285
Sources/Fluid/Networking/ModelDownloader.swift (2)
31-52: URL initialization fix removes force-unwraps without changing behavior. Switching to guarded base URLs with preconditionFailure for the hard-coded Hugging Face endpoints is aligned with SwiftLint and keeps the same practical behavior (these URLs are constants). Looks good.
Also applies to: 59-80
145-146: Download flow and progress logging remain correct
- The combined overall progress print (with human‑readable bytes) is a small clarity win.
- Directory creation before moving files and in recursive download remains correct and robust.
- URLSession delegate callbacks still call the continuation exactly once (finish vs error), preserving correctness.
No issues found here.
Also applies to: 171-177, 190-191, 216-229
Sources/Fluid/Views/CommandModeView.swift (4)
274-276: Command chat thinking view is well‑gated and performant
- The top-level thinkingView appears only while processing, when showThinkingTokens is enabled and streamingThinkingText is non-empty, matching the service logic.
- Collapsing isThinkingExpanded when a new turn starts keeps the UI tidy.
- 60 fps throttling in the underlying service plus modest maxHeight (200) should keep scrolling and layout overhead reasonable.
No functional issues here.
Also applies to: 285-301, 324-371
523-526: Model picker binding default is reasonable. Binding commandModeSelectedModel with a getter that falls back to the first available model or "gpt-4o" avoids nils and keeps selection stable across provider changes.
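The fallback-getter pattern described here can be sketched as a SwiftUI binding. All names below are hypothetical; only the "first available model, then gpt-4o" fallback comes from the review:

```swift
import SwiftUI

// Hypothetical sketch of a model picker that never hands SwiftUI an empty
// selection: the getter substitutes a sensible default, the setter persists
// whatever the user actually picks.
struct ModelPickerSketch: View {
    @AppStorage("commandModeSelectedModel") private var selectedModel = ""
    let availableModels: [String]

    private var modelBinding: Binding<String> {
        Binding(
            get: {
                selectedModel.isEmpty
                    ? (availableModels.first ?? "gpt-4o")
                    : selectedModel
            },
            set: { selectedModel = $0 }
        )
    }

    var body: some View {
        Picker("Model", selection: modelBinding) {
            ForEach(availableModels, id: \.self) { Text($0) }
        }
    }
}
```

Because the fallback lives in the getter rather than in stored state, a provider switch that empties the stored value degrades gracefully instead of leaving the Picker with a selection that matches no row.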
729-731: MessageBubble thinking section is coherent; note per‑message state
- The new thinkingSection presents stored thinking text compactly with a per-message isThinkingExpanded toggle, which is good for long traces.
- Expansion state is local to each bubble, which is generally desirable; just be aware that it resets when the view is re-created (e.g., on reload or navigation).
Everything else here looks correct.
Also applies to: 760-762, 783-840
470-473: Pending command highlight overlay is a nice, low-risk UX improvement. The orange stroke overlay around the pending command block clearly emphasizes the confirmation area without impacting behavior.
Sources/Fluid/ContentView.swift (11)
Sources/Fluid/ContentView.swift (11)
125-137: NavigationSplitView extraction intosplitViewis a clean refactorWrapping the split view in a local
AnyViewbefore applying environment/observers doesn’t change behavior and keeps the body more readable.
181-199: Audio device initialization deferral and default selection are safeDeferring audio init by 1.5 s and only setting
selectedInputUID/selectedOutputUIDwhen empty avoids the earlier CoreAudio race while respecting user‑chosen devices. Logic for restoring saved preferences without hijacking system routing looks correct.
307-456: Global hotkey event monitoring logs are targeted; hot paths stay leanThe added
DebugLogger.debugcalls are guarded by specific conditions (e.g., during shortcut recording or actual hotkey matches). Regular key events still short‑circuit without logging, so this shouldn’t materially affect performance, while giving you much better observability when needed.
562-608: Centralized mode transition handler is correct and reduces leaksThe new
handleModeTransitioncorrectly:
- Closes the expanded command notch when leaving command mode.
- Clears rewrite state when leaving rewrite mode.
- Sets the menu bar overlay mode appropriately for command, write/rewrite, or other views.
This should prevent stale UI from previous modes without wiping command history.
779-806: Microphone permission UI changes are cosmetic and accurateUsing themed colors in
microphonePermissionViewfor authorized vs non‑authorized states improves clarity with no behavior change.
1225-1288: Local endpoint + non‑streaming transcription path is reasonable
- Skipping API key validation for local endpoints and gating remote calls on
enableAIProcessing && (isLocal || !apiKey.isEmpty)correctly supports self‑hosted endpoints.- For transcription cleanup you explicitly disable streaming and rely on
LLMClient’s non‑streaming path, which keeps this flow simple and robust (thinking tokens are still extracted but unused here).No correctness issues here.
1313-1415: stopAndProcessTranscription: mode routing and AI gating are correct
- The function now cleanly distinguishes three flows: rewrite, command, and plain transcription, based on
isRecordingForRewrite/isRecordingForCommand.- AI post‑processing is only used when enabled and either a local endpoint is configured or a non‑empty API key is present, avoiding “silent” attempts with missing keys.
- The final typing decision (external vs in‑app) is well‑logged and uses frontmost app +
isTranscriptionFocused, which will help debug focus issues.Behavior looks correct end‑to‑end.
1419-1492: Rewrite/command voice flows and app‑context capture are well‑structured
processRewriteWithVoiceInstruction and processCommandWithVoice both wrap their service calls with menu-bar processing indicators and robust logging. startRecording() now captures app context once at recording start and logs it, preventing context drift if the user switches apps mid-session. These changes should improve UX and debuggability without introducing regressions.
1675-1805: Hotkey manager initialization + cancel callback are hardened
initializeHotkeyManagerIfNeeded wires command/rewrite callbacks to the existing voice flows and logs key transitions ("Command mode triggered", "Rewrite mode triggered…").
- The cancel callback now consistently:
- Closes the expanded notch.
- Resets mode flags and overlay modes.
- Closes mode views when open.
Combined with delayed health checks and optional reinitialization, this should make hotkey behavior more robust under flaky accessibility conditions.
1814-1831: hasReasoningConfigForCurrentModel mirrors SettingsStore logic correctly. The helper reuses providerKey and getReasoningConfig / hasCustomReasoningConfig and falls back to the same pattern matching as SettingsStore.isReasoningModel, which keeps UI indicators aligned with actual reasoning behavior.
1876-1885: CardAppearAnimation remains lightweight. Combining scale/opacity effects with a single spring animation keyed on appear is straightforward and performant; no issues here.
Sources/Fluid/Services/LLMClient.swift (7)
1-36: LGTM: Well-structured error handling and singleton setup. The error types comprehensively cover network, HTTP, encoding, and response validation failures. The singleton pattern with @MainActor ensures thread-safe access for UI-bound operations.
40-64: Past lint issue resolved: Response and ToolCall types look good. The toolCalls property is now a non-optional array (addressing the previous discouraged_optional_collection violation). The convenience methods on ToolCall provide a clean API for safe argument access.
68-116: Past lint issues resolved: Config struct is well-designed. The tools and extraParameters properties are now non-optional collections with proper defaults (addressing previous discouraged_optional_collection violations). The config structure comprehensively supports retry logic, streaming callbacks, and model-specific parameters.
123-151: LGTM: Robust retry logic with exponential backoff. The retry implementation correctly distinguishes transient URLErrors and applies exponential backoff. Non-retryable errors are thrown immediately, avoiding unnecessary delays.
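As a rough illustration of the retry shape praised here, a generic sketch follows. The helper name, delays, and attempt count are invented for illustration; LLMClient's actual implementation may differ:

```swift
import Foundation

// Generic retry helper: retries only errors the caller marks transient,
// sleeping 0.5 s, 1 s, 2 s, ... between attempts (exponential backoff).
// Non-retryable errors, and the final failure, propagate immediately.
func withRetry<T>(
    maxAttempts: Int = 3,
    baseDelay: TimeInterval = 0.5,
    isTransient: (Error) -> Bool,
    operation: () throws -> T
) throws -> T {
    var attempt = 0
    while true {
        do {
            return try operation()
        } catch {
            attempt += 1
            guard attempt < maxAttempts, isTransient(error) else { throw error }
            // Backoff doubles on each failed attempt: base * 2^(attempt-1).
            Thread.sleep(forTimeInterval: baseDelay * pow(2.0, Double(attempt - 1)))
        }
    }
}
```

In a URLSession context, a URLError would typically count as transient only for codes like .timedOut or .networkConnectionLost, while HTTP 4xx-style failures should not be retried.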
155-246: LGTM: Well-structured request building with layered parameters. The request construction properly layers model-specific parameters (from ThinkingParserFactory), user-provided parameters, and common parameters. The conditional authorization header for non-local endpoints is correct.
274-435: Past lint issue resolved: Streaming implementation is comprehensive. The parsedToolCalls variable is now a non-optional array (line 413, addressing the previous violation). The streaming state machine properly handles thinking/content transitions, model-specific reasoning fields, and real-time callbacks. The finalization logic correctly flushes any remaining buffer content.
477-526: LGTM: Comprehensive thinking tag extraction. The implementation handles both proper <think>...</think> pairs and orphan closing tags (content before </think> without an opening tag). The cleanup of stray tags ensures clean output.
Sources/Fluid/UI/AISettingsView.swift (4)
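The two cases called out above (a proper pair and an orphan closing tag) can be illustrated with a standalone sketch. The function name is hypothetical, and the real extractor in LLMClient.swift also scrubs stray tags:

```swift
import Foundation

// Splits a completed response into (thinking, content), handling both a full
// <think>...</think> pair and an orphan </think> with no opening tag.
func splitThinking(_ text: String) -> (thinking: String?, content: String) {
    func trimmed(_ s: Substring) -> String {
        s.trimmingCharacters(in: .whitespacesAndNewlines)
    }
    if let open = text.range(of: "<think>"),
       let close = text.range(of: "</think>", range: open.upperBound..<text.endIndex) {
        // Proper pair: thinking sits between the tags, content is the rest.
        let thinking = trimmed(text[open.upperBound..<close.lowerBound])
        let content = trimmed(text[..<open.lowerBound]) + trimmed(text[close.upperBound...])
        return (thinking.isEmpty ? nil : thinking, content)
    }
    if let close = text.range(of: "</think>") {
        // Orphan closing tag: everything before it was thinking that the model
        // emitted without an opening tag.
        let thinking = trimmed(text[..<close.lowerBound])
        return (thinking.isEmpty ? nil : thinking, trimmed(text[close.upperBound...]))
    }
    return (nil, text)
}
```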
154-179: LGTM: Critical initialization order fix. Setting openAIBaseURL before calling updateCurrentProvider() (lines 157-169) ensures the provider is correctly detected from the URL. This prevents initialization bugs where the provider/model mapping would be incorrect on settings load. The debug logging at line 178 aids troubleshooting.
812-834: LGTM: Clean integration of "Show Thinking Tokens" toggle. The new toggle follows the established pattern, properly binds to SettingsStore, and is conditionally displayed when AI processing is enabled. The description clearly explains the feature's scope (Command and Rewrite modes).
1218-1274: Past lint issue resolved: Reasoning config UI expanded with custom parameter support. The compact if/else blocks are now properly formatted across multiple lines (lines 1221-1227), addressing the previous statement_position violation. The new custom parameter UI (lines 1247-1255, 1269-1274) provides flexibility for model-specific reasoning parameters beyond the presets.
604-618: LGTM: Improved test connection with centralized helpers. The debug logging (lines 604-606) aids troubleshooting, and the use of centralized getReasoningConfig and isReasoningModel helpers ensures consistent reasoning model detection across the codebase.
private func isLocalEndpoint(_ urlString: String) -> Bool {
    guard let url = URL(string: urlString),
          let host = url.host else { return false }

    let hostLower = host.lowercased()

    // Localhost
    if hostLower == "localhost" || hostLower == "127.0.0.1" {
        return true
    }

    // 127.x.x.x
    if hostLower.hasPrefix("127.") {
        return true
    }

    // 10.x.x.x (Private Class A)
    if hostLower.hasPrefix("10.") {
        return true
    }

    // 192.168.x.x (Private Class C)
    if hostLower.hasPrefix("192.168.") {
        return true
    }

    // 172.16.x.x - 172.31.x.x (Private Class B)
    if hostLower.hasPrefix("172.") {
        let components = hostLower.split(separator: ".")
        if components.count >= 2,
           let secondOctet = Int(components[1]),
           secondOctet >= 16 && secondOctet <= 31
        {
            return true
        }
    }

    return false
}
🛠️ Refactor suggestion | 🟠 Major
Extract isLocalEndpoint to shared utility to eliminate duplication.
This method is duplicated across at least 5 files (AIProvider.swift, FunctionCallingProvider.swift, ContentView.swift, AISettingsView.swift, and here). Extract it to a shared utility class or extension to maintain DRY principles and ensure consistent behavior.
🔎 Suggested refactor
Create a shared utility file (e.g., Sources/Fluid/Utilities/NetworkHelpers.swift):
import Foundation
enum NetworkHelpers {
/// Check if a URL is a local/private endpoint
static func isLocalEndpoint(_ urlString: String) -> Bool {
guard let url = URL(string: urlString),
let host = url.host else { return false }
let hostLower = host.lowercased()
// Localhost
if hostLower == "localhost" || hostLower == "127.0.0.1" {
return true
}
// 127.x.x.x
if hostLower.hasPrefix("127.") {
return true
}
// 10.x.x.x (Private Class A)
if hostLower.hasPrefix("10.") {
return true
}
// 192.168.x.x (Private Class C)
if hostLower.hasPrefix("192.168.") {
return true
}
// 172.16.x.x - 172.31.x.x (Private Class B)
if hostLower.hasPrefix("172.") {
let components = hostLower.split(separator: ".")
if components.count >= 2,
let secondOctet = Int(components[1]),
secondOctet >= 16 && secondOctet <= 31
{
return true
}
}
return false
}
}Then replace all instances with NetworkHelpers.isLocalEndpoint(...) calls.
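As a usage note, a few quick sanity checks against the proposed helper (assuming the NetworkHelpers sketch above is adopted as-is):

```swift
// Spot-check the proposed shared helper against common endpoint shapes.
assert(NetworkHelpers.isLocalEndpoint("http://localhost:11434/v1"))   // Ollama-style local
assert(NetworkHelpers.isLocalEndpoint("http://192.168.1.10:8080/v1")) // LAN host
assert(NetworkHelpers.isLocalEndpoint("http://172.20.0.1/v1"))        // inside 172.16-31
assert(!NetworkHelpers.isLocalEndpoint("http://172.40.0.1/v1"))       // outside that range
assert(!NetworkHelpers.isLocalEndpoint("https://api.openai.com/v1"))  // remote provider
```

Keeping a check like this next to the extracted helper also guards against the five call sites silently diverging again.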
🤖 Prompt for AI Agents
In Sources/Fluid/Services/LLMClient.swift around lines 546–584 the
isLocalEndpoint(_:) implementation is duplicated across multiple files; extract
this logic into a shared utility (e.g.,
Sources/Fluid/Utilities/NetworkHelpers.swift) exposing a static function (e.g.,
NetworkHelpers.isLocalEndpoint(_:)) and replace the local implementation in this
file and the other files (AIProvider.swift, FunctionCallingProvider.swift,
ContentView.swift, AISettingsView.swift) with calls to that shared function;
ensure the new utility imports Foundation and preserves the exact host-parsing
behavior and tests compile after replacing all usages.
Description
Brief description of what this PR does.
Type of Change
Related Issues
Closes #(issue number)
#58
Testing
brew install swiftlint && swiftlint --strict --config .swiftlint.yml
brew install swiftformat && swiftformat --config .swiftformat Sources
Screenshots / Video
Add screenshots or a video recording of the app after you have made your changes
Summary by CodeRabbit
New Features
Bug Fixes
Updates
✏️ Tip: You can customize this high-level summary in your review settings.