Description
Originally posted by @coderabbitai[bot] in #781 (comment)
🧩 Analysis chain
Potential race condition with metadata rendering
The commented TODO correctly identifies a concurrency issue: modifying m.fileMetaData in both this function and a separate goroutine could lead to race conditions.
Consider using a mutex to protect access to the shared state, or better yet, implementing a channel-based approach where the goroutine sends completed metadata back to the main thread:
```diff
// Add to struct:
+type metadataUpdate struct {
+    metadata [][2]string
+}
+metadataUpdateCh chan metadataUpdate

// In initialization:
+m.metadataUpdateCh = make(chan metadataUpdate)

// In goroutine:
+go func() {
+    metadata := m.generateMetaData() // Get metadata
+    m.metadataUpdateCh <- metadataUpdate{metadata: metadata}
+}()

// In update loop:
+case update := <-m.metadataUpdateCh:
+    m.fileMetaData.metaData = update.metadata
```

Let's verify if this is truly an issue:
🏁 Script executed:
```shell
#!/bin/bash
# Check for potential race conditions in metadata handling
echo "Looking for concurrent access to fileMetaData..."
rg -A 5 "fileMetaData" | grep -v "\/\/"
```

Length of output: 9660
🏁 Script executed:
```shell
#!/bin/bash
# Inspect the definition and implementation of returnMetaData
rg -n "func (m .*returnMetaData" -A 20 src/internal

# Inspect the structure of fileMetadata to see which fields are mutated
rg -n "type fileMetadata" -A 20 src/internal/type.go
```

Length of output: 610
🏁 Script executed:
```shell
#!/bin/bash
# Locate and inspect the implementation of returnMetaData
rg -n "returnMetaData" -A 50 src/internal
```

Length of output: 13455
Isolate metadata slice mutations to the main update loop
The returnMetaData goroutine and metadataRender both write to m.fileMetaData.metaData without synchronization, which races with the view and with other concurrent returnMetaData calls. Instead of mutating the shared slice directly, collect metadata in a local buffer and send it through the existing channelMessage; then update m.fileMetaData.metaData only inside the model's Update handler for sendMetadata.
Affected locations:
- src/internal/function.go (func (m *model) returnMetaData)
- src/internal/model_render.go (func (m *model) metadataRender)
- src/internal/model.go (Update case for sendMetadata)
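
For orientation, the diffs below lean on the existing channel plumbing. The sketch here is a rough reconstruction of its shape, inferred only from how channelMessage, sendMetadata, and channel are used in the snippets of this comment; the actual definitions in src/internal/type.go may differ.

```go
// Hypothetical reconstruction of the message plumbing assumed by the diffs
// below; names and field layout are inferred from usage, not copied from
// the real codebase.
type channelMessageType int

const (
	sendMetadata channelMessageType = iota
	// ... other message types handled elsewhere in Update ...
)

type channelMessage struct {
	messageType channelMessageType
	metadata    [][2]string
}

// Package-level channel that worker goroutines use to report back to the
// Update loop; the buffer size here is illustrative.
var channel = make(chan channelMessage, 100)
```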
Proposed diff in returnMetaData (src/internal/function.go):
```diff
 func (m *model) returnMetaData() {
 	panel := m.fileModel.filePanels[m.filePanelFocusIndex]
-	m.fileMetaData.metaData = m.fileMetaData.metaData[:0]
+	// Build metadata in a local slice to avoid racing on the shared state
+	metadata := make([][2]string, 0, len(panel.element))
 	// … append items into metadata instead of m.fileMetaData.metaData …
-	message := channelMessage{
-		messageType: sendMetadata,
-		metadata:    m.fileMetaData.metaData,
-	}
+	message := channelMessage{
+		messageType: sendMetadata,
+		metadata:    metadata,
+	}
 	channel <- message
 }
```

In metadataRender (src/internal/model_render.go), only append the “Loading metadata…” indicator to a local buffer or rely on the empty state in the view; avoid doing m.fileMetaData.metaData = append(...) here. All assignments to m.fileMetaData.metaData should happen in the Update handler:
```diff
 // In Update (src/internal/model.go):
 case sendMetadata:
-	m.fileMetaData.metaData = msg.metadata
+	m.fileMetaData.metaData = append(([][2]string)(nil), msg.metadata...)
```

With this change, you confine all shared-state writes to the single-threaded Update cycle and eliminate the race. Alternatively, you can guard fileMetadata with a sync.RWMutex, but the local-slice approach fits the existing channel-based design.
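
To make the recommended flow concrete end to end, here is a minimal, dependency-free sketch of the pattern. The identifiers (metadataMsg, loadMetadata, msgCh) are illustrative, not superfile's actual names: the worker goroutine only ever touches a local slice, and the event loop, standing in for Bubble Tea's Update, is the sole writer of the shared field.

```go
package main

import (
	"fmt"
	"time"
)

// metadataMsg carries a finished metadata slice from a worker goroutine
// back to the single-threaded event loop (the analogue of sendMetadata).
type metadataMsg struct {
	metadata [][2]string
}

type model struct {
	fileMetaData [][2]string      // written only from the event loop
	msgCh        chan metadataMsg // worker -> event loop
}

// loadMetadata plays the role of returnMetaData: it builds metadata in a
// local slice and sends it over the channel instead of mutating the model.
func (m *model) loadMetadata(path string) {
	go func() {
		metadata := make([][2]string, 0, 2)
		metadata = append(metadata, [2]string{"Name", path})
		metadata = append(metadata, [2]string{"Scanned", time.Now().Format(time.RFC3339)})
		m.msgCh <- metadataMsg{metadata: metadata}
	}()
}

func main() {
	m := &model{msgCh: make(chan metadataMsg)}
	m.loadMetadata("example.txt")

	// Single-threaded "Update" loop: the only place fileMetaData is assigned.
	for msg := range m.msgCh {
		// Copy into a fresh slice so the worker's buffer is never shared.
		m.fileMetaData = append(([][2]string)(nil), msg.metadata...)
		fmt.Println("metadata updated:", m.fileMetaData)
		return // stop after the first update; a real loop keeps running
	}
}
```

In superfile the same property falls out of handling sendMetadata inside Update, since Bubble Tea delivers messages to Update sequentially on one goroutine.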
Committable suggestion skipped: line range outside the PR's diff.
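
For completeness, the sync.RWMutex route mentioned above would look roughly like the sketch below. The package name and field layout are illustrative and mirror the names used in this comment, not the project's actual fileMetadata definition.

```go
package internal

import "sync"

// fileMetadata guarded by an RWMutex: any goroutine may write, and the view
// takes a snapshot to render. Field names are illustrative.
type fileMetadata struct {
	mu       sync.RWMutex
	metaData [][2]string
}

// set replaces the metadata; safe to call from returnMetaData's goroutine.
func (f *fileMetadata) set(metadata [][2]string) {
	f.mu.Lock()
	defer f.mu.Unlock()
	f.metaData = append(f.metaData[:0], metadata...)
}

// snapshot returns a copy so the render path never reads a slice that a
// writer might still be modifying.
func (f *fileMetadata) snapshot() [][2]string {
	f.mu.RLock()
	defer f.mu.RUnlock()
	return append(([][2]string)(nil), f.metaData...)
}
```

The comment's conclusion still applies: confining writes to the Update cycle via the channel keeps the model single-threaded and avoids adding a lock to the render path.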