[fal.ai] LoRA loading fails with Windows local path sent to Linux cloud worker #770

@livepeer-tessa

Description

Summary

When a user configures LoRA models on a Windows machine, the local Windows file paths (e.g. C:\Users\RONDO\.daydream-scope\models\lora\...) are being passed directly to the Linux fal.ai cloud worker, causing pipeline load failure.

Error

2026-03-31 00:54:40,475 - scope.server.pipeline_manager - ERROR - [e605847d] Failed to load pipeline longlive: LongLivePipeline.__init__: LoRA loading failed. File not found: C:\Users\RONDO\.daydream-scope\models\lora\Wan2.1-1.3b-lora-highresfix-v1_new.safetensors. Ensure the file exists in the models/lora/ directory.. If this error persists, consider removing the models directory '/data/models' and re-downloading models.
2026-03-31 00:54:40,476 - scope.server.pipeline_manager - ERROR - [e605847d] Failed to load pipeline: longlive
2026-03-31 00:54:40,980 - scope.server.pipeline_manager - ERROR - [e605847d] Some pipelines failed to load

The initial load params sent to the worker confirm the Windows path is being forwarded verbatim:

'loras': [
  {'path': 'C:\\Users\\RONDO\\.daydream-scope\\models\\lora\\Wan2.1-1.3b-lora-highresfix-v1_new.safetensors', 'scale': 0.7, 'merge_mode': 'permanent_merge'},
  {'path': 'C:\\Users\\RONDO\\.daydream-scope\\models\\lora\\daydream-scope-dissolve.safetensors', 'scale': 1.5, 'merge_mode': 'permanent_merge'}
]

App: github_f1lhgmk5v76a0ev1w0u378by-scope-app--prod (prod)
Session: e605847d
Timestamp: 2026-03-31 ~00:54 UTC

Root Cause

The client is sending absolute local file paths for LoRAs directly to the cloud runner. On Windows these are C:\... paths that don't exist on the Linux worker. Only LoRAs uploaded to or available in the cloud model directory (/data/models/lora/) should be referenced.
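A minimal server-side guard could reject such paths before any load is attempted. The sketch below is illustrative only: the helper name is hypothetical, and the `/data/models/` root is taken from the error message above, not from a known scope constant.

```python
from pathlib import PureWindowsPath, PurePosixPath

CLOUD_MODEL_ROOT = "/data/models/"  # cloud model directory, per the error log

def is_client_local_path(path: str) -> bool:
    """Return True for absolute paths that only exist on the client machine.

    Hypothetical helper: Windows drive-letter/UNC paths are always
    client-local, and POSIX absolute paths are client-local unless they
    live under the cloud model root.
    """
    if PureWindowsPath(path).is_absolute():  # e.g. C:\Users\... or \\server\share
        return True
    return PurePosixPath(path).is_absolute() and not path.startswith(CLOUD_MODEL_ROOT)
```

The server could run this check on every `loras[].path` entry in the load params and fail fast with an error naming the offending path, instead of surfacing a confusing "File not found" from the worker.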

Expected Behavior

The fix could take one of the following forms:

  1. The server rejects local absolute paths (especially Windows paths) with a user-friendly error before attempting to load.
  2. The server translates them to relative model names that the cloud worker can resolve.
  3. The frontend stops sending local-only paths to the cloud runner entirely; only cloud-available model references are forwarded.
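Option 2 above could be sketched as a simple basename mapping, assuming a matching file is already present under the cloud LoRA directory; the function name and the directory constant are assumptions, not existing scope APIs:

```python
from pathlib import PureWindowsPath

CLOUD_LORA_DIR = "/data/models/lora"  # assumed cloud LoRA directory (from the error log)

def to_cloud_lora_path(local_path: str) -> str:
    """Map a client-local LoRA path to its expected cloud location (sketch).

    PureWindowsPath parses both '\\' and '/' separators, so this handles
    paths sent from either platform. It only rewrites the path string; the
    file itself must already exist on the cloud worker.
    """
    return f"{CLOUD_LORA_DIR}/{PureWindowsPath(local_path).name}"
```

Translation by basename is lossy (two local files with the same name collide), so rejecting unknown paths with a clear error may still be the safer default.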

Impact

  • Severity: Medium — pipeline fails to load entirely, user gets no AI output
  • Frequency: 2 occurrences in this 12h window (same session)
  • Affects Windows users with locally-configured LoRA models who try to use remote inference

Metadata

Labels

bug: Something isn't working
