In case of an infeasible problem, the StorageScheduler falls back to using the fallback_charging_policy, while schedule data is still being attributed to the StorageScheduler. It would help debugging to be able to visually distinguish schedule data originating from one or the other, by having fallback schedules saved with their own distinct source.
I propose we refactor fallback_charging_policy into its own FallbackStorageScheduler, and consider how to trigger it. For example:
- We could let the original scheduling job fail, and upon failure set up a fallback scheduling job. This would then also require something clever for retrieving schedules (getting a schedule using the ID of the failed job should result in obtaining the fallback job's schedule).
- Alternatively, we compute the fallback schedule within the same job (thereby dispensing with the issue of scheduling job IDs). We let the `StorageScheduler.compute` method fail, and trigger the fallback within the `services.scheduling.make_schedule` method.
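The second option could be sketched roughly as follows. This is only an illustration, not the actual FlexMeasures code: the exception class, the schedulers' internals, and the shape of the returned schedule data are all hypothetical stand-ins.

```python
class InfeasibleProblemError(Exception):
    """Hypothetical exception raised when the optimization problem is infeasible."""


class StorageScheduler:
    def compute(self) -> dict:
        # Stand-in for the real solver call; here we pretend the problem
        # turned out to be infeasible, so the scheduler raises.
        raise InfeasibleProblemError("storage scheduling problem is infeasible")


class FallbackStorageScheduler:
    """Hypothetical refactoring of fallback_charging_policy into its own scheduler."""

    def compute(self) -> dict:
        # The fallback schedule is saved with its own distinct source,
        # so it can be visually distinguished from regular schedule data.
        return {"source": "FallbackStorageScheduler", "schedule": [0.0] * 24}


def make_schedule() -> dict:
    """Compute the fallback within the same job, as proposed in the second option."""
    try:
        return StorageScheduler().compute()
    except InfeasibleProblemError:
        return FallbackStorageScheduler().compute()
```

Because the fallback runs inside the same job, the original job ID still resolves to a (fallback) schedule, and the distinct `source` field records which scheduler produced it.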