Basically, the DB integration does something like this:

- report everything is broken
- report nothing is broken
This can be fixed by chunking up the stream by seconds and processing each chunk, comparing it to the chunk before it. If the difference is too high, the chunk can be ignored altogether.
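A minimal sketch of that chunk-and-compare idea, assuming a plain list of timestamped events; the `Event` type and all names here are illustrative, not this repo's actual API:

```haskell
import Data.Function (on)
import Data.List (groupBy)

-- Illustrative event type: when it was observed (in whole seconds)
-- and how many facilities were reported as disrupted at that moment.
data Event = Event { eventSecond :: Int, disruptedCount :: Int }
  deriving (Show)

-- One chunk per second of the (time-ordered) stream.
chunkBySecond :: [Event] -> [[Event]]
chunkBySecond = groupBy ((==) `on` eventSecond)

-- A chunk's disruption level: the highest count it reports.
chunkLevel :: [Event] -> Int
chunkLevel = maximum . map disruptedCount

-- Drop every chunk whose level differs too much from the previously
-- kept chunk; such chunks are assumed to be downtime artifacts.
dropSpikes :: Int -> [[Event]] -> [[Event]]
dropSpikes threshold = go Nothing
  where
    go _ [] = []
    go Nothing (c : cs) = c : go (Just (chunkLevel c)) cs
    go (Just prev) (c : cs)
      | abs (chunkLevel c - prev) > threshold = go (Just prev) cs
      | otherwise = c : go (Just (chunkLevel c)) cs
```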
For this to work, the replaying and persistence of events must move into the source such that this becomes possible. This also enables the usage of `appendMany`, which I assume is faster. As a follow-up, the `Source m a` can then become a Functor and a Monoid, with which the handling later on is simplified.
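A rough guess at what that could look like; the concrete shape of `Source m a` and `appendMany` isn't shown in this thread, so this is only one possible reading with stand-in definitions:

```haskell
-- Stand-in shape: a source is an action producing a batch of events.
newtype Source m a = Source { runSource :: m [a] }

instance Functor m => Functor (Source m) where
  fmap f (Source x) = Source (fmap (map f) x)

-- Concatenating batches makes Source m a a Semigroup/Monoid, which is
-- what would simplify the handling later on.
instance Applicative m => Semigroup (Source m a) where
  Source x <> Source y = Source ((++) <$> x <*> y)

instance Applicative m => Monoid (Source m a) where
  mempty = Source (pure [])

-- With whole batches in hand, persistence can write them in one call;
-- hypothetically: persist src = runSource src >>= appendMany
```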
The first step is to detect when the API is disrupted. To do this, it's possible to compare the number of disrupted facilities and look for spikes: if at first there are about 180 disruptions and suddenly there are 2200, it's a downtime. Similarly when the number decreases from 180 to 0.
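For illustration, the spike check could be as simple as a distance-from-baseline test (numbers taken from the comment above; `isDowntime` is a hypothetical helper, not an existing function):

```haskell
-- True when the reported disruption count is implausibly far from the
-- usual baseline, in either direction: ~2200 ("everything is broken")
-- and 0 ("nothing is broken") both count as downtime.
isDowntime :: Int -> Int -> Int -> Bool
isDowntime baseline tolerance count = abs (count - baseline) > tolerance

-- isDowntime 180 100 2200 == True   -- sudden spike: downtime began
-- isDowntime 180 100 0    == True   -- sudden drop: downtime began
-- isDowntime 180 100 170  == False  -- back around 180: resolved
```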
Once a disruption is detected, monitoring must continue until the disruption is resolved (the disruption count is around 180 again). In this phase, all events are filtered out and kept inside the monitor for later reference.
Then the last known good state and the current state must be compared. The events that occurred are then released based on the model below. It's currently not intended to generate new events on the fly, so the actual ones that have been emitted must be used. If that is impossible, I need to rethink some parts.
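Put together, the monitor phase could be sketched as a small state machine; everything here is an illustrative guess, and `releasable` merely stands in for the comparison/release model described in the next comment:

```haskell
-- While the API is disrupted, events are held back inside the monitor;
-- once the count is back to normal, the buffered events are handed to
-- the release model. No new events are generated; only ones that were
-- actually emitted are re-released.
data Monitor ev = Normal | Disrupted [ev]

step :: (Int -> Bool)       -- downtime predicate on the disruption count
     -> Monitor ev          -- current monitor state
     -> (Int, ev)           -- disruption count and event just observed
     -> (Monitor ev, [ev])  -- next state and events released downstream
step downtime state (count, ev) = case state of
  Normal
    | downtime count -> (Disrupted [ev], [])            -- disruption begins
    | otherwise      -> (Normal, [ev])                  -- pass through
  Disrupted buffered
    | downtime count -> (Disrupted (ev : buffered), []) -- keep filtering
    | otherwise      -> (Normal, releasable (reverse buffered) ++ [ev])

-- Placeholder for comparing the last known good state with the current
-- one and deciding which buffered events are genuine.
releasable :: [ev] -> [ev]
releasable = id
```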
`MonitorP` must implement the original idea from #4:
`<--->` defines the time range in which the API is known to be disrupted.

```
|-------------------------|
|------------------------>| Case 1: Monitoring dis. only
|------------<----------->| Case 2: Disruption before, resolved after
|-----<------>------------| Case 3: Disruption before and after
|------------>------------| Case 4: Disrupted after
```
-> Case 1 (monitoring the disruption only): ignore
-> Case 2: we don't know when it really ended; have to include
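One possible encoding of the four cases, assuming each event carries an observed start and end time and the disruption range is known; only the two conclusions drawn above (ignore Case 1, include Case 2) are committed, the rest is deliberately left open:

```haskell
-- The range in which the API is known to be disrupted (the <---> span).
data Disruption = Disruption { disFrom :: Int, disTo :: Int }

data Decision = Ignore | Include | Undecided
  deriving (Show, Eq)

decide :: Disruption -> Int -> Int -> Decision
decide (Disruption from to) evStart evEnd
  | evStart >= from && evEnd <= to = Ignore    -- Case 1: seen only while disrupted
  | evStart <  from && evEnd <= to = Include   -- Case 2: real end unknown
  | otherwise                      = Undecided -- Cases 3 and 4: still open
```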