
Use The Type System to Hold Cache Consistency #122

Open
TTWNO opened this issue Jul 7, 2023 · 2 comments · May be fixed by #138
TTWNO (Member) commented Jul 7, 2023

Right now, under either the main or mother-of-all-refactors branch, we appear to have an issue where the cache isn't guaranteed to stay consistent with the order of events.

If we assume that events (per application) arrive in order (and this seems fairly reasonable), any event which needs to read from state must hold a Mutex lock across the entire length of the operation.

A notable exception is any event that only needs a reference to a specific cache item; there it would be fine to hold a read lock (RwLock) across the length of the function, plus a Mutex lock on the specific cache item.

That said, it's driving me a little crazy that we don't have a way to verify that these events are processed in order, especially given the command-pattern implementation in mother-of-all-refactors, which takes and drops locks at different stages of the event-processing pipeline. The pipeline is roughly:

  1. Event + State -> ReadOnlyStateView
  2. ReadOnlyStateView + Event -> Vec<Command>
  3. Command + State -> MutableStateView
  4. MutableStateView + Command -> executes the change of state

I thought this would be a good system. It separates mutable from immutable borrowing, is extremely easy to test (because the logic of creating commands is separate from the logic of applying them; this is the point of the command pattern), and seemed good on the surface.

The biggest issue is that this separates the read of state at the beginning of the pipeline from the mutable access at the end... and that supposed advantage kills us. Since we're locking and unlocking the same piece of data, another event can come down the pipe and read before the previous event's write is complete.

I don't know the answer here. It's not like main has it any better; we just don't notice, because the race requires conflicting events to come in basically one after another, which is rare.

TTWNO (Member, Author) commented Jul 7, 2023

Ok, what about this:

  1. Event -> MixedStateView (MixedStateView is all the state we will need to read or write, and it holds a lock on each piece, not a reference to it. Calling each piece of data P, the MixedStateView should contain a MutexGuard<P>.)
  2. Event + (&P1, &P2, ...) -> Vec<Command>: create a command list based on immutable references to the data behind each MutexGuard<_>.
  3. (for each) Command + MixedStateView -> execute the changes (not sure about this one)

What we're missing here is a way to declare up-front what pieces of state will become necessary. Of course, how can we possibly know this? Especially with addons.

And then, what happens if we only conditionally need some piece of state? We may or may not use it, but we would still have to lock it just in case. That seems extreme.

TTWNO (Member, Author) commented Jul 7, 2023

At this point, I'm considering running everything on one, synchronous thread.

TTWNO added this to the 0.2.0 milestone Feb 29, 2024
TTWNO linked a pull request Feb 29, 2024 that will close this issue
TTWNO linked a pull request Mar 16, 2024 that will close this issue