Cache static-esque game data on packet level #16

Open
emansom opened this issue Jan 15, 2022 · 2 comments

Comments

@emansom

emansom commented Jan 15, 2022

When triggering commands that generate the same data each time, cache the packet response so CPU and I/O resources aren't wasted. For some packets the database is hit on every request, multiplied by N connected users, which leads to excessive database load. Every emulator currently available shares this design flaw.

Meth0d and/or matty13 built a novel (for the retro community) way of caching packets by key-value. An example of its usage can be found here.

Data was cached at the protocol level, similar to how reverse HTTP proxies like Varnish (L1 tier) and Apache Traffic Server (L2/L3 tiers), used by Wikimedia and The New York Times, cache content to achieve high throughput and low-latency responses.

Caching at the protocol level bypasses most if not all game logic from the second packet received onwards (or even the first, depending on implementation; stale-while-revalidate is a well-suited pattern that could be adopted for e.g. the navigator: initialize the cache in the constructor or while loading data, and schedule refreshes decoupled from the response when the client asks for it, limited to once per X time). This lowers database load considerably during peak traffic and/or when dealing with script kiddies; just ask popular retros how their specced-up VPSes cope.
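A minimal sketch of that stale-while-revalidate idea in Go, assuming a hypothetical build function that does the expensive database work and a `[]byte` wire format for response packets (names are illustrative, not anything in habbgo today):

```go
package cache

import (
	"sync"
	"time"
)

// NavigatorCache serves a pre-built navigator response packet and refreshes it
// in the background, decoupled from the client request that triggered it.
type NavigatorCache struct {
	mu         sync.RWMutex
	packet     []byte    // last built response packet
	builtAt    time.Time // when the packet was last rebuilt
	refreshing bool      // guards against concurrent rebuilds
	maxAge     time.Duration
	build      func() ([]byte, error) // expensive build, e.g. hits the database
}

func NewNavigatorCache(maxAge time.Duration, build func() ([]byte, error)) *NavigatorCache {
	return &NavigatorCache{maxAge: maxAge, build: build}
}

// Get returns the cached packet immediately. If the packet is older than
// maxAge, a single background rebuild is scheduled; callers keep receiving the
// stale copy until the rebuild finishes (stale-while-revalidate).
func (c *NavigatorCache) Get() []byte {
	c.mu.RLock()
	pkt := c.packet
	stale := time.Since(c.builtAt) > c.maxAge
	refreshing := c.refreshing
	c.mu.RUnlock()

	if pkt == nil {
		// First request: build synchronously so there is something to serve.
		return c.refresh()
	}
	if stale && !refreshing {
		c.mu.Lock()
		if !c.refreshing {
			c.refreshing = true
			go c.refresh() // refresh decoupled from the response
		}
		c.mu.Unlock()
	}
	return pkt
}

func (c *NavigatorCache) refresh() []byte {
	pkt, err := c.build()
	c.mu.Lock()
	defer c.mu.Unlock()
	c.refreshing = false
	if err != nil {
		return c.packet // keep serving the stale copy; a real server would log here
	}
	c.packet, c.builtAt = pkt, time.Now()
	return c.packet
}
```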

In this age of 1 TB RAM servers and 2 TB PMEM devices with ridiculous throughput and nanosecond latencies, what is taught in computer science classes (memory being expensive, CPU re-calculation being cheap) no longer applies. It now makes more sense to calculate once, store the result in memory, and recalculate only when absolutely necessary. Such an application architecture lets the CPU spend most of its time on expensive game logic like pathfinding, instead of shuffling data around.

Implementing this would take a concurrent (i.e. safe for read/write access from N threads) key-value store such as ristretto, or a remote one like Redis.

Response packets could then be accessed by each game service, allowing each to implement custom behavior (e.g. the item inventory being cached per user by attaching the user ID to the key).
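A hedged sketch of that per-user keying, assuming the dgraph-io/ristretto v0 API (NewCache/Set/Get) and a hypothetical inventory build function supplied by the caller:

```go
package cache

import (
	"fmt"

	"github.com/dgraph-io/ristretto"
)

// PacketCache wraps a concurrent, thread-safe key-value store for encoded
// response packets, shared by all game services.
type PacketCache struct {
	c *ristretto.Cache
}

func NewPacketCache() (*PacketCache, error) {
	c, err := ristretto.NewCache(&ristretto.Config{
		NumCounters: 1e6,     // number of keys to track frequency for
		MaxCost:     1 << 28, // ~256 MB of cached packets
		BufferItems: 64,
	})
	if err != nil {
		return nil, err
	}
	return &PacketCache{c: c}, nil
}

func (p *PacketCache) Get(key string) ([]byte, bool) {
	v, ok := p.c.Get(key)
	if !ok {
		return nil, false
	}
	return v.([]byte), true
}

func (p *PacketCache) Set(key string, packet []byte) {
	p.c.Set(key, packet, int64(len(packet))) // cost = packet size in bytes
}

// InventoryPacket shows the per-user keying idea: the user ID is attached to
// the key, so each user's inventory response is cached independently.
func InventoryPacket(p *PacketCache, userID int, build func(int) ([]byte, error)) ([]byte, error) {
	key := fmt.Sprintf("inventory:%d", userID)
	if pkt, ok := p.Get(key); ok {
		return pkt, nil
	}
	pkt, err := build(userID) // expensive path that hits the database
	if err != nil {
		return nil, err
	}
	p.Set(key, pkt)
	return pkt, nil
}
```

Each service would own its key prefix (navigator, catalogue, inventory, …) and decide when to invalidate, which keeps the custom-behavior-per-service property described above.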

@jtieri
Owner

jtieri commented Jan 15, 2022

This x100

Now that I have started to get into the "meat" of the game logic (e.g. rooms, items, models, etc.), it makes sense to start considering efficient caching infrastructure in the server.

I'm in the midst of trying to wrap my mind around how rooms are initialized and loaded currently. It seems like I could get carried away over-engineering or making premature optimizations, so I think my plan is to implement the necessary messages/commands, or at least gather the necessary information (headers, payload format, etc.), and then design the models for items, room entities, and room models before going back over everything and making it better than "it just works".

AKA I don't know what the hell I'm doing right now, so I wanna get rooms loading before going back over everything and making it efficient/pretty, versus cheap hacks for the sake of progress. I appreciate the constant feedback and tips, you are giving me lots to work with here.

@yunginnanet

might wanna look into bitcask for embedded k/v imo. I use it in almost everything I do. I'm actually working on implementing it in my fork of habbgo, not even for caching, but as a replacement for SQL via JSON.

another great option is bbolt; a lot of people prefer it because there are more community packages built around it.

either way, emansom's idea is great
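For the embedded route, a rough sketch of what a bbolt-backed store for packets or JSON records could look like, assuming the go.etcd.io/bbolt API and a hypothetical "packets" bucket (bitcask's Put/Get interface would look very similar):

```go
package store

import (
	"go.etcd.io/bbolt"
)

var bucket = []byte("packets")

// Store persists encoded packets (or JSON records) in an embedded bbolt file,
// avoiding a round trip to SQL for data that rarely changes.
type Store struct {
	db *bbolt.DB
}

func Open(path string) (*Store, error) {
	db, err := bbolt.Open(path, 0o600, nil)
	if err != nil {
		return nil, err
	}
	// Make sure the bucket exists before first use.
	if err := db.Update(func(tx *bbolt.Tx) error {
		_, err := tx.CreateBucketIfNotExists(bucket)
		return err
	}); err != nil {
		return nil, err
	}
	return &Store{db: db}, nil
}

func (s *Store) Put(key string, value []byte) error {
	return s.db.Update(func(tx *bbolt.Tx) error {
		return tx.Bucket(bucket).Put([]byte(key), value)
	})
}

func (s *Store) Get(key string) ([]byte, error) {
	var out []byte
	err := s.db.View(func(tx *bbolt.Tx) error {
		if v := tx.Bucket(bucket).Get([]byte(key)); v != nil {
			out = append([]byte(nil), v...) // copy: bbolt values are only valid inside the transaction
		}
		return nil
	})
	return out, err
}
```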
