Smart chunk unloading #3151

Closed
wants to merge 1 commit into base: master
10 participants
@SafwatHalaby
Member

SafwatHalaby commented Apr 19, 2016

Work in progress. This is a smarter replacement for #3142

  • The server has a MaxChunkRAM setting, configurable in settings.ini (default: 30 MiB)
  • The server has a MinimizeRam setting, configurable in settings.ini (default: 1)

Terminology:
Dirtiness:

  • Clean chunk: No changes were made, no need to save to disk before unloading.
  • Dirty chunk: A chunk which underwent changes. Must save to disk before unloading.

Usage:

  • Unused chunk: A chunk with no players (and no chunkStay, etc.) nearby; the server may unload it.
  • Used chunk: A chunk which is actively used by the game. The server must not unload this.

Other:

  • Saving a chunk: Turns dirty chunks into clean chunks by saving them to disk. After a save, the chunk is no longer dirty and the server may unload it.
  • Unloading a chunk: Removing a chunk from RAM and freeing that RAM. We can only unload chunks that are both unused and clean.

The server has two saving/unloading strategies.

Strategy 1: MinimizeRam is 0:
The server is RAM-greedy, and uses all the RAM specified in MaxChunkRAM as a chunk cache in order to improve performance. Chunks are saved/unloaded as late as possible, significantly reducing disk I/O and improving performance. RAM usage typically stays static at MaxChunkRAM.

  • Dirty chunks are saved in 5 minute intervals to avoid data loss in case of a crash.
  • If current RAM usage exceeds MaxChunkRAM, a minimal number of clean unused chunks is unloaded in order to preserve the MaxChunkRAM limit. The chunks are chosen randomly.
  • If there are no clean unused chunks left to unload and MaxChunkRAM still cannot be maintained, a save cycle is forced, allowing more chunks to be unloaded.
  • Other than the above three scenarios, chunks are never unloaded or saved.
  • The amount of RAM used by chunks is normally static and equals MaxChunkRAM.
  • If the activity is too high and the MaxChunkRAM limit cannot be maintained even after all measures are taken, the "aggressive" strategy is used. This happens when the RAM required by the used chunks exceeds MaxChunkRAM.

Strategy 2: MinimizeRam is 1: (Might remove this strategy)
The server is RAM-economic and attempts never to reach MaxChunkRAM. Clean unused chunks are unloaded as soon as possible.

  • Dirty chunks are saved in 5 minute intervals to avoid data loss in case of a crash.
  • Clean unused chunks are unloaded as soon as possible, making MaxChunkRAM much harder to reach in comparison with the previous scheme.
  • If current RAM usage exceeds MaxChunkRAM, a save cycle is forced, allowing more chunks to be unloaded.
  • Other than the above three scenarios, chunks are never unloaded or saved.
  • If the activity is too high and the MaxChunkRAM limit cannot be maintained, the "aggressive" strategy is used. This happens when the RAM required by the used chunks exceeds MaxChunkRAM.

"Aggressive" strategy:

  • Dirty chunks are saved in 5 minute intervals to avoid data loss in case of a crash.
  • Whenever a chunk becomes unused, it is unloaded as soon as possible (and saved first if dirty).
  • The aggressive strategy is technically nonexistent and has no code of its own; it is simply what happens when the server is constantly above MaxChunkRAM in either strategy.
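The two strategies plus the aggressive fallback can be condensed into a single decision function. This is a hedged sketch, not the PR's actual code: the parameters mirror the settings.ini keys described above, while `Action` and `DecideAction` are hypothetical names.

```cpp
#include <cassert>
#include <cstddef>

// What the unloading logic can do on a given pass.
enum class Action { None, UnloadClean, ForceSaveCycle };

// Hypothetical sketch of the per-pass decision described above.
Action DecideAction(size_t a_CurrentRAM, size_t a_MaxChunkRAM,
	bool a_MinimizeRam, bool a_HasCleanUnusedChunks)
{
	if (a_MinimizeRam && a_HasCleanUnusedChunks)
	{
		// Strategy 2: unload clean unused chunks as soon as possible.
		return Action::UnloadClean;
	}
	if (a_CurrentRAM <= a_MaxChunkRAM)
	{
		// Strategy 1: stay RAM-greedy, keep chunks cached.
		return Action::None;
	}
	// Over the limit: unload clean unused chunks first; if none are left,
	// force a save cycle so more chunks become clean and unloadable.
	return a_HasCleanUnusedChunks ? Action::UnloadClean : Action::ForceSaveCycle;
}
```

When used chunks alone exceed MaxChunkRAM, every newly unused chunk immediately triggers unloading, which is exactly the "aggressive" behaviour falling out of the same code path.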

Additionally, this closes #3166

Outdated review comments on: src/Chunk.cpp, src/Chunk.h, src/ChunkMap.cpp, src/World.cpp
@worktycho

Member

worktycho commented Apr 20, 2016

This looks good. Other than my two comments regarding the trade-offs taken, this looks like a good feature.

Outdated review comment on: src/Root.cpp
@Schwertspize

Contributor

Schwertspize commented Apr 20, 2016

But is it possible? Do chunks in RAM have a constant size, or a size without too wild a deviation?

I think it is always possible. If the chunk size in memory varies from chunk to chunk, we could just configure a threshold in the config; when it is exceeded, the best chunk to unload gets selected and unloaded, and if the memory limit is still exceeded, recurse. Doesn't that work well enough?

@worktycho

Member

worktycho commented Apr 20, 2016

Chunks can vary in size from a couple of hundred bytes to ~160KiB. Average size and variance depend on the world generation. The problem with a loop is that you'd have to redo the total memory calculation for every iteration of the loop. You could get an estimate of per-chunk usage if we made cChunkData expose the amount of memory used for storing segments, though.

@SafwatHalaby

Member

SafwatHalaby commented Apr 20, 2016

@Schwertspize The problem is that calculating the allocated RAM accurately seems hard. @worktycho wrote this rough formula:

LoadedChunks * (sizeof(cChunk) + InternalAllocationsFactor) + AllocatedSegments * sizeof(cChunkData)

@worktycho , the "Allocated segments" part confuses me. What is it exactly? Did you mean this?

LoadedChunks * (sizeof(cChunk) + InternalAllocationsFactor) + AllocatedSections * sizeof(cChunkData::sChunkSection)
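As a rough illustration of the formula under discussion, here is a hedged sketch. The constants are stand-ins: the ~10 KiB section size is quoted later in the thread, while the chunk overhead and internal-allocations factor are invented placeholders, and `EstimateChunkRAM` is a hypothetical name.

```cpp
#include <cassert>
#include <cstddef>

// Stand-in sizes, not measured values.
const size_t kSectionSize = 10 * 1024;    // ~sizeof(cChunkData::sChunkSection), per the thread
const size_t kChunkOverhead = 1 * 1024;   // placeholder for sizeof(cChunk)
const size_t kInternalAllocFactor = 512;  // placeholder constant; expect ~10% error

// Rough total chunk RAM, per worktycho's formula.
size_t EstimateChunkRAM(size_t a_LoadedChunks, size_t a_AllocatedSections)
{
	return a_LoadedChunks * (kChunkOverhead + kInternalAllocFactor)
		+ a_AllocatedSections * kSectionSize;
}
```

For one fully populated chunk with 16 sections this gives 1536 + 16 × 10240 = 165376 bytes, i.e. roughly the ~160 KiB upper bound mentioned above.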
@SafwatHalaby

Member

SafwatHalaby commented Apr 20, 2016

@worktycho we don't have to calculate per loop. We can do this incrementally, like I did with the chunk counting.
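The incremental approach might look like the following minimal sketch. `cMemoryCounter` appears later in the PR's diff hunks; its interface here (`Increment`/`Decrement`/`Get`) is guessed for illustration, not taken from the actual patch.

```cpp
#include <cassert>
#include <cstddef>

// Running approximation of chunk RAM, updated at allocation/free sites
// instead of being recomputed by scanning all chunks each loop iteration.
class cMemoryCounter
{
public:
	void Increment(size_t a_Bytes) { m_ApproxBytes += a_Bytes; }
	void Decrement(size_t a_Bytes) { m_ApproxBytes -= a_Bytes; }
	size_t Get() const { return m_ApproxBytes; }

private:
	size_t m_ApproxBytes = 0;
};
```

Each chunk load/unload (and each section allocation/free) would call `Increment`/`Decrement` once, so reading the total is O(1).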

@worktycho

Member

worktycho commented Apr 20, 2016

The thing about allocated segments is exactly what I meant.
The problem with doing it incrementally is the internal allocations factor. We can get an approximate memory usage by just having a constant value which estimates how much is allocated internally. This would probably give something like a 10% error. If we are happy with that sort of error then doing it incrementally is easy, but you need to expose some way of getting the chunkdata segment counts for each chunk.
If we want anything more precise, we'd need to start tracking STL memory allocations, which would mean a per-chunk allocator wrapper if we want per-chunk usage statistics.

@SafwatHalaby

Member

SafwatHalaby commented Apr 20, 2016

What are "Allocated segments"?

@worktycho

Member

worktycho commented Apr 20, 2016

Instances of cChunkData::cSection. They hold a 16x16x16 block section of a chunk's block, meta and light data. At 10KiB each, they are the biggest single objects in a chunk.
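The ~10 KiB figure checks out as back-of-the-envelope arithmetic, assuming the classic layout of one byte of block type per block plus three nibble (half-byte) arrays for meta, block light and sky light:

```cpp
#include <cassert>
#include <cstddef>

const size_t kBlocksPerSection = 16 * 16 * 16;  // 4096 blocks per section

// 1 byte of block type per block, plus three half-byte arrays.
const size_t kSectionBytes =
	kBlocksPerSection                  // block types: 4096 bytes
	+ 3 * (kBlocksPerSection / 2);     // meta + block light + sky light: 3 * 2048 bytes
```

That totals 4096 + 6144 = 10240 bytes, i.e. exactly 10 KiB per section.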

@SafwatHalaby

Member

SafwatHalaby commented Apr 20, 2016

I think we don't have to get more accurate than that. Totally ignoring Internal Allocations Factor should be fine too. We just need to warn the user that the limit is just an estimation and that chunks will consume more.

@Schwertspize

Contributor

Schwertspize commented Apr 20, 2016

I'd say a 10% error is good enough for checking whether the next chunk should be freed too. For the threshold itself we should be accurate, and check only every 5 minutes or so, but within that "iteration loop" to get below the threshold we can keep the 10% estimate. It's not a maximum RAM usage anyway; it's an average and could peak below and above.

@worktycho

Member

worktycho commented Apr 20, 2016

If we want accurate measurement for the threshold, then we still need STL allocation tracking, but we only need to keep one per world allocator, rather than a per chunk allocator.

@SafwatHalaby

Member

SafwatHalaby commented Apr 20, 2016

Got the initial RAM implementation in place and rebased. The chunk unload code still does not rely on RAM as of now. Stay tuned.

@SafwatHalaby

Member

SafwatHalaby commented Apr 20, 2016

I have a working implementation. We should see how bad the estimation is, and potentially improve it by:

  • Calling Increment/DecrementApproximateChunkRAM wherever appropriate
  • Taking Internal Allocations Factor into account.
  • @worktycho 's Internal allocator tracking thing?
@SafwatHalaby

Member

SafwatHalaby commented Apr 20, 2016

The CI error is strange.

In file included from /home/travis/build/cuberite/cuberite/src/ChunkData.cpp:8:
In file included from /home/travis/build/cuberite/cuberite/src/World.h:29:
/home/travis/build/cuberite/cuberite/src/ClientHandle.h:21:10: fatal error: 'json/json.h' file not found
#include "json/json.h"
Outdated review comments on: src/ChunkData.cpp, src/ChunkMap.cpp
@SafwatHalaby

Member

SafwatHalaby commented Apr 20, 2016

Can someone help with the CI? I don't think it's my PR that's the problem.

@Schwertspize

Contributor

Schwertspize commented Apr 21, 2016

Builds fine on my server (CentOS 6)

@Seadragon91

Contributor

Seadragon91 commented Apr 21, 2016

It compiled fine with clang 3.5. Compiling with clang 3.4 fails on my side.

@SafwatHalaby

Member

SafwatHalaby commented Apr 21, 2016

Was the clang 3.4 error identical to the CI error described above?

@tigerw

Member

tigerw commented May 16, 2016

yay, xoft isn't dead!

@@ -37,7 +35,7 @@ class cChunkData
struct sChunkSection;
cChunkData(cAllocationPool<cChunkData::sChunkSection> & a_Pool);
cChunkData(cAllocationPool<cChunkData::sChunkSection> & a_Pool, cMemoryCounter & a_WorldMemoryCounter);

@madmaxoft

madmaxoft May 16, 2016

Member

This could use a bit more commenting - what are the parameters, what they're used for etc.

@@ -513,6 +531,8 @@ class cChunkMap
cWorld * m_World;
size_t m_ChunkCounter;

@madmaxoft

madmaxoft May 16, 2016

Member

Again, NumChunks would be a better name than ChunkCounter

@madmaxoft

Member

madmaxoft commented May 16, 2016

xoft kinda wishes he was dead, rather than reading the entirety of the >220 comments here :P

@tigerw

Member

tigerw commented May 16, 2016

Well, just you wait for #3115.

@SafwatHalaby

Member

SafwatHalaby commented May 17, 2016

Can anything terrible happen when servers crash mid-save?

@SafwatHalaby

Member

SafwatHalaby commented May 17, 2016

I've removed my posts discussing the other possible strategies. I think I'll remove strategy 2 and get the "minimum viable product" merged for now.

@worktycho

Member

worktycho commented May 18, 2016

If a server crashes mid-save it can corrupt the save files. Unfortunately there is nothing we can do about that without completely redoing the chunk format.

@madmaxoft

Member

madmaxoft commented May 18, 2016

@worktycho There is a way to make server crashes mid-save less catastrophic. Currently the chunk data is overwritten in the file in-place. If we changed that so that the chunk data always uses free space in the file, we'd lower the risk: the server first writes the new data, then updates the "pointer" to that data in the file's header, leaving a much smaller surface for problems. This is, however, off-topic; let's not discuss it here further.
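The write-then-repoint ordering described here can be modeled in a few lines. This is a toy in-memory sketch with hypothetical names (`cSafeSave`, `SaveChunk`), not the actual region file format: the blob list stands in for free space in the file, and the header map for the per-chunk data pointers.

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Toy model: new data is appended first, and only then does the header
// start pointing at it. A crash between the two steps leaves the header
// still referencing the old, intact copy.
class cSafeSave
{
public:
	void SaveChunk(const std::string & a_Coord, const std::string & a_Data)
	{
		// Step 1: write the new data into fresh space (old copy untouched).
		m_Blobs.push_back(a_Data);
		// Step 2: repoint the header entry. Crashing before this line is
		// harmless; the previous blob remains the valid one.
		m_Header[a_Coord] = m_Blobs.size() - 1;
	}

	const std::string & LoadChunk(const std::string & a_Coord) const
	{
		return m_Blobs[m_Header.at(a_Coord)];
	}

private:
	std::vector<std::string> m_Blobs;                // "free space" in the file
	std::map<std::string, size_t> m_Header;          // coord -> current blob index
};
```

A real implementation would also need to reclaim the orphaned old blobs eventually, which is part of why this would mean redoing the chunk format.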

@SafwatHalaby

Member

SafwatHalaby commented May 19, 2016

That's my main concern with this PR. It can save more often, which may increase the chance of chunk corruption, especially when MaxChunkRAM is set low.

Theoretical question:

The storage thread is seldom the one that crashes. Can we somehow make the storage thread "defer" termination in the event of a crash, finish saving the current chunk, and only stop afterwards? I am not very knowledgeable about OS crash handling or multithreading, so this could be impossible. If the storage thread ran in an entirely separate process, I'd imagine it would be possible? Enlighten me.

@worktycho

Member

worktycho commented May 19, 2016

I'm not sure about Windows, but on Unix at least you can intercept segfaults. We would have to make sure that a segfault on a storage thread did not try to continue saving on that thread, and it's dangerous to do unless you are triggering segfaults on purpose, because lots of locks will be held by the dead thread.
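On Unix, the interception mentioned here is done with `sigaction`. A hedged sketch of just the installation step; the handler body is deliberately empty, since resuming work inside a SIGSEGV handler is unsafe for exactly the lock-related reasons above (and `InstallSegfaultHandler` is a hypothetical name):

```cpp
#include <cassert>
#include <signal.h>

// Minimal SIGSEGV handler: in a real server this would only set a flag;
// draining the storage queue would happen on a different, healthy thread.
void SegfaultHandler(int)
{
	// Intentionally empty: do not touch locks or continue saving here.
}

// Install the handler; returns true on success. Unix-only (POSIX sigaction).
bool InstallSegfaultHandler()
{
	struct sigaction sa = {};
	sa.sa_handler = SegfaultHandler;
	sigemptyset(&sa.sa_mask);
	return sigaction(SIGSEGV, &sa, nullptr) == 0;
}
```

Running the storage in a separate process, as suggested above, sidesteps this entirely: a crash in the main process cannot corrupt the saver's state.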

@tigerw

Member

tigerw commented May 24, 2016

Ideally the server doesn't crash, but I guess that's an ideal.

@sphinxc0re

Contributor

sphinxc0re commented Jun 4, 2016

Any updates on this one? @LogicParrot

@SafwatHalaby

Member

SafwatHalaby commented Jul 27, 2016

This isn't abandoned, just delayed. I'd like to study how serious is the likelihood of chunk corruption upon crashes when saves are done more often.

@SafwatHalaby

Member

SafwatHalaby commented Aug 4, 2016

Fixing #2565 conflicts.

@tigerw

Member

tigerw commented Aug 4, 2016

Sorry.

@SafwatHalaby

Member

SafwatHalaby commented Aug 5, 2016

I decided to divert my attention to implementing the rest of the mobs and hunting cClientHandle crash bugs for now, because those seem far higher priority than this PR.

In the meantime, I propose we alleviate the concern outlined in #3142 by following @tigerw 's advice and simply making the save interval shorter (e.g. 3 minutes) or configurable.

@SafwatHalaby

Member

SafwatHalaby commented Aug 11, 2016

And for Cuberite, already limited resources will be diverted to maintaining another extra component which is very unlikely to affect what a user sees, and very likely to cause more bugs.

The more I think about this, the more I lean toward @tigerw 's opinion. This is perfect in theory and it utilizes RAM optimally, but it adds code complexity and we're limited in developer time, and a much simpler solution can do the job well enough and solve e.g. the Raspberry Pi fast flying issues mentioned in #3142.

I don't think a user will notice any major difference between the solution outlined below and this PR, even though the solution below is extremely simple.

A difference will be noticeable on high-load, huge servers, but we're not there yet. And I think we should opt for the simpler solution for now. I'm closing this, but I'll keep the code; perhaps it will be useful sometime in the future.

The alternative:

  • MaxUnusedChunks in settings.ini
  • If current unused chunks > MaxUnusedChunks, call a save cycle.
  • MaxUnusedChunks is set low for Raspberry Pis.
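The proposed alternative boils down to a one-line check. A sketch using the setting name from the comment (the function name is hypothetical):

```cpp
#include <cassert>
#include <cstddef>

// Trigger a save cycle (which makes chunks clean and unloadable) once the
// number of unused chunks exceeds the configured MaxUnusedChunks threshold.
bool ShouldForceSaveCycle(size_t a_UnusedChunks, size_t a_MaxUnusedChunks)
{
	return a_UnusedChunks > a_MaxUnusedChunks;
}
```

Setting MaxUnusedChunks to 0 makes any unused chunk trip the check, which is the aggressive, minimum-RAM behaviour discussed further down.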

@SafwatHalaby

Member

SafwatHalaby commented Aug 11, 2016

We should salvage the Chunk.cpp state changes though, because I think they make the chunk code clearer.

@bearbin

Member

bearbin commented Aug 11, 2016

Your proposed system does have the downside of not allowing a chunk/RAM limit which would be attractive for GSPs.

@SafwatHalaby


Member

SafwatHalaby commented Aug 11, 2016

What's GSP?
I am perfectly aware the proposed system is not as good performance-wise, but it's an order of magnitude simpler and that is attractive.

@bearbin

Member

bearbin commented Aug 11, 2016

https://www.spigotmc.org/wiki/glossary/#gsp

Most GSPs sell packages by RAM, and chunk numbers are effectively proportional to RAM consumption.

@SafwatHalaby

Member

SafwatHalaby commented Aug 11, 2016

Well, if MaxUnusedChunks is set to 0, freeing automatically becomes aggressive, so the absolute minimum RAM is used. Couple that with a low player limit and you get a reasonable, though not numerically specified, RAM limit.

@SafwatHalaby

Member

SafwatHalaby commented Oct 15, 2017

Ping. The "useful bits" are: Chunk has an enum (saving, loading, etc) rather than several flags. Makes chunk states clearer. If someone has the time, this is useful to extract and merge separately.
