Fix #13027 (daca@home does not handle well when results are too large) #6711
danmar merged 2 commits into cppcheck-opensource:main from
Conversation
This change makes no sense to me. You lowered the limit on the actual data of interest and left the other parts, which are orders of magnitude bigger, intact. This also doesn't change much, except that we again lose data generated by the clients. I spent 1+ hour trying to figure out why I was not getting the result for a package from the server (it doesn't help that the daca server is constantly swapping because it is out of memory - one gig just doesn't cut it) and then came across this change... What does "does not handle well when results are too large" actually mean? It does not explain anything.
Again - this is a really bad change, as we lose a lot of daca data. Currently it looks like we have only very few long-running packages, but since those also produce a lot of output, their data does not exist because we delete it on upload. So that is very misleading. It wastes a lot of resources on the client side, and we lose timing information and diffs. We should properly upload compressed data (no idea why I didn't think of this before) to make the transport more reliable. This would slightly increase the memory pressure on the server though (which is already running on empty as it only has a single gigabyte - you really need to add at least one more).
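For reference, a minimal sketch of what a compressed upload could look like on the client side. The function name, the `write-gz` command word and the server address are made up for illustration; the real donate-cpu client has its own upload protocol:

```python
import gzip
import socket

# Placeholder host/port, not the real daca server.
SERVER = ("daca.example.org", 8000)

def upload_result_compressed(package_url: str, result_text: str) -> None:
    # gzip the result so even very large outputs transfer reliably
    payload = gzip.compress(result_text.encode("utf-8"))
    with socket.create_connection(SERVER, timeout=30) as sock:
        # A real protocol would have to announce the compression to the
        # server, e.g. with a dedicated command word; "write-gz" is invented.
        sock.sendall(b"write-gz\n" + package_url.encode("utf-8") + b"\n" + payload)
```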
If the uploaded result is too large, it's better that the old result on disk is removed. The old result on disk will be invalid/deprecated one way or another.
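A minimal sketch of the behaviour described here, assuming a hypothetical size limit and storage layout (the limit, path handling and function name are illustrative, not the actual donate-cpu-server code):

```python
import os

MAX_RESULT_SIZE = 2 * 1024 * 1024  # hypothetical limit, in bytes

def store_result(result_path: str, data: bytes) -> bool:
    # If the uploaded result is too large, reject it - and since the result
    # already on disk is now outdated anyway, remove it rather than keep it.
    if len(data) > MAX_RESULT_SIZE:
        if os.path.exists(result_path):
            os.remove(result_path)
        return False
    with open(result_path, "wb") as f:
        f.write(data)
    return True
```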