Fix #13027 (daca@home does not handle well when results are too large) #6711

Merged

danmar merged 2 commits into cppcheck-opensource:main from cppchecksolutions:fix-13027 on Aug 18, 2024

Conversation

@danmar danmar (Collaborator) commented Aug 18, 2024

If the uploaded result is too large, it is better to remove the old result from disk; the old result would be invalid/deprecated one way or another.

@danmar danmar changed the title Fix #13027 (daca@home larger result data must be uploaded in chunks) Fix #13027 (daca@home does not handle well when results are too large) Aug 18, 2024
@danmar danmar merged commit 8349fe2 into cppcheck-opensource:main Aug 18, 2024
@danmar danmar deleted the fix-13027 branch August 22, 2024 13:34
@firewave firewave (Collaborator) commented

This change makes no sense to me.

You lowered the limit for the actual data of interest and left the other uploads, which are many times bigger, intact. This changes very little, except that we once again lose data generated by the clients.

I spent more than an hour trying to figure out why I was not getting the result for a package from the server (not helped by the daca server constantly swapping because it is out of memory; one gigabyte just doesn't cut it) and then came across this change...

What does "does not handle well when results are too large" actually mean? It does not explain anything.

@firewave firewave (Collaborator) commented

Again - this is a really bad change as we lose a lot of daca data.

At the moment it looks as if we have only very few long-running packages, but since those also produce a lot of output, their data does not exist because we delete it on upload. That is very misleading. It also wastes a lot of resources on the client side, and we lose timing information and diffs.

We should properly upload compressed data (no idea why I didn't think of this before) to make the transport more reliable. It would slightly increase the memory pressure on the server, though (and it is already running on empty since it has only a single gigabyte; you really need to add at least one more).
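The compressed-upload idea could look roughly like this. A minimal sketch only: the helper names are hypothetical and the real daca@home client/server transport protocol is not shown here. Cppcheck result text is highly repetitive, so gzip typically shrinks it considerably, which is what would make oversized uploads less likely.

```python
import gzip


def compress_results(text: str) -> bytes:
    """Client side (hypothetical): gzip a result blob before upload."""
    return gzip.compress(text.encode("utf-8"))


def decompress_results(blob: bytes) -> str:
    """Server side (hypothetical): restore an uploaded result blob."""
    return gzip.decompress(blob).decode("utf-8")


# Repetitive diagnostic output compresses very well.
sample = "lib/checkio.cpp:123: (error) sample diagnostic line\n" * 5000
blob = compress_results(sample)
assert decompress_results(blob) == sample
assert len(blob) < len(sample.encode("utf-8"))
```

The trade-off mentioned above is visible here: the server must hold both the compressed blob and the decompressed text in memory while processing an upload, so compression shifts some pressure from the network to server RAM.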

