
receiveFile memory optimization: do not use bytes.buffer but write directly to file #9415

Merged: 1 commit merged into argoproj:master on May 16, 2022

Conversation

@ls0f (Contributor) commented May 16, 2022

receiveFile memory optimization: do not use bytes.buffer but write directly to file

Signed-off-by: ls0f lovedboy.tk@qq.com
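
For context, a minimal sketch of the idea follows. It is not the actual util/cmp/stream.go code: chunkReceiver and receiveToFile are hypothetical stand-ins for the real gRPC stream types, illustrating how each received chunk can go straight to disk instead of through a bytes.Buffer.

```go
// Minimal sketch (not the argo-cd implementation): stream each received
// chunk directly into the destination file instead of accumulating the
// whole payload in memory.
package stream

import (
	"fmt"
	"io"
	"os"
)

// chunkReceiver is a hypothetical stand-in for the receive side of the
// gRPC stream; Recv returns io.EOF once the sender is done.
type chunkReceiver interface {
	Recv() ([]byte, error)
}

// receiveToFile appends each chunk to dst as it arrives, so peak memory
// use is bounded by the chunk size rather than by the archive size.
func receiveToFile(r chunkReceiver, dst string) error {
	f, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		return fmt.Errorf("open %s: %w", dst, err)
	}
	defer f.Close()

	for {
		chunk, err := r.Recv()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return fmt.Errorf("receive chunk: %w", err)
		}
		if _, err := f.Write(chunk); err != nil {
			return fmt.Errorf("write chunk: %w", err)
		}
	}
}
```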

Note on DCO:

If the DCO action in the integration test fails, one or more of your commits are not signed off. Please click on the Details link next to the DCO action for instructions on how to resolve this.

Checklist:

  • Either (a) I've created an enhancement proposal and discussed it with the community, (b) this is a bug fix, or (c) this does not need to be in the release notes.
  • The title of the PR states what changed and the related issues number (used for the release note).
  • I've included "Closes [ISSUE #]" or "Fixes [ISSUE #]" in the description to automatically close the associated issue.
  • I've updated both the CLI and UI to expose my feature, or I plan to submit a second PR with them.
  • Does this PR require documentation updates?
  • I've updated documentation as required by this PR.
  • Optional. My organization is added to USERS.md.
  • I have signed off all my commits as required by DCO
  • I have written unit and/or e2e tests for my change. PRs without these are unlikely to be merged.
  • My build is green (troubleshooting builds).

Commit: receiveFile memory optimization: do not use bytes.buffer but write directly to file

Signed-off-by: ls0f <lovedboy.tk@qq.com>
@codecov bot commented May 16, 2022

Codecov Report

Merging #9415 (50d9873) into master (c026189) will increase coverage by 0.00%.
The diff coverage is 40.00%.

@@           Coverage Diff           @@
##           master    #9415   +/-   ##
=======================================
  Coverage   45.77%   45.78%           
=======================================
  Files         220      220           
  Lines       26168    26165    -3     
=======================================
  Hits        11979    11979           
+ Misses      12531    12529    -2     
+ Partials     1658     1657    -1     
Impacted Files Coverage Δ
util/cmp/stream.go 52.59% <40.00%> (+0.44%) ⬆️
util/settings/settings.go 48.16% <0.00%> (ø)
server/server.go 54.28% <0.00%> (+0.15%) ⬆️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c026189...50d9873.

@crenshaw-dev (Collaborator) commented:

Thanks for the optimization!

I suggested this when @leoluz wrote the stream code. This was his response:

I guess this is a choice we have to make where there isn't a clear best path to take. The choice is between performance and memory consumption. The file being received is a gzip'ed archive containing text files, which in the great majority of cases will be really small. Changing this to always write to disk will slow down stream reception to address a problem that would happen in 1-5% of cases. If this turns out to be a real problem, we could change the gRPC API to send the file size in the header and allow the receiver to decide if it should use a memory buffer or a file buffer. Sounds like premature optimization at this point. Wdyt?

@leoluz what are your thoughts now? Should we try to choose memory vs. disk based on file size?

@leoluz (Collaborator) commented May 16, 2022

what are your thoughts now? Should we try to choose memory vs. disk based on file size?

@crenshaw-dev if that is becoming a real issue, I think we should go in the direction suggested a while back:

... we could change the gRPC API to send the file size in the header and allow the receiver to decide if it should use a memory buffer or a file buffer

@ls0f are you facing memory issues with large files? Can you pls provide more context in the PR description?
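
A rough sketch of that size-based alternative, assuming a hypothetical size field advertised in the stream header and an arbitrary threshold (neither is part of this PR or the current gRPC API):

```go
// Hypothetical sketch: if the sender advertised the file size up front,
// the receiver could keep small payloads in memory and spill large ones
// to a temp file. The threshold below is illustrative only.
package stream

import (
	"bytes"
	"io"
	"os"
)

const inMemoryLimit = 1 << 20 // 1 MiB cutoff, an assumption for illustration

// sink is either an in-memory buffer or a temp file, plus a cleanup func.
type sink struct {
	io.Writer
	cleanup func() error
}

// newSink picks the buffering strategy from the advertised size.
func newSink(expectedSize int64) (*sink, error) {
	if expectedSize >= 0 && expectedSize <= inMemoryLimit {
		return &sink{Writer: &bytes.Buffer{}, cleanup: func() error { return nil }}, nil
	}
	f, err := os.CreateTemp("", "cmp-received-*")
	if err != nil {
		return nil, err
	}
	return &sink{Writer: f, cleanup: f.Close}, nil
}
```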

@ls0f (Contributor, Author) commented May 16, 2022

are you facing memory issues with large files? Can you pls provide more context in the PR description?

No. Actually, I was reading the argo-cd source code recently because we have a project that will use it.
When I read this code, I thought there might be an optimization opportunity here. With a monorepo, it could cause memory problems. (Maybe I'm being a little picky :) )

So it's not really a problem for me right now. In addition, I think writing the file in append mode won't really reduce efficiency much.

Anyway, if it's unnecessary, the PR can be closed. If someone does face this issue, consider fixing it then.

@leoluz @crenshaw-dev

@leoluz (Collaborator) commented May 16, 2022

In addition, I think writing the file in append mode won't really reduce efficiency much

@ls0f Looking at your implementation one more time, I agree with that. I spoke with @crenshaw-dev and we both agree that defaulting to the file system in append mode is preferable in this case.
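
As an aside (not something this PR adds), if per-chunk writes to the append-mode file ever did become a bottleneck, a standard bufio.Writer could batch them; the helper below is purely illustrative:

```go
// Hypothetical illustration: wrap the append-mode file in a buffered
// writer so many small chunk writes coalesce into fewer syscalls.
// The caller must Flush the writer and Close the file when done.
package stream

import (
	"bufio"
	"os"
)

func openAppendBuffered(path string) (*bufio.Writer, *os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		return nil, nil, err
	}
	// 256 KiB is an arbitrary, illustrative buffer size.
	return bufio.NewWriterSize(f, 256*1024), f, nil
}
```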

@leoluz (Collaborator) left a review comment:

LGTM

leoluz merged commit c29651d into argoproj:master on May 16, 2022.
ls0f deleted the fix/receivefile branch on May 16, 2022 at 23:12.
crenshaw-dev added the cherry-pick/2.4 label (candidate for cherry-picking into the 2.4 release branch) on May 31, 2022.
crenshaw-dev pushed a commit referencing this pull request on May 31, 2022: "receiveFile memory optimization: do not use bytes.buffer but write directly to file (#9415)" (Signed-off-by: ls0f <lovedboy.tk@qq.com>).
@crenshaw-dev (Collaborator) commented:

Cherry-picked to 2.4.
