
Why does OpenSSL 3.0-alpha5 use more memory for TLS handshake than OpenSSL 1.1.1f #13270

Closed
dennisokko opened this issue Oct 29, 2020 · 12 comments
Labels: resolved: answered (The issue contained a question which has been answered), triaged: question (The issue contains a question)

Comments

dennisokko commented Oct 29, 2020

Hi, I have a few questions.
Why does OpenSSL 3.0-alpha5 use more memory than OpenSSL 1.1.1f when using SSL_connect for the TLS handshake?
Which code/feature changes caused this?
How can I reduce the memory usage to the same level as OpenSSL 1.1.1f?
Thanks in advance.
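For context, a minimal sketch of the kind of client handshake path being asked about, assuming a plain TCP socket and a hypothetical peer address (error handling mostly omitted):

```c
#include <openssl/ssl.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netdb.h>
#include <unistd.h>

/* Hypothetical helper: open a TCP connection to host:port. */
static int tcp_connect(const char *host, const char *port)
{
    struct addrinfo hints, *res;
    int fd = -1;

    memset(&hints, 0, sizeof(hints));
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;
    fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}

int main(void)
{
    /* The SSL_CTX is normally created once per process and reused. */
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
    int fd = tcp_connect("example.com", "443");   /* hypothetical peer */
    SSL *ssl = SSL_new(ctx);

    SSL_set_fd(ssl, fd);
    if (SSL_connect(ssl) == 1)            /* the handshake being measured */
        printf("negotiated %s\n", SSL_get_cipher(ssl));

    SSL_shutdown(ssl);
    SSL_free(ssl);
    close(fd);
    SSL_CTX_free(ctx);
    return 0;
}
```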

dennisokko added the issue: question label on Oct 29, 2020
mattcaswell added the triaged: question label and removed the issue: question label on Oct 29, 2020
kaduk (Contributor) commented Oct 29, 2020

It is generally expected that OpenSSL 3.0 will have a larger memory footprint than 1.1.1, due to the re-architecture and the need to keep some level of legacy support around for a transition period until the old APIs can be removed.
This accounts for most of the baseline difference in library footprint; if you are talking about per-connection overhead, that probably has a different explanation (though I would not be surprised if per-connection overhead has also increased).
If your application is compatible with doing so, configuring with no-deprecated will remove a large portion of the legacy support and thus reduce the library footprint.

kaduk added the resolved: answered label on Oct 29, 2020
dennisokko (Author) commented Oct 30, 2020

@kaduk Thank you for your reply.
Yes, I am referring to an increase in per-connection overhead.
I used the AES256-GCM-SHA384 cipher suite, and the test data showed that the per-connection overhead increased by 78%.
I would like to know what causes the increase and how to reduce it to the same level as OpenSSL 1.1.1f.
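A sketch of how such a cipher-suite restriction is typically configured, assuming a TLS 1.2 connection (the helper name is hypothetical; TLS 1.3 suites would instead use SSL_CTX_set_ciphersuites):

```c
#include <openssl/ssl.h>

/* Sketch: restrict a client context to the TLS 1.2 cipher suite
 * AES256-GCM-SHA384 mentioned above. */
SSL_CTX *make_ctx_aes256_gcm(void)
{
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());

    if (ctx == NULL)
        return NULL;
    /* AES256-GCM-SHA384 is a TLS 1.2 name, so cap the protocol version. */
    SSL_CTX_set_max_proto_version(ctx, TLS1_2_VERSION);
    if (SSL_CTX_set_cipher_list(ctx, "AES256-GCM-SHA384") != 1) {
        SSL_CTX_free(ctx);
        return NULL;
    }
    return ctx;
}
```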

levitte (Member) commented Oct 30, 2020

When the diverse crypto implementations are fetched from the providers, their methods are created on the fly and cached internally. That will probably account for a noticeable chunk of memory. However, it is expected to be a one-time overhead, not a per-connection one. I'd like to know: if you make, say, 1000 connections to the same address, do you see a 78% memory increase 1000 times, or just the first time (or possibly a very limited number of times)?
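A minimal sketch of the fetch-and-cache behaviour described here, using the OpenSSL 3.0 explicit-fetch API (algorithm names chosen to match the cipher suite mentioned above):

```c
#include <openssl/evp.h>

/* The first fetch of an algorithm builds its method from the provider and
 * caches it in the library context; later fetches reuse the cached method. */
void warm_up_algorithms(void)
{
    /* First fetch: provider is queried, method constructed and cached. */
    EVP_CIPHER *gcm = EVP_CIPHER_fetch(NULL, "AES-256-GCM", NULL);
    EVP_MD *sha384 = EVP_MD_fetch(NULL, "SHA2-384", NULL);

    /* Subsequent fetches of the same names hit the cache, so the cost
     * (and the memory) is paid once per process, not per connection. */
    EVP_CIPHER_free(gcm);
    EVP_MD_free(sha384);
}
```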

dennisokko (Author) commented

@levitte Hi, sorry, I didn't make that clear.
The 78% increase occurred only on the first connection.
Re-establishing a connection after the first one uses a similar amount of memory to 1.1.1f.

dennisokko (Author) commented Oct 30, 2020

Does this mean that the changes to the crypto library cause the increase in runtime memory on the first connection?

levitte (Member) commented Oct 30, 2020

Does this mean that the changes to the crypto library cause the increase in runtime memory on the first connection?

Yes

richsalz (Contributor) commented

It's building some tables of function pointers. I am surprised it is noticeable. A malloc-trace or some such would help figure things out.
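A minimal sketch of such a malloc trace using OpenSSL's CRYPTO_set_mem_functions hook (the installer name is hypothetical; the hook must be installed before any other OpenSSL call, otherwise it is rejected):

```c
#include <openssl/crypto.h>
#include <stdio.h>
#include <stdlib.h>

static size_t total_allocated;

/* Counting wrappers: log every allocation OpenSSL makes with its call site. */
static void *trace_malloc(size_t num, const char *file, int line)
{
    total_allocated += num;
    fprintf(stderr, "alloc %zu bytes at %s:%d\n", num, file, line);
    return malloc(num);
}

static void *trace_realloc(void *addr, size_t num, const char *file, int line)
{
    fprintf(stderr, "realloc to %zu bytes at %s:%d\n", num, file, line);
    return realloc(addr, num);
}

static void trace_free(void *addr, const char *file, int line)
{
    (void)file; (void)line;
    free(addr);
}

int install_malloc_trace(void)
{
    /* Must run before OpenSSL allocates anything; returns 0 otherwise. */
    return CRYPTO_set_mem_functions(trace_malloc, trace_realloc, trace_free);
}
```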

dennisokko (Author) commented Oct 31, 2020

@levitte Can I add build-time feature options, such as no-ec, to trim the default provider and reduce the increase in runtime memory on the first connection?
Do you have any recommendations?

dennisokko (Author) commented Nov 2, 2020

Is the default provider loaded per thread or per process? Is it loaded only once in a multi-threaded program?

kaduk (Contributor) commented Nov 2, 2020

@levitte Can I add build-time feature options, such as no-ec, to trim the default provider and reduce the increase in runtime memory on the first connection?
Do you have any recommendations?

A similar question was answered at #13219 (comment), but note that this is known to regain only part of the memory footprint in question.

Is the default provider loaded per thread or per process? Is it loaded only once in a multi-threaded program?

I believe it is per-process.
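A minimal sketch of loading the default provider once per process with the OpenSSL 3.0 API, assuming the helpers (hypothetical names) are called from main() before and after the worker threads run:

```c
#include <openssl/provider.h>

static OSSL_PROVIDER *default_prov;

/* Load the default provider explicitly at startup so the one-time cost is
 * paid up front; worker threads then share the process-wide cached methods. */
int init_crypto_once(void)
{
    default_prov = OSSL_PROVIDER_load(NULL, "default");
    return default_prov != NULL;
}

void shutdown_crypto(void)
{
    if (default_prov != NULL)
        OSSL_PROVIDER_unload(default_prov);
}
```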

dennisokko (Author) commented

After loading the default provider, each connection uses about 1 KB more runtime memory. Is the change to "struct ssl3_state_st *s3;" in struct ssl_st the cause?

levitte (Member) commented Nov 4, 2020

After loading the default provider, each connection uses about 1 KB more runtime memory. Is the change to "struct ssl3_state_st *s3;" in struct ssl_st the cause?

I assume not. The structure itself is the same; it's just that in 3.0 it is embedded inside struct ssl_st, while in 1.1.1 it is a separately allocated chunk of memory. So, looking only at that particular structure, it should take 4 or 8 bytes less (one pointer less, and that's ignoring memory allocation overhead), assuming the arrays in that structure are still the same size.
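A simplified illustration of the layout difference being described, with a deliberately small, hypothetical state structure standing in for the real ones:

```c
#include <stdio.h>

/* In 1.1.1 the ssl3 state is a separately allocated object reached through
 * a pointer; in 3.0 it is embedded directly in the SSL object. */
struct state { unsigned char buf[1024]; };

struct ssl_like_1_1_1 { struct state *s3; /* plus a separate malloc of sizeof(struct state) */ };
struct ssl_like_3_0   { struct state s3;  /* embedded, no extra allocation */ };

int main(void)
{
    /* The embedded form drops one pointer and one heap allocation, so by
     * itself it should use slightly less memory, not more. */
    printf("1.1.1-style: %zu + %zu bytes (two allocations)\n",
           sizeof(struct ssl_like_1_1_1), sizeof(struct state));
    printf("3.0-style:   %zu bytes (one allocation)\n",
           sizeof(struct ssl_like_3_0));
    return 0;
}
```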
