Proposal for tests architecture. #96
I still have no idea how to easily make the test framework split a message into different sets of skbs. Probably we can use TCP_NODELAY and write the message in different parts, but I'm not sure about buffering on the Python side. However, I believe the task is solvable in a generic way. I suppose the message integrity check and probably the skb splitting can be done on the deproxy side. If we need the same for Nginx tests, then we can just implement the logic as a separate class/module.
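The TCP_NODELAY idea could look roughly like this on the Python side (a sketch only; whether each write actually leaves the host as its own skb/segment still depends on the kernel):

```python
import socket

def split_message(message: bytes, chunk_size: int) -> list:
    """Split a message into fixed-size chunks, one write per chunk."""
    return [message[i:i + chunk_size]
            for i in range(0, len(message), chunk_size)]

def send_in_parts(host: str, port: int, message: bytes, chunk_size: int = 8):
    """Send the message in several small writes with Nagle disabled."""
    with socket.create_connection((host, port)) as sock:
        # Disable Nagle's algorithm so small writes are not coalesced
        # on our side; the peer should then see several segments.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for chunk in split_message(message, chunk_size):
            sock.sendall(chunk)
```

Even with TCP_NODELAY the receiver may still coalesce segments into one read, which is exactly the buffering uncertainty mentioned above.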
All in all, with https://www.youtube.com/watch?v=oO-FMAdjY68 and the requirements above in mind, I believe that the test framework should use relatively complex OO class inheritance for the helper functionality, while the tests should just switch the necessary flags on/off for message integrity calculation, sending in different skbs, and other things. I.e. the helper framework must provide as declarative an API as possible, and it's not so important how complex it is internally.
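One possible shape for such a declarative API, where a test only flips class-level flags and a (hypothetical, not the framework's actual) helper class does the work:

```python
import hashlib

class MessageChecks:
    """Hypothetical helper: subclasses toggle class-level flags,
    the framework applies the corresponding behaviour."""
    check_integrity = False  # compute/verify a body checksum
    segment_size = 0         # 0: send whole message; >0: split into chunks

    def prepare(self, body: bytes):
        """Return the outgoing chunks and an optional integrity digest."""
        if self.segment_size:
            chunks = [body[i:i + self.segment_size]
                      for i in range(0, len(body), self.segment_size)]
        else:
            chunks = [body]
        digest = hashlib.sha256(body).hexdigest() if self.check_integrity else None
        return chunks, digest

class SplitAndCheckTest(MessageChecks):
    # The test itself just turns the knobs on.
    check_integrity = True
    segment_size = 4
```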
Yes, we had them. We need to discuss this in chat considering particular cases. There are different possibilities for what happens in this situation: either we grow the framework so that it can run many different things in the future, or we just keep case-specific logic in the framework. Let's talk over the cases in the chat. The cases should make the framework design problems clear.
gen_response_func and gen_request_func are nice and must be quite useful, but it seems they also can be done as an extension (feature) of the current framework.
Update: it seems the callbacks are only needed to generate different requests and responses for a single deproxy/Tempesta run. Actually, test_malformed_headers.py and test_tls_integrity.py use the current API for the same things: we have set_response() to dynamically change the deproxy response, and each response is trivially checked. Please review test_tls_integrity in #103 and let's discuss there whether the architecture can benefit from the callbacks.
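The set_response() pattern referred to above can be illustrated with a stub (a stand-in for the discussion, not deproxy's real server class):

```python
class StubServer:
    """Stand-in for a deproxy server: every incoming request is
    answered with the currently installed canned response."""
    def __init__(self):
        self._response = b""

    def set_response(self, response: bytes):
        self._response = response

    def handle(self, request: bytes) -> bytes:
        return self._response

# One server instance, a different response per iteration: the
# dynamic-response loop that makes per-run callbacks unnecessary.
server = StubServer()
for body in (b"a", b"bb", b"ccc"):
    server.set_response(body)
    assert server.handle(b"GET / HTTP/1.1\r\n\r\n") == body
```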
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
The set_response() callback in the new framework is much closer to the behaviour I want. But it still has a lot of things to enhance (as I think). Tests in the current framework are not scalable: you can't run the same test on one or 1,000 connections simultaneously.

How the current tests in the new framework run:

- … (set_response() you've mentioned above),
- Run button: wait until all server connections are established and begin to start requests, wait until all messages are sent;
- … test_tls_integrity test. The tests contain no headers validation by default, and it must be implemented.

How to make the test scalable:

- Run button: …

The second case is more scalable: the test itself is tolerant to the number of concurrent connections. Maybe it's my misbelief, but I think that scalability from one connection to thousands is important. Issue #107 is very tightly connected here. It's a huge amount of work while we have plenty of other tasks, I understand that. But I don't know of more suitable tools here.
Sure, some special changes will be required to make tests scalable, e.g. in cache tests each client must request unique URIs; in scheduler tests we would like to see the backend server ID in the response; anyway, it's better than checking request counters across all backends.
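For the cache-test example, unique per-client URIs could be generated trivially (a hypothetical helper, just to show the idea):

```python
def unique_uri(base: str, client_id: int, request_id: int) -> str:
    """Give every client/request pair its own URI so concurrent
    clients never hit each other's cache entries."""
    return f"{base}?cid={client_id}&rid={request_id}"
```

For example, `unique_uri("/page", 3, 7)` yields `/page?cid=3&rid=7`, so a thousand concurrent clients exercise a thousand distinct cache entries.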
Please have a look at pv(1). TCP_NODELAY also could be an answer.
Can't we do the same client/connection/request tracking in current deproxy?
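Per-client/connection/request bookkeeping of the kind asked about could be sketched like this (a standalone illustration under assumed names, not deproxy's actual API):

```python
from collections import defaultdict

class RequestTracker:
    """Track requests per (client, connection) pair so a test can
    assert on counts without summing counters across all backends."""
    def __init__(self):
        self._requests = defaultdict(list)

    def on_request(self, client_id, conn_id, request):
        self._requests[(client_id, conn_id)].append(request)

    def count(self, client_id, conn_id) -> int:
        return len(self._requests[(client_id, conn_id)])
```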
Each (or almost each) test case must configure an ADC, and the configuration language and features are very different for different ADCs. Nginx already has its own, pretty large, test suite, but we can't just use it, mostly because of the differences in configurations and features.
It seems to me that these things can be done in deproxy.
Makes sense for the generic case, but not so much if a developer changes low-level logic without changing any features and is curious about network operation. I propose to be able to set this in the config as well as for particular tests. This way you can run, for example, a particular TLS test with message integrity checking and/or fuzzing (strictly speaking, skb splitting isn't real fuzzing), while at the same time some tests oriented on the low-level logic make sense only with one or a few options enabled.
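A minimal sketch of how per-test overrides on top of a global config could work (all option names here are hypothetical):

```python
# Global config sets the defaults for every test run.
GLOBAL_DEFAULTS = {"check_integrity": False, "split_skb": False}

def effective_options(test_overrides: dict) -> dict:
    """A particular test overrides only the flags it cares about;
    everything else falls back to the global config."""
    opts = dict(GLOBAL_DEFAULTS)
    opts.update(test_overrides)
    return opts
```

With this precedence, a low-level TLS test can force `split_skb` on for itself even when the global run leaves it off, matching the proposal above.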
With the recent TLS changes affecting the TCP transmission process, we have to test TCP operation (@i-rinat has questions regarding the TCP segment sizes generated by Tempesta). The question will be even harder for HTTP/3. Probably Scapy isn't a good option for this, because it'd be too tricky to generate a TCP flow by hand. Probably iptables mangling and/or eBPF with the usual TCP options like Nagle will suit best. We already have something in helpers/analyzer.py and helpers/flaky.py.