Additional test types #133

Open
bhauer opened this Issue Apr 11, 2013 · 67 comments

@bhauer
Member

bhauer commented Apr 11, 2013

We plan to add new test types over time. The following is a summary of tests we have presently and those we plan to specify and implement in the future.

  1. Present: JSON Serialization, in which a trivial newly-instantiated object is serialized to JSON.
  2. Present: Single database query, in which a single random row is fetched (via the framework's ORM) from a simple database table containing 10,000 rows and then serialized to JSON.
  3. Present: Multiple database queries, which is similar to the previous test but allowing the number of random rows to be specified as a URL parameter with the results rendered as a JSON list.
  4. Present: Server-side template and collections test. This involves retrieving a small number of rows from the database, sorting within the application code (not within the database), and rendering to HTML via server-side templates. No external assets will be referenced by the templates. This is detailed in issue #134.
  5. Present: Database update test. This is a variation of test 2. A single row will be fetched via the ORM, some trivial math will be applied to the random number field of the row, and then the object will be persisted using the ORM. This is intended to exercise the ORM's ability to persist rows, so the trivial math isn't applied directly to the row using SQL. This is detailed in issue #263.
  6. Present: Small plaintext responses. This is detailed in issue #290.
  7. Future: Caching test. Testing caching might begin with a variation of test 2 using the framework's caching capability, but we will also want to test caching results of more complex query operations. See #374. This is likely to be the next test type.
  8. Future: Server-side templates with assets. This will extend test 4 and add to-be-determined assets, at least composed of a style-sheet (CSS), but possibly also including JavaScript. Performance-wise, this likely won't differ much from test 4. However, it will be an opportunity for readers to dig into the code and observe the frameworks' variety of approaches for handling assets.
  9. Future: Compression tests. Add gzip or deflate compression to one or more tests.
  10. Future: SSL tests. Add SSL to one or more tests. This is detailed in issue #3290.
  11. Future: WebSocket enabled tests. (High concurrency is desirable here.)
  12. Future: Tests that exercise requests made to external services and therefore must go idle until the external service provides a response. (High concurrency is desirable here.)
  13. Future: JSON responses with larger workloads (complex data structure serialization).
  14. Future: Transactional update test. See #326.
  15. Future: Large plaintext responses.
  16. Future: Complex routing map test. Require a given number of routes to be present to exercise the overhead of a larger routing map/table/tree.
  17. Future: Heavy model test, involving a larger number of entity objects and classes, as suggested by @methane in comments below.
  18. Future: CSRF protection and form processing test as suggested by @michaelhixson below. Note that @wg of Wrk fame has made a special version that selects from a list of requests and might allow us to run this test with Wrk.
  19. Future: Large static response test as suggested by @weltermann17 below.
  20. Future: Static file serving, to exercise the performance of the web-server. To be clear, this test would be expected to bypass the framework where applicable and be served directly by the web server or application server, whichever is available and best suited.
  21. Future: Penetration test(s). This would require additional client-side testing tools (beyond the load generator we use today), but would validate the security of the platform and framework combination.
  22. Future: TCP-heavy test. This test would mirror the Plaintext test but eliminate both pipelining and keep-alive—each request would need to be connected via a TCP socket and disconnected. It may be the case that such a test runs into networking-layer limits in the Linux TCP stack, so we'll need to be prepared to do some tuning there.

For the time being, we're still interested in relatively simple tests that exercise various components of the frameworks. But we're also interested in hearing your thoughts on more tests for the long term. If you have any ideas, please post them here.
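To make the existing tests concrete, here is a rough sketch of what test type 2 (single database query) can look like. This is only an illustration assuming Flask and Flask-SQLAlchemy; the database URI, table, and column names are placeholders, and the authoritative requirements are the ones published on the results site.

import random

from flask import Flask, jsonify
from flask_sqlalchemy import SQLAlchemy

# Illustrative sketch of the "single query" test: fetch one random row of
# 10,000 via the ORM and serialize it to JSON. All names are placeholders.
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://benchmark:benchmark@database-host/hello_world'
db = SQLAlchemy(app)

class World(db.Model):
    __tablename__ = 'world'
    id = db.Column(db.Integer, primary_key=True)
    randomnumber = db.Column(db.Integer)

@app.route('/db')
def single_query():
    row = World.query.get(random.randint(1, 10000))
    return jsonify(id=row.id, randomNumber=row.randomnumber)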

@robertmeta

robertmeta commented Apr 12, 2013

Any chance of getting more significant levels of concurrency being tested? At least 1,000+ concurrency, and ideally 10,000+. Concurrency seems to be exceptionally under-represented in these tests.

@bhauer

Member

bhauer commented Apr 12, 2013

Hi @robertmeta. Thanks for the input! Would you mind reviewing the thread on issue #49 regarding concurrency levels and perhaps add to that conversation? It's my opinion that higher concurrency levels beyond what we have provided here would be useful if we were ready to benchmark high-connection low-utilization Websockets. But presently we are testing high-traffic traditional HTTP where responding to requests as quickly as possible is the paramount objective.

As with anything though, I'm prepared to be proven wrong. :)

@bitemyapp

Contributor

bitemyapp commented Apr 12, 2013

I'm with bhauer; this isn't "how many users can we serve per server on our chat service".

@drewcrawford


drewcrawford commented Apr 12, 2013

Some things I would like to see in the future:

  • msgpack tests. Msgpack is rapidly becoming an alternative to JSON, particularly with non-browser clients.
  • multirow reads/writes, perhaps computing a mathematical function from a table or updating a hundred rows. Almost all of the requests I serve are multirow reads or writes, and frameworks usually have some per-row overhead.
  • Making an "onward" request to another server. This tests the outbound HTTP stack. (A rough sketch follows this list.)
  • A relationship (join) test, where you are using the ORM to relate two or more entities in a parent/child configuration. Frameworks take different approaches for eager vs lazy loading; the results may be interesting. Maybe construct a loop of entities and then check it for cycles.
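A minimal sketch of the "onward" request idea, assuming a plain-Python handler and a hypothetical internal backend URL (both the URL and the return convention are placeholders, not part of any specified test):

import urllib.request

# Illustrative only: the handler makes an outbound HTTP call and relays the
# body, exercising the outbound client stack. The backend URL is a placeholder.
def onward_handler():
    with urllib.request.urlopen('http://backend.internal/json', timeout=5) as resp:
        body = resp.read()
    return body, 200, {'Content-Type': 'application/json'}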
@bhauer

Member

bhauer commented Apr 12, 2013

Hi @drewcrawford. Thanks for the ideas!

No rush, since this is just long-term planning, but I am curious about your second idea concerning multi-row reads and writes. In my head I conceive of that as executing a single UPDATE but I am probably misunderstanding you. Would you be able to draft up some quick pseudo-code to allow me to visualize what you mean?

Your third idea was echoed by another reader, so that's got "high demand" from my perspective. :)

A test of relationships is a great idea too.

@bhauer

Member

bhauer commented Apr 12, 2013

A commenter on HN named Terretta suggested the following. I'm just copying this here for easy future reference.

  1. Exercising a randomized mix of reading and writing. I think you already said you were planning a CRUD test. Consider a tunable ratio here, something like 10000 R to 100 U to 10 C to 1 D.
  2. Exercising synchronous web service (JSONP) calls in two modes: (a) to some web service that is consistently fast and low latency, say, the initial JSON example from this test suite running in servlet mode, and (b) to a web service written in the same framework as the one being tested, again using the initial JSON example. (The idea here is that many frameworks fall on their faces when confronted with latency. This is why synthetic tests are usually so poorly predictive of real world behavior -- people forget that latency causes backlogs and backlogs cause all parts of the stack to misbehave in interesting ways.)
  3. Test async ability if the framework has it, with a system call (sleep?) that takes a randomized 0 - 60 seconds to return. Would help understand when a framework is likely to blow up calling out to a credit card processor, doing server side image processing, etc.
  4. Exercising authentication (standardize on bcrypt, but only create passwords on 1 in 10K requests), authorization, and session state, if offered.
  5. Exercising any built-in support for caching, where 1 in rand(X) requests invalidates the DB query cache, 1 in rand(X) requests invalidates the WS call cache, 1 in rand(X) requests invalidates the long term async system call cache, and 1 in rand(Y) requests blows away the whole cache.
  6. For the enterprise legacy integrators, it would also be interesting to test XML as well (in particular, SOAP), anywhere we're testing JSON.
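As a rough illustration of suggestion 3 above, a handler along these lines would go idle for a random interval without tying up a worker thread. This sketch assumes asyncio purely as an example; frameworks under test would use their own async primitives.

import asyncio
import random

# Illustrative sketch: simulate a slow external dependency (credit card
# processor, image processing, etc.) with a randomized async sleep.
async def slow_dependency(max_seconds: float = 60.0) -> dict:
    await asyncio.sleep(random.uniform(0.0, max_seconds))
    return {'status': 'ok'}

# e.g. asyncio.run(slow_dependency(0.5))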
@bhauer

Member

bhauer commented Apr 12, 2013

Another HN commenter named kbenson suggested a goal of defining a simple blog-style application. This is ambitious but we've already passed the threshold at which we require community contributions in order to move forward (even adding one more simple test will require several pull requests from the community to see that test implemented in more than a small sampling of the frameworks).

With that in mind, I think it's a great item to have on the long-term plan. If we keep the requirements simple, it could be done.

@drewcrawford

drewcrawford commented Apr 12, 2013

Would you be able to draft up some quick pseudo-code to allow me to visualize what you mean?

a = 0
b = 1
for i from 1 to 100
    insert into table values(a+b)
    c = b
    b = a + b
    a = c

or

i = 0
not_quite_sum = 0
for row in table:
    if i is even:
        not_quite_sum += row.field
    else:
        not_quite_sum -= row.field
    i += 1

The key insight being

  • there's a for loop
  • each pass of the for loop operates on one row
  • the overall operation is simple, but not so simple that it's natural to do in a SQL one-liner

The interesting thing about this test is that it does reads/writes in the same connection. Whereas in the single row access case the dominating factor might be setting up the connection or acquiring it from a shared pool, here the test is about how quick the ORM bindings are once they're in place and how fast you can move memory between the DB process and the application process.
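For comparison, the read variant above might look like this through an ORM, with the per-row work kept in application code rather than in SQL. SQLAlchemy-style names are assumed here purely for illustration; 'session', 'model', and 'field' are whatever the framework's ORM provides.

# Illustrative only: iterate the rows via the ORM and do trivial per-row math
# in the application. 'field' is an assumed integer column on the model.
def not_quite_sum(session, model):
    total = 0
    for i, row in enumerate(session.query(model).all()):
        total += row.field if i % 2 == 0 else -row.field
    return total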

@bhauer

Member

bhauer commented Apr 12, 2013

@drewcrawford, thanks! I understand what you have in mind now. For some reason, I read your original statement to imply something much more complicated.

But you simply mean a test that operates over multiple rows in a single result set (in the case of reading), with per-row work occurring within the application rather than within SQL functions. Your pseudo-code illustrates the idea well.

@bhauer

Member

bhauer commented May 9, 2013

Note that requirements for each test type are now posted at the results web site: http://www.techempower.com/benchmarks/#section=code

@bhauer

Member

bhauer commented May 18, 2013

I just edited this issue to indicate the updates test is "present," and to add quick notes about the need to implement plaintext tests (both small and large payloads) and a larger work-load JSON test (something involving a complex and large data structure).

@michaelhixson

Member

michaelhixson commented Jul 12, 2013

A test that exercises form rendering, validation, and CSRF protection could be interesting. I'm pretty sure most of the full stack frameworks have utilities for those. Maybe the test would have three parts? One server-side implementation, but three sets of wrk parameters: (a) GET the form, (b) POST the form with errors, (c) POST the form successfully.
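To make the moving parts concrete, here is a minimal sketch of the CSRF mechanics such a test would exercise. This is illustrative only; frameworks under test would use their own form and CSRF utilities rather than hand-rolled helpers like these.

import hashlib
import hmac
import secrets

# Hypothetical helpers: issue a per-session CSRF token and verify it on POST.
# A real framework ties this into its form rendering and validation layer.
SECRET_KEY = secrets.token_bytes(32)

def issue_csrf_token(session_id: str) -> str:
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def check_csrf(session_id: str, token: str) -> bool:
    return hmac.compare_digest(issue_csrf_token(session_id), token or '')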

@bhauer

Member

bhauer commented Jul 26, 2013

I've added test type 16, a more complex routing table/map/tree based on Christopher Lord's comment on the Google Group:

https://groups.google.com/d/msg/framework-benchmarks/r0B3tPaCMPs/_PG1_p1McbwJ

@methane

Contributor

methane commented Jul 28, 2013

Heavy model test:

  • Make 10 ActiveRecord or RowGateway classes. Instantiate one from each table.
  • Make an additional 100 classes, each with a single method. Instantiate and call them during each request.

This test may reveal the cost of class loading [1], method calls, and GC.

[1] Some languages, like PHP, load classes on each request.

@weltermann17

Contributor

weltermann17 commented Jul 30, 2013

I think a pretty simple additional test would be serving static content of different sizes (100k, 1m, 10m, 100m?). Frameworks that perform similarly to (maybe even better than) Apache httpd in this domain could make life a lot easier for full-blown web applications than those that serve small content extremely well but degrade significantly when content sizes get large. With our framework PLAIN, for instance, we generate dynamic content of 3D data (JT, 3DXML, CATIA) that quickly reaches sizes >100m. Streaming those to files and then serving them with an httpd or the like would be a big drawback in terms of performance and complexity.

@bhauer

Member

bhauer commented Aug 2, 2013

A note for any future SSL test: we should aim to ensure that the cipher suite being used is ECDHE-based so that we're testing a proper production configuration with perfect forward secrecy. SSL tests will not be easy. SSL configuration on a single platform can be complicated; getting it right on several will take some effort.

@bhauer

Member

bhauer commented Aug 2, 2013

@methane I like your "heavy model" test. Do I have it summarized correctly below?

  • Create 10 ORM-wired entity classes. During the scope of each request fetch one row from each of the 10 tables (this isn't really a database-centric test, though, so perhaps we simplify things a bit and always fetch the same row?)
  • Create another 100 classes that are not wired to the ORM but have a method that must be called. Do we require an instance of each be instantiated during each request? What sort of operation would the method run? Something trivial, yes?
@bhauer

Member

bhauer commented Aug 2, 2013

@weltermann17 Could you give me a little more detail about what you have in mind? I'm worried that a test of large static asset delivery would be fairly uninteresting because we'll saturate our gigabit Ethernet connectivity between the servers almost immediately (even lower down the performance ranks than we presently do with our intentionally small-payload plaintext and JSON tests).

But you mention very large dynamic responses, which could be interesting, assuming those large responses need to be computed on the fly in some manner.

Maybe I'm misunderstanding something?

@methane

Contributor

methane commented Aug 2, 2013

@bhauer

(this isn't really a database-centric test, though, so perhaps we simplify things a bit and always fetch the same row?)

Yes. It may reveal the cost of the data mapper. The "Queries" test uses only one table; "Complex Model" should use more tables and columns.

Do we require an instance of each be instantiated during each request?

Yes. Both instance creation and method calls should be measured.

What sort of operation would the method run? Something trivial, yes?

It should not be something the compiler can optimize away.

Sample code:

@app.route('/complex-model')
def complex_model():
    entities = []
    entities.append(Entity1.query.get(1))
    entities.append(Entity2.query.get(1))
    # ...
    entities.append(Entity10.query.get(1))

    msgs = []
    ModelClass1().method(msgs)
    ModelClass2().method(msgs)
    ModelClass3().method(msgs)
    # ...
    ModelClass10().method(msgs)
    return render_template('complex_model.tpl', entities=entities, messages=msgs)

class Entity1(Model):
    __tablename__ = 'entity1'
    id = Column(Integer, primary_key=True)
    col1 = Column(Integer)
    col2 = Column(Integer)
    col3 = Column(Integer)
    # ...
    col10 = Column(Integer)

# ... Entity10

class ModelClass1(object):
    def method(self, msgs):
        ModelClass1_1().method(msgs)
        ModelClass1_2().method(msgs)
        # ...
        ModelClass1_10().method(msgs)

# ... ModelClass10

class ModelClass1_1(object):
    def method(self, msgs):
        msgs.append("hello 1-1")

class ModelClass1_2(object):
    def method(self, msgs):
        msgs.append("hello 1-2")

# ... ModelClass10_10
@bhauer

Member

bhauer commented Aug 2, 2013

@methane Thanks! I like that. Implementations will involve quite a bit of copying and pasting, but that's easy enough.

I'll get this and the others mentioned in the comments added to the list above.

@weltermann17

Contributor

weltermann17 commented Aug 2, 2013

@bhauer
Thanks for your comment. I tested some frameworks locally serving a 50 MB and a 250 MB file. The frameworks perform quite differently: from 3.5 down to 0.3 gigabytes/sec (on an 8-core i7 under OS X), a factor of 10. But you are absolutely right: with a 1-gigabit cable between client and server, the throughput drops to 112 MB/sec for each of them. Still, the frameworks with a throughput of >3 GB/s do a better job. If you want I can provide more details.
Creating huge dynamic responses is, in my opinion, very domain specific and should not be part of your test suite. Testing static content would eliminate the cost of creating it and concentrate on how well frameworks can serve it. This kind of test would show whether a framework utilizes the available network capacity and gains from scaling up, or whether the framework itself is the bottleneck in the end.

@bhauer

Member

bhauer commented Aug 3, 2013

@weltermann17 Thanks for the follow-up. I've revised the list above to say static instead of dynamic.

I am surprised to hear there is much in the way of differentiation between the frameworks when transferring large static files on gigabit Ethernet. You say your tests saturated the network, yet the frameworks with higher network-unlimited throughput still do a better job. In what way is that better job measurable? Network-limited, aren't the requests-per-second numbers roughly equivalent?

@msmith-techempower

Member

msmith-techempower commented Nov 19, 2014

Again, I'm fine with Docker as an option for unifying our admittedly difficult installation and setup (we already have a Vagrant script that does this... though I'm going to break it with the next merge).

The only thing I require is that we can install/setup/run the suite without the overhead of a virtualized or contained environment (read: bare metal).

@edsiper

Contributor

edsiper commented Apr 22, 2015

Request: when performing the plaintext test, do it in two modes: with pipelined requests and without. This will help measure the internal architecture's behavior when switching between requests and processing outgoing data.
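A rough sketch of the difference between the two modes, using a raw socket. The host, port, and path are placeholders, and the response parsing is deliberately naive (it assumes small plaintext bodies containing no blank lines).

import socket

def plaintext_requests(host, port, depth=16, pipelined=True):
    # Illustrative only: send 'depth' requests on one keep-alive connection,
    # either all at once (pipelined) or in strict request/response lockstep.
    req = ('GET /plaintext HTTP/1.1\r\nHost: %s\r\nConnection: keep-alive\r\n\r\n' % host).encode()
    with socket.create_connection((host, port)) as conn:
        if pipelined:
            conn.sendall(req * depth)
            read_responses(conn, depth)
        else:
            for _ in range(depth):
                conn.sendall(req)
                read_responses(conn, 1)

def read_responses(conn, count):
    # Count header terminators as a crude proxy for complete responses.
    data = b''
    while data.count(b'\r\n\r\n') < count:
        chunk = conn.recv(65536)
        if not chunk:
            break
        data += chunk
    return data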

@circlespainter

Contributor

circlespainter commented Aug 9, 2015

I'm with @bhauer and @hrj on the parameterized "sleep" test: it could be an easy way to test the specific strengths of async frameworks, as well as Quasar / Pulsar / Comsat ones using fibers (see #1719 and #1720), which offer greater benefits when there are more outstanding requests than OS threads a box can handle.

@Drawaes

Drawaes commented Oct 26, 2016

On SSL/TLS: considering its importance and prevalence these days, has a test or test pack been devised for this yet?

@Drawaes

Drawaes commented Nov 14, 2016

Is anyone looking at SSL/TLS? I am happy to help as much as I can, but with most companies really starting to push this, I think it is a very important test now and even more so going into the future.

@bhauer

Member

bhauer commented Nov 28, 2016

I have added a "TCP heavy" test type number 22 in the list above. This is based on feedback from Daniel Nicoletti here: https://groups.google.com/d/msg/framework-benchmarks/2LRga8pkm6E/YOqNI58lAgAJ

@cjnething

Contributor

cjnething commented Nov 29, 2016

Hi @weltermann17, it looks like Scala (plain) was removed in this PR because it wasn't working and no one could find documentation or evidence of maintenance. Pinging @knewmanTE for further help/next steps.

@knewmanTE

Contributor

knewmanTE commented Nov 29, 2016

@weltermann17 in an effort to clean up the suite, we removed Scala/plain because it was failing our suite's tests and appeared to be unmaintained. I apologize if this wasn't the case! If you are able to get the Scala/plain tests working again, please open up a pull request to get it back in! You can find the code for our last known implementation here, and you can check it out with that commit (beb9b3e). Though, to get it merged back into the main code base, you should be testing the framework on a branch that also contains the latest changes from master.

@greenlaw110

Contributor

greenlaw110 commented Jan 30, 2017

@tmds

tmds commented Apr 11, 2017

Future: SSL tests. Add SSL to one or more tests.

+1

Future: WebSocket enabled tests. (High concurrency is desirable here.)

+1

@msmith-techempower

Member

msmith-techempower commented Jan 9, 2018

Ugh... this issue has been open for so long.

We should break each of these out into its own issue for simpler management and clarity.

@boazsegev

Contributor

boazsegev commented Jan 20, 2018

Why is SSL/TLS a framework concern? Isn't it a server/proxy concern?

In most production environments the server (nginx/apache) deals with the TLS/SSL (along with load balancing) and the framework deals with the data.

Pushing TLS/SSL into frameworks often involves reduced security and redundant code (duplicating the server's job in the framework's codebase, but usually with fewer resources and less knowledge).

Do we really want to impress upon developers TLS/SSL as a high value in the framework layer (vs. the server layer)?

@greenlaw110

Contributor

greenlaw110 commented Jan 20, 2018

@boazsegev Agreed. Most frameworks SHOULD NOT take SSL/TLS into consideration at all. A framework might support SSL/TLS to help with testing; however, in a real production environment it's the frontend server taking on those jobs.

@Drawaes

Drawaes commented Jan 20, 2018

So the only encryption you do is on the front end servers? "In production" many environments have TLS between layers and in service-to-service calls. In fact, some industries (especially heavily regulated ones) require secure communication between these layers. Securing only "front end servers" and perimeter security is becoming a legacy way of thinking.

As someone who deals with and has looked into how these frameworks interact with openssl/boring/s2n/schannel, there is a vast difference in performance.

Also, HTTP/2 basically forces connections to use TLS.

@benaadams

Contributor

benaadams commented Jan 20, 2018

Agreed. Most of the frameworks SHOULD NOT take SSL/TLS into their consideration at all.

That's fine. Don't submit a test case for a TLS benchmark for that framework. Or submit an additional variant just for HTTPS which is a framework + frontend server variant (+ nginx, etc.).

It still should be measured, as it's a very important consideration for the web stack: 50% of web traffic now runs over TLS, and it's estimated that by 2019 it will be 75%.

It seems nonsensical not to measure what is soon to be 75% of real-world deployment scenarios for public web traffic.

@benaadams

Contributor

benaadams commented Jan 20, 2018

If a layer of nginx, HAProxy, Apache, Squid, etc. for TLS termination commonly performs badly, that should encourage the frontend server to improve its performance; if a framework performs badly in isolation, that will encourage the framework to look at improving its integrations; and if it's not measured, that's a huge gap in understanding.

@greenlaw110

Contributor

greenlaw110 commented Jan 20, 2018

In production" many environments have TLS between layers and in service to service calls. Infact some industries (especially heavily regulated ones" require secure communication between these layers

Can't these services service requests from other service through the frontend servers? My understanding (I might be wrong) is app server always sit behind nignx (or apache httpd etc) and have it to deal with SSL/TLS and service static resources, which are not the strength of app server (normally).

If a layer of ngnix; HAProxy; Apache; Squid; etc for TLS termination commonly performs badly.

really? my thought is these frontend servers should do good job on these low level tasks.

if it performs badly in isolation it will encourage the framework to look improving their integrations

What kind of intergration? shouldn't that be simply a reverse proxy setup? How to improve the integration for that?

@Drawaes

Drawaes commented Jan 20, 2018

You want to route every service to service call through a front end server? And how do you secure the communication to said server?

So you have two options: a framework is usable as is, or it has to be behind a proxy. If it's the latter then there will also be a cost. This is the modern security reality, so why wouldn't you measure it?

@benaadams

Contributor

benaadams commented Jan 20, 2018

If a layer of nginx, HAProxy, Apache, Squid, etc. for TLS termination commonly performs badly.

Really? My thought is these frontend servers should do a good job on these low-level tasks.

Not saying they don't; but if you are using framework X and it needs a frontend server, which one should you pair it with? Where are the benchmarks for it? Some frameworks already act as hardened edge servers and don't need a frontend server; does that make a difference?

Shouldn't that simply be a reverse proxy setup? How would one improve the integration for that?

What's the best reverse proxy setup to use for frontend server Y and framework X, and what do they support: localhost vs. named pipes vs. unix sockets vs. shared memory, etc.?

Here people are likely to hone choices down and use the best-performing options; other people who do deployments will then look to the example implementations to improve their deployments when using that framework.

@greenlaw110

Contributor

greenlaw110 commented Jan 20, 2018

@benaadams

What's the best reverse proxy setup to use for frontend server Y and framework X, what do they support: localhost vs named pipes vs unix sockets vs shared memory etc.

Hmm... I honestly didn't know there were so many options. Thanks for sharing the information.

@Drawaes

I am actually happy to see the data. My concern is that most frameworks (maybe I am wrong) might choose the proxy approach to handle SSL/TLS, in which case the framework itself doesn't have much room to improve if there is an issue. @benaadams mentioned a few options for integration; it would be pretty interesting to see how they work, but are they really framework specific? Or are they generic practices across frameworks?

@benaadams

Contributor

benaadams commented Jan 20, 2018

but are they really framework specific? Or are they generic practices across frameworks?

They may not be implemented, the code path might be a neglected one, or it might carry unnecessary overhead in a particular framework, etc.; I don't know, I haven't seen benchmarks 😄

The reason I'd most like to see TLS is that the encryption itself on a pre-negotiated connection is really fast on modern CPUs; however, the integration of the crypto library, whether OpenSSL, BoringSSL, libsodium, LibreSSL, etc. (do not roll your own crypto), can be pretty crufty. There can be too many allocations, too much copying, too granular, not granular enough, all sorts of things.

The TechEmpower benchmarks have done wonders for frameworks' HTTP performance by having a set of standardized measurements independently done, and I'd like to see that effect replicated for HTTPS, whichever framework you are into. It's good for the web ecosystem as a whole.

@RX14

Contributor

RX14 commented Jan 20, 2018

I agree that HTTPS between microservices, or between the load balancer and web frontend, is a good idea in general. Benchmarking this would bring visibility to the performance overhead of frameworks in this area (an aim of the test would be to be directly comparable to an HTTP test so the HTTPS overhead could be measured for each framework).

@boazsegev

Contributor

boazsegev commented Jan 20, 2018

In production" many environments have TLS between layers and in service to service calls. Infact some industries (especially heavily regulated ones" require secure communication between these layers...

I think is Redis a great use case example. Communication is performed securely using a secure tunnel (such as using Spiped, Ghostunnel, etc'.

The separation of concerns between security and services is important. It allows security patches to be easily implemented across the board without patching each framework / service separately. Also, the security implementation is often superior to a BYO solution.

What's the best reverse proxy setup to use for frontend server Y and framework X, what do they support: localhost vs named pipes vs unix sockets vs shared memory etc.

I would love have benchmarks about TLS/SSL, but I'm not sure the benchmarks should include any frameworks. Once we use a framework's internal TLS/SSL in a benchmark, we're telling developers that this is a production grade setup.

I'm not even sure I want to know how many applications in the wild use security implementations with known vulnerabilities... it a source for worry for me on SO whenever someone asks about implementing TLS/SSL in a backend application server.

Than again, I'll probably implement TLS/SSL and let security be damned - the pressure to add TLS/SSL is too great and (re)explaining why security concerns should be separated from the framework is getting tiring.

@Drawaes

Drawaes commented Jan 20, 2018

Right, an article on a site dedicated to edge services (F5)... Also, most of your "edge" services will use the exact same underlying lib, e.g. nginx using OpenSSL. And if we take OpenSSL, it's often part of the OS.

So the big issue is... How did they implement it? Did they do a lot of copying and allocations? Are there added bottlenecks?

As someone who works in a very secure environment: we are required to have a high-security edge layer (no one said I was ditching that, and we actually have many layers there), but we are also required to have SSL/TLS "inside" that security ring. To assume a security layer and you are done in banking, health, government, or education is dangerous and in some cases illegal.

@boazsegev

Contributor

boazsegev commented Jan 20, 2018

Also most of your "edge" services will use the exact same underlying lib. Eg ngnix using openssl. And if we take openssl it's often part of the os.

And... if memory serves, openssl uses insecure defaults... but maybe that changed.

As someone who works in a very secure environment...

In your experience, do you use a framework's SSL/TLS implementation for production (to secure internal network communication), or do you tunnel the communication through secure channeling (i.e., spiped)?

I agree that if secure production environments rely on the framework's SSL/TLS layer, than it's important to both test and benchmark that option.

@hiyelbaz

hiyelbaz commented Jan 24, 2018

websockets +1

@bhauer

Member

bhauer commented Feb 14, 2018

Commenters above who have thoughts about TLS/SSL tests: please see and weigh in on the stand-alone issue (#3290).
