
Performance Engineering

Tools

Computing Resources

$ nproc 
4
$ cat /proc/meminfo
MemTotal:        8011216 kB
$ uname -m
x86_64
$ cat /etc/*-release
CentOS Linux release 7.2.1511 (Core) 

Tomcat Config

The subscribe.war and publish.war are deployed in the same Tomcat instance.

apache-tomcat-7.0.69/conf/server.xml

<Connector port="8080" maxThreads="1500" protocol="org.apache.coyote.http11.Http11NioProtocol"
               connectionTimeout="20000"
               redirectPort="8443" />

Input Samples

The messages used in the tests are duplicated from the following sample:

{
  "meta": {
    "type": "EiffelActi\"\"vityStartedEvent",
    "version": "1.0",
    "time": 1234567890,
    "domainId": "example.domain",
    "id": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeee0^^^^"
  },
  "data": {
    "executionUri": "https://my.jenkins.host/myJob/43",
    "liveLogs": [
      {
       "name": "My build log",
       "uri": "file:///tmp/logs/data.log"
      }
    ]
  },
  "links": {
    "activityExecution": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeee1",
    "previousActivityExecution": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeee2"
  }
}
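
In the actual runs it is JMeter that posts these duplicated messages to the publish webapp. As a rough, hypothetical stand-in for that test plan, the sketch below reads the sample above and posts copies of it over HTTP; the file name, endpoint path and copy count are assumptions, not values taken from the real setup.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class DuplicateAndPost {
    public static void main(String[] args) throws Exception {
        // Load the sample message shown above (file name is an assumption).
        byte[] sample = Files.readAllBytes(Paths.get("activity-started.json"));
        // Post copies to the publish webapp (path and count are assumptions).
        for (int i = 0; i < 1000; i++) {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://localhost:8080/publish/producer/msg").openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(sample);
            }
            conn.getResponseCode(); // force the request and read the status
            conn.disconnect();
        }
    }
}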

Performance Test Src

The receiver part is in the performance-test module. Note that the rate configured there should match the real sending rate, which is controlled by the number of threads in the JMeter test plan above.

// msg rate per sec
private static final long RATE = 10;

The number of concurrent consumers is configured by the TestNG annotation:

@Test(threadPoolSize = 500, invocationCount = 500) public void testMethod()
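
For illustration only (this is not the actual performance-test source), a consumer test shaped like the annotation above could look as follows; with threadPoolSize = 500 and invocationCount = 500, TestNG runs 500 invocations on a 500-thread pool, each invocation acting as one concurrent consumer. The broker host, queue name and test window below are assumptions.

import java.util.concurrent.TimeUnit;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.GetResponse;

import org.testng.Assert;
import org.testng.annotations.Test;

public class ConsumerPerfTest {

    // msg rate per sec, as in the snippet above
    private static final long RATE = 10;
    // assumed length of the measurement window
    private static final long DURATION_SEC = 60;

    @Test(threadPoolSize = 500, invocationCount = 500)
    public void testMethod() throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker host
        Connection conn = factory.newConnection();
        try {
            Channel channel = conn.createChannel();
            long received = 0;
            long deadline = System.nanoTime() + TimeUnit.SECONDS.toNanos(DURATION_SEC);
            while (System.nanoTime() < deadline) {
                // Pull one message at a time from an assumed queue name.
                GetResponse response = channel.basicGet("perf.queue", true);
                if (response != null) {
                    received++;
                }
            }
            // A real check could compare 'received' against RATE * DURATION_SEC;
            // here we only assert that this consumer saw traffic at all.
            Assert.assertTrue(received > 0, "consumer received no messages");
        } finally {
            conn.close();
        }
    }
}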

Besides the assertions in the source code, the RabbitMQ Management Plugin also gives us an idea of how fast the messages are taken by the consumers.
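
The same rates can also be read programmatically from the management plugin's HTTP API instead of the UI. A minimal sketch, assuming a local broker with the default guest/guest credentials and a placeholder queue name:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class QueueRateProbe {
    public static void main(String[] args) throws Exception {
        // Queue stats on the default vhost ("/" is URL-encoded as %2F);
        // "perf.queue" is a placeholder queue name.
        URL url = new URL("http://localhost:15672/api/queues/%2F/perf.queue");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        String auth = Base64.getEncoder()
                .encodeToString("guest:guest".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + auth);
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            // The JSON reply includes message_stats.deliver_get_details.rate,
            // i.e. how fast consumers are taking messages off the queue.
            in.lines().forEach(System.out::println);
        }
    }
}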

Performance Test Results

| concurrent consumers | message rate (msgs/sec) | result         | throughput (k msgs/sec) |
|----------------------|-------------------------|----------------|-------------------------|
| 10                   | 100                     | pass           | 1                       |
| 10                   | 500                     | pass           | 5                       |
| 100                  | 10                      | pass           | 1                       |
| 100                  | 100                     | pass           | 10                      |
| 100                  | 500                     | fail (153/500) | 50                      |
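
Throughput in the table is the product of the first two columns; for example, the 100 × 500 case gives 100 × 500 = 50 000 msgs/sec, i.e. 50 k msgs/sec.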

Note that CPU usage is high in the 100 × 500 case, on both the Tomcat server and the machine running the tests. The network might also become a bottleneck. See the following snapshot:
