
Long wait time for the lock in JmsPoolConnectionFactory createJmsPoolConnection #26

Closed
leoheidi opened this issue Jun 18, 2019 · 2 comments

Comments

@leoheidi

Hi IBM Team,

We urgently need your help with an issue we are experiencing in a production environment under high traffic.

The problem is that, under load, getting a connection from the pool becomes a bottleneck, taking over 10 seconds and sometimes up to a minute. From Dynatrace, we can see that the bottleneck occurs at:

Class: JmsPoolConnectionFactory
Method: private synchronized JmsPoolConnection createJmsPoolConnection
Reported Issue: Lock wait time - time that the code is blocked, either because it has to wait before entering a synchronized code block or because it is waiting to acquire a spin lock

Our MQ version is v9, with the following configuration.

ibm.mq.channel=QCHAN.SERVER
ibm.mq.connName=myhost.company.com(1414)
ibm.mq.pool.enabled=true
ibm.mq.pool.idleTimeout=60000
ibm.mq.pool.maxConnections=500
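
For reference, the pool settings above correspond roughly to the following programmatic setup. This is a minimal sketch only: it assumes the starter hands pooling off to the pooled-jms library (org.messaginghub.pooled.jms.JmsPoolConnectionFactory, the class named in the Dynatrace report) when ibm.mq.pool.enabled=true, and the class and setter names are illustrative rather than taken from the starter's actual auto-configuration.

    // Minimal sketch: assumes pooled-jms wraps the IBM MQ connection factory
    // when ibm.mq.pool.enabled=true. Names and values mirror the properties above.
    import javax.jms.ConnectionFactory;

    import com.ibm.mq.jms.MQConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;
    import org.messaginghub.pooled.jms.JmsPoolConnectionFactory;

    public class PooledMqFactorySketch {

        public static ConnectionFactory create() throws Exception {
            MQConnectionFactory mqcf = new MQConnectionFactory();
            mqcf.setTransportType(WMQConstants.WMQ_CM_CLIENT);       // client (network) connection
            mqcf.setChannel("QCHAN.SERVER");                         // ibm.mq.channel
            mqcf.setConnectionNameList("myhost.company.com(1414)");  // ibm.mq.connName

            JmsPoolConnectionFactory pool = new JmsPoolConnectionFactory();
            pool.setConnectionFactory(mqcf);
            pool.setMaxConnections(500);           // ibm.mq.pool.maxConnections
            pool.setConnectionIdleTimeout(60000);  // ibm.mq.pool.idleTimeout (milliseconds)
            return pool;
        }
    }

Each request that reaches the synchronized createJmsPoolConnection method and cannot be satisfied from the pool has to build a brand-new MQ client connection, which is where the reported lock wait would accumulate under load.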

  • ibmmq-jms-spring version(s) affected by this issue:
    com.ibm.mq : mq-jms-spring-boot-starter : 2.1.1
  • Java version (including vendor and platform):
    OpenJDK 1.8 running on Pivotal Cloud Foundry

Best regards,

Michael

@ibmmqmet
Collaborator

Creating a new MQ client connection to a queue manager is always going to be a relatively slow operation - especially if TLS is involved. That's why it should be done as rarely as possible.

There's likely some level of serialisation in the MQ JMS implementation during creation of connections and sessions. And I would not be surprised if there is also a higher-level serialisation around the creation of connections in the JMS Connection Pool implementation - though that too is outside the control of this module.

Are you trying to create lots of real connections simultaneously to fill the pool?

Traces can be used to determine what's happening at the MQ level (both on the Java side and in the qmgr) but it's not easy to decode those.
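
For the Java side, a minimal sketch of turning on the MQ JMS client trace is shown below. It assumes the com.ibm.msg.client.commonservices.trace.* system properties used by the IBM MQ classes for JMS; the output file name is only an example, and the properties can equally be passed as -D JVM options.

    // Sketch only: enable IBM MQ classes for JMS trace via system properties.
    // These must be set before the first MQ JMS call in the JVM.
    public class EnableMqJmsTrace {
        public static void main(String[] args) {
            System.setProperty("com.ibm.msg.client.commonservices.trace.status", "ON");
            // Illustrative output location; adjust for your platform.
            System.setProperty("com.ibm.msg.client.commonservices.trace.outputName", "/tmp/mqjms_trace.trc");
            // ... start the application here; connection and session creation
            // will then be written to the trace file for analysis.
        }
    }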

@leoheidi
Author

Sorry about the late response, and thank you so much for the information. We have finally figured out the root cause: the cloud platform's TCP/IP TIME_WAIT value was shorter than the MQ server-side timeout, so a new connection request on the same port could not be made immediately, but had to wait until the MQ server closed the previous connection on that port.
