Unable to change poolSize for Axis2 RabbitMQ Transport #335

Closed
PasinduGunarathne opened this issue Oct 9, 2023 · 0 comments · Fixed by #337 · May be fixed by #336

@PasinduGunarathne

Description:

The following issues were found in the Axis2 RabbitMQ Transport:

  • The thread named HotDeploymentSchedulerThread has no timeout if it enters the “Waiting on Condition” state
  • The poolSize parameter below is not applied properly to customize RabbitMQ connections for a particular ConnectionFactory
    public RabbitMQConnectionPool(RabbitMQConnectionFactory factory, int poolSize) {
        super(factory);
        this.setTestOnBorrow(true);
        this.setMaxTotal(poolSize);
        this.setMaxTotalPerKey(poolSize);
    }
  • The TOML configuration for increasing the pool size value is not reflected in the codebase (see the sketch after the configuration below)
[transport.rabbitmq]
sender_enable = true
listener_enable = true

[[transport.rabbitmq.listener]]
name = "AMQPConnectionFactory"
parameter.hostname = "localhost"
parameter.port = 5672
parameter.username = "guest"
parameter.password = "guest"
parameter.retry_interval = "10s"
parameter.retry_count = 5
parameter.connection_pool_size = 100
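
Since the pool is built on commons-pool2 (note the setMaxTotal/setMaxTotalPerKey calls in the constructor above), leaving the configured value unapplied means the commons-pool2 defaults (8 objects per key) stay in effect. The following is a minimal sketch, not the actual transport code, of how the configured pool size would be read from the transport description and handed to the constructor above; the parameter key and default value are assumptions.

import org.apache.axis2.description.Parameter;
import org.apache.axis2.description.TransportInDescription;

final class PoolSizeResolver {

    // Assumed parameter key and default; the real transport may use different names/values.
    private static final String POOL_SIZE_PARAM = "connection_pool_size";
    private static final int DEFAULT_POOL_SIZE = 200;

    // Reads the pool size from the RabbitMQ listener's transport parameters,
    // falling back to the default when the parameter is absent or empty.
    static int resolvePoolSize(TransportInDescription trpInDescription) {
        Parameter param = trpInDescription.getParameter(POOL_SIZE_PARAM);
        if (param != null && param.getValue() != null) {
            return Integer.parseInt(param.getValue().toString());
        }
        return DEFAULT_POOL_SIZE;
    }
}

The resolved value would then be passed as poolSize to RabbitMQConnectionPool so that both setMaxTotal and setMaxTotalPerKey reflect the configured size rather than the library defaults.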

Steps to reproduce:

Follow the steps below to reproduce the issues mentioned above.

  1. Create 10 sample RabbitMQ proxy listeners
  2. Update MI v4.1.0 to update level 52
  3. Set up RabbitMQ v3.12.6 or v3.11.13
  4. Enable the RabbitMQ listener configuration
  5. Deploy the RabbitMQ proxy listeners created above

Expected behaviour:

  • All the RabbitMQ proxy listeners should be deployed without any issue
  • If the RabbitMQ proxy listeners are packaged in separate CAR files (e.g. 5 CARs, each containing 2 RabbitMQ proxy listeners), all the CAR files should be deployed

Current behaviour:

  • Only 8 of the RabbitMQ proxy listeners are deployed
  • If the RabbitMQ proxy listeners are packaged in separate CAR files (e.g. 5 CARs, each containing 2 RabbitMQ proxy listeners), only 4 of the CARs are deployed

Thank you,
Pasindu G.

malakaganga added a commit to malakaganga/wso2-axis2-transports-1 that referenced this issue Oct 25, 2023
…oxies

After reviewing the Axis2 transport code, it became evident that maintaining
a shared connection pool is unnecessary. For every deployed listener proxy,
a new ServiceTaskManager is instantiated through RabbitMQEndpoint, so each
proxy has its own ServiceTaskManager instance and that ServiceTaskManager
holds the connection. Parallelizing connections through a pool is therefore
not needed at this level, since parallelization already happens in
MessageListenerTask (an inner class of ServiceTaskManager).

Caching or pooling is also no longer needed here, since each MessageListenerTask
has its own channel created on the ServiceTaskManager's connection, and RabbitMQ
supports multiple channels over a single connection; this is one of the design
choices of AMQP, the protocol RabbitMQ implements. The design was therefore
changed so that connection management resides within the ServiceTaskManager,
rather than every STM sharing the same connection pool.

Fixes: wso2#335
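
The following is a minimal sketch (not the transport code) of the channel-per-task pattern the commit describes: several consumer channels multiplexed over one AMQP connection, so each MessageListenerTask can own a channel while the ServiceTaskManager owns the single connection. The host and queue names are placeholders.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class ChannelPerTaskDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder broker host

        // One connection, playing the role of the ServiceTaskManager's connection.
        try (Connection connection = factory.newConnection()) {
            for (int task = 0; task < 4; task++) {
                int taskId = task;
                // One channel per consumer task, playing the role of a MessageListenerTask.
                Channel channel = connection.createChannel();
                channel.queueDeclare("demo-queue", true, false, false, null);
                DeliverCallback onMessage = (consumerTag, delivery) ->
                        System.out.println("task " + taskId + " received: " + new String(delivery.getBody()));
                channel.basicConsume("demo-queue", true, onMessage, consumerTag -> { });
            }
            Thread.sleep(10_000); // keep the consumers alive briefly for the demo
        }
    }
}

Under this design a shared connection pool (and its poolSize limits) is no longer needed for listener proxies, which is the change this commit makes.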