Unexpected limits on number of connections per client #108

benbertola opened this Issue Aug 30, 2012 · 1 comment


@benbertola

We are using the Astyanax client to connect to Cassandra, and I am seeing an unexpected limit on the number of connections a given JVM can make to a cluster. It seems like Astyanax is placing some sort of artificial cap on the number of connections, but I cannot figure out where.

When running against a standalone Cassandra, my app usually creates only up to 100 connections, regardless of the limits set in the ConnectionPoolConfiguration. When running against a three-data-center cluster in our QA environment, it creates at most 300 connections, even with a 500-connection max.

Here is a simple unit test which, when run against a standalone Cassandra, creates only 66 connections.

import org.junit.Assert;
import org.junit.Test;

import com.netflix.astyanax.AstyanaxContext;
import com.netflix.astyanax.Cluster;
import com.netflix.astyanax.connectionpool.ConnectionPoolConfiguration;
import com.netflix.astyanax.connectionpool.NodeDiscoveryType;
import com.netflix.astyanax.connectionpool.impl.ConnectionPoolConfigurationImpl;
import com.netflix.astyanax.connectionpool.impl.ConnectionPoolType;
import com.netflix.astyanax.impl.AstyanaxConfigurationImpl;
import com.netflix.astyanax.thrift.ThriftFamilyFactory;

public class ConnectionPoolLimitsIntegrationTest {

    InstrumentedConnectionPoolMonitor cpMonitor = new InstrumentedConnectionPoolMonitor();
    ThriftFamilyFactory thriftFactory = ThriftFamilyFactory.getInstance();

    @Test
    public void testConnectionPoolConfigurationTestProperties() throws InterruptedException {

        AstyanaxContext<Cluster> clusterContext = new AstyanaxContext.Builder()
                .forCluster("Velociraptor")
                .withAstyanaxConfiguration(
                        new AstyanaxConfigurationImpl()
                                .setDiscoveryType(NodeDiscoveryType.RING_DESCRIBE)
                                .setConnectionPoolType(ConnectionPoolType.TOKEN_AWARE))
                .withConnectionPoolConfiguration(getConnectionPoolConfiguration())
                .withConnectionPoolMonitor(cpMonitor)
                .buildCluster(thriftFactory);
        clusterContext.start();

        System.out.println("Connections Created: " + cpMonitor.getConnectionCreatedCount());
        // JUnit's assertEquals takes (expected, actual)
        Assert.assertEquals(100, cpMonitor.getConnectionCreatedCount());
    }

    private ConnectionPoolConfiguration getConnectionPoolConfiguration() {
        return new ConnectionPoolConfigurationImpl("VelocityConnectionPool")
                .setPort(9160)
                .setMaxConnsPerHost(500)
                .setSeeds("10.36.44.23")
                .setInitConnsPerHost(100);
    }

}

@opuneet
Contributor
opuneet commented Aug 2, 2013

@benbertola This is a really nice question and a good observation. Just like a cached thread pool, Astyanax focuses on re-using available connections, if any, before creating a new one.
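The cached-thread-pool analogy can be demonstrated with plain JDK classes: when tasks are submitted sequentially, an Executors.newCachedThreadPool() keeps reusing its idle worker instead of spawning one thread per task. (This sketch is just an illustration of the reuse principle, not Astyanax code.)

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CachedPoolAnalogy {

    // Submits tasks one at a time and counts how many distinct worker
    // threads actually ran them.
    static int distinctThreads(int tasks) throws Exception {
        ExecutorService pool = Executors.newCachedThreadPool();
        Set<String> names = new HashSet<>();
        for (int i = 0; i < tasks; i++) {
            // each task finishes (get()) before the next is submitted,
            // so an idle worker is available for reuse
            pool.submit(() -> names.add(Thread.currentThread().getName())).get();
            Thread.sleep(5); // give the idle worker time to return to polling
        }
        pool.shutdown();
        return names.size();
    }

    public static void main(String[] args) throws Exception {
        // Far fewer threads than tasks: the pool reuses before it creates,
        // which is the same policy Astyanax applies to connections.
        System.out.println("threads used for 20 tasks: " + distinctThreads(20));
    }
}
```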

You can look at the class SimpleHostConnectionPool for the details of the implementation. All available connections are parked in a queue; when an operation completes, the connection used to execute it is returned to that queue. When a new operation starts, it first looks for an available connection in the queue. If none is available and the number of active connections is under the permissible limit, a new one is created asynchronously.

But the operation does not wait directly on that new connection: it issues the async connection-create task and then goes back to waiting on the queue for an available connection. It can be unblocked either by its own create task or by a connection returned by another operation.
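The flow described above can be sketched in a few lines. This is a deliberately simplified, hypothetical model (invented class and constant names), not the actual SimpleHostConnectionPool, which also handles timeouts, errors, and host failover:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Simplified sketch of the borrow/return flow described above.
public class BorrowSketch {
    static final int MAX_CONNS_PER_HOST = 3;                  // stands in for setMaxConnsPerHost()
    static final BlockingQueue<String> available = new ArrayBlockingQueue<>(MAX_CONNS_PER_HOST);
    static final AtomicInteger created = new AtomicInteger(); // connections ever opened

    // Borrow: reuse an idle connection if one is parked in the queue;
    // otherwise, if under the limit, kick off an async create, then block on
    // the queue until either the new connection or one returned by another
    // operation shows up.
    static String borrow() throws InterruptedException {
        String conn = available.poll();                       // reuse first
        if (conn != null) {
            return conn;
        }
        int n = created.get();
        if (n < MAX_CONNS_PER_HOST && created.compareAndSet(n, n + 1)) {
            new Thread(() -> available.offer("conn-" + n)).start(); // async create
        }
        return available.take();                              // wait for any connection
    }

    static void giveBack(String conn) {
        available.offer(conn);
    }

    // Runs operations that each borrow and immediately return a connection;
    // returns how many connections were actually opened.
    static int runDemo(int operations) throws InterruptedException {
        for (int i = 0; i < operations; i++) {
            giveBack(borrow());
        }
        return created.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // Sequential operations keep reusing one connection, so the pool
        // never grows toward MAX_CONNS_PER_HOST.
        System.out.println("connections created: " + runDemo(10));
        // -> connections created: 1
    }
}
```

Under this policy the pool only grows when demand is truly concurrent, which is why the observed connection count tracks the workload rather than the configured maximum.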

Make sense?

Judging by the inactivity over the past months here, I'm assuming that this isn't a problem for you anymore. I'm closing this issue, thanks.

@opuneet opuneet closed this Aug 2, 2013