
Scheduled Executor wrong number of tasks returned on getAllScheduled #9784

tkountis opened this issue Jan 30, 2017 · 2 comments



commented Jan 30, 2017

When scheduling tasks with multiple executors, calling getAllScheduled on one of them returns the tasks scheduled per member for all executors, rather than only those belonging to the executor on which getAllScheduled was called.


public void wrongTaskCount()
        throws ExecutionException, InterruptedException {

    String runsCounterName = "runs";
    HazelcastInstance instance = Hazelcast.newHazelcastInstance(null);
    ICountDownLatch runsLatch = instance.getCountDownLatch(runsCounterName);

    int numOfSchedulers = 10;
    int numOfTasks = 10;
    int expectedTotal = numOfSchedulers * numOfTasks;

    // note: the latch count is assumed to be initialized elsewhere
    // (e.g. via runsLatch.trySetCount(...)) so that await() actually blocks

    for (int i = 0; i < numOfSchedulers; i++) {
        IScheduledExecutorService s = instance.getScheduledExecutorService("scheduler_" + i);

        for (int k = 0; k < numOfTasks; k++) {
            s.scheduleAtFixedRate(new ICountdownLatchRunnableTask(runsCounterName), 0, 2, SECONDS);
        }
    }

    runsLatch.await(10, SECONDS);

    int actualTotal = 0;
    for (int i = 0; i < numOfSchedulers; i++) {
        actualTotal += countScheduledTasksOn(instance.getScheduledExecutorService("scheduler_" + i));
    }

    assertEquals(expectedTotal, actualTotal, 0);
}
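The countScheduledTasksOn helper is not shown in the snippet; getAllScheduled returns a per-member map of scheduled futures, so the count is presumably the sum of the value-list sizes. A minimal self-contained sketch of that summing logic (using a plain Map in place of Hazelcast's Member-keyed result, since this does not depend on the Hazelcast types themselves):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CountScheduledSketch {

    // Sums the sizes of all value lists; mirrors counting the
    // lists of scheduled futures that getAllScheduled reports per member.
    static <K, V> int countAll(Map<K, List<V>> perMember) {
        int total = 0;
        for (List<V> tasks : perMember.values()) {
            total += tasks.size();
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, List<String>> perMember = new HashMap<>();
        perMember.put("member-1", Arrays.asList("task-a", "task-b"));
        perMember.put("member-2", Arrays.asList("task-c"));
        System.out.println(countAll(perMember)); // prints 3
    }
}
```

With the bug described above, each per-executor count would include tasks from all executors on a member, inflating actualTotal past expectedTotal.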

@tkountis tkountis added this to the 3.8 milestone Jan 30, 2017

@tkountis tkountis self-assigned this Jan 30, 2017


Contributor Author

commented Jan 30, 2017

Credits to @Danny-Hazelcast for finding it.



commented Jan 31, 2017

This test shows the issue is fixed by #9785.

However, we have a new issue, #9788, related to cluster kill.
