
Kubernetes ConfigMap config store never completes when API server is unreachable #371

Open
InfoSec812 opened this issue May 23, 2018 · 2 comments

Comments

@InfoSec812

When a Vert.x application using the Kubernetes ConfigMap config store cannot reach the Kubernetes API master, config retrieval hangs indefinitely instead of failing. There needs to be a mechanism for it to fail after a specified amount of time.

package com.redhat.qcon

import io.vertx.config.ConfigRetrieverOptions
import io.vertx.config.ConfigStoreOptions;
import io.vertx.core.json.JsonObject
import io.vertx.reactivex.config.ConfigRetriever;
import io.vertx.reactivex.core.AbstractVerticle;
import io.vertx.core.Future;

public class DemonstrateConfigMapProblem extends AbstractVerticle {

    @Override
    void start(Future startFuture) {
        ConfigRetrieverOptions retrieverOptions = new ConfigRetrieverOptions()
        ConfigStoreOptions confOpts = new ConfigStoreOptions()
                .setType("configmap")
                .setConfig(new JsonObject()
                        .put("name", "insult-config")
                        .put("optional", true));
        retrieverOptions.addStore(confOpts);
        ConfigRetriever.create(vertx, retrieverOptions)
                .rxGetConfig()
                .doOnError(startFuture.&fail)
                .subscribe({c -> startFuture.complete()})
    }
}
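
As a client-side workaround until the store itself supports a timeout, the Single returned by rxGetConfig() can be bounded with RxJava's timeout operator so the verticle fails fast instead of hanging. A minimal sketch of the same start() method with that change (the 10-second limit is an arbitrary choice, and java.util.concurrent.TimeUnit must be imported):

    @Override
    void start(Future startFuture) {
        ConfigStoreOptions confOpts = new ConfigStoreOptions()
                .setType("configmap")
                .setConfig(new JsonObject()
                        .put("name", "insult-config")
                        .put("optional", true));
        ConfigRetrieverOptions retrieverOptions = new ConfigRetrieverOptions().addStore(confOpts);

        ConfigRetriever.create(vertx, retrieverOptions)
                .rxGetConfig()
                // Fail the retrieval if the Kubernetes API server has not answered within 10 seconds
                .timeout(10, TimeUnit.SECONDS)
                .subscribe(
                        { config -> startFuture.complete() },
                        { err -> startFuture.fail(err) })
    }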
InfoSec812 changed the title from "Kubernetes ConfigMap config store should have a timeout option" to "Kubernetes ConfigMap config store never completes when API server is unreachable" on May 23, 2018
@cescoffier
Member

This issue needs to be handled in vertx service discovery and vertx config. Do you know a way to reproduce it?

@InfoSec812
Author

InfoSec812 commented May 24, 2018

The snippet above should reproduce it quite handily. Just run it with the KUBERNETES_NAMESPACE environment variable set, but in an environment where no Kubernetes API master is reachable.
