
# Frequently Asked Questions

## How do I use KubernetesClient with IPv6 Kubernetes Clusters?

We're aware of this issue in the Fabric8 Kubernetes Client. Unfortunately, it is caused by the OkHttp transitive dependency. You can check the suggested workaround here:

Using KubernetesClient with IPv6 based Kubernetes Clusters

## What artifact(s) should my project depend on?

Fabric8 version 6 introduces more options with regard to dependencies:

  1. Have compile dependencies on `kubernetes-client` or `openshift-client` - this is no different than what was done with version 5 and before. If you have done custom development involving effectively internal classes, you'll still need to use this option.

  2. Have compile dependencies on `kubernetes-client-api` or `openshift-client-api`, and a runtime dependency on `kubernetes-client` or `openshift-client`. This option provides your application with a cleaner compile-time classpath (see the sketch below).
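
As an illustration of option 2, a minimal Maven sketch might look like the following (the `fabric8.client.version` property name is arbitrary):

```xml
<!-- compile against the API only -->
<dependency>
  <groupId>io.fabric8</groupId>
  <artifactId>kubernetes-client-api</artifactId>
  <version>${fabric8.client.version}</version>
</dependency>
<!-- supply the implementation at runtime -->
<dependency>
  <groupId>io.fabric8</groupId>
  <artifactId>kubernetes-client</artifactId>
  <version>${fabric8.client.version}</version>
  <scope>runtime</scope>
</dependency>
```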

Furthermore, you also have a choice of which HttpClient implementation is used.

By default `kubernetes-client` has a runtime dependency on OkHttp (`kubernetes-httpclient-okhttp`). If you need to directly manipulate OkHttp, add a compile dependency on `kubernetes-httpclient-okhttp`.

If you wish to use another HttpClient implementation, you will typically exclude `kubernetes-httpclient-okhttp` and include the other runtime or compile dependency instead.
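
For example, a sketch of swapping OkHttp for the Vert.x implementation in a Maven pom.xml (artifact names assume a Fabric8 6.x release; `kubernetes-httpclient-jdk` and `kubernetes-httpclient-jetty` are other alternatives):

```xml
<dependency>
  <groupId>io.fabric8</groupId>
  <artifactId>kubernetes-client</artifactId>
  <version>${fabric8.client.version}</version>
  <exclusions>
    <exclusion>
      <groupId>io.fabric8</groupId>
      <artifactId>kubernetes-httpclient-okhttp</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- swap in the Vert.x HttpClient implementation -->
<dependency>
  <groupId>io.fabric8</groupId>
  <artifactId>kubernetes-httpclient-vertx</artifactId>
  <version>${fabric8.client.version}</version>
  <scope>runtime</scope>
</dependency>
```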

## What is the Bouncy Castle optional dependency and when is it required?

BouncyCastle is a Java library that complements the default Java Cryptography Extension (JCE), and it is required for some KubernetesClient features. To use support for EC keys, you must explicitly add this dependency to the classpath. For example, in a Maven project, add the required dependency to the pom.xml file:

```xml
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcpkix-jdk18on</artifactId>
    <version>${bouncycastle.version}</version>
</dependency>
```

## I've tried adding a dependency to kubernetes-client, but I'm still getting weird class loading issues, what gives?

More than likely your project already has a transitive dependency on a conflicting version of the Fabric8 Kubernetes Client. For example, `spring-cloud-dependencies` already depends upon the client. In this case you should fully override the client version via the `kubernetes-client-bom`:

```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.fabric8</groupId>
      <artifactId>kubernetes-client-bom</artifactId>
      <version>${fabric8.client.version}</version>
      <scope>import</scope>
      <type>pom</type>
    </dependency>
    <!-- ... -->
  </dependencies>
</dependencyManagement>
```

That will align all of the transitive Fabric8 Kubernetes Client dependencies to your chosen version, `${fabric8.client.version}`. Please be aware, though, that substituting a different major version of the client may not be fully compatible with the existing usage in your dependency. If you run into an issue, please check for or create an issue for an updated version of that project's library that contains a later version of the Fabric8 Kubernetes Client.
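
To confirm which client version actually ends up on your classpath after the override, you can inspect the resolved dependency tree (a standard Maven invocation, not specific to Fabric8):

```
mvn dependency:tree -Dincludes=io.fabric8
```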

## What threading concerns are there?

There have been a lot of changes under the covers with thread utilization in the Fabric8 client over the 5.x and 6.x releases, so the exact details of which threads are created and used where will depend on the particular release version.

At the core, thread utilization will depend upon the HTTP client implementation. OkHttp maintains a pool of threads for task execution per client, and dedicates 2 threads out of that pool to each WebSocket connection. If you have a lot of WebSocket usage (Informers or Watches) with OkHttp, you can expect to see a large number of threads in use.

The JDK, Jetty, and Vert.x HTTP clients instead maintain a smaller worker pool for all tasks, typically sized by default based upon your available processors, plus one or more selector/coordinator threads. It does not matter how many Informers or Watches you run; the same threads are shared.

To ease developer burden, the callbacks on Watchers and ResourceEventHandlers are not made directly by an HTTP client thread, but the order of calls is guaranteed. This makes the logic you include in those callbacks tolerant of some blocking without compromising the ongoing work at the HTTP client level. However, you should avoid truly long-running operations, as these will cause further events to queue and may eventually cause memory issues.

Note: with any HTTP client implementation, it is recommended that logic you supply via Watchers, ExecListeners, ResourceEventHandlers, Predicates, Interceptors, LeaderCallbacks, etc. does not execute long-running tasks.
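
For instance, here is a minimal sketch of keeping a ResourceEventHandler callback fast by handing the real work off to your own executor (the handler logic, namespace, and pool size are hypothetical):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.informers.ResourceEventHandler;

public class NonBlockingHandlerExample {
  public static void main(String[] args) throws InterruptedException {
    // Separate pool for the (hypothetical) long-running work
    ExecutorService workers = Executors.newFixedThreadPool(2);
    try (KubernetesClient client = new KubernetesClientBuilder().build()) {
      client.pods().inNamespace("default").inform(new ResourceEventHandler<Pod>() {
        @Override
        public void onAdd(Pod pod) {
          // return quickly: submit the heavy lifting instead of doing it inline
          workers.submit(() -> handle(pod));
        }

        @Override
        public void onUpdate(Pod oldPod, Pod newPod) {
          workers.submit(() -> handle(newPod));
        }

        @Override
        public void onDelete(Pod pod, boolean deletedFinalStateUnknown) {
          // cheap bookkeeping is fine inline
        }
      }, 30_000L); // resync interval in milliseconds
      Thread.sleep(60_000L); // keep the informer running for this demo
    } finally {
      workers.shutdown();
    }
  }

  private static void handle(Pod pod) {
    // stand-in for long-running processing
    System.out.println("processing " + pod.getMetadata().getName());
  }
}
```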

On top of the HTTP client threads, the Fabric8 client maintains a task thread pool for scheduled tasks and for tasks that are called from WebSocket operations, such as handling input and output streams and ResourceEventHandler callbacks. This thread pool defaults to an unlimited number of cached threads, which will be shut down when the client is closed; that is a sensible default, as the number of concurrently running async tasks will typically be low. If you would rather take full control over the threading, use KubernetesClientBuilder.withTaskExecutor or KubernetesClientBuilder.withTaskExecutorSupplier; note, however, that constraining this thread pool too much will result in a build-up of event processing queues.
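
A minimal sketch of supplying your own task executor, assuming the 6.x KubernetesClientBuilder methods named above (the pool size is an arbitrary illustration, and exact method names may vary slightly across releases, so check your version's builder API):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class CustomExecutorExample {
  public static void main(String[] args) {
    // A bounded pool; too small a pool can cause event processing queues to build up
    ExecutorService executor = Executors.newFixedThreadPool(4);
    try (KubernetesClient client = new KubernetesClientBuilder()
        .withTaskExecutor(executor)
        .build()) {
      client.pods().inNamespace("default").list()
          .getItems()
          .forEach(pod -> System.out.println(pod.getMetadata().getName()));
    } finally {
      // The client does not own this executor, so shut it down ourselves
      executor.shutdown();
    }
  }
}
```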

Finally, the Fabric8 client will use 1 thread per PortForward and an additional thread per socket connection; this may be improved upon in the future.

## What additional logging is available?

Like many Java applications, the Fabric8 client utilizes slf4j. You may configure support for whatever underlying logging framework suits your needs. The logging contexts for the Fabric8 client follow the standard convention of the package structure; everything from within the client is rooted at the `io.fabric8` context. Third-party dependencies, including the chosen HTTP client implementation, will have different root contexts.

If you are using pod exec (which can occur indirectly via pod operations such as copying files) and are not seeing the expected behavior, please enable debug logging for the `io.fabric8.kubernetes.client.dsl.internal` context. That will emit stdErr and stdOut as debug logs to further diagnose what is occurring.
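
For example, a minimal sketch assuming Logback as the slf4j backend (logback.xml):

```xml
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- debug output for pod exec internals -->
  <logger name="io.fabric8.kubernetes.client.dsl.internal" level="DEBUG"/>

  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```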

## How to use KubernetesClient in OSGi?

Fabric8 Kubernetes Client provides ManagedKubernetesClient and ManagedOpenShiftClient as OSGi Declarative Services. In order to use them, you must have the Service Component Runtime (SCR) feature enabled in your OSGi runtime. In Apache Karaf, you can add this to the Karaf Maven Plugin configuration:

```xml
<plugin>
  <groupId>org.apache.karaf.tooling</groupId>
  <artifactId>karaf-maven-plugin</artifactId>
  <version>${karaf.version}</version>
  <configuration>
    <startupFeatures>
      <feature>scr</feature>
    </startupFeatures>
  </configuration>
</plugin>
```

You also need to provide component configuration files for the client you're using. For example, in the case of Apache Karaf, place configuration files in this directory:

```
src/
└── main
    └── resources
        ├── assembly
        │   └── etc
        │       ├── io.fabric8.kubernetes.client.cfg
        │       └── io.fabric8.openshift.client.cfg
```
Once added, the KubernetesClient declarative services will be exposed automatically on startup. You can then inject the client in your project. Here is an example using Apache Camel's @BeanInject annotation:

```java
@BeanInject
private KubernetesClient kubernetesClient;
```

## Why am I getting an Exception when using * in NO_PROXY?

Starting with Fabric8 Kubernetes Client v6.1.0, we've changed NO_PROXY matching to be as simple as possible, and it no longer supports any meta characters. It honors the GNU Wget spec.

So instead of providing NO_PROXY like this:

(Unsupported) ❌

```
NO_PROXY: localhost,127.0.0.1,*.google.com, *.github.com
```

you should provide it like this:

(Supported) ✔️

```
NO_PROXY: localhost,127.0.0.1,.google.com,.github.com
```