Support TLS for spark dependencies #294
Comments
Dependent issue in spark-dependencies #294
@jpkrohling this is not ES specific. The dependency job supports multiple backends.
Hi @pavolloffay, are there any updates on this?
There isn't any news. If anybody has free cycles, feel free to take it.
I spent a bit of time digging through things and came across this page: https://hub.helm.sh/charts/jaegertracing/jaeger. The page mentions that, in order to have a version of the spark job working, one would need to get the certificates from the elasticsearch cluster, import them into a trust store, and then use the spec:
jobTemplate:
  spec:
    template:
      spec:
        containers:
          - name: jaeger-spark
            args:
              - --java.opts=-Djavax.net.ssl.trustStore=/tls/trust.store -Djavax.net.ssl.trustStorePassword=<some pass>
            volumeMounts:
              - name: jaeger-tls
                mountPath: /tls
                subPath:
                readOnly: true
        volumes:
          - name: jaeger-tls
            configMap:
              name: jaeger-tls
...

Which all sounds like it is in accordance with the issue description.
Correct, the certs have to be imported into the Java keystore in the spark image.
My idea was to implement a script in the spark image that would do that if the certs are specified as env vars.
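A minimal sketch of that idea. Nothing like this exists in the spark image today; every name here (ES_TLS_CA, TRUSTSTORE_PATH, TRUSTSTORE_PASSWORD, the keytool alias) is an assumption for illustration, and the import command is only printed rather than executed, since a real certificate would be needed for it to succeed:

```shell
#!/bin/sh
# Hypothetical startup script: if the image received the CA certificate as an
# env var, write it to disk and import it into a JKS truststore with keytool.
set -eu

# Demo values; in a pod these would come from the container's env.
ES_TLS_CA="-----BEGIN CERTIFICATE----- (placeholder) -----END CERTIFICATE-----"
TRUSTSTORE_PATH="/tmp/trust.store"
TRUSTSTORE_PASSWORD="changeit"

# Write the PEM cert from the env var to a file that keytool can read.
CERT_FILE="$(mktemp)"
printf '%s\n' "$ES_TLS_CA" > "$CERT_FILE"

# keytool ships with the JDK inside the image; in a real script this line
# would be executed instead of echoed.
echo "keytool -importcert -noprompt -alias es-ca" \
     "-file $CERT_FILE -keystore $TRUSTSTORE_PATH -storepass $TRUSTSTORE_PASSWORD"
```

The resulting truststore path would then be referenced via `-Djavax.net.ssl.trustStore` in JAVA_OPTS.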
Hey guys, is there any workaround for this at the moment? I'm trying to follow the thread of TLS-related changes and options. Is there any skip-tls JAVA_OPTS or something that can be used temporarily? To maybe help others:
This would probably be a question to ask in the repository that holds the code for the spark dependencies processor: https://github.com/jaegertracing/spark-dependencies
Hi, maybe this is a stupid question, but I couldn't run a spark job with the following configuration:

spark:
  enabled: true
  cmdlineParams:
    java.opts: "-Djavax.net.ssl.trustStore=/tls/trust.store -Djavax.net.ssl.trustStorePassword=changeit"
  extraConfigmapMounts:
    - name: jaeger-tls
      mountPath: /tls
      subPath: ""
      configMap: jaeger-tls
      readOnly: true

Here is the error:

/entrypoint.sh: 37: exec: --java.opts=-Djavax.net.ssl.trustStore=/tls/trust.store -Djavax.net.ssl.trustStorePassword=changeit: not found

Can someone show me what I'm doing wrong? Or maybe there is a workaround to get the spark job working.
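The error message shows what went wrong: the `cmdlineParams` entry was handed to the container as a `--java.opts=...` argument, which entrypoint.sh then tried to exec as a command, whereas the script reads JVM options from the JAVA_OPTS environment variable. A hedged sketch of the same values using `extraEnv` instead (the `extraEnv` key is the one mentioned later in this thread; verify it against your chart version):

```yaml
spark:
  enabled: true
  extraEnv:
    - name: JAVA_OPTS
      value: "-Djavax.net.ssl.trustStore=/tls/trust.store -Djavax.net.ssl.trustStorePassword=changeit"
  extraConfigmapMounts:
    - name: jaeger-tls
      mountPath: /tls
      subPath: ""
      configMap: jaeger-tls
      readOnly: true
```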
@aleksandrovpa I don't have much time to look at this at the moment, but you may find what you need in these two items:
@aleksandrovpa we're just composing the cron job manually in order to bootstrap the trust store through an init container. You can check this out: https://github.com/MS3Inc/tavros/blob/main/tests/integration/targets/playbooks/provision_playbook/example.com/platform/jaeger/default/cronjob-spark-dependencies.yaml#L39
Thanks @jorgex1 for your comment, it was very helpful.
Is it currently the only solution to write our own job? I see that the dependencies definition now allows mounting volumes, but I am not sure how to create the Java trustStore without an init container like the one in @jam01's link.
I achieved the desired result (given you have a PEM certificate as a secret) by adding an alpine java init container to the spark dependencies pod, which creates a truststore that I can then mount into the spark container and point the java opts at:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: jaeger-spark
spec:
  schedule: "30 */12 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          initContainers:
            - name: create-jks-truststore
              image: openjdk:11
              volumeMounts:
                - name: truststore-dir
                  mountPath: "/target"
                - name: es-cert
                  mountPath: "/src"
                  readOnly: true
              command:
                - "/bin/sh"
                - "-c"
                - "rm -rf /target/* && keytool -import -file /src/cert.pem -storetype JKS -keystore /target/truststore.jks -storepass password -noprompt"
          containers:
            - name: jaeger-spark
              image: jaegertracing/spark-dependencies:latest
              env:
                - name: STORAGE
                  value: "elasticsearch"
                - name: ES_NODES
                  value: "https://elasticsearch.default:9200"
                - name: ES_NODES_WAN_ONLY
                  value: "false"
                - name: ES_USERNAME
                  value: user
                - name: ES_PASSWORD
                  value: password
                - name: JAVA_OPTS
                  value: "-Djavax.net.ssl.trustStore=/elasticsearch/truststore.jks -Djavax.net.ssl.trustStorePassword=password"
              volumeMounts:
                - name: truststore-dir
                  mountPath: "/elasticsearch"
                - name: temp-dir
                  mountPath: "/tmp"
          restartPolicy: OnFailure
          volumes:
            - name: truststore-dir
              emptyDir: {}
            - name: temp-dir
              emptyDir: {}
            - name: es-cert
              secret:
                # Secret names must be valid DNS-1123 subdomains, so hyphens, not underscores
                secretName: my-secret-with-elasticsearch-certificate-pem
                items:
                  - key: elasticsearch_certificate
                    path: "cert.pem"

Hope this helps someone.
It is much simpler: entrypoint.sh from the docker image is a shell script, and it uses the JAVA_OPTS env var to pass options to the java process. You need to pass the env variable:

spark:
  extraEnv:
    - name: "JAVA_OPTS"
      value: "-Djavax.net.ssl.trustStore=/tls/trust.store -Djavax.net.ssl.trustStorePassword=changeit"
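To make the pass-through concrete, here is a minimal sketch of what an entrypoint-style script does with JAVA_OPTS. The jar path is a placeholder, not the real image's layout, and the command is assembled and printed rather than exec'd:

```shell
#!/bin/sh
set -eu

# JAVA_OPTS would normally come from the pod's env; set a demo value here.
JAVA_OPTS="-Djavax.net.ssl.trustStore=/tls/trust.store -Djavax.net.ssl.trustStorePassword=changeit"

# In a real entrypoint this would be `exec java $JAVA_OPTS -jar ...`, relying
# on word-splitting of the unquoted variable to turn it into separate flags.
CMD="java $JAVA_OPTS -jar /app/spark-dependencies.jar"
echo "$CMD"
```

This is why `cmdlineParams` does not work for JVM flags: they arrive as container arguments instead of landing in JAVA_OPTS.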
The spark dependencies job uses a Java keystore for certificates. The docker image allows configuring java opts with SSL configuration: JAVA_OPTS=-Djavax.net.ssl. The certs can be mounted via volumes and volume mounts, which are part of JaegerCommonSpec. The issue is that these certs have to be imported into a Java keystore/truststore. https://developers.redhat.com/blog/2017/11/22/dynamically-creating-java-keystores-openshift/ suggests using an init container for the job.
Todos: