Running script exporter on Kubernetes #33
Hi @pawelrys, if these logs are also written to stdout / stderr, then you can run the script_exporter as a privileged Pod and run your scripts against the host, because these logs should then be available on the host. For example via a DaemonSet:

```yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: script-exporter
spec:
  selector:
    matchLabels:
      app: script-exporter
  template:
    metadata:
      labels:
        app: script-exporter
    spec:
      containers:
        - image: ricoberger/script_exporter
          name: script-exporter
          volumeMounts:
            - mountPath: /var/log
              name: varlog
            - name: config
              mountPath: /etc/script_exporter
      volumes:
        - hostPath:
            path: /var/log
          name: varlog
        - name: config
          configMap:
            name: script-exporter
            defaultMode: 0777
```

This just contains the basic fields to give you an idea of it. You may also have to add some other fields from the example Deployment: https://github.com/ricoberger/script_exporter/blob/master/examples/kubernetes.yaml
I would close this issue. If you still have problems running the exporter on Kubernetes, please let me know.
Hi, I have a problem understanding how your script should work on Kubernetes. It may be caused by my limited knowledge of the Kubernetes environment, but I hope you can explain something to me. Using it locally isn't a problem, but on Kubernetes it is.
Suppose I have a cluster named my-cluster with a few sample pods that serve a hello world page. My job is to get data about specific files from the containers the programs run in. For example, at the path ~/var/app/log there are two files, log_1.log and log_2.log (in every container). I would like to calculate how many days lie between the creation/update of log_1.log and log_2.log, export that to Prometheus, and create a Grafana diagram of this information for every container.
Should I install the script exporter in every container and expose the information about the file differences from there, or can I create the script exporter as an additional pod in my cluster and get access to the filesystem of every container to collect the required data? If the second way is possible, could you explain what it should look like?
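The day-difference computation described above could be sketched as a small shell script for script_exporter to run; the exporter exposes whatever the script prints on stdout as Prometheus metrics. The metric name and file paths here are illustrative assumptions, not part of the exporter:

```shell
#!/bin/sh
# Hypothetical sketch: print the difference in whole days between the
# modification times of two log files, in the Prometheus text format.
# Assumes GNU coreutils (stat -c %Y prints mtime as a Unix timestamp).
log_mtime_diff_days() {
  f1="$1"
  f2="$2"
  m1=$(stat -c %Y "$f1")
  m2=$(stat -c %Y "$f2")
  d=$(( m1 - m2 ))
  [ "$d" -lt 0 ] && d=$(( -d ))  # absolute value of the difference
  # 86400 seconds per day; integer division truncates to whole days.
  echo "log_mtime_diff_days{file1=\"$f1\",file2=\"$f2\"} $(( d / 86400 ))"
}
```

A script like this could be mounted into the Pod via a ConfigMap and invoked with the two log file paths as arguments, e.g. `log_mtime_diff_days /var/log/app/log_1.log /var/log/app/log_2.log`.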
Thank you very much in advance for your time.
Paweł