A complete example of Elastic Agent sending data to Logstash in Kubernetes, with Logstash forwarding to Elasticsearch using Fleet-compatible data streams.
```
┌──────────────────┐     ┌──────────────────┐     ┌──────────────────┐
│    Sample App    │     │  Elastic Agent   │     │     Logstash     │
│   (Pods/Logs)    │────▶│   (Deployment)   │────▶│   (Deployment)   │
└──────────────────┘     └──────────────────┘     └────────┬─────────┘
                                                  Port 5044 │
                                                            ▼
                                                  ┌──────────────────┐
                                                  │  Elasticsearch   │
                                                  │  (Data Streams)  │
                                                  └──────────────────┘
```
- ✅ Logstash with Beats input (port 5044) for Fleet/Elastic Agent
- ✅ Elasticsearch output with username/password authentication
- ✅ SSL verification disabled for self-signed certificates
- ✅ Persistent queue (4GB) for data durability
- ✅ Fleet-compatible data stream format
- ✅ Elastic Agent as Deployment (not DaemonSet)
- ✅ Kubernetes metadata enrichment
- ✅ Sample application for testing
- Kubernetes cluster (local or cloud)
- `kubectl` configured to access your cluster
- Elasticsearch cluster accessible from Kubernetes
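A quick, illustrative way to confirm the first two prerequisites:

```bash
# Verifies that kubectl is configured and can reach the cluster API server
kubectl cluster-info
```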
Edit `elasticsearch-secret.yaml` with your Elasticsearch details:

```yaml
stringData:
  ELASTICSEARCH_HOST: "https://your-elasticsearch:9200"
  ELASTICSEARCH_USERNAME: "elastic"
  ELASTICSEARCH_PASSWORD: "your-password"
```
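For context, the full manifest around that `stringData` block looks roughly like this (the Secret name is an assumption based on the file name; match whatever the deployments reference):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: elasticsearch-secret   # assumed name
type: Opaque
stringData:
  ELASTICSEARCH_HOST: "https://your-elasticsearch:9200"
  ELASTICSEARCH_USERNAME: "elastic"
  ELASTICSEARCH_PASSWORD: "your-password"
```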
```bash
# Apply in order
kubectl apply -f elasticsearch-secret.yaml
kubectl apply -f logstash-config.yaml
kubectl apply -f logstash-pvc.yaml
kubectl apply -f logstash-deployment.yaml
kubectl apply -f logstash-service.yaml
kubectl apply -f elastic-agent-rbac.yaml
kubectl apply -f elastic-agent-config.yaml
kubectl apply -f elastic-agent-deployment.yaml
kubectl apply -f sample-app-deployment.yaml
```

Or deploy all at once:

```bash
kubectl apply -f .
```

Verify the deployment:

```bash
# Check all pods are running
kubectl get pods
# Check Logstash logs
kubectl logs -l app=logstash --tail=50
# Check Elastic Agent logs
kubectl logs -l app=elastic-agent --tail=50
# Check sample app logs
kubectl logs -l app=sample-app --tail=20
```
To use Logstash as an output in Fleet:

- Internal Cluster Access: use `logstash:5044`
- External Access: change the service type in `logstash-service.yaml` to `LoadBalancer` (cloud providers) or `NodePort` (on-premises); see the sketch after this list
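For the external case, the service might look like this (a sketch; the service name and the `app=logstash` selector mirror the labels used in the commands above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: logstash
spec:
  type: LoadBalancer   # or NodePort for on-premises clusters
  selector:
    app: logstash      # matches the pod label used throughout this example
  ports:
    - name: beats
      port: 5044
      targetPort: 5044
```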
In Fleet UI → Settings → Outputs:

- Type: Logstash
- Hosts: `logstash:5044` (or external IP/hostname)
- SSL: Disabled (as configured)
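For orientation, the pipeline behind this output follows the shape below. This is a minimal sketch, not the exact contents of `logstash-config.yaml`: the environment variable names mirror the secret above, and `ssl_verification_mode` assumes Logstash 8.x.

```
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["${ELASTICSEARCH_HOST}"]
    user => "${ELASTICSEARCH_USERNAME}"
    password => "${ELASTICSEARCH_PASSWORD}"
    ssl_verification_mode => "none"   # self-signed certificates, as noted in the feature list
    data_stream => true               # write Fleet-compatible data streams
  }
}
```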
| File | Description |
|---|---|
| `elasticsearch-secret.yaml` | Elasticsearch credentials |
| `logstash-config.yaml` | Logstash pipeline configuration |
| `logstash-pvc.yaml` | Persistent volume for the queue |
| `logstash-deployment.yaml` | Logstash deployment |
| `logstash-service.yaml` | Service for Fleet output (port 5044) |
| `elastic-agent-rbac.yaml` | RBAC for Elastic Agent |
| `elastic-agent-config.yaml` | Elastic Agent configuration |
| `elastic-agent-deployment.yaml` | Elastic Agent deployment |
| `sample-app-deployment.yaml` | Sample log generator |
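As an illustration of how the agent is pointed at Logstash, the output section of a standalone `elastic-agent-config.yaml` typically looks like this (a sketch; the actual file in this repo may differ):

```yaml
outputs:
  default:
    type: logstash
    hosts: ["logstash:5044"]   # the in-cluster service from logstash-service.yaml
```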
Optimized queue configuration in `logstash-config.yaml`:

```yaml
queue.type: persisted
queue.max_bytes: 4gb              # Maximum queue size
queue.checkpoint.writes: 1024     # Checkpoint every 1024 events
queue.checkpoint.acks: 1024       # Checkpoint every 1024 acks
queue.checkpoint.interval: 1000   # Checkpoint timeout (ms)
queue.drain: true                 # Drain queue before shutdown
```

Why these settings?
- `queue.max_bytes: 4gb`: sufficient buffer for ~5-10 million events, assuming an average event size of roughly 0.4-0.8 KB (see the PVC sketch below)
- `queue.checkpoint.writes: 1024`: a balance between durability and performance
- `queue.drain: true`: prevents data loss during restarts by processing the remaining queue before shutdown
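The PVC backing the queue should leave headroom above `queue.max_bytes` for checkpoint and page files; a sketch of what `logstash-pvc.yaml` might request (the name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logstash-queue   # assumed name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 6Gi   # headroom above the 4gb queue.max_bytes
```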
To enable TLS on the Beats input, uncomment the SSL settings in `logstash-config.yaml`:

```
beats {
  port => 5044
  ssl => true
  ssl_certificate => "/etc/logstash/certs/logstash.crt"
  ssl_key => "/etc/logstash/certs/logstash.key"
}
```
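The certificate and key must then be mounted at those paths. One way to do that (the secret and volume names are assumptions) is a generic Secret plus a volume mount in the pod template of `logstash-deployment.yaml`:

```yaml
# Create the secret first:
#   kubectl create secret generic logstash-certs \
#     --from-file=logstash.crt --from-file=logstash.key
#
# Fragment of the Deployment's pod template (spec.template.spec):
spec:
  volumes:
    - name: certs
      secret:
        secretName: logstash-certs
  containers:
    - name: logstash
      volumeMounts:
        - name: certs
          mountPath: /etc/logstash/certs
          readOnly: true
```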
To customize the data stream, modify `logstash-config.yaml`:

```
data_stream_type => "logs"
data_stream_dataset => "your-dataset"
data_stream_namespace => "your-namespace"
```

Events are then written to the data stream `logs-your-dataset-your-namespace`, following the `{type}-{dataset}-{namespace}` naming convention.

To scale Logstash, increase the replica count (note: each replica needs its own persistent-queue PVC, which a StatefulSet with `volumeClaimTemplates` would provide automatically):
```yaml
spec:
  replicas: 3
```

If Logstash pods fail to start, inspect the pod events and previous logs:

```bash
kubectl describe pod -l app=logstash
kubectl logs -l app=logstash --previous
```

If Elastic Agent cannot connect to Logstash:

- Verify the Logstash service: `kubectl get svc logstash`
- Check that Logstash is listening: `kubectl exec -it <logstash-pod> -- ss -tlnp`

If Logstash cannot reach Elasticsearch:

- Check the Logstash output logs for errors
- Verify the Elasticsearch credentials in the secret
- Test Elasticsearch connectivity from the Logstash pod (see the example below)
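For that last check, something like the following works if `curl` is available in the Logstash image (the host and credentials are the placeholder values from the secret; `-k` matches the disabled certificate verification):

```bash
kubectl exec -it <logstash-pod> -- \
  curl -k -u elastic:your-password https://your-elasticsearch:9200
```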
To remove all resources:

```bash
kubectl delete -f .
```