Creation of ES instance fails on OCP 4.1.2 #166
@ewolinetz does this have anything to do with openshift/cluster-logging-operator#205 ?
fyi this worked previously in an OCP 4.1 cluster on AWS
I get the same error when deployed locally on 3.11
ES operator shows an error, not sure if that is related though:
@richm @ewolinetz As a potential workaround for the UUID issue, it seems that the CR can specify the GenUUID value to be used, instead of a random value being created by the es-operator. Do you see any issues with this approach? We only expect to have a single ES cluster (per tenant), so hopefully using some stable UUID value would not be a problem. Although I guess each node in the CR should have a different value?
No, that should work fine so long as you don't attempt to change it after the initial creation.
Yes, each node should be unique within a CR so that the Operator can correctly check if node configurations have changed for that node.
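For reference, a minimal sketch of what pinning the UUID in the CR could look like. The per-node `genUUID` field and the `nodeCount`/`roles` fields are assumptions based on this discussion and the operator's CRD; the UUID value itself is a placeholder:

```yaml
apiVersion: logging.openshift.io/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  managementState: Managed
  nodes:
  # Assumed field: a stable genUUID pinned in the CR instead of one
  # generated by the operator. Use a distinct value per node entry and
  # never change it after the initial creation.
  - genUUID: aaaaaaaa
    nodeCount: 1
    roles:
    - client
    - data
    - master
```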
I think this issue can be closed. The main reason why it was opened is that a change in master broke our test pipeline. We switched our test to use The check on
When I try to create an elasticsearch-backed Jaeger Operator instance using one of the example yaml files (https://github.com/jaegertracing/jaeger-operator/blob/master/deploy/examples/simple-prod-deploy-es.yaml), the operator instance never starts because the elasticsearch pod never gets out of the Pending state.
There is nothing in the log, but I get the following output from `oc get elasticsearch -o yaml`:
```
ovpn-118-43:jaeger-operator kearls$ oc get elasticsearch -o yaml
apiVersion: v1
items:
- kind: Elasticsearch
  metadata:
    creationTimestamp: "2019-06-27T12:56:31Z"
    generation: 8
    labels:
      app: jaeger
      app.kubernetes.io/component: elasticsearch
      app.kubernetes.io/instance: simple-prod
      app.kubernetes.io/name: elasticsearch
      app.kubernetes.io/part-of: jaeger
    name: elasticsearch
    namespace: fud
    ownerReferences:
    - controller: true
      kind: Jaeger
      name: simple-prod
      uid: fbd9d6b0-98da-11e9-8a21-fa163e292f36
    resourceVersion: "412161"
    selfLink: /apis/logging.openshift.io/v1/namespaces/fud/elasticsearches/elasticsearch
    uid: fc4ed617-98da-11e9-8a21-fa163e292f36
  spec:
    managementState: Managed
    nodeSpec:
      resources: {}
    nodes:
    - resources: {}
      roles:
      storage: {}
    redundancyPolicy: ""
  status:
    cluster:
      activePrimaryShards: 0
      activeShards: 0
      initializingShards: 0
      numDataNodes: 0
      numNodes: 0
      pendingTasks: 0
      relocatingShards: 0
      status: ""
      unassignedShards: 0
    clusterHealth: ""
    conditions:
    - message: Previously used GenUUID "x1s6chde" is no longer found in Spec.Nodes
      reason: Invalid Spec
      status: "True"
      type: InvalidUUID
    nodes:
    - conditions:
      - message: '0/5 nodes are available: 5 node(s) didn''t match node selector.'
        reason: Unschedulable
        status: "True"
        type: Unschedulable
      deploymentName: elasticsearch-cdm-x1s6chde-1
      upgradeStatus: {}
    pods:
      client:
        failed: []
        notReady:
        ready: []
      data:
        failed: []
        notReady:
        ready: []
      master:
        failed: []
        notReady:
        ready: []
    shardAllocationEnabled: shard allocation unknown
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```
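Separately from the InvalidUUID condition, the Unschedulable condition above means none of the 5 nodes matched the pod's node selector. If the CR passes a node selector through to the ES pods (an assumption; check the operator's CRD for the exact field), one thing to try is setting it explicitly to a label the worker nodes actually carry, e.g.:

```yaml
spec:
  nodeSpec:
    # Assumed field: nodeSpec.nodeSelector applied to the ES pods.
    # kubernetes.io/os: linux is a standard label on Linux worker nodes.
    nodeSelector:
      kubernetes.io/os: linux
```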