Support of CRD #11
Thanks, this is a use case I want to be able to support. I've got an integration test up and running in #14 which recreates the problem, and I'll look at fixing it up. Any help or thoughts welcome. It looks like this error is raised by this line using the universal decoder; I need to work out why it's not so universal!
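The likely reason the "universal" decoder fails here: a typed decoder can only handle kinds registered with its scheme at compile time, and CRD-defined kinds are by definition not registered. The following is a simplified stdlib-only model of that behaviour, not the actual client-go code; the registry and function names are hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// typeRegistry maps known "apiVersion/kind" pairs to factory functions,
// loosely modelling how a typed scheme registers built-in Kubernetes kinds.
var typeRegistry = map[string]func() interface{}{
	"v1/Pod":             func() interface{} { return &struct{ Kind string }{} },
	"apps/v1/Deployment": func() interface{} { return &struct{ Kind string }{} },
}

// decode mimics a "universal" decoder: it inspects apiVersion/kind and
// fails for anything not registered, which is exactly what happens for
// CRD-defined kinds like couchbase.com/v1 CouchbaseCluster.
func decode(raw []byte) (interface{}, error) {
	var meta struct {
		APIVersion string `json:"apiVersion"`
		Kind       string `json:"kind"`
	}
	if err := json.Unmarshal(raw, &meta); err != nil {
		return nil, err
	}
	key := meta.APIVersion + "/" + meta.Kind
	factory, ok := typeRegistry[key]
	if !ok {
		return nil, fmt.Errorf("no kind %q is registered", key)
	}
	obj := factory()
	return obj, json.Unmarshal(raw, obj)
}

func main() {
	// The CRD kind is not in the registry, so decoding fails.
	_, err := decode([]byte(`{"apiVersion":"couchbase.com/v1","kind":"CouchbaseCluster"}`))
	fmt.Println(err)
}
```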
Just wanted to bump this: adding this functionality would be quite useful. I hit this following the cert-manager tutorial at https://itnext.io/automated-tls-with-cert-manager-and-letsencrypt-for-kubernetes-7daaa5e0cae4 (after discovering the lack of a generic resource in the official Kubernetes provider), when attempting to apply the following YAML:
Applying manually with
I made a bit of progress here working around the issues without doing any reading about the structure of the objects involved. While my workaround got things created, I then have trouble reading the objects back correctly. After doing some more reading it looks like I need to move the code over to using the dynamic client.
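The dynamic/unstructured approach sidesteps the typed registry entirely by treating every object as a generic nested map. A minimal stdlib-only sketch of the idea (the real provider would use client-go's unstructured.Unstructured; the helper names here are hypothetical):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// unstructuredObj treats any Kubernetes manifest as a generic nested map,
// so CRD kinds need no compile-time registration.
type unstructuredObj map[string]interface{}

// parseManifest decodes raw JSON into an unstructured object. Hypothetical
// helper; client-go provides equivalent machinery.
func parseManifest(raw []byte) (unstructuredObj, error) {
	var obj unstructuredObj
	if err := json.Unmarshal(raw, &obj); err != nil {
		return nil, err
	}
	return obj, nil
}

// nestedString walks a path of keys and returns the string at the end,
// mirroring how fields are read back from a dynamic client's response.
func nestedString(obj unstructuredObj, path ...string) (string, bool) {
	var cur interface{} = map[string]interface{}(obj)
	for _, key := range path {
		m, ok := cur.(map[string]interface{})
		if !ok {
			return "", false
		}
		cur, ok = m[key]
		if !ok {
			return "", false
		}
	}
	s, ok := cur.(string)
	return s, ok
}

func main() {
	raw := []byte(`{"apiVersion":"couchbase.com/v1","kind":"CouchbaseCluster","metadata":{"name":"cb-example"}}`)
	obj, _ := parseManifest(raw)
	name, _ := nestedString(obj, "metadata", "name")
	fmt.Println(name) // cb-example
}
```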
Eagerly looking forward to this -- I also just tried and got bit by a CRD yaml definition.
I've had a bit of time on a few flights recently to try and complete this migration. Currently, due to some knock-on effects of the changes, my test suite isn't working, so do please be careful when testing this and avoid anything production-shaped.
Hi @lawrencegripper -- thanks for that. It appears to be working for me. I'm testing a CRD for CouchbaseCluster and I'm able to do it with the embedded yaml now. Hoping you get the tests sorted out and can release this as a new version!
Awesome 👍 Very happy a couple of late nights and airplane journeys paid off!
@davisford quick heads up that this has a couple of bugs that I'm working through as part of the tests. Primarily, if the resourceVersion changes, a Terraform apply will try to re-deploy the CRD. The problem is that updates to the CRD's status fields will cause this to occur. Previously I had some nice reflection logic in place to skip these fields in the comparison, but that logic didn't survive the recent change.
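One way to skip server-managed fields when diffing is to strip them from both the desired and live objects before comparing. This is a sketch of the general technique under that assumption, not necessarily what the provider's code does; the function names are hypothetical:

```go
package main

import (
	"fmt"
	"reflect"
)

// stripServerManaged removes fields the API server mutates on its own
// (the status subtree, metadata.resourceVersion, metadata.generation) so
// that a diff only reflects user-driven changes.
func stripServerManaged(obj map[string]interface{}) map[string]interface{} {
	cleaned := map[string]interface{}{}
	for k, v := range obj {
		if k == "status" {
			continue
		}
		cleaned[k] = v
	}
	if meta, ok := cleaned["metadata"].(map[string]interface{}); ok {
		m := map[string]interface{}{}
		for k, v := range meta {
			if k == "resourceVersion" || k == "generation" {
				continue
			}
			m[k] = v
		}
		cleaned["metadata"] = m
	}
	return cleaned
}

// driftDetected compares desired and live objects after stripping
// server-managed fields from both sides.
func driftDetected(desired, live map[string]interface{}) bool {
	return !reflect.DeepEqual(stripServerManaged(desired), stripServerManaged(live))
}

func main() {
	desired := map[string]interface{}{
		"kind":     "CouchbaseCluster",
		"metadata": map[string]interface{}{"name": "cb"},
	}
	live := map[string]interface{}{
		"kind":     "CouchbaseCluster",
		"metadata": map[string]interface{}{"name": "cb", "resourceVersion": "987"},
		"status":   map[string]interface{}{"phase": "Running"},
	}
	fmt.Println(driftDetected(desired, live)) // false: only server-managed fields differ
}
```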
Finished this up now. No guarantees, but a new build should be live shortly for testing.
Awesome, thank you. Trying the release now.
@lawrencegripper I'm getting this error now with the latest release, whereas it used to work when I was on the previously cited branch:
I don't think anything has changed with the version, etc. for the YAML. This is a CRD that is defined here, and this is my terraform file:

resource "k8sraw_yaml" "cb-server-cluster" {
yaml_body = <<YAML
apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
name: ${var.cb-cluster-name}
spec:
baseImage: ${var.cb-cluster-image}
version: ${var.cb-cluster-image-version}
authSecret: ${var.cb-operator-secret-name}
exposeAdminConsole: true
adminConsoleServices:
- data
cluster:
dataServiceMemoryQuota: 256
indexServiceMemoryQuota: 256
searchServiceMemoryQuota: 256
eventingServiceMemoryQuota: 256
analyticsServiceMemoryQuota: 1024
indexStorageSetting: memory_optimized
autoFailoverTimeout: 120
autoFailoverMaxCount: 3
autoFailoverOnDataDiskIssues: true
autoFailoverOnDataDiskIssuesTimePeriod: 120
autoFailoverServerGroup: false
buckets:
- name: default
type: couchbase
memoryQuota: 128
replicas: 1
ioPriority: high
evictionPolicy: fullEviction
conflictResolution: seqno
enableFlush: true
enableIndexReplica: false
- name: test
type: couchbase
memoryQuota: 128
replicas: 1
ioPriority: high
evictionPolicy: fullEviction
conflictResolution: seqno
enableFlush: true
enableIndexReplica: false
servers:
- size: 3
name: all_services
services:
- data
- index
- query
- search
- eventing
- analytics
YAML
}

Maybe I'll try rolling back to the old one I was using and see if that fixes the issue.
The CRD is defined in my cluster:

ml-dford:localhost dford$ kc describe crd couchbaseclusters.couchbase.com
Name: couchbaseclusters.couchbase.com
Namespace:
Labels: <none>
Annotations: <none>
API Version: apiextensions.k8s.io/v1beta1
Kind: CustomResourceDefinition
Metadata:
Creation Timestamp: 2019-04-12T18:09:26Z
Generation: 1
Resource Version: 987
Self Link: /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/couchbaseclusters.couchbase.com
UID: 1bdf7bf7-5d4e-11e9-987c-000c290425d0
Spec:
Conversion:
Strategy: None
Group: couchbase.com
Names:
Kind: CouchbaseCluster
List Kind: CouchbaseClusterList
Plural: couchbaseclusters
Short Names:
couchbase
cbc
Singular: couchbasecluster
Scope: Namespaced
Validation:
Open APIV 3 Schema:
Properties:
Spec:
Properties:
Admin Console Services:
Items:
Enum:
data
index
query
search
eventing
analytics
Type: string
Type: array
Anti Affinity:
Type: boolean
Auth Secret:
Min Length: 1
Type: string
Base Image:
Type: string
Buckets:
Items:
Properties:
Conflict Resolution:
Enum:
seqno
lww
Type: string
Enable Flush:
Type: boolean
Enable Index Replica:
Type: boolean
Eviction Policy:
Enum:
valueOnly
fullEviction
noEviction
nruEviction
Type: string
Io Priority:
Enum:
high
low
Type: string
Memory Quota:
Minimum: 100
Type: integer
Name:
Pattern: ^[a-zA-Z0-9._\-%]*$
Type: string
Replicas:
Maximum: 3
Minimum: 0
Type: integer
Type:
Enum:
couchbase
ephemeral
memcached
Type: string
Required:
name
type
memoryQuota
Type: object
Type: array
Cluster:
Properties:
Analytics Service Memory Quota:
Minimum: 1024
Type: integer
Auto Failover Max Count:
Maximum: 3
Minimum: 1
Type: integer
Auto Failover On Data Disk Issues:
Type: boolean
Auto Failover On Data Disk Issues Time Period:
Maximum: 3600
Minimum: 5
Type: integer
Auto Failover Server Group:
Type: boolean
Auto Failover Timeout:
Maximum: 3600
Minimum: 5
Type: integer
Cluster Name:
Type: string
Data Service Memory Quota:
Minimum: 256
Type: integer
Eventing Service Memory Quota:
Minimum: 256
Type: integer
Index Service Memory Quota:
Minimum: 256
Type: integer
Index Storage Setting:
Enum:
plasma
memory_optimized
Type: string
Search Service Memory Quota:
Minimum: 256
Type: integer
Required:
dataServiceMemoryQuota
indexServiceMemoryQuota
searchServiceMemoryQuota
eventingServiceMemoryQuota
analyticsServiceMemoryQuota
indexStorageSetting
autoFailoverTimeout
autoFailoverMaxCount
Type: object
Disable Bucket Management:
Type: boolean
Expose Admin Console:
Type: boolean
Exposed Features:
Items:
Enum:
admin
xdcr
client
Type: string
Type: array
Log Retention Count:
Minimum: 0
Type: integer
Log Retention Time:
Pattern: ^\d+(ns|us|ms|s|m|h)$
Type: string
Paused:
Type: boolean
Server Groups:
Items:
Type: string
Type: array
Servers:
Items:
Properties:
Name:
Min Length: 1
Pattern: ^[-_a-zA-Z0-9]+$
Type: string
Pod:
Properties:
Automount Service Account Token:
Type: boolean
Couchbase Env:
Items:
Properties:
Name:
Type: string
Value:
Type: string
Type: object
Type: array
Labels:
Type: object
Node Selector:
Type: object
Resources:
Properties:
Limits:
Properties:
Cpu:
Type: string
Memory:
Type: string
Storage:
Type: string
Type: object
Requests:
Properties:
Cpu:
Type: string
Memory:
Type: string
Storage:
Type: string
Type: object
Type: object
Tolerations:
Items:
Properties:
Effect:
Type: string
Key:
Type: string
Operator:
Type: string
Toleration Seconds:
Type: integer
Value:
Type: string
Required:
key
operator
value
effect
Type: object
Type: array
Volume Mounts:
Properties:
Analytics:
Items:
Type: string
Type: array
Data:
Type: string
Default:
Type: string
Index:
Type: string
Logs:
Type: string
Type: object
Type: object
Server Groups:
Items:
Type: string
Type: array
Services:
Items:
Enum:
data
index
query
search
eventing
analytics
Type: string
Min Length: 1
Type: array
Size:
Minimum: 1
Type: integer
Required:
size
name
services
Type: object
Min Length: 1
Type: array
Software Update Notifications:
Type: boolean
Tls:
Properties:
Static:
Properties:
Member:
Properties:
Server Secret:
Type: string
Type: object
Operator Secret:
Type: string
Type: object
Type: object
Version:
Pattern: ^([\w\d]+-)?\d+\.\d+.\d+(-[\w\d]+)?$
Type: string
Volume Claim Templates:
Items:
Properties:
Metadata:
Properties:
Name:
Type: string
Required:
name
Type: object
Spec:
Properties:
Resources:
Properties:
Limits:
Properties:
Storage:
Type: string
Required:
storage
Type: object
Requests:
Properties:
Storage:
Type: string
Required:
storage
Type: object
Type: object
Storage Class Name:
Type: string
Required:
resources
storageClassName
Type: object
Required:
metadata
spec
Type: object
Type: array
Required:
baseImage
version
authSecret
cluster
servers
Version: v1
Versions:
Name: v1
Served: true
Storage: true
Status:
Accepted Names:
Kind: CouchbaseCluster
List Kind: CouchbaseClusterList
Plural: couchbaseclusters
Short Names:
couchbase
cbc
Singular: couchbasecluster
Conditions:
Last Transition Time: 2019-04-12T18:09:26Z
Message: no conflicts found
Reason: NoConflicts
Status: True
Type: NamesAccepted
Last Transition Time: <nil>
Message: the initial names have been accepted
Reason: InitialNamesAccepted
Status: True
Type: Established
Stored Versions:
v1
Events: <none>
@lawrencegripper can confirm: when I swap back to the branch I was on before, it works. What changed?
Interesting. I'll add this one into the integration tests when I get a moment (bit back-to-back at the moment) and try to track down the cause. Off the top of my head the only major changes should have been in the testing code. I'd expect to see that error if the CRD wasn't defined in the cluster and we attempted to create an instance of it; there must be something I've nudged when making changes for the testing.
@davisford could you send me the output of
@lawrencegripper I will get that output, but I just discovered something else. So, I'm running with the older
I built your branch, and I wonder if there's a sequence problem here? I'm blasting all of it with Terraform to create them all at once. For some things I'm using init containers to wait until the cluster is up before trying to add an RBAC user via a job and start the sync gateway containers. I wonder if submitting the cluster YAML immediately precedes the creation of the CRD by the operator, and that's why it's failing? Maybe I need to put in an init container or pause for a bit while the operator does its thing first?
Yup, I think you've hit the nail on the head with that one. The operator hasn't yet created the CRD, so when this provider tries to create an instance of it, the kind doesn't exist. I tested this out by adding a test case here, and with the CRD in place it does work correctly. Options:
PR with couchbase test here: #18 |
@davisford so I've done some more testing, and it looks like even with the ordering issue this can be worked around. I've added retry like so; this will do an exponential-backoff-style retry on all creates, which resolves the issue:

provider "k8sraw" {
  create_retry_count = 15
}
Just merging the change now, so there should be a new build shortly for you to test.
Nice...I won't be able to get to it until Monday, but thanks for the great support. I'm now utilizing init containers on jobs, etc. to get what I need. Spinning up a Couchbase cluster is rather complicated: you first have to spawn the operator, then feed it a job to create the cluster (i.e. the CRD instance).

FWIW, if anyone else lands here trying to build Couchbase Terraform scripts: at least as of today, the whole thing can't be done without a couple of extra jobs. I enabled the k8s feature to TTL-expire jobs, because I got tired of having to manually delete all these zombie pods, but then I realized that doing so causes its own problems.

Handling things like one-off jobs in Terraform is an interesting puzzle to solve. I wonder if anyone else has run into the problem and how they are addressing it?
Makes sense, thanks for coming back and sharing. I'll close this one off now; while we haven't solved it for Couchbase specifically, the provider does now support CRDs for other users.
Hi,
I tried to create a backend config with the provider, but it seems not to support custom resources. Do you plan to add this support?
Thanks!
Output: