test(kubernetes): Add integration tests on caching views (#4759)
* test(kubernetes): Add integration tests on caching views

There are currently almost no tests on any of the data providers
in the Kubernetes provider. Before doing significant work in this
area, we should add some test coverage to validate that the upcoming
changes don't break anything.

As many of the classes are tightly coupled without clearly defined
interfaces between them, I decided to write an integration test that
runs the caching agent for a few resources, then calls the data
providers to ensure that the resulting data is as expected.

This tests the contract that should mostly stay consistent during
the upcoming refactor. Even if we change what data is stored in the
cache, we'll need to make sure that these providers still return
the same data after a caching cycle is run.

Eventually we should write more unit tests, once there are stronger
contracts between these providers and between them and the cache.
But at this point I think a test at this level will give the most
confidence that the upcoming changes don't break anything.

This is a very large class, as I only wanted to set up the data
once, and the various providers all return different views of the
same data, so they can share some of the assert functions. Maybe
there's a better way to split this up, but for the moment this
seemed reasonable.
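
To make the shape concrete, each test boils down to roughly the
following sketch (runCachingCycle and applicationProvider are
illustrative names, not the test's actual API):

    // Sketch only: run one caching cycle, then query a data provider.
    @Test
    void returnsApplicationAfterCachingCycle() {
      runCachingCycle(); // index the test manifests into the cache

      // The provider reads back from the same cache the agent wrote to.
      Application application = applicationProvider.getApplication("backendapp");
      assertThat(application.getName()).isEqualTo("backendapp");
    }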

I used SoftAssertions so that you get a report of all the failures
for each test at the end (rather than failing at the first issue),
which should make debugging easier.
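
For reference, the pattern looks like this (a generic AssertJ
example, not code lifted from the test; cluster stands in for
whatever object is under test):

    // AssertJ's org.assertj.core.api.SoftAssertions; the checks are made up.
    SoftAssertions softly = new SoftAssertions();
    // Every assertThat below runs; failures are collected, not thrown eagerly.
    softly.assertThat(cluster.getName()).isEqualTo("replicaSet backend");
    softly.assertThat(cluster.getServerGroups()).hasSize(2);
    softly.assertAll(); // throws once at the end, reporting every failure together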

Finally, I had to make a few classes public. These aren't really
intended to be used widely, but it's impossible to appropriately
create a new KubernetesV2Credentials without them. Eventually
we could/should make it easier to create said credentials for
testing purposes, but for now this seemed better than making the
test less realistic.  (Also, as these are Spring components, and
Spring can find package-private things, they are in a sense a bit
exposed already. In fact, the reason we need them exposed is that
we're wiring up things as Spring would on startup.)
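
Concretely, the constructors made public in the diff below are what
let the test recreate that wiring by hand (a sketch of the idea;
the remaining credentials setup is elided):

    // Mirror the wiring Spring performs on startup: the global registry
    // feeds the per-account kind registry factory the credentials need.
    GlobalKubernetesKindRegistry globalKindRegistry = new GlobalKubernetesKindRegistry();
    KubernetesKindRegistry.Factory kindRegistryFactory =
        new KubernetesKindRegistry.Factory(globalKindRegistry);
    // ...kindRegistryFactory is then used to construct KubernetesV2Credentials.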

* test(kubernetes): Address code review comments in tests

Add a test that passes details=false to getCluster, which was missing
before, and add a comment noting that the version of the call without
the flag defaults it to true.
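
In other words, both overloads are now exercised; for instance
(the account name here is made up):

    // Summary view: details explicitly excluded.
    Cluster summary =
        clusterProvider.getCluster("backendapp", "my-account", "replicaSet backend", false);
    // No flag: this overload defaults includeDetails to true.
    Cluster detailed =
        clusterProvider.getCluster("backendapp", "my-account", "replicaSet backend");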

Fix a potential exception in the load balancer assertions that I
ran into when testing the above change (so we just report that the
size is wrong and move on, instead of getting a
NoSuchElementException).
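
The guard amounts to something of this shape (illustrative, not the
exact helper from the test):

    // Record the size failure and bail out of the helper rather than
    // calling iterator().next() on an empty collection.
    softly.assertThat(serverGroups).hasSize(1);
    if (serverGroups.size() != 1) {
      return;
    }
    ServerGroup serverGroup = serverGroups.iterator().next();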

Remove a mistakenly copied comment.

* test(kubernetes): Invert boolean so it matches method under test

It was confusing that the cluster assertions expected a flag summary,
which is false when details are included, while the methods we're testing
expect a flag includeDetails, which is true when details are included.

Update the assertion function to take a flag includeDetails (which is
the inverse of summary) for clarity.
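
Roughly (assertCluster is an illustrative name for the shared
assertion helper):

    // Before: assertCluster(cluster, /* summary= */ true) meant "no details".
    // After: the flag lines up with getCluster's includeDetails directly.
    private void assertCluster(Cluster cluster, boolean includeDetails) {
      // detail assertions run exactly when includeDetails is true
    }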

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
ezimanyi and mergify[bot] committed Jul 28, 2020
1 parent 2b78b46 commit 6837db3
Showing 14 changed files with 1,954 additions and 5 deletions.
GlobalKubernetesKindRegistry.java
@@ -36,23 +36,23 @@
  */
 @Component
 @NonnullByDefault
-final class GlobalKubernetesKindRegistry {
+public final class GlobalKubernetesKindRegistry {
   private final ImmutableMap<KubernetesKind, KubernetesKindProperties> nameMap;
 
   /**
    * Creates a {@link GlobalKubernetesKindRegistry} populated with default {@link
    * KubernetesKindProperties}.
    */
   @Autowired
-  GlobalKubernetesKindRegistry() {
+  public GlobalKubernetesKindRegistry() {
     this(KubernetesKindProperties.getGlobalKindProperties());
   }
 
   /**
    * Creates a {@link GlobalKubernetesKindRegistry} populated with the supplied {@link
    * KubernetesKindProperties}.
    */
-  GlobalKubernetesKindRegistry(Iterable<KubernetesKindProperties> kubernetesKindProperties) {
+  public GlobalKubernetesKindRegistry(Iterable<KubernetesKindProperties> kubernetesKindProperties) {
     this.nameMap =
         StreamSupport.stream(kubernetesKindProperties.spliterator(), false)
             .collect(toImmutableMap(KubernetesKindProperties::getKubernetesKind, p -> p));
KubernetesKindRegistry.java
@@ -34,7 +34,7 @@
 @NonnullByDefault
 @RequiredArgsConstructor(access = AccessLevel.PRIVATE)
 @Slf4j
-final class KubernetesKindRegistry {
+public final class KubernetesKindRegistry {
   private final Map<KubernetesKind, KubernetesKindProperties> kindMap = new ConcurrentHashMap<>();
   private final GlobalKubernetesKindRegistry globalKindRegistry;
   private final Function<KubernetesKind, Optional<KubernetesKindProperties>> crdLookup;
@@ -109,7 +109,7 @@ ImmutableSet<KubernetesKind> getGlobalKinds() {
   public static class Factory {
     private final GlobalKubernetesKindRegistry globalKindRegistry;
 
-    Factory(GlobalKubernetesKindRegistry globalKindRegistry) {
+    public Factory(GlobalKubernetesKindRegistry globalKindRegistry) {
       this.globalKindRegistry = globalKindRegistry;
     }

Large diffs are not rendered by default.

@@ -0,0 +1,95 @@
apiVersion: v1
kind: Pod
metadata:
  annotations:
    artifact.spinnaker.io/location: backend-ns
    artifact.spinnaker.io/name: backend
    artifact.spinnaker.io/type: kubernetes/replicaSet
    artifact.spinnaker.io/version: v014
    moniker.spinnaker.io/application: backendapp
    moniker.spinnaker.io/cluster: replicaSet backend
    moniker.spinnaker.io/sequence: "14"
  creationTimestamp: "2020-07-24T14:08:00Z"
  generateName: backend-v014-
  labels:
    app: nginx
    app.kubernetes.io/managed-by: spinnaker
    app.kubernetes.io/name: backendapp
    moniker.spinnaker.io/sequence: "14"
  name: backend-v014-xkvwh
  namespace: backend-ns
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: backend-v014
    uid: ded56bd9-2034-4196-a7e4-b6b736c997ba
  resourceVersion: "83985048"
  selfLink: /api/v1/namespaces/backend-ns/pods/backend-v014-xkvwh
  uid: d05606fe-aa69-4f16-b56a-371c2313fe9c
spec:
  containers:
  - image: gcr.io/my-gcr-repository/backend-service@sha256:2eefbb528a4619311555f92ea9b781af101c62f4c70b73c4a5e93d15624ba94c
    imagePullPolicy: IfNotPresent
    name: backend-service
    ports:
    - containerPort: 4000
      protocol: TCP
    resources:
      requests:
        cpu: 10m
        memory: 8Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts: []
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  initContainers: []
  nodeName: gke-spinnaker-e2-small-c528c905-f1ub
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations: []
  volumes: []
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-07-24T14:08:11Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-07-24T14:08:25Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-07-24T14:08:25Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-07-24T14:08:00Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://ab3d6b767a3dbb4524897ff8f6af035e2cfed8a58aa1451869e4377ee0489fa9
    image: sha256:6146cbec26fd547a5975fb6a48c860455a13a50bc9a61c398c8bd0b41af8dbe7
    imageID: gcr.io/my-gcr-repository/backend-service@sha256:2eefbb528a4619311555f92ea9b781af101c62f4c70b73c4a5e93d15624ba94c
    lastState: {}
    name: backend-service
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2020-07-24T14:08:23Z"
  hostIP: 10.128.0.25
  initContainerStatuses: []
  phase: Running
  podIP: 10.52.2.9
  podIPs:
  - ip: 10.52.2.9
  qosClass: Burstable
  startTime: "2020-07-24T14:08:00Z"
@@ -0,0 +1,96 @@
apiVersion: v1
kind: Pod
metadata:
  annotations:
    artifact.spinnaker.io/location: backend-ns
    artifact.spinnaker.io/name: backend
    artifact.spinnaker.io/type: kubernetes/replicaSet
    artifact.spinnaker.io/version: v015
    moniker.spinnaker.io/application: backendapp
    moniker.spinnaker.io/cluster: replicaSet backend
    moniker.spinnaker.io/sequence: "15"
  creationTimestamp: "2020-07-24T17:59:52Z"
  generateName: backend-v015-
  labels:
    app: nginx
    app.kubernetes.io/managed-by: spinnaker
    app.kubernetes.io/name: backendapp
    load-balancer: backend
    moniker.spinnaker.io/sequence: "15"
  name: backend-v015-vhglj
  namespace: backend-ns
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: backend-v015
    uid: 518fdd80-8949-47c4-806e-1fd3ac1e1d3c
  resourceVersion: "83984595"
  selfLink: /api/v1/namespaces/backend-ns/pods/backend-v015-vhglj
  uid: 45db7673-e3d2-4746-9ecd-38f868f853e5
spec:
  containers:
  - image: gcr.io/my-gcr-repository/backend-service@sha256:51f29a570a484fbae4da912199ff27ed21f91b1caf51564a9d3afe3a201c1f32
    imagePullPolicy: IfNotPresent
    name: backend-service
    ports:
    - containerPort: 4000
      protocol: TCP
    resources:
      requests:
        cpu: 10m
        memory: 8Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts: []
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  initContainers: []
  nodeName: gke-spinnaker-e2-small-c528c905-w20h
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations: []
  volumes: []
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-07-24T17:59:56Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-07-24T18:00:07Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-07-24T18:00:07Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-07-24T17:59:52Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://a003e133b2b7d9e72dc2276776274f299e2267ce718d7493e5710bcfe68040dc
    image: sha256:8dc352f819381bfb316dc470a30515e8538aace729e456c63eba775da7c5edf6
    imageID: gcr.io/my-gcr-repository/backend-service@sha256:51f29a570a484fbae4da912199ff27ed21f91b1caf51564a9d3afe3a201c1f32
    lastState: {}
    name: backend-service
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2020-07-24T18:00:05Z"
  hostIP: 10.128.0.14
  initContainerStatuses: []
  phase: Running
  podIP: 10.52.1.15
  podIPs:
  - ip: 10.52.1.15
  qosClass: Burstable
  startTime: "2020-07-24T17:59:53Z"
@@ -0,0 +1,69 @@
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    artifact.spinnaker.io/location: backend-ns
    artifact.spinnaker.io/name: backend
    artifact.spinnaker.io/type: kubernetes/replicaSet
    artifact.spinnaker.io/version: v014
    moniker.spinnaker.io/application: backendapp
    moniker.spinnaker.io/cluster: replicaSet backend
    moniker.spinnaker.io/sequence: "14"
    traffic.spinnaker.io/load-balancers: '["service backendlb"]'
  creationTimestamp: "2020-07-15T01:39:59Z"
  generation: 2
  labels:
    app.kubernetes.io/managed-by: spinnaker
    app.kubernetes.io/name: backendapp
    moniker.spinnaker.io/sequence: "14"
  name: backend-v014
  namespace: backend-ns
  resourceVersion: "83985046"
  selfLink: /apis/apps/v1/namespaces/backend-ns/replicasets/backend-v014
  uid: ded56bd9-2034-4196-a7e4-b6b736c997ba
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: backend-ns
        artifact.spinnaker.io/name: backend
        artifact.spinnaker.io/type: kubernetes/replicaSet
        artifact.spinnaker.io/version: v014
        moniker.spinnaker.io/application: kubernetes
        moniker.spinnaker.io/cluster: replicaSet backend
        moniker.spinnaker.io/sequence: "14"
      creationTimestamp: null
      labels:
        app: nginx
        app.kubernetes.io/managed-by: spinnaker
        app.kubernetes.io/name: kubernetes
        moniker.spinnaker.io/sequence: "14"
    spec:
      containers:
      - image: gcr.io/my-gcr-repository/backend-service@sha256:2eefbb528a4619311555f92ea9b781af101c62f4c70b73c4a5e93d15624ba94c
        imagePullPolicy: IfNotPresent
        name: backend-service
        ports:
        - containerPort: 4000
          protocol: TCP
        resources:
          requests:
            cpu: 10m
            memory: 8Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  fullyLabeledReplicas: 1
  observedGeneration: 2
  readyReplicas: 1
  replicas: 1
@@ -0,0 +1,70 @@
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    artifact.spinnaker.io/location: backend-ns
    artifact.spinnaker.io/name: backend
    artifact.spinnaker.io/type: kubernetes/replicaSet
    artifact.spinnaker.io/version: v015
    moniker.spinnaker.io/application: backendapp
    moniker.spinnaker.io/cluster: replicaSet backend
    moniker.spinnaker.io/sequence: "15"
    traffic.spinnaker.io/load-balancers: '["service backendlb"]'
  creationTimestamp: "2020-07-24T17:59:52Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: spinnaker
    app.kubernetes.io/name: backendapp
    moniker.spinnaker.io/sequence: "15"
  name: backend-v015
  namespace: backend-ns
  resourceVersion: "83984596"
  selfLink: /apis/apps/v1/namespaces/backend-ns/replicasets/backend-v015
  uid: 518fdd80-8949-47c4-806e-1fd3ac1e1d3c
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: backend-ns
        artifact.spinnaker.io/name: backend
        artifact.spinnaker.io/type: kubernetes/replicaSet
        artifact.spinnaker.io/version: v015
        moniker.spinnaker.io/application: kubernetes
        moniker.spinnaker.io/cluster: replicaSet backend
        moniker.spinnaker.io/sequence: "15"
      creationTimestamp: null
      labels:
        app: nginx
        app.kubernetes.io/managed-by: spinnaker
        app.kubernetes.io/name: kubernetes
        load-balancer: backend
        moniker.spinnaker.io/sequence: "15"
    spec:
      containers:
      - image: gcr.io/my-gcr-repository/backend-service@sha256:51f29a570a484fbae4da912199ff27ed21f91b1caf51564a9d3afe3a201c1f32
        imagePullPolicy: IfNotPresent
        name: backend-service
        ports:
        - containerPort: 4000
          protocol: TCP
        resources:
          requests:
            cpu: 10m
            memory: 8Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  fullyLabeledReplicas: 1
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1