Reboot of the first control plane with dynamicConfig breaks Calico & Helm Charts #3304
Hi @CmdrSharp, I tried to reproduce this but couldn't. I used 1.27.4, but at the end of the day it should be the same. I used a similar configuration with both Calico and dynamic config.

I tried rebooting both nodes one by one and completely shutting off both in parallel; however, this didn't happen to me.

That's odd, I've had a 100% success rate at reproducing it so far. I'll get you the output you asked for first thing tomorrow!
@juanluisvaladas Here is the requested output.

`ls /var/lib/k0s/manifests/`:

`/etc/k0s/k0s.yaml`:

```yaml
# generated-by-k0sctl 2023-07-20T15:37:03+02:00
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata: {}
spec:
  api:
    address: REDACTED.52
    sans:
    - REDACTED.52
    - REDACTED.53
    - REDACTED.54
    - 127.0.0.1
  extensions:
    helm:
      charts:
      - chartname: metallb/metallb
        name: metallb
        namespace: metallb
        order: 0
        values: |
          speaker:
            logLevel: warn
      - chartname: external-secrets/external-secrets
        name: external-secrets
        namespace: external-secrets
        order: 1
        values: |
          replicaCount: 2
          leaderElect: true
          podDisruptionBudget:
            enabled: true
            minAvailable: 1
          recreatePods: true
        version: 0.9.0
      repositories:
      - name: metallb
        url: https://metallb.github.io/metallb
      - name: external-secrets
        url: https://charts.external-secrets.io
  network:
    calico:
      envVars:
        FELIX_FEATUREDETECTOVERRIDE: ChecksumOffloadBroken=true
    nodeLocalLoadBalancing:
      enabled: true
    provider: calico
  storage: {}
```

`k get clusterconfig -n kube-system k0s -o yaml`:

```yaml
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  creationTimestamp: "2023-07-20T13:37:21Z"
  generation: 1
  name: k0s
  namespace: kube-system
  resourceVersion: "201"
  uid: f69fcea5-4cf3-4cc8-966c-910aba70dd6b
spec:
  extensions:
    helm:
      charts:
      - chartname: metallb/metallb
        name: metallb
        namespace: metallb
        order: 0
        timeout: 0
        values: |
          speaker:
            logLevel: warn
        version: ""
      - chartname: external-secrets/external-secrets
        name: external-secrets
        namespace: external-secrets
        order: 1
        timeout: 0
        values: |
          replicaCount: 2
          leaderElect: true
          podDisruptionBudget:
            enabled: true
            minAvailable: 1
          recreatePods: true
        version: 0.9.0
      concurrencyLevel: 5
      repositories:
      - caFile: ""
        certFile: ""
        insecure: false
        keyfile: ""
        name: metallb
        password: ""
        url: https://metallb.github.io/metallb
        username: ""
      - caFile: ""
        certFile: ""
        insecure: false
        keyfile: ""
        name: external-secrets
        password: ""
        url: https://charts.external-secrets.io
        username: ""
    storage:
      create_default_storage_class: false
      type: external_storage
  network:
    calico:
      envVars:
        FELIX_FEATUREDETECTOVERRIDE: ChecksumOffloadBroken=true
      flexVolumeDriverPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
      mode: vxlan
      mtu: 1450
      overlay: Always
      vxlanPort: 4789
      vxlanVNI: 4096
      wireguard: false
      withWindowsNodes: false
    dualStack: {}
    kubeProxy:
      iptables:
        minSyncPeriod: 0s
        syncPeriod: 0s
      ipvs:
        minSyncPeriod: 0s
        syncPeriod: 0s
        tcpFinTimeout: 0s
        tcpTimeout: 0s
        udpTimeout: 0s
      metricsBindAddress: 0.0.0.0:10249
      mode: iptables
    kuberouter:
      autoMTU: true
      hairpin: Enabled
      ipMasq: false
      metricsPort: 8080
      mtu: 0
      peerRouterASNs: ""
      peerRouterIPs: ""
    nodeLocalLoadBalancing:
      enabled: true
      envoyProxy:
        apiServerBindPort: 7443
        image:
          image: quay.io/k0sproject/envoy-distroless
          version: v1.24.1
        konnectivityServerBindPort: 7132
      type: EnvoyProxy
    podCIDR: 10.244.0.0/16
    provider: calico
```
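One way to spot drift between the static `/etc/k0s/k0s.yaml` and the dynamic `ClusterConfig` is a plain textual diff of the two specs; notable here is that the dynamic config grows a `kuberouter` section that was never in the static file. A minimal sketch — the abbreviated specs below are hypothetical stand-ins for the two full documents above:

```python
import difflib

# Hypothetical, abbreviated specs; in practice these would be read from
# /etc/k0s/k0s.yaml and from
# `kubectl get clusterconfig -n kube-system k0s -o yaml`.
static_spec = """\
network:
  provider: calico
  nodeLocalLoadBalancing:
    enabled: true
"""
dynamic_spec = """\
network:
  provider: calico
  nodeLocalLoadBalancing:
    enabled: true
  kuberouter:
    autoMTU: true
"""

diff = list(difflib.unified_diff(
    static_spec.splitlines(), dynamic_spec.splitlines(),
    fromfile="k0s.yaml", tofile="clusterconfig", lineterm=""))
# Added lines (prefixed "+") show what the dynamic config defaulted in.
print("\n".join(diff))
```

The `kuberouter` keys showing up as additions mirror what happened in this cluster: the dynamic config carries defaulted sections the operator never wrote.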
Hmm, I tried this with just 2 controller nodes, and when I shut down one the cluster is read-only. Since this is related to dynamic config, my test is probably invalid. I'll retry this with three control plane nodes.
Hi, I tried this again with 3 nodes and I can't reproduce it. I will now try forcing 1.27.3.
I'll retry this myself yet again just to verify. Tearing down the current cluster it's happening on and re-building.
Will get back to you during the afternoon.
I will test this with Flatcar; it seems to be something specific to it.
Had no issues reproducing it. What I did notice is that the issue actually occurs before the control plane node comes back: it happens the second the first control plane node goes down.

k0sctl output during installation:

```
k0sctl v0.15.0 Copyright 2022, k0sctl authors.
Anonymized telemetry of usage will be sent to the authors.
By continuing to use k0sctl you agree to these terms: https://k0sproject.io/licenses/eula
INFO ==> Running phase: Connect to hosts
INFO [ssh] REDACTED.54:22: connected
INFO [ssh] REDACTED.35:22: connected
INFO [ssh] REDACTED.34:22: connected
INFO [ssh] REDACTED.37:22: connected
INFO [ssh] REDACTED.52:22: connected
INFO [ssh] REDACTED.36:22: connected
INFO [ssh] REDACTED.53:22: connected
INFO ==> Running phase: Detect host operating systems
INFO [ssh] REDACTED.52:22: is running Flatcar Container Linux by Kinvolk 3510.2.4 (Oklo)
INFO [ssh] REDACTED.54:22: is running Flatcar Container Linux by Kinvolk 3510.2.4 (Oklo)
INFO [ssh] REDACTED.34:22: is running Flatcar Container Linux by Kinvolk 3510.2.4 (Oklo)
INFO [ssh] REDACTED.36:22: is running Flatcar Container Linux by Kinvolk 3510.2.4 (Oklo)
INFO [ssh] REDACTED.53:22: is running Flatcar Container Linux by Kinvolk 3510.2.4 (Oklo)
INFO [ssh] REDACTED.35:22: is running Flatcar Container Linux by Kinvolk 3510.2.4 (Oklo)
INFO [ssh] REDACTED.37:22: is running Flatcar Container Linux by Kinvolk 3510.2.4 (Oklo)
INFO ==> Running phase: Acquire exclusive host lock
INFO ==> Running phase: Prepare hosts
INFO ==> Running phase: Gather host facts
INFO [ssh] REDACTED.34:22: using calico-dev01-w01 as hostname
INFO [ssh] REDACTED.37:22: using calico-dev01-w04 as hostname
INFO [ssh] REDACTED.54:22: using calico-dev01-cp03 as hostname
INFO [ssh] REDACTED.52:22: using calico-dev01-cp01 as hostname
INFO [ssh] REDACTED.35:22: using calico-dev01-w02 as hostname
INFO [ssh] REDACTED.36:22: using calico-dev01-w03 as hostname
INFO [ssh] REDACTED.53:22: using calico-dev01-cp02 as hostname
INFO ==> Running phase: Validate hosts
INFO ==> Running phase: Gather k0s facts
INFO ==> Running phase: Validate facts
INFO ==> Running phase: Download k0s on hosts
INFO [ssh] REDACTED.35:22: downloading k0s v1.27.3+k0s.0
INFO [ssh] REDACTED.34:22: downloading k0s v1.27.3+k0s.0
INFO [ssh] REDACTED.36:22: downloading k0s v1.27.3+k0s.0
INFO [ssh] REDACTED.52:22: downloading k0s v1.27.3+k0s.0
INFO [ssh] REDACTED.54:22: downloading k0s v1.27.3+k0s.0
INFO [ssh] REDACTED.53:22: downloading k0s v1.27.3+k0s.0
INFO [ssh] REDACTED.37:22: downloading k0s v1.27.3+k0s.0
INFO ==> Running phase: Configure k0s
INFO [ssh] REDACTED.52:22: validating configuration
INFO [ssh] REDACTED.53:22: validating configuration
INFO [ssh] REDACTED.54:22: validating configuration
INFO [ssh] REDACTED.53:22: configuration was changed
INFO [ssh] REDACTED.52:22: configuration was changed
INFO [ssh] REDACTED.54:22: configuration was changed
INFO ==> Running phase: Initialize the k0s cluster
INFO [ssh] REDACTED.52:22: installing k0s controller
INFO [ssh] REDACTED.52:22: waiting for the k0s service to start
INFO [ssh] REDACTED.52:22: waiting for kubernetes api to respond
INFO ==> Running phase: Install controllers
INFO [ssh] REDACTED.52:22: generating token
INFO [ssh] REDACTED.53:22: writing join token
INFO [ssh] REDACTED.53:22: installing k0s controller
INFO [ssh] REDACTED.53:22: starting service
INFO [ssh] REDACTED.53:22: waiting for the k0s service to start
INFO [ssh] REDACTED.53:22: waiting for kubernetes api to respond
INFO [ssh] REDACTED.52:22: generating token
INFO [ssh] REDACTED.54:22: writing join token
INFO [ssh] REDACTED.54:22: installing k0s controller
INFO [ssh] REDACTED.54:22: starting service
INFO [ssh] REDACTED.54:22: waiting for the k0s service to start
INFO [ssh] REDACTED.54:22: waiting for kubernetes api to respond
INFO ==> Running phase: Install workers
INFO [ssh] REDACTED.34:22: validating api connection to https://REDACTED.52:6443
INFO [ssh] REDACTED.35:22: validating api connection to https://REDACTED.52:6443
INFO [ssh] REDACTED.36:22: validating api connection to https://REDACTED.52:6443
INFO [ssh] REDACTED.37:22: validating api connection to https://REDACTED.52:6443
INFO [ssh] REDACTED.52:22: generating token
INFO [ssh] REDACTED.34:22: writing join token
INFO [ssh] REDACTED.35:22: writing join token
INFO [ssh] REDACTED.37:22: writing join token
INFO [ssh] REDACTED.36:22: writing join token
INFO [ssh] REDACTED.36:22: installing k0s worker
INFO [ssh] REDACTED.34:22: installing k0s worker
INFO [ssh] REDACTED.37:22: installing k0s worker
INFO [ssh] REDACTED.35:22: installing k0s worker
INFO [ssh] REDACTED.36:22: starting service
INFO [ssh] REDACTED.34:22: starting service
INFO [ssh] REDACTED.37:22: starting service
INFO [ssh] REDACTED.35:22: starting service
INFO [ssh] REDACTED.36:22: waiting for node to become ready
INFO [ssh] REDACTED.35:22: waiting for node to become ready
INFO [ssh] REDACTED.37:22: waiting for node to become ready
INFO [ssh] REDACTED.34:22: waiting for node to become ready
INFO ==> Running phase: Release exclusive host lock
INFO ==> Running phase: Disconnect from hosts
INFO ==> Finished in 2m19s
INFO k0s cluster version 1.27.3+k0s.0 is now installed
INFO Tip: To access the cluster you can now fetch the admin kubeconfig using:
INFO      k0sctl kubeconfig
```

Pods prior to restart of control plane:

```
NAMESPACE NAME READY STATUS RESTARTS AGE
external-secrets external-secrets-769df6c8cd-bktv5 1/1 Running 0 3m40s
external-secrets external-secrets-769df6c8cd-r72q6 1/1 Running 0 3m40s
external-secrets external-secrets-cert-controller-57b8c96ffb-r5r48 1/1 Running 0 3m40s
external-secrets external-secrets-webhook-75c54b49d7-qpjtd 1/1 Running 0 3m40s
kube-system calico-kube-controllers-6d48c8cf5c-rb7bh 1/1 Running 0 4m1s
kube-system calico-node-9lkck 1/1 Running 0 3m32s
kube-system calico-node-dfs2j 1/1 Running 0 3m43s
kube-system calico-node-jfshz 1/1 Running 0 3m16s
kube-system calico-node-l6d5n 1/1 Running 0 3m16s
kube-system calico-node-mrknm 1/1 Running 0 3m16s
kube-system calico-node-pqsmx 1/1 Running 0 3m19s
kube-system calico-node-r9hj9 1/1 Running 0 3m16s
kube-system coredns-878bb57ff-qkdk5 1/1 Running 0 3m25s
kube-system coredns-878bb57ff-wd58c 1/1 Running 0 4m1s
kube-system konnectivity-agent-8p2dm 1/1 Running 0 3m16s
kube-system konnectivity-agent-g76h6 1/1 Running 0 3m19s
kube-system konnectivity-agent-jrdnx 1/1 Running 0 3m15s
kube-system konnectivity-agent-kg8xx 1/1 Running 0 3m16s
kube-system konnectivity-agent-n8qr7 1/1 Running 0 3m16s
kube-system konnectivity-agent-szrc7 1/1 Running 0 3m16s
kube-system konnectivity-agent-x5n45 1/1 Running 0 3m24s
kube-system kube-proxy-5cn9q 1/1 Running 0 3m16s
kube-system kube-proxy-8qlxt 1/1 Running 0 3m32s
kube-system kube-proxy-c5b72 1/1 Running 0 3m16s
kube-system kube-proxy-csftt 1/1 Running 0 3m16s
kube-system kube-proxy-mjpk6 1/1 Running 0 3m43s
kube-system kube-proxy-r86fx 1/1 Running 0 3m19s
kube-system kube-proxy-v9vnd 1/1 Running 0 3m16s
kube-system metrics-server-7f86dff975-zfw86 1/1 Running 0 4m1s
kube-system nllb-calico-dev01-cp01 1/1 Running 0 2m19s
kube-system nllb-calico-dev01-cp02 1/1 Running 0 2m12s
kube-system nllb-calico-dev01-cp03 1/1 Running 0 2m6s
kube-system nllb-calico-dev01-w01 1/1 Running 0 2m3s
kube-system nllb-calico-dev01-w02 1/1 Running 0 113s
kube-system nllb-calico-dev01-w03 1/1 Running 0 106s
kube-system nllb-calico-dev01-w04 1/1 Running 0 2m10s
metallb metallb-controller-5cd9b4944b-24tn8 1/1 Running 0 3m41s
metallb metallb-speaker-82qj7 4/4 Running 0 2m32s
metallb metallb-speaker-9kpwd 4/4 Running 0 3m
metallb metallb-speaker-bwdxd 4/4 Running 0 2m32s
metallb metallb-speaker-dxvdp 4/4 Running 0 3m1s
metallb metallb-speaker-tzwhd 4/4 Running 0 2m58s
metallb metallb-speaker-vvjtf 4/4 Running 0 2m51s
metallb metallb-speaker-xx2fd 4/4 Running 0 2m59s
```

Nodes when cp01 is down:

```
NAME STATUS ROLES AGE VERSION
calico-dev01-cp01 NotReady control-plane 5m54s v1.27.3+k0s
calico-dev01-cp02 Ready control-plane 5m37s v1.27.3+k0s
calico-dev01-cp03 Ready control-plane 5m24s v1.27.3+k0s
calico-dev01-w01 Ready <none> 5m21s v1.27.3+k0s
calico-dev01-w02 Ready <none> 5m21s v1.27.3+k0s
calico-dev01-w03 Ready <none> 5m21s v1.27.3+k0s
calico-dev01-w04 Ready <none> 5m21s v1.27.3+k0s
```

Pods when cp01 is down:

```
NAMESPACE NAME READY STATUS RESTARTS AGE
external-secrets external-secrets-769df6c8cd-4g2td 0/1 Terminating 0 4m11s
external-secrets external-secrets-769df6c8cd-dv2fx 0/1 Terminating 1 4m11s
external-secrets external-secrets-cert-controller-57b8c96ffb-8f6jz 0/1 Terminating 0 4m11s
external-secrets external-secrets-webhook-75c54b49d7-nsnfj 0/1 Terminating 0 4m11s
kube-system calico-kube-controllers-6d48c8cf5c-bl89b 1/1 Terminating 0 4m32s
kube-system coredns-878bb57ff-74gq8 1/1 Running 0 4m32s
kube-system coredns-878bb57ff-xkh4k 1/1 Running 0 3m56s
kube-system konnectivity-agent-49v5n 1/1 Running 0 3m47s
kube-system konnectivity-agent-5rmcb 0/1 ContainerCreating 0 51s
kube-system konnectivity-agent-77ppm 1/1 Running 0 3m47s
kube-system konnectivity-agent-7xwqh 1/1 Running 0 3m47s
kube-system konnectivity-agent-h44mf 1/1 Terminating 0 3m55s
kube-system konnectivity-agent-ksmnd 1/1 Running 0 3m47s
kube-system konnectivity-agent-rd4t4 1/1 Running 0 3m46s
kube-system kube-proxy-28hck 1/1 Running 0 3m51s
kube-system kube-proxy-2k7x5 1/1 Running 0 3m47s
kube-system kube-proxy-9qbf7 1/1 Running 0 3m47s
kube-system kube-proxy-d7476 1/1 Running 0 4m26s
kube-system kube-proxy-j8np6 1/1 Running 0 4m4s
kube-system kube-proxy-jhvvp 1/1 Running 0 3m47s
kube-system kube-proxy-qlc4z 1/1 Running 0 3m47s
kube-system kube-router-54pz9 0/1 Pending 0 51s
kube-system kube-router-6rssr 1/1 Running 0 52s
kube-system kube-router-d6jqt 1/1 Running 0 52s
kube-system kube-router-gj5n4 1/1 Running 0 51s
kube-system kube-router-kbmdw 1/1 Running 0 52s
kube-system kube-router-qkhz2 1/1 Running 0 52s
kube-system kube-router-qsvcw 1/1 Running 0 51s
kube-system metrics-server-7f86dff975-9pjs8 1/1 Running 0 4m32s
kube-system nllb-calico-dev01-cp01 1/1 Running 0 2m59s
kube-system nllb-calico-dev01-cp02 1/1 Running 0 2m40s
kube-system nllb-calico-dev01-cp03 1/1 Running 0 2m27s
kube-system nllb-calico-dev01-w01 1/1 Running 0 2m23s
kube-system nllb-calico-dev01-w02 1/1 Running 0 2m34s
kube-system nllb-calico-dev01-w03 1/1 Running 0 2m25s
kube-system nllb-calico-dev01-w04 1/1 Running 0 2m42s
```
This breaks my theory. I'll reproduce this later today.
I tried reproducing this with Flatcar using both controller and controller+worker nodes, and I couldn't reproduce it. How exactly are you deploying Flatcar? I tried on AWS.
I realized that in the Flatcar tests I accidentally forgot to enable dynamicConfig. This was not forgotten in the other distribution tests. I'm retrying again now.
I deploy Flatcar on VMware using Pulumi. The Ignition config looks something like this:

```typescript
const configuration = {
  "ignition": {
    "version": "3.3.0"
  },
  "passwd": {
    "users": [
      {
        "name": "redacted",
        "groups": [
          "sudo",
          "docker",
        ],
        "sshAuthorizedKeys": [
          "redacted"
        ]
      }
    ]
  },
  "storage": {
    "files": [
      {
        "filesystem": "root",
        "path": "/etc/hostname",
        "mode": 420,
        "contents": {
          "source": `data:,${this.hostname}`
        }
      },
      {
        "path": "/etc/flatcar/update.conf",
        "mode": 644,
        "contents": {
          "source": "data:,SERVER=disabled"
        }
      },
      {
        "path": "/etc/systemd/network/00-vmware.network",
        "contents": {
          "compression": "gzip",
          "source": `data:;base64,${networkConfiguration}`
        }
      }
    ]
  }
};
```

The network config looks like this:
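One detail worth double-checking in Ignition configs like the one above: the `mode` field is a plain decimal integer in JSON, so octal POSIX permissions must be converted. 420 (as used for `/etc/hostname`) is octal 0o644, whereas a literal 644 decodes to the unusual 0o1204. A quick sanity check of the conversion:

```python
# Ignition file modes are decimal integers, not octal literals:
# 0o644 (rw-r--r--) equals 420 in decimal, matching the /etc/hostname entry.
assert 0o644 == 420

# A literal decimal 644, by contrast, is the odd permission set 0o1204.
print(oct(644))  # -> 0o1204
```

This is unrelated to the CNI bug itself, but it is a common Ignition pitfall when a config mixes `420` and `644` as above.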
I'll go ahead and try this with dynamicConfig disabled.

Killing cp01 this time, other workloads continue as expected with no issue. All is fine after booting the node back up and it becoming ready again.

The issue appears related to dynamicConfig specifically.
Hi, I've fully identified the issue and worked on a fix. Now that I understand it, I also managed to reproduce this on Ubuntu (it just takes longer than a simple reboot; a full poweroff and a few more seconds eventually make it happen as well).
Oh, good timing! Do you mind sharing a bit more about the issue for the curious? :)
When using dynamicConfig, k0s always creates /var/lib/k0s/manifests/kuberouter. The reason a reboot doesn't trigger it on Ubuntu is, I'd guess, related to some lock stored in the filesystem which disappears on Flatcar. I literally rebooted it in a loop, went to lunch for about an hour, and didn't manage to reproduce it; a full poweroff and a couple of minutes gets it done. There are at least three independent issues here. Issues 1 and 2 I will solve, as these are very obviously wrong and undesired behavior, so I expect to fix them today and get them merged this week or early next week. As for 3, I think we'll need some discussion. Regarding the Helm charts issue: I don't know if it will be immediately fixed, as it's a part of the code I'm not very familiar with and haven't looked into deeply.
Interesting! When you did your tests with reboots, did you ever let the node become

This seems reasonable; either that, or a controller should generate them independently based on the

Understood. Is this something @mikhail-sakhnov might know more about?
@juanluisvaladas Should this really have been closed? |
No, I closed it accidentally.
OK, so status of things: the change in k0sctl ENTIRELY fixes the problem. However, this needs changes in k0s itself. As for the Helm chart, I think the solution is not to start component managers until the configuration is fully initialized. In my test cluster with the k0sctl of my PR, I see the dynamicConfig doesn't have kuberouter, and the manifests are generated the way anyone would expect.
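To make the "don't start component managers until the configuration is fully initialized" idea concrete, here is a small illustrative sketch; this is not k0s's actual code (k0s is written in Go, and all names here are invented), just the gating pattern expressed with a ready-event:

```python
import threading

class GatedReconciler:
    """Illustrative only: a reconciler that refuses to act on its
    built-in default and waits until the real (dynamic) config arrives."""

    def __init__(self, default_provider="kuberouter"):
        self._ready = threading.Event()
        # Built-in default; with the gate in place it is never applied.
        self._provider = default_provider

    def set_config(self, provider):
        """Called once the dynamic ClusterConfig has been received."""
        self._provider = provider
        self._ready.set()

    def reconcile(self, timeout=1.0):
        # Without this gate, a freshly restarted controller could write
        # manifests for the default CNI before the dynamic config arrives,
        # which is exactly the kuberouter-appears-from-nowhere symptom.
        if not self._ready.wait(timeout):
            return None  # config not ready: do nothing rather than default
        return self._provider

r = GatedReconciler()
r.set_config("calico")
print(r.reconcile())  # -> calico
```

The key design point is that an uninitialized config yields inaction (`None`), never the built-in default, so a restart can't momentarily swap the CNI.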
Good stuff! I'll look forward to the changes being merged :)
Every controller with dynamicConfig needs a properly configured .spec.network; this is required because otherwise the component managers for network components may start synchronizing before getting the configuration dynamically. Partially fixes k0sproject/k0s#3304. This doesn't negatively impact worker nodes. Signed-off-by: Juan Luis de Sousa-Valadas Castaño <jvaladas@mirantis.com>
Not sure this should've been closed @juanluisvaladas @kke |
GitHub triggered on "Partially fixes"
The issue is marked as stale since no activity has been recorded in 30 days |
Hi, I think this can be entirely closed now. I guess it could theoretically happen if someone wanted to automate deployments of k0s, but I think it's an edge case that should be taken care of in the automation itself.
Before creating an issue, make sure you've checked the following:
Platform
Version
1.27.3+k0s.0
Sysinfo
`k0s sysinfo`
What happened?
After installing a fresh k0s cluster, I decided to test resiliency by both gracefully and forcefully shutting down nodes. The goal was to verify that the cluster recovers when the nodes get brought back into the cluster.
Removing workers was no issue, so moving on to the control plane nodes, I decided to take out the third control plane node first (gracefully). Doing so was no issue; it became `NotReady`, and the `konnectivity-agent` on it immediately went to `Terminating`. I then powered the node back on and the workload recovered fine.

I now move on to do the same to the second control plane node (cp02), and it also works fine.
However, doing so to the first control plane node breaks things.
Immediately as it goes offline, Calico seemingly (mostly) uninstalls and gets replaced by `kube-router`, seemingly out of nowhere.

To test this further, I re-created the cluster and this time immediately took away `cp01`, leaving the other two. The issue still occurs immediately.

Something appears to cause the CNI to get replaced (at least if it was Calico) if the primary control plane node ever goes offline. Bringing `cp01` back does not recover the situation.

To be sure, I checked that the `ClusterConfig` object still contains the correct configuration, and it does.

Steps to reproduce
Expected behavior
It should behave no differently than if the second or third control plane node is the one to disappear; Calico should remain functioning and `kube-router` should not be installed.

Actual behavior
Most (but not all) of Calico gets removed and replaced by `kube-router`, though this installation remains broken as well. It also seems `metallb` is removed automatically.

Remaining workloads after `cp01` has powered off:

We can also see that most of the Calico deployment has been removed, including its RBAC.
Screenshots and logs
After cp01 gets powered back on:
Additional context
k0sctl.yaml
List of pods in fresh cluster (prior to the issue)