🐛 Bug report
Sometimes the network gives a 504 Gateway Timeout.
(This happened frequently on June 13th and 14th, 2020, and in the week before as well.)
I haven't seen an explicit error, except for an event displayed by:
kubectl -n human-connection get events --sort-by=.metadata.creationTimestamp
It said:
Warning – FailedScheduling – pod/nitro-neo4j-86596f49cd-snjn4 – 0/7 nodes are available: 6 node(s) didn't match node selector, 7 Insufficient memory.
In the end I had to restart everything several times by deleting all the network pods one after the other.
Matt and I solved a memory problem of the Neo4j database by applying memory requests and limits to the pods' containers. We labeled a new node that has enough memory and assigned the Neo4j pod to that node via the label.
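The node assignment described above could be sketched as a fragment of the Neo4j deployment spec; the label key and value here are hypothetical examples, not the ones actually used in the cluster:

```yaml
# Fragment of the Neo4j deployment spec (label key/value are examples).
# The node was labeled beforehand, e.g.:
#   kubectl label nodes <node-name> role=neo4j
spec:
  template:
    spec:
      nodeSelector:
        role: neo4j
```

With a matching label on exactly one suitable node, the scheduler only places the Neo4j pod there.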
See issue #2628 and PR #2629 .
I have the impression that this could help us solve the network problem.
It is recommended anyway:
We have to apply the following settings to all deployments (webapp, backend, Neo4j):
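Such settings could look like the following in each deployment manifest; the container name and the request/limit values are placeholders for illustration, not the values we actually chose:

```yaml
# Example memory/CPU requests and limits for one deployment.
# Container name and resource values are placeholders.
spec:
  template:
    spec:
      containers:
        - name: nitro-neo4j
          resources:
            requests:
              memory: "2Gi"
              cpu: "500m"
            limits:
              memory: "4Gi"
              cpu: "1"
```

Requests tell the scheduler how much memory a node must have free before placing the pod, which should prevent the "Insufficient memory" FailedScheduling events seen above; limits cap what a container may consume.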
Steps to reproduce the behavior
Happens from time to time …
Expected behavior
Network should not give an error …
Additional context
None.