We keep getting 502 Bad Gateway and 504 Gateway Timeout errors in production. The timeout typically occurs on the time series endpoint when the application loads. This appears to trigger an out-of-memory error that causes Kubernetes to kill the container, and while Kubernetes is restarting it, the bad gateway errors show up for the entire application and all of the data endpoints.
The working theory is that the history endpoint runs out of RAM because there is no limit on how far back we pull data, so the entire history ends up in the store. Since that history only grows, the endpoint requires ever more RAM over time.
Limit the amount of data read in for the historical data to just two
weeks prior to the initialization time. This should reduce memory usage
in production and allow the application to continue working, solving
#443 (I hope).
In the future, we may make this range configurable by users, instead
of hard-coding a two-week limit.
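The change can be sketched roughly as follows. This is a minimal illustration, not the actual patch: `load_history`, the record shape, and the `timestamp` field are hypothetical stand-ins, since the real endpoint code isn't shown here.

```python
from datetime import timedelta

# Hypothetical sketch: keep only records from a fixed window before
# initialization (two weeks), instead of loading the entire history.
HISTORY_WINDOW = timedelta(weeks=2)

def load_history(records, init_time, window=HISTORY_WINDOW):
    """Return only records whose timestamp falls within `window` of init_time."""
    cutoff = init_time - window
    return [r for r in records if r["timestamp"] >= cutoff]
```

Filtering at read time bounds memory by the window size rather than the total history, and making `window` a parameter leaves room for the user-configurable range mentioned above.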