Orchestrator running in HAProxy has intermittent runtime errors #144

Closed
jpswinski opened this issue Nov 15, 2022 · 1 comment

Comments

@jpswinski
Member

[info] 318/133201 (1) : Loading orchestrator...
[NOTICE] (1) : New worker (8) forked
[NOTICE] (1) : Loading success.
Lua applet http '<lua.orchestrator_lock>': [state-id 0] runtime error: execution timeout from /usr/local/etc/haproxy/orchestrator.lua:256: in function line 226.
[ALERT] (8) : Lua applet http '<lua.orchestrator_lock>': [state-id 0] runtime error: execution timeout from /usr/local/etc/haproxy/orchestrator.lua:256: in function line 226.
Lua applet http '<lua.orchestrator_lock>': [state-id 0] runtime error: execution timeout from /usr/local/etc/haproxy/orchestrator.lua:249: in function line 226.
[ALERT] (8) : Lua applet http '<lua.orchestrator_lock>': [state-id 0] runtime error: execution timeout from /usr/local/etc/haproxy/orchestrator.lua:249: in function line 226.

@jpswinski
Copy link
Member Author

This was a bug in the orchestrator code. When the number of locks was maxed out, the code looped back to the beginning of the list of nodes but set the index incorrectly, entering an infinite loop. HAProxy then killed the applet because it ran longer than the Lua execution timeout.
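A minimal sketch of the failure mode, with hypothetical names (the actual orchestrator.lua structure is not shown here): when every node is at its lock limit, the wrap-around scan resets its index to a value that keeps the loop alive, so it spins until HAProxy's Lua execution timeout kills it.

```lua
-- Hypothetical reconstruction of the bug; the node list and function
-- names are assumptions, not taken from orchestrator.lua.
local nodes = { "node1", "node2", "node3" }

-- Buggy scan: when the index wraps past the end of the list it is reset
-- back into the scan range, so if every node is full the loop never exits.
local function acquire_lock_buggy(start_index, is_full)
    local i = start_index
    while true do
        if not is_full(nodes[i]) then
            return nodes[i]
        end
        i = i + 1
        if i > #nodes then
            i = start_index -- BUG: restarts the same scan; infinite loop when all nodes are full
        end
    end
end

-- Fixed scan: visit each node at most once, then give up cleanly.
local function acquire_lock_fixed(start_index, is_full)
    for offset = 0, #nodes - 1 do
        local i = ((start_index - 1 + offset) % #nodes) + 1 -- 1-based wrap-around
        if not is_full(nodes[i]) then
            return nodes[i]
        end
    end
    return nil -- every node is at its lock limit
end
```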

The result of the bug was that locks were not released and had to expire on their own. This also slowed things down considerably, since fewer locks were available.
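For context, a sketch of why leaked locks hurt throughput, assuming a timeout-based reclamation scheme like the one the comment implies (LOCK_TIMEOUT, MAX_LOCKS, and the table layout are assumptions):

```lua
-- Hypothetical sketch: a lock that is never explicitly released keeps
-- counting against the capacity limit until its timeout expires.
local LOCK_TIMEOUT = 120 -- seconds; assumed value
local MAX_LOCKS = 16     -- assumed per-node limit

local locks = {} -- lock_id -> absolute expiration time

local function active_locks(now)
    local count = 0
    for _, expires in pairs(locks) do
        if expires > now then
            count = count + 1 -- leaked locks still count here until they expire
        end
    end
    return count
end

local function can_grant(now)
    -- With leaked locks lingering for up to LOCK_TIMEOUT seconds,
    -- fewer slots are free and new requests are throttled.
    return active_locks(now) < MAX_LOCKS
end
```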
