The mochiweb branches were broken in the public and test rebar configs. This comes from the migration from the mochi account to the internal heroku account: new branches were created, but the account name was never switched. Recon is a library that helps with devops tasks in production.
Rather than configuring specific apps in many places (bin/logplex, bin/devel_logplex, logplex_app.erl), configuration is moved to a sys.config file that can be loaded by passing `-config sys` to the `erl` executable, or loaded automatically when generating an OTP release.
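For illustration, a minimal sketch of what such a sys.config can look like. The key names below (`http_port`, `redis_url`) are assumptions for the example, not logplex's actual configuration keys:

```erlang
%% sys.config -- sketch only; real keys live in logplex's app config.
[{logplex, [
     {http_port, 8001},                         %% hypothetical key
     {redis_url, "redis://localhost:6379/"}     %% hypothetical key
 ]},
 {sasl, [{errlog_type, error}]}
].
```

With this file in the current directory, `erl -config sys` loads every application's environment from it at boot, and release tooling can embed the same file in the generated release.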
The current logplex version has a point of contention for logs through its use of io:format/2. Although lager is unlikely to help much here given we don't log directly to disk (which is where it shines compared to other logging engines), it's worth trying to see whether things improve. Custom log formats are used to make sure the production log format remains 100% identical to the former one. Formats will, however, differ during test runs, because no specific care has been taken to make the lager config compatible in test cases.
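As a hedged sketch, a custom format in lager is typically expressed through `lager_default_formatter` terms on a handler; the format terms below are illustrative and do not reproduce logplex's actual production format:

```erlang
%% Sketch only -- not logplex's real format string.
{lager, [
    {handlers, [
        {lager_console_backend,
         [info, {lager_default_formatter,
                 [time, " ", severity, " ", message, "\n"]}]}
    ]}
]}.
```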
An inactive drain or buffer (one that receives no requests from the outside world) should be sent into hibernation in order to trigger a full-sweep GC, compact the memory of the process, and reduce the overall load of the system, possibly also reducing memory fragmentation in the VM at the cost of slightly more CPU when it triggers. The timeout is implemented with the gen_fsm timeout option, which automatically resets the timer whenever the process handles a message, so it should catch any kind of inactivity and force hibernation of the processes. Note: it is not yet known whether the 5-second timeout value or the volume of timer setups/cancellations will have a significant impact on an active system. The values may need to be tweaked, or the effort redirected towards manual GC, if refc binaries keep hogging memory after this.
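A minimal gen_fsm sketch of the pattern described above (module, state, and function names are hypothetical, not logplex's actual drain code): every handled event re-arms a 5-second inactivity timeout, and when the timeout fires the process hibernates until the next message wakes it.

```erlang
-module(drain_hibernate_sketch).
-behaviour(gen_fsm).
-export([start_link/0, post/2]).
-export([init/1, active/2, handle_event/3, handle_sync_event/4,
         handle_info/3, terminate/3, code_change/4]).

-define(HIBERNATE_TIMEOUT, 5000).

start_link() -> gen_fsm:start_link(?MODULE, [], []).

%% Send a (hypothetical) log message to the drain.
post(Pid, Msg) -> gen_fsm:send_event(Pid, {post, Msg}).

init([]) -> {ok, active, [], ?HIBERNATE_TIMEOUT}.

%% No event for 5s: hibernate (full-sweep GC, compacted heap) until
%% the next message arrives.
active(timeout, Buf) ->
    {next_state, active, Buf, hibernate};
%% Any real event re-arms the inactivity timer by returning a timeout.
active({post, Msg}, Buf) ->
    {next_state, active, [Msg | Buf], ?HIBERNATE_TIMEOUT}.

handle_event(_E, StateName, Buf) ->
    {next_state, StateName, Buf, ?HIBERNATE_TIMEOUT}.
handle_sync_event(_E, _From, StateName, Buf) ->
    {reply, ok, StateName, Buf, ?HIBERNATE_TIMEOUT}.
handle_info(_Info, StateName, Buf) ->
    {next_state, StateName, Buf, ?HIBERNATE_TIMEOUT}.
terminate(_Reason, _StateName, _Buf) -> ok.
code_change(_OldVsn, StateName, Buf, _Extra) -> {ok, StateName, Buf}.
```

The cost trade-off mentioned above lives entirely in that `timeout`/`hibernate` pair: a busy process keeps re-arming the timer and never pays the hibernation cost, while an idle one pays it once.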
The logplex_msg_buffer module is used extensively by drain processes, which buffer requests and need to block as little as possible under heavy load. The previous implementation recalculated the entire queue length on every call, which is most expensive exactly when the buffer is full, and a full buffer is also when lengths get counted most often. This patch adds an explicit counter to the buffer so the length no longer needs to be recalculated, reducing the runtime consumed by a given process. The module includes conversion clauses for all functions in its API, so the code can be hot-loaded without stopping and simply adapts to the new format.
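A sketch of the idea under assumed names (the real logplex_msg_buffer representation differs): carry the length alongside the queue so `len/1` is O(1), and keep conversion clauses that upgrade a hypothetical old tuple format on the fly during a hot-code reload.

```erlang
-module(msg_buffer_sketch).
-export([new/1, push/2, len/1]).

%% New format: explicit len field makes len/1 O(1) instead of an
%% O(n) queue:len/1 scan on every call.
-record(buf, {queue = queue:new(), len = 0, max}).

new(Max) -> #buf{max = Max}.

%% Conversion clause: accept the (hypothetical) old {buffer, Q, Max}
%% tuple so running drains survive a hot-code reload.
push(Msg, {buffer, Q, Max}) ->
    push(Msg, #buf{queue = Q, len = queue:len(Q), max = Max});
push(Msg, Buf = #buf{queue = Q, len = L, max = Max}) when L < Max ->
    Buf#buf{queue = queue:in(Msg, Q), len = L + 1};
push(Msg, Buf = #buf{queue = Q}) ->
    %% Full: drop the oldest message; length stays at max.
    {_, Q1} = queue:out(Q),
    Buf#buf{queue = queue:in(Msg, Q1)}.

%% Conversion clause mirrors push/2's, for hot-reload compatibility.
len({buffer, Q, _Max}) -> queue:len(Q);
len(#buf{len = L}) -> L.
```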
When a timer is set for a reconnection, we force hibernation in order to do a full-sweep GC of the drain process. This may incur a certain cost for very busy-but-disconnected processes by forcing a short pause, but the backoff timers for reconnections act as rate limiters on this.
With IO being blocking for individual processes due to Erlang's IO protocol, and logplex using io:format/2 to log information, a node that logs heavily can show bad tail latencies on its API, as reported in GitHub issues #49 and #51. This quickfix, pending a rewrite of the logging system to be non-blocking and load-shedding, moves logging outside the critical path for most requests. Some requests, such as token creation for channels (POST /v2/channels/(\\d+)/tokens), still log in that critical path and will only see minor improvements.
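The non-blocking half of the fix can be sketched as follows (module and process names are illustrative, not logplex's API, and this sketch has no load-shedding): instead of io:format/2 blocking the request process until its group leader answers the io request, the formatted line is handed to a dedicated sink process with a fire-and-forget send.

```erlang
-module(async_log_sketch).
-export([start/0, log/2]).

start() ->
    register(async_log_sink, spawn(fun loop/0)),
    ok.

%% The sink is the only process that pays the blocking io round-trip.
loop() ->
    receive
        {log, Line} ->
            io:put_chars(Line),
            loop()
    end.

%% Formatting still happens in the caller, but the caller never waits
%% on the io protocol -- the message send returns immediately.
log(Fmt, Args) ->
    async_log_sink ! {log, io_lib:format(Fmt, Args)},
    ok.
```

Note the trade-off this leaves open: with no bound on the sink's mailbox, a flood of log lines grows its queue unchecked, which is why the text above calls for a load-shedding rewrite rather than this quickfix alone.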
The escript allows a few operations on redgrid:
- get the redgrid process status
- get a list of connected nodes according to redgrid
- suspend redgrid (and unregister from it)
- resume redgrid (and connect to it)
The script connects as a hidden node.
logplex_stats is a table that receives a very large number of writes for one big read every minute. Given that pretty much all drains write to that table, reducing contention on it should be a general win, at the cost of slightly longer blocking when reading.
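The commit doesn't name the mechanism, but the trade-off it describes (cheaper concurrent writes, slower reads) matches ETS's `{write_concurrency, true}` table option; a sketch under that assumption, with hypothetical table and function names:

```erlang
-module(stats_table_sketch).
-export([new/0, bump/2]).

%% {write_concurrency, true} trades slightly slower reads/iteration
%% for much cheaper concurrent writes -- a fit for a table that every
%% drain increments and a single reader scans once a minute.
new() ->
    ets:new(stats_sketch, [public, set, {write_concurrency, true}]).

%% Increment a per-key counter, creating it at 0 on first use.
bump(Tab, Key) ->
    ets:update_counter(Tab, Key, {2, 1}, {Key, 0}).
```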
The send behaviour is documented, along with basic testing. There is otherwise no need for tests as advanced as those for the HTTP drain buffers, given there is not a lot of similar error-handling code involved. (TCP drains just ignore all send errors and keep going.)
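The documented send behaviour amounts to this (function and module names are illustrative): any error from gen_tcp:send/2 is swallowed and the drain carries on.

```erlang
-module(tcp_drain_send_sketch).
-export([send_ignore/2]).

%% Mirrors the described behaviour: a send error is ignored and the
%% drain keeps going, so there is little error-handling code to test.
send_ignore(Sock, Data) ->
    case gen_tcp:send(Sock, Data) of
        ok -> ok;
        {error, _Reason} -> ok
    end.
```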