branch: master
Commits on Jun 26, 2014
  1. Fix for R17

Commits on Apr 10, 2013
  1. Merge remote-tracking branch 'ferd/bypass-pool'

Commits on Mar 17, 2013
  1. Bump to 1.2.9

  2. Fix type specs

Commits on Oct 24, 2012
  1. @ferd

    Adding an option to bypass pools entirely

    ferd authored
    In some cases, a few high-priority requests happening at a
    low frequency may be seen as admissible -- for example,
    important error messages, diagnostic polls, and so on.
    This change adds the possibility of setting the option
    'max_connections' to {max_connections, bypass} to have a socket
    opened outside of a given Scheme+Host+Port's pool, ignoring
    that pool's limits.
    This socket will be closed as soon as the response is obtained.
  2. @ferd

    fixing doc

    ferd authored
    Adding the ReqId to requests broke documentation building.
    This fixes it.
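The bypass option described in the first commit above can be exercised from a caller as follows. This is a minimal sketch: the 6-arity `lhttpc:request/6` call shape and the example URL are assumptions, not taken from the commit itself.

```erlang
%% Sketch: issuing a one-off request that bypasses the connection pool.
%% {max_connections, bypass} is the option added by this commit; the
%% lhttpc:request/6 argument order (URL, Method, Hdrs, Body, Timeout,
%% Options) is assumed here.
{ok, {{_Status, _Reason}, _Hdrs, _Body}} =
    lhttpc:request("http://example.com/health", "GET", [], <<>>, 5000,
                   [{max_connections, bypass}]).
%% The socket opened for this request lives outside the
%% Scheme+Host+Port pool and is closed once the response arrives.
```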
Commits on Jul 9, 2012
  1. @ferd

    Merge branch 'feature/port_leak_fix2' of in…to socket-leak

    ferd authored
Commits on Jul 7, 2012
  1. Attempt to fix port leak, also fixed clean up of sockets.

    Yoshihiro Tanaka authored
    There is a possibility of a port leak when processes are killed with
    the exit/2 BIF while they are connecting or closing sockets,
    executing in the prim_inet module.
    Also fixed the formatting of the remaining free sockets in terminate/2.
Commits on Jul 6, 2012
  1. @ferd

    Attempt at fixing a possible process leak

    ferd authored
    Yoshihiro Tanaka found that when lhttpc:request closes a worker
    due to a timeout, the close can happen after the port
    is unlinked in prim_inet:close, but before it is properly closed.
    This results in orphaned sockets/ports being left hanging in the VM.
    This fix attempts to wrap lhttpc_sock:close calls in a safe
    construct that should resolve the issue.
    A potential alternative would have been to have the manager monitor
    the sockets itself, but this wouldn't have worked if the socket is
    new and the manager has never seen it before, hence the current fix.
Commits on Apr 28, 2012
  1. @ferd
Commits on Nov 25, 2011
  1. @ferd

    Changing mechanism of the load balancer

    ferd authored
    Whenever a server would listen for TCP connections but never accept
    them, the lhttpc application would leak a ton of processes until
    the virtual machine was taken down.
    This was due to the way connections were set up within the load
    balancer: each socket connection attempt would incur many
    milliseconds of delay, and a queue would eventually build up in
    the load balancer.
    Because requests freely spawn processes, this ended up creating
    more requests than the LB could deal with.
    This fix changes the structure so that each client is
    responsible for setting up its own socket and connection, enabling
    the load balancer to easily deny connections to newer processes
    when older ones are still stuck. Setting a good request timeout
    can then ensure that slow requests won't starve the system.
Commits on Nov 21, 2011
  1. @ferd

    Adding control flow on connections refused

    ferd authored
    When many connections were being refused, the load balancer
    would impose no good flow-control mechanism on incoming requests.
    After a while, demand could overtake the process and grow its
    message queue until the VM ran out of memory.
    This patch adds a counter of refused connections (which happen when
    the server is down); if too many connections are refused in a row
    (as many as there are possible sockets), some of the requests will
    be blocked and will return {error, offline}.
    Whenever a successful request is made, the counter is dropped.
    The patch also contains a few minute optimizations for record
    assignment, gaining minimal amounts of speed.
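A caller can guard against the overload condition described above by matching on the new error tuple. This is a hedged sketch: the `request_with_fallback/1` helper is hypothetical, and the 4-arity `lhttpc:request/4` call shape is assumed.

```erlang
%% Hypothetical helper showing how a caller might handle the
%% {error, offline} result introduced by this patch.
request_with_fallback(URL) ->
    case lhttpc:request(URL, "GET", [], 5000) of
        {ok, {{200, _Reason}, _Hdrs, Body}} ->
            {ok, Body};
        {error, offline} ->
            %% The load balancer blocked the request because the remote
            %% server has refused too many connections in a row.
            {error, server_offline};
        {error, Reason} ->
            {error, Reason}
    end.
```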
Commits on Nov 17, 2011
  1. Merge pull request #1 from ferd/master

    Faster LB.
  2. @ferd

    Changing the lhttpc load balancer to use ETS

    ferd authored
    The current implementation uses a dict and a queue for common
    socket operations when load-balancing. Under heavy load, the
    process becomes very slow. Moreover, it set itself as a
    high-priority process, unbalancing the whole VM.
    This switches the dict to an ETS table, and the queue to a stack
    (a list) in order to reduce operations. The process also goes
    back to normal priority to make sure it doesn't interfere with
    the schedulers and timers too much.
Commits on Nov 15, 2011
  1. Oops

Commits on Jul 22, 2011
  1. Fix bug / tests

Commits on Jul 20, 2011
  1. Move .hrl to include folder

Commits on Jul 19, 2011
Commits on Apr 20, 2011
  1. @tolbrino

    Bumping version.

    tolbrino authored
  2. @tolbrino
Commits on Mar 13, 2011
  1. @oscarh

    Fixed some formatting errors.

    oscarh authored
  2. @oscarh

    For whatever reason in R14B1, it seems 50ms isn't enough for the ssl …

    oscarh authored
    I've raised the timeout to a seemingly random 100ms instead, and at least on
    my computer, the tests pass.
Commits on Feb 25, 2011
Commits on Aug 23, 2010
  1. @oscarh

    Add some good TODOs :)

    oscarh authored
Commits on Aug 22, 2010
  1. @oscarh

    Update outdated TODO file

    oscarh authored
  2. @oscarh
  3. @oscarh

    Update changelog

    oscarh authored