Suggestion: use poll() for solving 1024 (FD_SETSIZE) clients problem in Linux #439

Closed
enginlee opened this issue Nov 19, 2014 · 7 comments
@enginlee

Hi,

Mongoose uses select() to check the file descriptors of the server and its clients, but that function is limited by FD_SETSIZE on Linux.
Users would need to enlarge that define and recompile to get around it.
Is there any solution for handling > 1024 incoming (or long-lived) client sockets?
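For reference, a minimal standalone sketch of the limitation (illustrative, not Mongoose code): FD_SET() on a descriptor numbered FD_SETSIZE or higher is undefined behavior, so select() simply cannot watch such sockets, while poll() takes a caller-sized array:

#include <poll.h>
#include <sys/select.h>

// select(): fd_set is a fixed bitmap of FD_SETSIZE (1024 on Linux) bits.
// FD_SET(fd, &set) with fd >= FD_SETSIZE writes out of bounds, so
// high-numbered descriptors cannot be watched at all.
static void watch_with_select(int fd, fd_set *set) {
  if (fd < FD_SETSIZE) {
    FD_SET(fd, set);  // only safe below FD_SETSIZE
  }
}

// poll(): the caller supplies the array, so the only ceiling is the
// process's descriptor limit (RLIMIT_NOFILE), not a compile-time constant.
static int watch_with_poll(struct pollfd *fds, nfds_t nfds, int timeout_ms) {
  return poll(fds, nfds, timeout_ms);  // nfds may exceed 1024
}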
Here is one suggestion, using poll():

unsigned int mg_poll_server(struct mg_server *server, int milliseconds) {
  struct ll *lp, *tmp;
  struct connection *conn;
  time_t current_time = time(NULL), expire_time = current_time -
    MONGOOSE_USE_IDLE_TIMEOUT_SECONDS;
  static struct pollfd poll_list[4096];
  unsigned idx = 2, idx2 = 2;

  if (server->listening_sock == INVALID_SOCKET) return 0;

  poll_list[0].fd = server->listening_sock;
  poll_list[1].fd = server->ctl[1];
  poll_list[0].events = poll_list[1].events = POLLIN;
  poll_list[0].revents = poll_list[1].revents = 0;

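  // Add one pollfd slot for each active connection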
  LINKED_LIST_FOREACH(&server->active_connections, lp, tmp) {
    if (idx >= sizeof(poll_list)/sizeof(poll_list[0])) break;
    conn = LINKED_LIST_ENTRY(lp, struct connection, link);
    poll_list[idx].fd = conn->client_sock;
    poll_list[idx].events = POLLIN;
    poll_list[idx].revents = 0;
    if (conn->endpoint_type == EP_CLIENT && (conn->flags & CONN_CONNECTING)) {
      poll_list[idx].events |= POLLOUT;
    }
    if (conn->endpoint_type == EP_FILE) {
      transfer_file_data(conn);
    }
    if (conn->remote_iobuf.len > 0 && !(conn->flags & CONN_BUFFER)) {
      poll_list[idx].events |= POLLOUT;
    } else if (conn->flags & CONN_CLOSE) {
      close_conn(conn);
    }
    idx++;
  }

  if (poll(poll_list, idx, milliseconds) > 0) {
    if ((poll_list[1].revents&POLLIN) != 0) {
      execute_iteration(server);
    }

    // Accept new connections
    if ((poll_list[0].revents&POLLIN) != 0) {
      while ((conn = accept_new_connection(server)) != NULL) {
        conn->birth_time = conn->last_activity_time = current_time;
      }
    }

    // Read/write from clients
    LINKED_LIST_FOREACH(&server->active_connections, lp, tmp) {
      if (idx2 >= idx) break;
      conn = LINKED_LIST_ENTRY(lp, struct connection, link);
      if (poll_list[idx2].fd == conn->client_sock) {
        if ((poll_list[idx2].revents&POLLIN) != 0) {
          conn->last_activity_time = current_time;
          read_from_socket(conn);
        }
        if ((poll_list[idx2].revents&POLLOUT) != 0) {
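          // for connecting clients, read_from_socket() also handles
          // completion of the pending connect (as in the select() version)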
          if (conn->endpoint_type == EP_CLIENT &&
              (conn->flags & CONN_CONNECTING)) {
            read_from_socket(conn);
          } else if (!(conn->flags & CONN_BUFFER)) {
            conn->last_activity_time = current_time;
            write_to_socket(conn);
          }
        }
      }
      idx2++;
    }
  }

  // Close expired connections and those that need to be closed
  LINKED_LIST_FOREACH(&server->active_connections, lp, tmp) {
    conn = LINKED_LIST_ENTRY(lp, struct connection, link);
    if (conn->mg_conn.is_websocket) {
      ping_idle_websocket_connection(conn, current_time);
    }
    if (conn->flags & CONN_LONG_RUNNING) {
      conn->mg_conn.wsbits = conn->flags & CONN_CLOSE ? 1 : 0;
      if (call_request_handler(conn) == MG_REQUEST_PROCESSED) {
        conn->last_activity_time = current_time;
        conn->flags &= ~CONN_LONG_RUNNING;
      }
    }
    if (conn->flags & CONN_CLOSE || conn->last_activity_time < expire_time) {
      close_conn(conn);
    }
  }

  return (unsigned int) current_time;
}
@cpq cpq added the p3 label Nov 19, 2014
@cpq
Member

cpq commented Nov 19, 2014

Using poll() SGTM.
For Windows, we can go with a simple poll() implementation on top of select().
@mmikulicic -- I guess that actually goes to fossa core, not mongoose.
This issue was reported against old mongoose code anyway.
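A minimal sketch of such a shim (illustrative, not Mongoose code; on Vista and later, WSAPoll() already provides this natively). Windows' fd_set is an array of SOCKET values rather than a bitmap, so the shim is only valid while nfds stays within Windows' FD_SETSIZE:

#ifdef _WIN32
#include <winsock2.h>

// Emulate poll() with select(). Only POLLIN/POLLOUT are mapped.
static int poll_via_select(struct pollfd *fds, unsigned nfds, int ms) {
  fd_set rset, wset;
  struct timeval tv;
  unsigned i;
  int n;

  tv.tv_sec = ms / 1000;
  tv.tv_usec = (ms % 1000) * 1000;
  FD_ZERO(&rset);
  FD_ZERO(&wset);
  for (i = 0; i < nfds; i++) {
    fds[i].revents = 0;
    if (fds[i].events & POLLIN) FD_SET(fds[i].fd, &rset);
    if (fds[i].events & POLLOUT) FD_SET(fds[i].fd, &wset);
  }

  // The first select() argument is ignored on Windows; a NULL timeout
  // blocks indefinitely, matching poll()'s negative-timeout semantics.
  n = select(0, &rset, &wset, NULL, ms < 0 ? NULL : &tv);
  if (n <= 0) return n;

  for (i = 0; i < nfds; i++) {
    if (FD_ISSET(fds[i].fd, &rset)) fds[i].revents |= POLLIN;
    if (FD_ISSET(fds[i].fd, &wset)) fds[i].revents |= POLLOUT;
  }
  return n;
}
#endif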

@mkmik
Contributor

mkmik commented Nov 19, 2014

SGTM as well

@mkmik
Contributor

mkmik commented Apr 2, 2015

Rojer, related to cesanta/fossa#113

@cpq
Member

cpq commented Apr 8, 2015

Recent related issue: #506

@cpq
Member

cpq commented Apr 10, 2015

Recent PR against mongoose: #508

@ListerTheTormentor

Perhaps some additional I/O event-handling functions should be provided to let users do what select() is doing now, so that Mongoose could be used as a pure module, without any framework features, and integrate more conveniently with other frameworks such as libev.
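One hypothetical shape for such hooks (names are illustrative, not an existing Mongoose API): the host event loop asks the library which sockets it wants watched, then feeds readiness events back in:

struct mg_server;  // opaque server handle, as in mongoose.h

// Hypothetical: describes one descriptor the library wants watched.
struct mg_fd_interest {
  int fd;
  int want_read;   // watch for readability
  int want_write;  // watch for writability
};

// Hypothetical: fill 'out' with up to 'max' entries; returns the count.
// The host loop (libev, libevent, ...) registers these with its backend.
int mg_collect_fds(struct mg_server *server,
                   struct mg_fd_interest *out, int max);

// Hypothetical: called by the host loop when 'fd' becomes ready; performs
// the read/write work mg_poll_server() would otherwise do itself.
void mg_handle_ready_fd(struct mg_server *server,
                        int fd, int readable, int writable);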

@rojer
Collaborator

rojer commented May 15, 2016

we do have an event manager internal interface, and even used to have an epoll-based manager.
however, maintaining multiple implementations proved difficult, and it was ultimately dropped.
we do not target environments with > 1024 simultaneous connections and deliberately choose simplicity. that said, 203d4a7 should have improved handling of short spikes above 1024 fds.

@rojer rojer closed this as completed May 15, 2016