Buffering client requests. #2043

@tmszdmsk

Description

Do you want to request a feature or report a bug?

Feature

What did you do?

We have a Traefik instance set up as an ingress controller in our Kubernetes cluster. Previously we used Nginx as a proxy for our services;
it was configured to buffer clients' requests before sending them to the upstream/destination.
Based on access metrics from Nginx and metrics self-reported by the microservices, we set up alerts that trigger when response time exceeds a defined threshold. That worked fine.
However, Traefik doesn't buffer clients' requests and pipes them straight to the upstream/destination.
For clients on a slow connection, e.g. during a file upload, the request between Traefik and the microservice/destination can take very long (tens of seconds).
That is generally fine, as we have no control over users' connections. The problem is with metrics: we can no longer tell from the request time whether a microservice behaves correctly.
Our Apdex alerts don't work.

Another drawback of not buffering requests is higher resource usage on microservices behind Traefik that use a thread-per-request model.

What did you expect to see?

We want to be able to base our view of microservice healthiness on the metrics we get from Traefik / self-reported by the microservice.

What did you see instead?

Metrics are highly affected by clients' connection speed.

Proposed solutions

  • allow request buffering (AFAIK this is supported by https://github.com/vulcand/oxy)
  • provide more information about request processing time in the logs (e.g. the time from the last byte sent to the destination to the last byte received from the destination)
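To illustrate the first proposal, here is a minimal sketch in plain Go (no oxy dependency; `bufferRequest` and the echo handler are hypothetical names for illustration): a middleware that reads the whole client body into memory before calling the next handler, so the upstream's measured request time reflects service time rather than the client's connection speed.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

// bufferRequest reads the entire client body into memory before calling
// the next handler, so the upstream only ever sees a fast in-memory
// reader. A real implementation would also cap the buffered size and
// spill large bodies to disk.
func bufferRequest(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		body, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, "failed to read request body", http.StatusBadRequest)
			return
		}
		r.Body.Close()
		// Swap the (possibly slow) network reader for an in-memory one.
		r.Body = io.NopCloser(bytes.NewReader(body))
		r.ContentLength = int64(len(body))
		next.ServeHTTP(w, r)
	})
}

// demo wraps a trivial echo handler with bufferRequest and returns the
// echoed body for a sample upload.
func demo() string {
	echo := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		b, _ := io.ReadAll(r.Body)
		w.Write(b)
	})
	req := httptest.NewRequest(http.MethodPost, "/upload", strings.NewReader("hello"))
	rec := httptest.NewRecorder()
	bufferRequest(echo).ServeHTTP(rec, req)
	return rec.Body.String()
}

func main() {
	fmt.Println(demo()) // prints "hello"
}
```

With this in front of the upstream, time spent draining a slow client is attributed to the buffering layer, not to the microservice.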
