Setup Check: Discover Hidden HTTP/WebDAV Limits and Performance Regressions #56503

@joshtrichards

Description

Tip

Like this idea?

  • Please use the 👍 reaction to show that you are interested in the same feature/suggestion.
  • Subscribe to the Issue to receive notifications regarding status changes, design discussion, and new comments.
  • Comment if you have something relevant to add about implementing this possible improvement/enhancement, such as how it might work, its scope, obstacles, or potential solutions. Please avoid commenting just to say "Me too!" (that's what the 👍 is for); this keeps the Issue clear of extra noise for everyone subscribed to it.

Is your feature request related to a problem? Please describe.

Administrators often lack clear, actionable visibility into the effective (real-world) constraints of their Nextcloud deployment:

  • Practical maximum upload size (non‑chunked vs chunked).
  • Hidden timeout ceilings (web server, reverse proxy, PHP/FPM, network).
  • Effective throughput (upload / download bit rates per connection).
  • Behavior and correctness of chunked uploads and HTTP range requests.
  • Maximum feasible assembled upload size given timeouts + chunk sizing.
  • Whether current performance diverges from historical baselines.

Without systematic measurement:

  • Issues surface reactively (user complaints, failed large transfers).
  • It’s unclear which layer (client, proxy, PHP/FPM, storage backend) is constraining performance.
  • Administrators lack a repeatable diagnostic tool to communicate health or guide configuration tuning.

Describe the solution you'd like

Introduce one or more Setup Checks that can:

  1. Run an automated sequence of controlled WebDAV operations (upload/download tests, range requests, chunked vs non‑chunked uploads).
  2. Determine or estimate:
    • Max non‑chunked upload size before rejection (status codes / early termination).
    • Largest reliable chunk size for chunked uploads.
    • Time to assemble a multi‑chunk file before any timeout occurs.
    • Effective upload throughput (sustained bit rate) over a representative interval.
    • Effective download throughput (sustained bit rate).
    • Functional status of HTTP Range (partial download) support.
    • Derived maximum feasible upload size (given chunk size × assembly / timeout constraints).
    • Derived practical maximum download size within typical timeout envelopes.
  3. Optionally store historical snapshots (baseline vs current) for trend comparison.
  4. Provide actionable, human‑readable guidance (e.g., “PHP max_execution_time likely limiting multi‑GB assembly” or “Reverse proxy body size limit reached at X MB”).
  5. Expose results:
    • Admin UI (setup checks / serverinfo).
    • OCC command (scriptable / monitoring integration).
    • Optional user‑level simplified check (e.g., “Test my connection”) for self‑diagnostics and support triage.
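The max‑size probe in step 2 could be driven by a simple binary search. A minimal sketch, assuming a hypothetical `try_upload(size)` helper that would PUT `size` bytes of generated dummy data over WebDAV and report acceptance; here the server is simulated by a hidden size cap so the search logic itself is runnable:

```python
HIDDEN_LIMIT = 512 * 1024 * 1024  # simulated body-size cap (bytes); unknown to the probe

def try_upload(size: int) -> bool:
    """Stand-in for a real WebDAV PUT. A real probe would treat HTTP 413
    or early connection termination as rejection."""
    return size <= HIDDEN_LIMIT

def probe_max_upload(ceiling: int, tolerance: int = 1024 * 1024) -> int:
    """Binary-search the largest accepted upload size up to `ceiling`,
    stopping once the bracket is narrower than `tolerance` to cap test traffic."""
    lo, hi = 0, ceiling
    while hi - lo > tolerance:
        mid = (lo + hi) // 2
        if try_upload(mid):
            lo = mid  # accepted: the limit is at or above mid
        else:
            hi = mid  # rejected: the limit is below mid
    return lo

estimate = probe_max_upload(ceiling=4 * 1024**3)
print(estimate)  # within `tolerance` below the hidden limit
```

The `tolerance` parameter doubles as a traffic cap: coarser tolerance means fewer (and fewer large) test uploads, which matters for the overhead concerns noted under Risks below.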

Describe alternatives you've considered

  • Manual ad hoc testing (slow, non-repeatable).
  • Custom external scripting (reinvents logic; no UI integration).
  • Rely only on logs + guessing (reactive and incomplete).
  • Third‑party synthetic monitoring (generic; lacks Nextcloud-specific semantics).
  • Indirect tests such as simple third-party hosted speed testers (not instance-specific; too generic).

Additional context

Initial implementation:

  • Tests: non‑chunked max size probe (binary search), chunk size probe, short-duration upload/download throughput sampling, Range request verification, assembly timeout detection (bounded attempt).
  • Packaging: either integrate as a general setup check or provide as an on-demand task within the Server Info app (optionally, the latter could expose the former for easier monitoring).
  • Result formatting: JSON (for API) plus a summarized UI view.
  • Basic recommendations mapped from findings (e.g., if failure occurs at N MB with HTTP 413, suggest checking client_max_body_size / LimitRequestBody / post_max_size).
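The findings-to-recommendations mapping could start as a small lookup keyed on the observed HTTP status. An illustrative sketch, assuming a hypothetical `Finding` record; the directive names are the real knobs mentioned above (nginx `client_max_body_size`, Apache `LimitRequestBody`, PHP `post_max_size`), but the mapping itself is only a sketch:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    test: str                      # e.g. "non_chunked_upload"
    http_status: Optional[int]     # status observed at failure, if any
    failed_at_bytes: Optional[int] # size at which the failure occurred

def recommend(f: Finding) -> str:
    """Map a probe finding to human-readable admin guidance."""
    mb = (f.failed_at_bytes or 0) // (1024 * 1024)
    if f.http_status == 413:
        return (f"Upload rejected at ~{mb} MB with HTTP 413: check "
                "client_max_body_size (nginx), LimitRequestBody (Apache), "
                "and post_max_size / upload_max_filesize (PHP).")
    if f.http_status == 504:
        return (f"Gateway timeout near {mb} MB: check proxy read timeouts "
                "and PHP max_execution_time for multi-chunk assembly.")
    return "No limiting layer identified by this probe."

print(recommend(Finding("non_chunked_upload", 413, 100 * 1024**2)))
```

Keeping the mapping data-driven like this would also make it easy to render the same findings as JSON for the API and as summarized text in the UI.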

Future extensions:

  • Scheduled periodic runs with retention (e.g. last 30 data points).
  • User-facing micro test (a more limited subset of tests, with guardrails).
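The "last 30 data points" retention could be modeled as a bounded buffer with a median baseline. A minimal in-memory sketch (a real implementation would persist snapshots server-side; the 50% regression threshold is an assumed default, not a proposed value):

```python
from collections import deque
from statistics import median

class SnapshotStore:
    def __init__(self, retention: int = 30):
        # Oldest samples drop off automatically once `retention` is reached.
        self._points = deque(maxlen=retention)

    def record(self, upload_mbps: float) -> None:
        self._points.append(upload_mbps)

    def baseline(self) -> float:
        """Median of retained samples, used as the trend baseline."""
        return median(self._points)

    def regressed(self, current: float, threshold: float = 0.5) -> bool:
        """Flag a regression when current throughput falls below
        `threshold` x baseline (50% by default)."""
        return current < threshold * self.baseline()

store = SnapshotStore()
for sample in [95.0, 102.0, 98.0, 100.0]:
    store.record(sample)
print(store.regressed(40.0))  # 40 Mbps against a ~99 Mbps baseline
```

Using the median rather than the mean keeps a single outlier run (e.g., one taken during backup or maintenance) from distorting the baseline.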

Risks / Considerations

  • Test traffic overhead (cap test duration and data volumes).
  • Privacy / data locality (use generated dummy data; never touch user files).
  • Potential false positives if run during resource contention (recommend a “quiet window” or multiple samples).
  • Multi‑tenant / large installations may need adjustable limits (configurable ceilings).
  • Flag to disable, limit data volume, or require explicit confirmation for low-resource installations.
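The multiple-samples mitigation above could look like the following sketch: several short, capped transfers whose median is reported, so one contended run does not skew the result. `transfer` is a hypothetical stand-in for a timed WebDAV GET/PUT of dummy data, and the caps are illustrative defaults:

```python
import time
from statistics import median

MAX_SECONDS_PER_SAMPLE = 2.0         # cap test duration per sample
MAX_BYTES_PER_SAMPLE = 64 * 1024**2  # cap data volume per sample

def sample_throughput(transfer) -> float:
    """Run one bounded transfer and return megabits per second."""
    start = time.monotonic()
    sent = transfer(MAX_BYTES_PER_SAMPLE, MAX_SECONDS_PER_SAMPLE)
    elapsed = max(time.monotonic() - start, 1e-9)  # guard against zero division
    return (sent * 8) / (elapsed * 1_000_000)

def measure(transfer, samples: int = 3) -> float:
    """Median of several samples, to dampen transient contention."""
    return median(sample_throughput(transfer) for _ in range(samples))

def fake_transfer(limit_bytes, limit_seconds):
    """Simulated transfer for demonstration only."""
    time.sleep(0.01)  # pretend the transfer took about 10 ms
    return limit_bytes

print(round(measure(fake_transfer)))
```

Raising `samples` trades a little extra test traffic for more robustness, which fits the configurable-ceilings point above for large or multi-tenant installations.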

Related:

Documentation integration:

  • Server Tuning
  • Large File Uploading
  • HTTP Server Configuration
  • Reverse Proxy Configuration
  • PHP Configuration
  • Client configuration
