revert "revert recent VirtualFile asyncification changes #5291" #5479
Closed
18 of 21 tasks
This was referenced Oct 5, 2023
This was linked to pull requests Oct 5, 2023
problame added a commit that referenced this issue Oct 6, 2023
…d_page` (#5480)

Motivation
==========
It's the only user, and the name `_for_write` is wrong as of commit 7a63685:

    Author: Christian Schwarz <christian@neon.tech>
    Date:   Fri Aug 18 19:31:03 2023 +0200

        simplify page-caching of EphemeralFile (#4994)

Notes
=====
This also allows us to get rid of the `WriteBufResult` type.

Also rename `search_mapping_for_write` to `search_mapping_exact`. It makes more sense that way because there is no `_for_write`-locking anymore.

Refs
====
part of #4743
specifically #5479

this is prep work for #5482
problame added a commit that referenced this issue Oct 6, 2023
Problem
=======
Prior to this PR, when we had a cache miss, we'd get back a write guard, fill it, then drop it and retry the read from the cache. If there's severe contention for the cache, it could happen that the just-filled data gets evicted before our retry, resulting in lost work and no forward progress.

Solution
========
This PR leverages the now-available `tokio::sync::RwLockWriteGuard`'s `downgrade()` functionality to turn the filled slot's write guard into a read guard. We don't drop the guard at any point, so forward progress is ensured.

Refs
====
Stacked atop #5480

part of #4743
specifically part of #5479
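The fill-then-downgrade pattern described in the commit message above can be illustrated with a minimal sketch. The slot type (`RwLock<Option<Vec<u8>>>`) and the `read_or_fill` helper are assumptions for illustration, not the pageserver's actual page-cache types:

```rust
use tokio::sync::RwLock;

// Minimal sketch of the fill-then-downgrade pattern; the slot type here is
// hypothetical, not the pageserver's real page-cache slot.
async fn read_or_fill(slot: &RwLock<Option<Vec<u8>>>, fill: impl FnOnce() -> Vec<u8>) -> usize {
    {
        // Fast path: the slot is already filled.
        let read = slot.read().await;
        if let Some(data) = read.as_ref() {
            return data.len();
        }
    }
    // Miss: take the write guard and fill the slot.
    let mut write = slot.write().await;
    if write.is_none() {
        *write = Some(fill());
    }
    // Downgrade to a read guard without ever dropping the lock: nobody can
    // evict the just-filled data before we read it, so we make progress.
    let read = write.downgrade();
    read.as_ref().map(|data| data.len()).unwrap_or(0)
}
```

The design point is that the lock is never released between filling the slot and reading it, so a concurrent eviction cannot undo the work before the caller observes it.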
problame added a commit that referenced this issue Oct 17, 2023
…ns (#5578)

Before this PR, when we restarted the pageserver, we'd see a rush of `$number_of_tenants` concurrent eviction tasks starting to do imitate accesses, building up during the period `[init_order allows activations, $random_access_delay + EvictionPolicyLayerAccessThreshold::period]`. We simply cannot handle that degree of concurrent IO.

We already solved the problem for compactions by adding a semaphore. So, this PR shares that semaphore for use by evictions.

Part of #5479, which is again part of #4743.

Risks / Changes In System Behavior
==================================
* we don't do evictions as timely as we currently do
* we log a bunch of warnings about eviction taking too long
* imitate accesses and compactions compete for the same concurrency limit, so they'll slow each other down through this shared semaphore

Changes
=======
- Move the `CONCURRENT_COMPACTIONS` semaphore into `tasks.rs`
- Rename it to `CONCURRENT_BACKGROUND_TASKS`
- Use it also for the eviction imitate accesses:
  - Imitate accesses are both per-TIMELINE and per-TENANT
  - The per-TENANT part is done by coalescing all the per-TIMELINE tasks via a tokio mutex `eviction_task_tenant_state`.
  - We acquire the CONCURRENT_BACKGROUND_TASKS permit early, at the beginning of the eviction iteration, well before the imitate accesses start (and they may not even start at all in the given iteration, as they happen only every $threshold).
  - Acquiring early is **sub-optimal** because when the per-timeline tasks coalesce on the `eviction_task_tenant_state` mutex, they are already holding a CONCURRENT_BACKGROUND_TASKS permit.
  - It's also unfair because tenants with many timelines win the CONCURRENT_BACKGROUND_TASKS more often.
  - I don't think there's another way, though, without refactoring more of the imitate accesses logic, e.g., making it all per-tenant.
- Add metrics for queue depth behind the semaphore. I found these very useful to understand what work is queued in the system.
  - The metrics are tagged by the new `BackgroundLoopKind`.
  - Starting from a clean slate, I would have used `TaskKind`, but we already had pre-existing labels whose names didn't map exactly to task kind. Also, the task kind is a lower-level detail, so I think it's fine to have a separate enum to identify background work kinds.

Future Work
===========
I guess I could move the eviction tasks from a ticker to "sleep for $period". The benefit would be that the semaphore automatically "smears" the eviction task scheduling over time, so we only have the rush on restart but a smeared-out rush afterward. The downside is that this perverts the meaning of "$period", as we'd actually not run the eviction at a fixed period. It also means that the "took too long" warning & metric becomes meaningless.

Then again, that is already the case for the compaction and gc tasks, which do sleep for `$period` instead of using a ticker.
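A minimal sketch of the shared-semaphore pattern this commit message describes. The `run_limited` helper, the limit of 4, and the usage below are illustrative assumptions, not the actual code in `tasks.rs`:

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

// Hypothetical helper: every background loop iteration (compaction,
// eviction imitate-accesses, ...) funnels through the same semaphore, so the
// loops limit each other instead of overwhelming the disk with concurrent IO.
async fn run_limited<F, T>(limiter: &Semaphore, work: F) -> T
where
    F: std::future::Future<Output = T>,
{
    // Tasks park here while waiting; the PR exposes this queue depth as a
    // metric tagged by `BackgroundLoopKind`.
    let _permit = limiter.acquire().await.expect("semaphore is never closed");
    // The permit is held until `work` completes, then released on drop.
    work.await
}

async fn example() {
    // One process-wide limiter shared by all background loops
    // (the limit of 4 is an illustrative value).
    let limiter = Arc::new(Semaphore::new(4));
    run_limited(&limiter, async { /* compaction iteration */ }).await;
    run_limited(&limiter, async { /* eviction imitate accesses */ }).await;
}
```

Acquiring the permit at the top of the loop iteration mirrors the "acquire early" trade-off discussed above: simple, but sub-optimal and unfair once the per-timeline tasks coalesce on a per-tenant mutex while still holding their permits.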
problame added a commit that referenced this issue Oct 17, 2023
(cherry picked from commit 9256788)
problame added a commit that referenced this issue Nov 8, 2023
problame added a commit that referenced this issue Nov 8, 2023
problame added a commit that referenced this issue Nov 20, 2023
problame added a commit that referenced this issue Nov 28, 2023
problame added a commit that referenced this issue Nov 29, 2023
problame added a commit that referenced this issue Nov 29, 2023
Squashed commit of the following:

commit 5ec61ce
Author: Christian Schwarz <christian@neon.tech>
Date: Wed Nov 29 16:17:12 2023 +0000
    bump

commit 34c33d1
Author: Christian Schwarz <me@cschwarz.com>
Date: Mon Nov 20 14:38:29 2023 +0000
    bump

commit 8fa6b76
Author: Christian Schwarz <me@cschwarz.com>
Date: Mon Nov 20 11:47:19 2023 +0000
    bump

commit 6c359a4
Author: Christian Schwarz <me@cschwarz.com>
Date: Mon Nov 20 11:33:58 2023 +0000
    use neondatabase/tokio-epoll-uring#25

commit 7d484b0
Author: Christian Schwarz <me@cschwarz.com>
Date: Tue Aug 29 19:13:38 2023 +0000
    use WIP tokio_epoll_uring open_at for async VirtualFile::open
    This makes Delta/Image ::load fns fully tokio-epoll-uring

commit 51b26b1
Author: Christian Schwarz <me@cschwarz.com>
Date: Tue Aug 29 12:24:30 2023 +0000
    use `tokio_epoll_uring` for read path

commit a4e6f0c
Author: Christian Schwarz <me@cschwarz.com>
Date: Wed Nov 8 12:36:34 2023 +0000
    Revert "revert recent VirtualFile asyncification changes (#5291)"
    This reverts commit ab1f37e.

fixes #5479
problame added a commit that referenced this issue Dec 1, 2023
problame added a commit that referenced this issue Dec 1, 2023
problame added a commit that referenced this issue Dec 4, 2023
…6000)

Problem
-------
Before this PR, there was no concurrency limit on initial logical size computations.

While logical size computations are lazy in theory, in practice (production), they happen in a short timeframe after restart. This means that on a PS with 20k tenants, we'd have up to 20k concurrent initial logical size calculation requests. This is self-inflicted, needless overload.

This hasn't been a problem so far because the `.await` points on the logical size calculation path never return `Pending`, hence we have a natural concurrency limit of the number of executor threads. But as soon as we return `Pending` somewhere in the logical size calculation path, other concurrent tasks get scheduled by tokio. If these other tasks are also logical size calculations, they eventually pound on the same bottleneck.

For example, in #5479, we want to switch the VirtualFile descriptor cache to a `tokio::sync::RwLock`, which makes us return `Pending`. Without measures like this patch, after PS restart, the VirtualFile descriptor cache thrashes heavily for 2 hours until all the logical size calculations have been computed and the degree of concurrency / concurrent VirtualFile operations is down to regular levels. See the *Experiment* section below for details.

Background
----------
Before this PR, initial logical size calculation was spawned lazily on the first call to `Timeline::get_current_logical_size()`.

In practice (prod), the lazy calculation is triggered by `WalReceiverConnectionHandler` if the timeline is active according to storage broker, or by the first iteration of the consumption metrics worker after restart (`MetricsCollection`).

The spawns by walreceiver are high-priority because logical size is needed by Safekeepers (via walreceiver `PageserverFeedback`) to enforce the project logical size limit. The spawns by metrics collection are not on the user-critical path and hence low-priority. [^consumption_metrics_slo]

[^consumption_metrics_slo]: We can't delay metrics collection indefinitely because there are TBD internal SLOs tied to metrics collection happening in a timely manner (neondatabase/cloud#7408). But let's ignore that in this issue.

The ratio of walreceiver-initiated spawns vs consumption-metrics-initiated spawns can be reconstructed from logs (`"spawning logical size computation from context of task kind {:?}"`). PRs #5995 and #6018 add metrics for this.

A first investigation of the ratio led to the discovery that walreceiver spawns 75% of init logical size computations. That's because of two bugs:
- In Safekeepers: #5993
- In the interaction between Pageservers and Safekeepers: #5962

The safekeeper bug is likely primarily responsible, but we don't have the data yet. The metrics will hopefully provide some insights. When assessing production-readiness of this PR, please assume that neither of these bugs is fixed yet.

Changes In This PR
------------------
With this PR, initial logical size calculation is reworked as follows:

First, all initial logical size calculation task_mgr tasks are started early, as part of timeline activation, and run a retry loop with long back-off until success. This removes the lazy computation; it was needless complexity, because in practice we compute all logical sizes anyway, since consumption metrics collects them.

Second, within the initial logical size calculation task, each attempt queues behind the background loop concurrency limiter semaphore. This fixes the performance issue that we pointed out in the "Problem" section earlier.

Third, there is a twist to queuing behind the background loop concurrency limiter semaphore. Logical size is needed by Safekeepers (via walreceiver `PageserverFeedback`) to enforce the project logical size limit. However, we currently open walreceiver connections even before we have an exact logical size. That's bad, and I'll build on top of this PR to fix that (#5963). But for the purposes of this PR, we don't want to introduce a regression, i.e., we don't want to provide an exact value later than before this PR. The solution is to introduce a priority-boosting mechanism (`GetLogicalSizePriority`), allowing callers of `Timeline::get_current_logical_size` to specify how urgently they need an exact value. The effect of specifying high urgency is that the initial logical size calculation task for the timeline will skip the concurrency limiting semaphore. This should yield effectively the same behavior as we had before this PR with lazy spawning. (A sketch of this priority-boosting idea follows this commit message.)

Last, the priority-boosting mechanism obsoletes the `init_order`'s grace period for initial logical size calculations. It's a separate commit to reduce the churn during review. We can drop that commit if people think it's too much churn, and commit it later once we know this PR here worked as intended.

Experiment With #5479
---------------------
I validated this PR combined with #5479 to assess whether we're making forward progress towards asyncification.

The setup is an `i3en.3xlarge` instance with 20k tenants, each with one timeline that has 9 layers. All tenants are inactive, i.e., not known to SKs nor storage broker. This means all initial logical size calculations are spawned by the consumption metrics `MetricsCollection` task kind.

The consumption metrics worker starts requesting logical sizes at low priority immediately after restart. This is achieved by deleting the consumption metrics cache file on disk before starting PS.[^consumption_metrics_cache_file]

[^consumption_metrics_cache_file]: The consumption metrics worker persists its interval across restarts to achieve consistent reporting intervals across PS restarts; delete the state file on disk to get predictable (and, I believe, worst-case in terms of concurrency during PS restart) behavior.

Before this patch, all of these timelines would do their initial logical size calculation in parallel, leading to extreme thrashing in the page cache and virtual file cache.

With this patch, the virtual file cache thrashing is reduced significantly (from 80k `open`-system-calls/second to ~500 `open`-system-calls/second during loading).

### Critique

The obvious critique of the above experiment is that there's no skipping of the semaphore, i.e., the priority-boosting aspect of this PR is not exercised.

If even just 1% of our 20k tenants in the setup were active in SK/storage_broker, then 200 logical size calculations would skip the limiting semaphore immediately after restart and run concurrently.

Further critique: given the two bugs wrt timeline inactive vs active state that were mentioned in the Background section, we could have 75% of our 20k tenants being (falsely) active on restart. So...

This Doesn't Make Us Ready For Async VirtualFile
------------------------------------------------
This PR is a step towards asynchronous `VirtualFile`, aka #5479 or even #4744. But it doesn't yet enable us to ship #5479.

The reason is that this PR doesn't limit the amount of high-priority logical size computations. If there are many high-priority logical size calculations requested, we'll fall over just like we would if #5479 were applied without this PR. And currently, at the very least due to the bugs mentioned in the Background section, we run thousands of high-priority logical size calculations on PS startup in prod.

So, at a minimum, we need to fix these bugs. Then we can ship #5479 and #4744, and things will likely be fine under normal operation. But in high-traffic situations, overload problems will still be more likely to happen, e.g., VirtualFile cache descriptor thrashing.

The solution candidates for that are orthogonal to this PR though:
* global concurrency limiting
* per-tenant rate limiting => #5899
* load shedding
* scaling bottleneck resources (fd cache size (neondatabase/cloud#8351), page cache size (neondatabase/cloud#8351), spread load across more PSes, etc.)

Conclusion
----------
Even with the remarks in the previous section, we should merge this PR because:
1. it's an improvement over the status quo (esp. if the aforementioned bugs wrt timeline active/inactive are fixed)
2. it prepares the way for #6010
3. it gets us close to shipping #5479 and #4744
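A hedged sketch of the priority-boosting idea from the "Third" point above. `GetLogicalSizePriority` is the name mentioned in the commit message, but the variant names, the function signature, and the limiter plumbing here are assumptions for illustration, not the pageserver's actual API:

```rust
use tokio::sync::Semaphore;

// Variant names are assumptions; the commit message only says that callers of
// `Timeline::get_current_logical_size` can signal how urgently they need an
// exact value.
#[derive(Clone, Copy)]
enum GetLogicalSizePriority {
    /// Walreceiver needs the exact size for `PageserverFeedback` now:
    /// skip the background-loop concurrency limiter.
    Urgent,
    /// Consumption metrics can wait: queue behind the limiter.
    Background,
}

async fn initial_logical_size_calculation(
    limiter: &Semaphore,
    priority: GetLogicalSizePriority,
) -> u64 {
    // High-priority callers bypass the semaphore; everyone else queues.
    let _maybe_permit = match priority {
        GetLogicalSizePriority::Urgent => None,
        GetLogicalSizePriority::Background => {
            Some(limiter.acquire().await.expect("semaphore is never closed"))
        }
    };
    // ... walk the timeline's layers and sum up the logical size ...
    0 // placeholder result for the sketch
}
```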
problame added a commit that referenced this issue Dec 7, 2023
problame added a commit that referenced this issue Dec 8, 2023
problame added a commit that referenced this issue Dec 8, 2023
And also leave a comment on how to determine current size.

Kind of follow-up to #6066

refs neondatabase/cloud#8351
refs #5479
problame added a commit that referenced this issue Dec 8, 2023
And also leave a comment on how to determine current size.

Kind of follow-up to #6066

refs neondatabase/cloud#8351
refs #5479
problame added a commit that referenced this issue Dec 8, 2023
These changes help with identifying thrashing.

The existing `pageserver_page_cache_find_victim_iters_total` is already useful, but it doesn't tell us how many individual find_victim() calls are happening, only how many clock-LRU steps happened in the entire system, without info about whether we actually needed to evict other data vs. just scan for a long time, e.g., because the cache is large.

The changes in this PR allow us to
1. count each possible outcome separately, esp. evictions
2. compute mean iterations/outcome

I don't think anyone except me was paying close attention to `pageserver_page_cache_find_victim_iters_total` before, so I think the slight behavior change of also counting iterations for the 'iters exceeded' case is fine.

refs neondatabase/cloud#8351
refs #5479
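A sketch of what per-outcome counters can look like, using the `prometheus` and `once_cell` crates. The metric names, label values, and helper below are illustrative assumptions; the pageserver's actual metric definitions may differ:

```rust
use once_cell::sync::Lazy;
use prometheus::{register_int_counter_vec, IntCounterVec};

// Illustrative names; label values could be e.g. "found_free", "evicted",
// "iters_exceeded".
static FIND_VICTIM_CALLS: Lazy<IntCounterVec> = Lazy::new(|| {
    register_int_counter_vec!(
        "page_cache_find_victim_calls_total",
        "find_victim() calls, by outcome",
        &["outcome"]
    )
    .unwrap()
});

static FIND_VICTIM_ITERS: Lazy<IntCounterVec> = Lazy::new(|| {
    register_int_counter_vec!(
        "page_cache_find_victim_iters_total",
        "clock-LRU steps taken inside find_victim(), by outcome",
        &["outcome"]
    )
    .unwrap()
});

// Record one completed find_victim() call and how many clock-LRU steps it took.
fn record_find_victim_outcome(outcome: &str, iterations: u64) {
    FIND_VICTIM_CALLS.with_label_values(&[outcome]).inc();
    FIND_VICTIM_ITERS
        .with_label_values(&[outcome])
        .inc_by(iterations);
}

// Mean iterations per outcome can then be derived, e.g. in PromQL:
//   rate(page_cache_find_victim_iters_total[5m])
//     / rate(page_cache_find_victim_calls_total[5m])
```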
problame added a commit that referenced this issue Dec 8, 2023
And also leave a comment on how to determine current size.

Kind of follow-up to #6066

refs neondatabase/cloud#8351
refs #5479
problame added a commit that referenced this issue Dec 11, 2023
This was referenced Jan 4, 2024
problame added a commit that referenced this issue Jan 9, 2024
problame added a commit that referenced this issue Jan 9, 2024
problame added a commit that referenced this issue Jan 9, 2024
problame added a commit that referenced this issue Jan 9, 2024
problame added a commit that referenced this issue Jan 9, 2024
Not done until shipped
problame added a commit that referenced this issue Jan 11, 2024
problame added a commit that referenced this issue Jan 11, 2024
This was referenced Jan 11, 2024
problame added a commit that referenced this issue Jan 11, 2024
Proved useful when benchmarking the 20k-tenant setup while validating #5479
Staging showed no meaningful differences: https://neondb.slack.com/archives/C033RQ5SPDH/p1704912242795849
This is shipping to prod this week: 93450f1
This is the last hold-out for #4743 and a blocker for #4744.

Tasks
- `{,try_}lock_for_write` into `memorize_materialized_page` #5480

Follow-Ups
- `VirtualFile::open_with_options` is wasteful under pressure #6065