Boost::filesystem implementation #1
Merged: 5133n merged 1 commit into frstrtr:origin/sharechain/async_thread on Apr 19, 2021
Conversation
5133n added a commit that referenced this pull request on Feb 28, 2022
5133n added a commit that referenced this pull request on Jan 15, 2023
frstrtr added a commit that referenced this pull request on Mar 15, 2026
Block verification results now display prominent banners:

    +++ BLOCK CONFIRMED +++
    Height:     4607640
    Block hash: 05ae900882314666...
    Verified:   check #1 (10s after submission)

    --- BLOCK ORPHANED ---
    Height:     4607640
    Block hash: 43f35be9a61d2b8e...
    Checked:    3 times over 120s — not in best chain

Includes the full block hash and the time since submission for
operational visibility.
frstrtr added a commit that referenced this pull request on Mar 22, 2026
Restore p2pool's Phase 1 logic (data.py:2077-2108):
- Walk backward from each unverified head
- attempt_verify on each share — break on first success
- Failed shares added to bads → removed from chain
- Unrooted chains without verification → request parents
This is the #1 fix: it stops dead forks from accumulating in the raw
chain (was 1200 raw vs 400 verified → unbounded growth).
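As a sketch of the restored walk — in C++, with hypothetical `Tracker`
and `Share` types; the authoritative logic lives in p2pool's
data.py:2077-2108 and its port in the sharechain code:

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <vector>

// Hypothetical stand-ins for the real tracker types.
struct Share { std::string hash, parent; bool valid; };

struct Tracker {
    std::map<std::string, Share> raw;        // all shares, verified or not
    std::set<std::string> verified;          // hashes that passed verification
    std::set<std::string> bads;              // failed shares, pruned from chain
    std::vector<std::string> parent_requests;

    bool attempt_verify(const Share& s) { return s.valid; }

    // Phase 1: walk backward from an unverified head toward a verified
    // ancestor, breaking on the first successful verification, marking
    // failures bad, and requesting parents for unrooted chains.
    void think_phase1(const std::string& head) {
        std::string cur = head;
        while (!cur.empty() && !verified.count(cur)) {
            auto it = raw.find(cur);
            if (it == raw.end()) {            // unrooted: ask peers for it
                parent_requests.push_back(cur);
                return;
            }
            if (attempt_verify(it->second)) { // first success roots the chain
                verified.insert(cur);
                break;
            }
            bads.insert(cur);                 // failed share: drop from chain
            cur = it->second.parent;
        }
        for (const auto& b : bads) raw.erase(b);
    }
};
```

The pruning step is what keeps raw and verified counts from diverging
without bound.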
frstrtr added a commit that referenced this pull request on Apr 9, 2026
Root cause of event loop freeze (#1 stability blocker): timer handlers
that throw before their reschedule call permanently die — the
exception propagates through ioc.run_for(), gets caught as
"non-fatal", but the timer never reschedules. Over 15-45 minutes
enough timers die that the node becomes unresponsive (0% CPU, all
threads in futex_wait).

Fix: move the reschedule BEFORE the work in all ~15 recurring timers.
The pattern is now: if (ec) return → reschedule → try { work } catch.
This ensures the timer survives even if the work throws, while still
respecting cancel/shutdown semantics (the ec check runs before the
reschedule).

Timers fixed (reschedule-before-work + try/catch):
- LTC embedded header sync (60s)
- LTC embedded mempool cleanup (5m)
- DOGE embedded header sync (5-60s)
- DOGE embedded mempool cleanup (5m)
- Think/clean_tracker (5s) — added catch(...)
- Monitor (30s) — added catch(...)
- Merged block retry (10s bounded)
- CoinBroadcaster maintenance (5s)
- CoinPeerManager refresh, save, fixed_seed
- Stratum work push (30s)
- WebServer stat log (60s)

Watchdog enhanced: checks per-timer heartbeat timestamps every 60s and
logs TIMER DEAD if any timer exceeds 3x its expected interval.

Thread-pool post() callbacks wrapped in try/catch:
- LTC header post-process (getheaders continuation)
- LTC reorg handler (UTXO disconnect + work refresh)
- DOGE header post-process
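A minimal simulation of the reschedule-before-work pattern — `Loop`
and `arm` are illustrative stand-ins, not the real boost::asio or
c2pool API:

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <stdexcept>
#include <utility>

// Minimal simulated event loop standing in for boost::asio.
struct Loop {
    std::queue<std::function<void()>> pending;
    void post(std::function<void()> f) { pending.push(std::move(f)); }
    // Run up to n queued handlers, swallowing exceptions as "non-fatal"
    // the way the ioc.run_for() caller did.
    int run(int n) {
        int ran = 0;
        while (ran < n && !pending.empty()) {
            auto f = std::move(pending.front());
            pending.pop();
            try { f(); } catch (...) { /* logged as non-fatal */ }
            ++ran;
        }
        return ran;
    }
};

// The fixed pattern: reschedule BEFORE doing the work, and wrap the
// work in try/catch so a throw cannot kill the timer chain.
void arm(Loop& loop, int& ok, const std::function<void()>& work) {
    loop.post([&loop, &ok, work] {
        arm(loop, ok, work);                 // reschedule first — timer survives
        try { work(); ++ok; } catch (...) { /* work failed; timer lives */ }
    });
}
```

With the old ordering (work first, reschedule after), the first throw
would end the chain permanently; here the next fire is already queued
before the work can fail.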
frstrtr added a commit that referenced this pull request on Apr 19, 2026
heaptrack on .40 (a 15-minute capture) identified
MiningInterface::get_pplns_for_tip as the #1 memory consumer (~280MB
peak, 4 of the top 10 PEAK MEMORY CONSUMERS, all from the same call
site at web_server.cpp:3627), driven by ~17000 deep JSON copies per
refresh every 2s.

Root cause: when a share's tip wasn't in m_pplns_per_tip, the function
returned m_pplns_per_tip.begin()->second — a copy of an arbitrary
other share's PPLNS data. Both callers in c2pool_refactored.cpp:3580
and 3732 iterate every share in result['shares'] and gate inclusion on
!p.empty(). The fallback always returned non-empty (whenever the cache
held anything), so:

Memory: every refresh deep-copied one cached PPLNS entry ~17000 times
into the result JSON. At ~5KB per entry, that's ~85MB of churn per
refresh, baked into a cached HTTP response held for 2s.

Correctness: the per-share PPLNS shown in the dashboard was wrong for
any share whose tip wasn't currently cached. Since MAX_PPLNS_CACHE
caps at 200 and there are ~17000 shares, the vast majority of shares
displayed some other share's payouts labelled as theirs.

Fix: an honest miss. Return empty json on a cache miss; the caller's
!empty gate then correctly excludes uncached shares from the result
map.

p2pool reference: p2pool returns nothing for unknown share tips and
lets the caller decide; it never mislabels another share's payouts as
the requested share's.
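A reduced sketch of the bug and the fix, with std::string standing in
for the JSON payload and hypothetical function names:

```cpp
#include <cassert>
#include <map>
#include <string>

// Stand-in for the PPLNS cache; std::string replaces the real JSON
// payload, and these names are illustrative, not the c2pool API.
std::map<std::string, std::string> pplns_per_tip;

// Buggy fallback: on a miss, return a copy of some arbitrary cached
// entry — always non-empty whenever the cache holds anything.
std::string get_pplns_buggy(const std::string& tip) {
    auto it = pplns_per_tip.find(tip);
    if (it != pplns_per_tip.end()) return it->second;
    if (!pplns_per_tip.empty())
        return pplns_per_tip.begin()->second;  // another share's payouts
    return {};
}

// Honest miss: return empty so the caller's !empty() gate excludes
// uncached shares from the result map.
std::string get_pplns_fixed(const std::string& tip) {
    auto it = pplns_per_tip.find(tip);
    return it != pplns_per_tip.end() ? it->second : std::string{};
}
```

The behavioural difference is exactly the correctness bug: the buggy
path labels a cached entry as belonging to an uncached tip.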
frstrtr added a commit that referenced this pull request on Apr 20, 2026
…yout
Combines Phase A polish (A) and the first Phase B increment (C) per
the M1 sign-off §4 / plugin-arch doc §16.1 migration sequence.
(A) Bundle-size gate
- size.config.json  per-bundle byte budgets with a warnAt threshold
  (0.90). Matches the M1 caps: shared-core 40 KB, sharechain-explorer
  120 KB, pplns-view 60 KB, optional plugins 30 KB each.
- scripts/check-bundle-size.mjs  ANSI-coloured report; fails CI if any
  tracked bundle exceeds its budget; warns at >= warnAt; not-yet-built
  bundles are informational-only (so the gate works between phases
  without editing the config).
- npm run size  fires the gate standalone.
- npm run verify  typecheck + build + test + manifest + size in one
  command — the contributor's pre-push chain. Post-phase-2 this is
  what explorer-modules CI runs.
(C-1) Phase B increment 1 — grid layout math
Pure-function extraction of the Explorer grid geometry from
dashboard.html:4676-4691 (cols/rows/cssWidth) and :4874-4878 (cell
position). No DOM, no canvas; just math. Deliberate first increment
because layout is the most geometry-dense part of the defrag object
and easy to pixel-diff via unit tests.
- src/explorer/grid-layout.ts
computeGridLayout(opts) -> { cols, rows, cssWidth, cssHeight,
step, cellSize, gap, marginLeft,
shareCount }
cellPosition(l, i) -> { index, col, row, x, y } | null
cellAtPoint(l, x, y) -> index | null (gap-aware hit test)
iterCells(l) -> Generator<CellPosition>
GridLayoutPlugin -> id 'explorer.grid.layout',
provides 'layout.grid',
slot 'explorer.layout.grid'
- src/explorer/index.ts Explorer bundle public API. Re-exports
SharedCore (self-contained bundle) and
adds Explorer-specific plugins +
registerExplorerBaseline(host).
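For illustration only, a hypothetical C++ reconstruction of the kind
of geometry computeGridLayout performs; the authoritative formulas
live in src/explorer/grid-layout.ts (extracted from
dashboard.html:4676-4691), so every expression below is an assumption:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Assumed shapes — not the real TypeScript interfaces.
struct GridLayout { int cols, rows, shareCount; double cellSize, gap, step; };
struct CellPos { int index, col, row; double x, y; };

GridLayout computeGridLayout(double containerWidth, double cellSize,
                             double gap, int shareCount) {
    GridLayout l{};
    l.cellSize = cellSize;
    l.gap = gap;
    l.step = cellSize + gap;
    l.shareCount = std::max(0, shareCount);          // negative counts clamp
    // The last column needs no trailing gap, hence +gap before dividing;
    // cols floors to 1 on a too-narrow container.
    l.cols = std::max(1, (int)std::floor((containerWidth + gap) / l.step));
    l.rows = (l.shareCount + l.cols - 1) / l.cols;   // ceil division
    return l;
}

// Row-major placement: index 0 at the top-left origin.
CellPos cellPosition(const GridLayout& l, int i) {
    int col = i % l.cols, row = i / l.cols;
    return {i, col, row, col * l.step, row * l.step};
}
```

This mirrors the unit-test categories listed below (negative-count
clamp, wrap-to-multiple-rows, cols=1 floor) without claiming to match
the extracted math term for term.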
Bundle infrastructure:
- esbuild.config.mjs now emits two bundles:
    dist/shared-core.js          (25.1 KB / 40 KB)
    dist/sharechain-explorer.js  (26.4 KB / 120 KB)
  Self-contained: the Explorer vendors SharedCore in-bundle.
  Externalization is a post-phase-2 optimisation.
Tests:
- tests/unit/grid-layout.test.ts
21 tests covering:
* empty / negative shareCount clamps
* wrap-to-multiple-rows math
* narrow-container cols=1 floor
* containerPadding override
* cellPosition origin + wrap + OOB
* cellAtPoint gap-rejection, margin-
rejection, negative-coords, OOB
index rejection
* round-trip cellPosition <-> cellAtPoint
* iterCells generator output
* plugin registration via host.getCapability
Status: 63/63 tests pass (42 prior + 21 new). Typecheck clean under
strict + exactOptionalPropertyTypes + noUncheckedIndexedAccess.
Both bundles under budget. Pipeline green end-to-end via
`npm run verify`.
Next: Phase B increment 2 — canvas renderer plugin that consumes
`layout.grid` + a coin-scoped `Colors` capability to paint cells.
frstrtr added a commit that referenced this pull request on Apr 20, 2026
Three-phase animation state machine (spec §6, dashboard.html:4866-5400).
This commit ships the core — timing math, stagger schedules,
position interpolation for dying/wave/born tracks, and the
controller (start/tick/queueNext/reset). Scale, colour-lerp,
particle dissolution and card overlays land in subsequent commits;
each is a clearly isolable addition that the pure-function structure
accommodates without redesign.
Phase timing (verbatim from dashboard.html:4977-4982):
phase1Dur = 3000 DYING
phase2Dur = fast ? 2000 : 4000 WAVE (spec §6 text has typo,
code is authoritative)
phase2Start = phase1Dur
phase3Start = phase2Start + phase2Dur * 0.7 (overlap)
phase3Dur = 3000 BORN
duration = phase3Start + phase3Dur
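The timing arithmetic above transcribes directly; the struct and
function names here are ours, the constants are the quoted ones:

```cpp
#include <cassert>

// Phase timing per dashboard.html:4977-4982 (names are illustrative).
struct PhaseTiming {
    double phase1Dur, phase2Start, phase2Dur, phase3Start, phase3Dur, duration;
};

PhaseTiming computePhaseTiming(bool fast) {
    PhaseTiming t{};
    t.phase1Dur   = 3000.0;                            // DYING
    t.phase2Dur   = fast ? 2000.0 : 4000.0;            // WAVE
    t.phase2Start = t.phase1Dur;
    t.phase3Start = t.phase2Start + t.phase2Dur * 0.7; // BORN overlaps WAVE
    t.phase3Dur   = 3000.0;
    t.duration    = t.phase3Start + t.phase3Dur;
    return t;
}
```

The 0.7 overlap factor means BORN begins while the last 30% of the
WAVE is still playing, which is why the total is 8800 ms slow and
7400 ms fast rather than a plain sum of the three phases.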
Stagger schedules (all preserved verbatim from dashboard.html):
DYING — last dying share first, 150 ms per share. Newer end of
the window dies visibly before older shares (mirrors the
natural tail-eviction order).
WAVE — tail-first per dashboard.html:5146-5151:
distFromTail = (N-1) - newIndex
fraction = distFromTail / (N-1)
shareStart = fraction * phase2Dur * 0.7
shareT = clamp01((p2elapsed - shareStart) / 600)
ease = 1 - (1-shareT)^3 (easeOut-cubic)
Tail starts first, head last; each share animates over a
600 ms window regardless of phase2Dur.
BORN — newest share first, 150 ms per share, full phase3Dur
window per share.
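The WAVE stagger formulas quoted above, as a single checkable function
(names ours, formulas from dashboard.html:5146-5151):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

double clamp01(double v) { return std::min(1.0, std::max(0.0, v)); }

// Eased progress of share `newIndex` of `n`, at `p2elapsed` ms into
// phase 2: tail-first stagger, fixed 600 ms per-share window,
// easeOut-cubic.
double waveProgress(int newIndex, int n, double p2elapsed, double phase2Dur) {
    int distFromTail = (n - 1) - newIndex;
    double fraction = n > 1 ? (double)distFromTail / (n - 1) : 0.0;
    double shareStart = fraction * phase2Dur * 0.7;
    double t = clamp01((p2elapsed - shareStart) / 600.0);
    return 1.0 - std::pow(1.0 - t, 3.0);
}
```

Because shareStart scales with distance from the tail, the tail share
is already easing while the head share's window has not opened yet.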
Input / output:
AnimationInput {
oldShares, newShares, addedHashes, evictedHashes,
oldLayout, newLayout, userContext, palette,
hashOf, fast?
}
AnimationPlan {
tEnd, phase1Start/Dur, phase2Start/Dur, phase3Start/Dur,
frameAt(t): FrameSpec
}
FrameSpec {
cells: CellFrame[], // per-share {x,y,size,color,alpha,track}
backgroundColor,
layout // post-merge layout
}
Controller:
createAnimationController() → {
isRunning(), start(plan, now), tick(now), queueNext(plan), reset()
}
- tick() returns the frame to paint, or null when idle.
- queueNext() during running: queued plan starts automatically on
next tick after current tEnd (§6 threshold rule #2, _animDeferred).
- queueNext() while idle: queued plan starts on the next tick.
- reset() drops current + queued.
Also exposed:
SKIP_ANIMATION_NEW_COUNT_THRESHOLD = 100 (§6 rule #1, callers
skip build/start when
newCount >= 100)
Helpers: clamp01, lerp, easeInOut, computePhaseTiming(fast?)
Plugin: explorer.animator.three-phase (provides 'animator.grid')
Tests (19): phase-timing constants for slow + fast; clamp/lerp/ease
sanity; empty input edge; fast vs slow tEnd; wave position
interpolation at t=0 and t=tEnd; wave stagger (tail moves before
head at early phase2 tick); dying staggered alpha decay; dying at
phase1 end alpha ~= 0; born spawns below grid and lands at (0,0);
born alpha fade-in over first 30%; controller idle tick; controller
start+tick+finish becomes idle; controller queueNext during running;
controller queueNext on idle starts next tick; controller reset
drops both.
Status
- 132/132 tests pass in 2.2s (113 prior + 19 new)
- Typecheck clean
- Bundles: shared-core 25.1 KB / 40, sharechain-explorer 33.9 KB /
120 — both under budget
- Pipeline green end-to-end via npm run verify
Next commits layered onto this core:
Phase B #5 scale effects (lift-slide-land wave scale, dying-rise
scale, born shrink from bornScale→1x). Requires no new
types — just richer CellFrame.size interpolation and
paint-program extension for centred-scale fillRect.
Phase B #6 colour lerp — dying lerps toward palette.dead across
its stagger window; born coalesces from unverified to
coin colour.
Phase B #7 particle effects (ash dissolution, birth coalescence)
and card overlays (miner addr + PPLNS % text during
hold frames). Largest increment — may split again.
Phase B #8 wire mergeDelta + animator + gridRenderer into a
RealTime plugin (SSE subscription + delta application
+ animation trigger). Unlocks the demo against a live
c2pool server.
frstrtr added a commit that referenced this pull request on Apr 20, 2026
Wires Transport subscribeStream + fetchDelta + mergeDelta + Animator
+ renderer into a single live-updates runtime. Operational parity
with dashboard.html's RealTime-mode pipeline.
src/explorer/realtime.ts (~370 LOC)
- RealtimeOrchestrator — pure state machine, no DOM, no RAF.
constructor(RealtimeConfig)
start() fetches window, subscribes to stream
stop() unsubscribe + AbortController.abort()
refresh() forces full window rebuild
getState() { window, animating, hasQueued, started,
shareCount, lastAppliedTip, deltaInFlight }
currentFrame(now) FrameSpec | null (idle returns static frame)
buildStaticFrame() no-animation snapshot for initial paint +
post-animation idle frames + resize events
Contract (spec §5 + §6):
1. start() → fetchWindow → subscribeStream.
2. onTip({hash}) → dedup against lastAppliedTip + pendingTip.
3. Only one delta request in flight at a time; if a newer tip
arrives during a fetch, drain it right after the current one
settles (coalesce).
4. mergeDelta → if fork_switch, trigger full rebuild.
5. If added.length >= skipAnimationThreshold (default 100 per
spec §6 rule #1), skip animation and reset the animator.
6. Otherwise buildAnimationPlan + either start (idle animator)
or queueNext (running animator per rule #2, _animDeferred).
7. onReconnect → fetchTip and apply if changed (delta v1 §A.3
catch-up semantics).
8. All Transport calls thread AbortSignal from the orchestrator's
internal AbortController so stop() cancels in-flight work.
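Points 2-3 of the contract (tip dedup plus single-in-flight
coalescing) can be sketched as a tiny state machine — illustrative
names, no AbortSignal handling or delta merging:

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <vector>

// Sketch of the dedup + coalesce rule; not the real orchestrator.
struct TipCoalescer {
    std::string lastAppliedTip;
    std::optional<std::string> pendingTip;
    bool deltaInFlight = false;
    std::vector<std::string> fetched;   // deltas actually requested

    void onTip(const std::string& hash) {
        if (hash == lastAppliedTip) return;               // dedup
        if (deltaInFlight) { pendingTip = hash; return; } // coalesce to newest
        fetchDelta(hash);
    }

    void fetchDelta(const std::string& hash) {
        deltaInFlight = true;
        fetched.push_back(hash);
    }

    void onDeltaSettled(const std::string& hash) {
        deltaInFlight = false;
        lastAppliedTip = hash;
        if (pendingTip && *pendingTip != lastAppliedTip) {
            auto next = *pendingTip;                      // drain newest tip
            pendingTip.reset();
            fetchDelta(next);
        } else {
            pendingTip.reset();
        }
    }
};
```

Intermediate tips that arrive during a fetch are overwritten by newer
ones, so at most one catch-up delta is issued when the in-flight
request settles.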
- createRealtime(RealtimeDOMOptions) → RealtimeController
DOM adapter: owns the canvas + requestAnimationFrame loop.
Sizes canvas for devicePixelRatio each frame; paints via
buildAnimatedPaintProgram + executePaintProgram. stop() cancels
the RAF, calls orchestrator.stop(), and destroys the renderer.
- RealtimePlugin — id 'explorer.realtime.default', provides
'realtime.orchestrator', fills slot 'explorer.data.realtime'.
Capabilities expose { RealtimeOrchestrator, createRealtime } for
plugin consumers. Registered via registerExplorerBaseline.
Type tightening
- ShareForClassify gains `h: string` (spec §5.1 — every share has
one; omitting it was an oversight). Now satisfies DeltaShare
directly.
- DeltaShare relaxed: dropped the `[key: string]: unknown` index
signature — only requires `{ h: string }`. Strict interfaces
(ShareForClassify) now satisfy it without casts.
- realtime.ts extracts `h` from the provided hashOf (default:
(s) => s.h) rather than dictating the shape.
Tests (tests/unit/realtime.test.ts — 12 tests)
- start: fetches window + sets tip; empty window valid
- tip triggers delta fetch and appends; delta.since = current.tip
- tip dedup on same hash
- tip coalescing: max 1 delta in flight under rapid-fire tips
- fork_switch triggers second fetchWindow
- skipAnimationThreshold: bulk updates skip animation path
- below threshold: animation runs and completes via currentFrame(t)
- stop unsubscribes + halts tip processing after stop
- reconnect: fetches tip, applies delta if changed
- fetchWindow error surfaces via onError as structured ExplorerError
- currentFrame static snapshot reflects live window state
Status
- 166/166 tests pass in ~2.3s (154 prior + 12 new)
- Typecheck clean
- Bundles: shared-core 25.1 KB / 40 (63%), sharechain-explorer 42.0
KB / 120 (35%) — both under budget; 78 KB headroom for
particles + cards + any future wiring.
Next: Phase B #8 particles + card overlays (largest remaining
visual piece; ~300-400 LOC), or Qt refactor step 1 (CMake deps,
~30 LOC, parallel track). After particles + cards, Phase B is
feature-complete and M2 pixel-diff work begins.
frstrtr added a commit that referenced this pull request on Apr 20, 2026
Builds the visual-regression infrastructure the Explorer spec §11
anchors against the (freshly-tagged) explorer-baseline-v0 on master @
d95779a. Opens dashboard.html in both modes against a mock c2pool
server, screenshots #defrag-canvas in each, diffs with pixelmatch, and
reports the delta vs the threshold. Current measured delta: 5.02%,
passing the 7% initial threshold with a documented reduction path.

Also tags master:
    git tag -a explorer-baseline-v0 master (annotated)
so every future commit on the explorer-module branch can pixel-diff
against a fixed anchor — spec §11 step 1.

Harness layout (tests/visual/):
- fixtures/generate.mjs  Seeded mulberry32 RNG (seed=0xC2FFEE),
  deterministic 200-share chain across the full V36-native / V35→V36 /
  V35-legacy / stale / dead / fee / block mix. Miner 'XMINEADDRESS'
  triggers the mine-colour branches. Emits window.json, tip.json,
  stats.json, merged_payouts.json.
- mock-server.mjs  Node http server listening on 127.0.0.1:18082.
  Serves the fixtures for the endpoints dashboard.html hits. The SSE
  stream is keep-alive-only — no tip pushes during the capture window,
  so the screenshots are static. Empty-object replies for non-Explorer
  endpoints (/peers, /uptime, /stratum_stats, …) keep the rest of
  dashboard.html from error-cascading.
- capture.mjs  puppeteer-core against the system Chrome
  (/usr/bin/google-chrome by default; CHROME_BIN overrides). 1280x900
  viewport, dpr 1, font hinting off. Loads dashboard.html twice —
  inline (no flag), then bundled (?new-explorer=1) — waits for
  #defrag-canvas plus a 3 s render-settle pause, then screenshots that
  element only. Stable sizing: same fixture → same cols/rows →
  identically-sized canvas on both paths.
- diff.mjs  pngjs + pixelmatch. Writes out/diff.png (red-highlighted
  delta), reports {pixels, fraction, threshold}, and exits non-zero on
  exceed. Threshold: 0.07 default (7%), override via the THRESHOLD env
  var or a positional arg.
- run.sh  Orchestrator: generate → start server → capture → diff →
  kill server. An EXIT trap guarantees cleanup.
- README.md  Design notes + measured-delta table + three known
  divergence sources and how future increments close them.

devDeps: puppeteer-core@^23, pixelmatch@^6, pngjs@^7. No browser
download — uses the system Chrome. Install footprint ~5 MB total.

npm run visual  Runs the whole pipeline; exits non-zero on threshold
exceed.

.gitignore  tests/visual/out/ ignored — screenshots regenerate on
every run. Fixture JSON and scripts are tracked.

Status:
- npm run visual PASSES with a 5.02% delta vs the 7% threshold
- Fixtures: 200 shares, 12 miners, 2 blocks
- Tests unchanged: 192/192 still green
- Bundle sizes: shared-core 25.1 KB, sharechain-explorer 48.3 KB —
  unchanged
- explorer-baseline-v0 tag: annotated, points at master @ d95779a,
  local-only (not pushed)

Next tightening passes, in priority order:
- Phase B #11 particles + cards — expected to flatten row-boundary AA
  differences (source #2 in the README) → 2-3% delta.
- Cell border pass in grid-paint — match inline's fill-then-stroke
  order (source #1) → 1% delta.
- Final parity audit — pin the threshold at 0.1%, bake into CI.
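The fixtures' determinism rests on mulberry32; for reference, a C++
port (uint32_t wrap-around mirrors JavaScript's Math.imul semantics,
so the same seed should produce the same stream — we have not
cross-checked exact values here):

```cpp
#include <cassert>
#include <cstdint>

// C++ port of the mulberry32 PRNG used by fixtures/generate.mjs.
struct Mulberry32 {
    uint32_t state;
    explicit Mulberry32(uint32_t seed) : state(seed) {}
    uint32_t next() {
        state += 0x6D2B79F5u;
        uint32_t t = state;
        t = (t ^ (t >> 15)) * (t | 1u);
        t ^= t + (t ^ (t >> 7)) * (t | 61u);
        return t ^ (t >> 14);
    }
    // Uniform double in [0, 1), mirroring the JS `/ 4294967296` step.
    double nextDouble() { return next() / 4294967296.0; }
};
```

Two generators seeded identically (e.g. with the fixture seed
0xC2FFEE) emit identical streams, which is what makes the 200-share
chain reproducible across runs.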
frstrtr added a commit that referenced this pull request on Apr 20, 2026
Animator now emits particles and card overlays matching dashboard.html's
reference animation (4866-5400), with phase-faithful dying/born timings.
Dying (per share, after its stagger):
dt < 0.30 RISE - scale 1 to dyingScale, colour lerps to dead
dt < 0.55 HOLD - full-size card with miner addr + PPLNS%
dt < 1.00 DISSOLVE - shrinking core + 20 ash particles
Born (per share, after its stagger):
bt < 0.35 COALESCE - 20 particles gather into growing core
bt < 0.65 HOLD - full-size card with miner addr + PPLNS%
bt < 1.00 LAND - shrink bornScale to 1x, fly to grid slot
Particle positions and velocities are seeded deterministically
(mulberry32, seeded via AnimationInput.rngSeed) so frameAt(t) stays
pure — same inputs + same seed = identical particles.
FrameSpec gains `particles: ParticleFrame[]` and `cards:
CardOverlayFrame[]`. buildAnimatedPaintProgram renders the three
layers in z-order: cells -> particles -> cards. Card composition
(shadow, glow, fill, inner highlight, addr + pct text with drop
shadows) matches dashboard.html:5089-5112 / 5301-5327 exactly.
AnimationInput gains:
dyingScale / bornScale - card-size multipliers (default 5)
pplnsOf(share) - returns fraction in [0,1] for card text;
when absent, card shows '--' instead
minerOf(share) - overrides the default share.m lookup
rngSeed - particle RNG seed (default 0)
Bundle size: 52.9 KB / 120 KB cap (was 48.3 KB; +4.6 KB for ~500
LOC of particle + card logic).
Tests: 196 total, 196 pass (was 192/192). Six animator tests
updated for the new phase semantics (alpha is now phase-gated
rather than linear-decayed); seven new tests cover card overlays,
particle determinism, and the DISSOLVE/COALESCE windows.
The static pixel-diff harness still measures 5.02% - particles
and cards only appear during animation, which the steady-state
screenshot doesn't capture. The border-pass increment (README
divergence source #1) is what moves the static delta needle.
frstrtr added a commit that referenced this pull request on Apr 23, 2026
vendor/simplifiedmns.hpp:
- CSimplifiedMNListEntry struct mirroring dashcore's wire format for
Dash mainnet's currently-advertised proto version (70230). Past
SMNLE_VERSIONED (70228) and DMN_TYPE (70227) thresholds, so
nVersion always serialised + early-return-for-old-peers branch
omitted. Per-entry nVersion still gates BLS scheme, ExtAddr, and
Evo (HPMN) extras — preserved verbatim.
- CalcHash() hand-written to mirror dashcore's SER_GETHASH path
bit-for-bit: SKIP nVersion, HONOUR per-entry nVersion-conditional
fields. This is the function whose output must match dashcore for
CBTX merkleRootMNList verification (step 7) to pass.
- CSimplifiedMNList holds a sorted-by-proRegTxHash vector of entries
+ CalcMerkleRoot() using inlined SHA256d-pairwise / dup-last-on-odd
(standard Bitcoin/Dash algorithm).
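The SHA256d-pairwise / dup-last-on-odd pairing has the following
shape; a string-concat stand-in replaces the real double-SHA256 so the
sketch stays self-contained (the real CalcMerkleRoot hashes 32-byte
digests):

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

using Hash = std::string;

// Stand-in for SHA256(SHA256(a || b)); only the tree shape matters here.
Hash fake_sha256d(const Hash& a, const Hash& b) {
    return "H(" + a + "+" + b + ")";
}

// Standard Bitcoin/Dash pairing: hash adjacent pairs level by level,
// duplicating the last element whenever a level has odd length.
Hash merkle_root(std::vector<Hash> level) {
    if (level.empty()) return {};
    while (level.size() > 1) {
        if (level.size() % 2 != 0)
            level.push_back(level.back());       // dup-last-on-odd
        std::vector<Hash> next;
        for (size_t i = 0; i < level.size(); i += 2)
            next.push_back(fake_sha256d(level[i], level[i + 1]));
        level = std::move(next);
    }
    return level[0];
}
```

With SHA256d substituted for the stand-in and the entries sorted by
proRegTxHash, this is the structure whose output step 7 compares
against the CBTX merkleRootMNList.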
Wire format pinning notes (in preamble):
- NetInfo treated as legacy 18-byte CService (16-byte IPv6 + 2-byte
BE port). MnNetInfo / DIP-0028 ExtAddr is gated behind
DEPLOYMENT_V24 (EHF) which has not activated on mainnet — entries
with nVersion == 3 (ExtAddr) get no NetInfo support yet; revisit
when V24 activates.
- CBLSLazyPublicKey reduced to 48-byte std::array (legacy/basic
scheme flag is invisible at the wire layer, only affects curve
decompression we never do at MVP).
- CScript scriptPayout / scriptOperatorPayout are dashcore "mem-only"
fields — never serialized into mnlistdiff or merkle leaf hash.
Dropped entirely.
Landmine fix: building this triggered btclibs/hash.h transitively
pulling btclibs/serialize.h, which collided with pack.hpp's 1-arg
SERIALIZE_METHODS macro (landmine #1 again, new instance). Switched
include to <core/hash.hpp> (which has its own CHash256, no
serialize.h pull). Compute_merkle_root inlined instead of importing
ltc::coin::compute_merkle_root because that path drags
mweb_builder.hpp → btclibs/serialize.h. Updated landmines doc with
the new transitive consumer.
main_dash.cpp:
Includes simplifiedmns.hpp so the header is actually compiled.
No runtime wiring yet — that's steps 3+ (apply_diff, message handler,
CBTX root verification).
Status: builds clean, binary loads. Bit-exact correctness of
CalcHash() and CalcMerkleRoot() is unverified — that's step 7's job
(compare against the CBTX merkleRootMNList we already parse from the
coinbase). If CalcHash is wrong, step 7 surfaces it on every block.
frstrtr added a commit that referenced this pull request on Apr 24, 2026
Crash reproducibly hit on the first SML sync timer fire (the PID dies
right after the [SML] sync request log line, before any mnlistdiff
arrives). Apport core dump backtrace:

    Thread 1 SIGSEGV
    #0 0x...3f0 (??)                                  ← jumped to garbage
    #1 initiate_async_wait::operator()<std::function<...>&>(...)
    #2 main::{lambda(...)#3}::operator()(...)         ← timer lambda body
    #3 wait_handler::do_complete(...)
    #4 scheduler::run(error_code&)
    #5 main

Bug: the persistent std::function `sml_sync_tick` was passed BY LVALUE
to async_wait, capturing `&sml_sync_tick` for self-reference.
boost::asio's perfect-forwarding into the internal handler queue
moved-from the lvalue on first dispatch (universal-reference deduction
+ std::forward semantics), leaving the outer std::function empty. The
copy that ran on first fire then re-armed by passing the now-empty
outer function — the second fire dereferenced the empty std::function
and SIGSEGV'd into garbage.

Fix: hold the persistent std::function on the heap via shared_ptr, and
NEVER pass it directly to async_wait. Instead, schedule via a fresh
wrapper lambda that captures the shared_ptr by value and invokes
(*sml_sync_tick)(ec) when fired. Each schedule() call hands async_wait
a brand-new lambda; boost::asio can move-from the temporary as much as
it wants without affecting the persistent function.

Verified: the pattern is the canonical chained-timer idiom. The
shared_ptr keeps the tick function and its captures alive across
reschedules; each fresh wrapper lambda holds its own copy of the
shared_ptr (refcount++), so the timer chain is self-sustaining until
the io_context exits.
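A compressed reproduction of the fix: a fake handler queue that moves
from whatever it is given (as asio's internals may), plus the
shared_ptr-and-fresh-wrapper idiom. Names are illustrative, not the
real c2pool code:

```cpp
#include <cassert>
#include <functional>
#include <memory>
#include <queue>
#include <system_error>
#include <utility>

// Minimal stand-in for asio's handler queue: it takes ownership of
// (and may move from) whatever handler it is handed.
std::queue<std::function<void(std::error_code)>> handler_queue;

template <typename Handler>
void fake_async_wait(Handler&& h) {
    handler_queue.push(std::forward<Handler>(h));
}

// The fix: the persistent tick lives on the heap behind a shared_ptr,
// and async_wait only ever sees a fresh wrapper lambda holding its own
// copy of that shared_ptr. Moving from the wrapper cannot empty the
// persistent function. (Note: the self-capture forms a shared_ptr
// cycle, intentionally keeping the chain alive until the loop stops.)
std::shared_ptr<std::function<void(std::error_code)>>
make_chained_tick(int& fires) {
    auto tick = std::make_shared<std::function<void(std::error_code)>>();
    *tick = [tick, &fires](std::error_code ec) {
        if (ec) return;                                  // cancel/shutdown
        ++fires;
        fake_async_wait([tick](std::error_code e) { (*tick)(e); }); // re-arm
    };
    fake_async_wait([tick](std::error_code e) { (*tick)(e); });     // first arm
    return tick;
}
```

Each fire hands the queue a brand-new wrapper; the queue can move from
it freely while `*tick` stays intact.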