Age-aware Mining #92

Closed · wants to merge 18 commits
Conversation

@fosskers (Contributor) commented Apr 3, 2019

Currently, if fresh/out-of-date nodes are introduced to the network, they spend a fair amount of time getting caught up. Meanwhile, they continue to mine old blocks and publish Cut information to the network. This is wasted work, as any blocks they produce will never be considered by the main "Consensus Pack".

This PR allows the PoW miner to detect its distance from the main Consensus. If it is too far behind, it halts mining until it has caught up.
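
As an illustration of the mechanism (not the code in this PR; awaitCaughtUp, the TVars, and the threshold value are all hypothetical names), a minimal STM-based gate could look like this:

import Control.Concurrent.STM

type Height = Word

-- Block until the local cut height is within `threshold` of the highest
-- height heard from the network. Built on `check`, so the transaction
-- retries whenever either TVar changes.
awaitCaughtUp :: Height -> TVar Height -> TVar Height -> STM ()
awaitCaughtUp threshold localVar networkVar = do
    localH   <- readTVar localVar
    networkH <- readTVar networkVar
    check (networkH <= localH + threshold)

main :: IO ()
main = do
    localVar   <- newTVarIO 100
    networkVar <- newTVarIO 104
    atomically (awaitCaughtUp 10 localVar networkVar)
    putStrLn "Within the threshold of the network height; mining may proceed."

In the PR itself the network-height side of this comparison is fed from incoming CutHashes via updateNetworkHeight, as shown in the review threads below.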

@fosskers changed the title from "Sensitive Mining" to "Age-aware Mining" on Apr 3, 2019
-- components of this node.
--
updateNetworkHeight :: CutHashes -> IO ()
updateNetworkHeight = atomically . writeTVar networkHeight . Just . cutHashesHeight
@fosskers (Contributor, Author)

@larskuhtz is there a chance that a Cut could come in that would significantly drop the agreed-upon Cut Height?

Contributor

There will be incoming cuts with very low height, but the agreed-upon height should never decrease. Cuts of very low height should be filtered out immediately by the cut processor.

Contributor

There is some interference with #161 here: with the changes from that PR, cuts that are far ahead would be dropped, too. So the processor would only see cuts that are at most 1000 blocks ahead. Once we get close to those, we would start mining, even though the network is much further ahead.

One solution could be to sample the current height after we drop old cuts and before we drop far-ahead cuts.
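
As a rough sketch of that ordering (illustrative types and names only, not the actual CutDB processor), the height sample would sit between the two filters:

import Data.IORef (IORef, modifyIORef')

newtype Cut = Cut { cutHeight :: Word }

-- Hypothetical processing step: discard stale cuts, then sample the network
-- height, then discard cuts that are too far ahead to handle right now.
filterAndSample :: Word -> Word -> Word -> IORef Word -> [Cut] -> IO [Cut]
filterAndSample localHeight staleLimit aheadLimit networkHeightRef incoming = do
    -- 1. Drop cuts of very low height.
    let fresh = filter (\c -> cutHeight c + staleLimit >= localHeight) incoming
    -- 2. Sample the height *before* the far-ahead filter, so a node that is
    --    badly behind still learns how far ahead the network really is.
    mapM_ (\c -> modifyIORef' networkHeightRef (max (cutHeight c))) fresh
    -- 3. Only now drop cuts more than `aheadLimit` blocks ahead.
    pure (filter (\c -> cutHeight c <= localHeight + aheadLimit) fresh)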

@larskuhtz force-pushed the lars/tmp branch 3 times, most recently from ce80f40 to 6c63c1d on April 4, 2019 01:04
This will help nodes detect whether there is a point in mining or not.
STM is the much more idiomatic way to handle this.
@fosskers changed the base branch from lars/tmp to master on April 4, 2019 16:03
@larskuhtz (Contributor)

What about keeping a little bit of state and computing the current height from more than a single incoming cut? E.g. we could keep track of the maximum for each node/origin and take some percentile of that. For example: the 90th percentile, or the minimum of the maxima of the top 5 nodes, whichever is less.
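
A minimal sketch of that bookkeeping follows; PeerId, recordHeight, and estimateHeight are made-up names, and the choice of percentile is left open:

import qualified Data.Map.Strict as M
import Data.List (sortBy)
import Data.Ord (comparing, Down(..))

type PeerId = String
type Height = Word

-- Remember the highest height ever reported by each origin.
recordHeight :: PeerId -> Height -> M.Map PeerId Height -> M.Map PeerId Height
recordHeight = M.insertWith max

-- Estimate the network height as the k-th highest per-peer maximum, so a
-- few peers advertising bogus heights cannot skew the estimate upwards.
estimateHeight :: Int -> M.Map PeerId Height -> Maybe Height
estimateHeight k m = case drop (k - 1) descending of
    (h : _) -> Just h
    []      -> Nothing
  where
    descending = sortBy (comparing Down) (M.elems m)

With k = 5 this is the "minimum of the max of the top 5 nodes" variant; a percentile over the same per-peer maxima works analogously.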

@fosskers (Contributor, Author) commented May 2, 2019

@larskuhtz Yes, I think something like that will be better for preventing attacks and for accuracy. I'm thinking about the design now.

@fosskers marked this pull request as ready for review on May 7, 2019 17:01
@@ -266,26 +284,28 @@ startCutDb
startCutDb config logfun headerStore payloadStore = mask_ $ do
    cutVar <- newTVarIO (_cutDbConfigInitialCut config)
    queue <- newEmptyPQueue
    cutAsync <- asyncWithUnmask $ \u -> u $ processor queue cutVar
    peerHeights <- newTVarIO . PeerHeights $ BQ.singleton 16 (BlockHeight 0)
@fosskers (Contributor, Author)

Unsure how big the Queue should be.

Member

Why not use TBQueue in this case?

@fosskers (Contributor, Author)

There's no way to fold over a TBQueue.

updateNetworkHeight ch = case _cutOrigin ch of
    Nothing -> pure ()
    Just _  -> atomically . modifyTVar' peerHeights $
        \(PeerHeights q) -> PeerHeights $ BQ.cons (_cutHashesHeight ch) q
@fosskers (Contributor, Author)

O(1) consing + "forgetting" of oldest values.
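
For context, a stand-in for the bounded queue (not the actual BQ library's API) might look like the following: a fixed-capacity structure where consing evicts the oldest entry, and whose contents stay foldable, which is the property a TBQueue lacks.

import Data.Sequence (Seq, ViewR(..), (<|))
import qualified Data.Sequence as Seq

-- A fixed-capacity queue with the newest element at the front.
data BQueue a = BQueue !Int !(Seq a)

bqSingleton :: Int -> a -> BQueue a
bqSingleton cap x = BQueue cap (Seq.singleton x)

-- Cons a new element, forgetting the oldest one once capacity is reached.
bqCons :: a -> BQueue a -> BQueue a
bqCons x (BQueue cap s)
    | Seq.length s < cap = BQueue cap (x <| s)
    | otherwise = case Seq.viewr s of
        rest :> _ -> BQueue cap (x <| rest)
        EmptyR    -> BQueue cap (Seq.singleton x)

-- The point versus TBQueue: the contents can be folded over, e.g. to take
-- the maximum of the recently observed peer heights.
bqMaximum :: Ord a => a -> BQueue a -> a
bqMaximum z (BQueue _ s) = foldr max z s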

@emilypi (Member) left a comment

Looks good.

My only comments are questions:

  1. Are we sure we shouldn't simply be using TBQueue for this instead of your library?
  2. Is there any low-hanging inlining we can do that would ease thunking?

Otherwise, it looks great.

-> Adjustments
-> IO ()
go :: Given WebBlockHeaderDb
=> Given (PayloadDb cas)
Member

Could we purge these Given values?

@fosskers (Contributor, Author)

Likely? @larskuhtz?

-- considered "caught up".
--
catchupThreshold :: ChainwebVersion -> Natural
catchupThreshold = (2 *) . diameter . _chainGraph
Member

It'd probably be easier to inline this here or as a data entry in the chainweb version instead of calculating this every time we need to establish a consensusCut. That way, we can think of chainweb versions as having a tuneable "catchup threshold" built in.

@fosskers (Contributor, Author)

Memoize?
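
One hypothetical shape for that suggestion (VersionInfo and its fields are illustrative, not Chainweb's actual ChainwebVersion): compute the threshold once and carry it alongside the per-version data, making it a tuneable constant rather than something recomputed on every consensus check.

import Numeric.Natural (Natural)

-- Hypothetical per-version data carrying a precomputed catch-up threshold.
data VersionInfo = VersionInfo
    { viGraphDiameter    :: !Natural
    , viCatchupThreshold :: !Natural  -- fixed (or tuned) once per version
    }

mkVersionInfo :: Natural -> VersionInfo
mkVersionInfo d = VersionInfo
    { viGraphDiameter    = d
    , viCatchupThreshold = 2 * d
    }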

@fosskers added the "component: miner" label on Jun 14, 2019
@fosskers (Contributor, Author)

Closing for now - the stand-alone mining client will likely make all of this defunct.

@fosskers closed this on Jul 19, 2019
@fosskers deleted the colin/sync-phase branch on September 18, 2019 19:13