
set share diff target based on address hash rate #174

Open · wants to merge 8 commits into master
Conversation

@roy7 (Contributor) commented Feb 12, 2014

Instead of small miners being assigned share diff targets based on the node's local hash rate, this uses the payment address's hash rate. Thus a small miner on a big public node gets the same diff target as if they ran their own node instead.

I did not apply the same logic to the pseudoshare rates; those are still calculated from the local node hash rate. I was concerned a large public node would see an unexpected increase in traffic if every miner started sending in many more pseudoshares. Perhaps that should also be done? However, the pseudoshare target speed shouldn't be one share per second on a public node.

I added address and speed in the log since it won't be the same share_diff for every worker now.
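The idea described above can be sketched roughly as follows. This is a minimal illustration, not p2pool's actual code: the function and parameter names are hypothetical, and it assumes the Bitcoin convention that a difficulty-1 share takes 2**32 hashes on average.

```python
# Sketch of address-based vardiff. Names are hypothetical; p2pool itself
# derives rates from its own per-address rate trackers.
DIFF1_HASHES = 2 ** 32  # hashes per difficulty-1 share (Bitcoin convention)

def share_diff_target(address_hash_rate, share_period=30.0, min_diff=1.0):
    """Aim for roughly one share per share_period seconds for this payout
    address, instead of basing the target on the whole node's hash rate."""
    target = address_hash_rate * share_period / DIFF1_HASHES
    return max(target, min_diff)  # never hand out work below the floor
```

With these assumed numbers, a miner hashing at 2**32 H/s would be asked for roughly difficulty-30 shares, while a tiny miner falls back to the minimum difficulty rather than inheriting the whole node's (much higher) target.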

@Rav3nPL (Contributor) commented Feb 12, 2014

Why change back in line 182?

@roy7 (Contributor) commented Feb 12, 2014

Raven, I don't recall changing that line. Maybe I'm out of sync somehow. When I run 'git diff' it doesn't show a change there either.

Line 182 change removed. Sorry for the oversight. I had not done a git pull in that directory prior to my work.

@roy7 (Contributor) commented Feb 12, 2014

In my logging change, local_addr_rates might not be set or in scope. Not sure if we should just remove the miner's speed from the logging completely?

Pull request updated to fix this by fetching the local rates and not depending on a prior variable.

@gururise commented
Have you tested this change in production? To implement this change, will one need to change the IDENTIFIER and PREFIX for each coin in the networks.py file?

@roy7 (Contributor) commented Feb 14, 2014

Yes, I tested it for a while on my http://vtc-us-east.royalminingco.com:9171/static/ VTC pool, and then, when changing networks.py for other reasons on VTC, we rolled it out network-wide. For this patch alone no change to networks.py is needed; it only changes the vardiff share difficulty targets given to connected miners.

Edit: Some of the later commits on this branch were from people finding bugs I didn't find in my own testing.

@roy7 (Contributor) commented Feb 21, 2014

I've noticed two odd things on my Uno pool last night. My Antminer was about 1/3 of the total pool hash rate. I could set /DIFF lower and it'd honor that, or set it somewhat higher than vardiff and it'd honor that. However, if I set it really high, it would ignore it. I've been unable to figure out why yet. The dust threshold and 1.67% checks should only increase the share target difficulty, never lower it, as I understand it? So there should be no cap on the maximum /DIFF setting.

Also, since the payment address is changed on a worker prior to the get_work call, the local addresses list will have the pool's fee address as the payment address FEE % of the time, with no hash rate tracked for that address. That leads to the miner getting work at a different difficulty level until the next time they get work with a normal payment address. Not sure if I can detect when that happens (i.e., payment address == pool's fee address) and do something accordingly.
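A guard along the lines described could look like the sketch below. This is purely illustrative: the function name, parameters, and the choice of falling back to the node's overall rate are assumptions, not p2pool's actual code.

```python
def effective_hash_rate(payout_address, fee_address, addr_rates, node_rate):
    # Hypothetical guard: when the work unit is being credited to the pool's
    # fee address (which has no tracked hash rate), fall back to the node's
    # overall rate instead of treating the miner as having zero speed.
    if payout_address == fee_address:
        return node_rate
    return addr_rates.get(payout_address, 0.0)
```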

My Uno pool is http://us-east.royalminingco.com:9655/static/. The one smaller user there has /10000 set, yet his share difficulty right now is much lower than that. If I connect my bigger miner, his diff snaps right to 10000 as expected.

@forrestv (Member) commented Feb 21, 2014

The share difficulty can only be increased to 30 times the minimum share difficulty.
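That cap would behave something like the following sketch (hypothetical names; the 30x factor is the one mentioned above, and this is not p2pool's actual implementation):

```python
def clamp_requested_diff(requested_diff, min_share_diff, cap_factor=30):
    # Honor a miner's /DIFF request only up to cap_factor times the
    # chain's minimum share difficulty; higher requests are clipped.
    return min(requested_diff, cap_factor * min_share_diff)
```

For example, with a minimum share difficulty of 100, a /10000 request would be clipped to 3000, while a /500 request would be honored as-is.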


@roy7 (Contributor) commented Feb 21, 2014

Where in the code is that enforced? I'm having trouble locating it. On my Uno node at the moment, the user's share diff target is about 34x the pool share minimum. Also, I'm confident that if I connected my big miner again, his diff would snap to 10000 even before the global share minimum increased. I'll have to debug more this weekend.

@CartmanSPC commented
@roy7 this might be it:
354ec22

@roy7 (Contributor) commented Feb 22, 2014

Thank you, Cart!

@roy7 (Contributor) commented Feb 22, 2014

@CartmanSPC Hmmmmm. So this might explain the behavior I was seeing, because the clipping is based on node speed (get_pool_attempts_per_second), not individual miner speed. The small miner at /10000 is capped at 30x the minimum share diff, but when I join, his cap rises enough that he snaps right up to 10000 instead of the previously lower number. I'll have to try logging all of those values on my node sometime.

Correction: get_pool_attempts_per_second is the pool, not the node. Is the idea here that you don't want the share diff so high that shares can't be found at the target rate for the share chain? If so, that also makes sense. Remember, when I was observing this the other night, my 180G was about 33% of the hash power on the coin I was testing. So when I join, the small miner is no longer forced to find shares faster to hit the target rate.
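The interaction described here can be sketched as follows: if the minimum share difficulty tracks total pool speed, then the 30x cap rises when a large miner joins. This is an assumed formula with hypothetical names, not the code from the linked commit.

```python
DIFF1_HASHES = 2 ** 32  # hashes per difficulty-1 share (assumed convention)

def max_allowed_diff(pool_attempts_per_second, share_period=30.0, cap_factor=30):
    # The minimum share difficulty tracks the whole pool's speed, so the
    # 30x cap rises and falls with total pool hash rate. When a large miner
    # joins, the cap can jump high enough that a small miner's /10000
    # request is suddenly honored instead of being clipped.
    min_share_diff = pool_attempts_per_second * share_period / DIFF1_HASHES
    return cap_factor * min_share_diff
```

Doubling the pool's hash rate doubles the cap, which matches the observed "snap to 10000" when a big miner connects.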

@roy7 (Contributor) commented Feb 26, 2014

Yes, so the behaviors I was observing were because of my size relative to the rest of the network, plus the cap that your /DIFF target can't be more than 30 times the normal amount. To my knowledge this is still working flawlessly.

@roy7 (Contributor) commented Feb 26, 2014

Here is another fork of p2pool I just found that has some optional command line options to adjust the pseudoshare rate targets on a per-miner or per-address basis.

https://github.com/iongchun/p2pool/commits/auto-worker-diff

It seems like a great combination with my patch for address-based share difficulty targets.

Edit: Someone on IRC tried iongchun's patch, had a variety of problems with DIFF 0 work going out, and had to undo it. I've since submitted his work as a pull request. If it's accepted, you should also accept my fix that prevents DIFF 0 work.

@roy7 (Contributor) commented Mar 18, 2014

A follow-up to my prior comment: iongchun's auto-worker-diff now fixes DIFF 0 HWs for scrypt as well. I made a pull request of it here:

#187

That is only for pseudoshare difficulties, however. This pull request is still needed if you want actual share difficulties to be based on address speed instead of local node speed. (The local node speed method is not well suited to public nodes.)

@roy7 (Contributor) commented Apr 25, 2015

Forrest, how would you feel about merging this? You could perhaps add a command line option to toggle it off, if you think some private pools don't want this change. If there's some reason you or others didn't want this implemented, I don't want to rock the boat.

@forrestv (Member) commented

I don't really agree with this change because it makes this limit nearly useless. With a limit per P2Pool node, you have to actually run multiple nodes to much affect the P2Pool network difficulty, but with this change, an entity with one node but several miners with different payout addresses circumvents it.

In addition, public P2Pool pools aren't the intended use of P2Pool, and one exceeding 1.67% of the network is bad, in some sense.

If a public P2Pool node owner wants to make this change, I won't (and can't) stop them, but I don't think it should be the default behavior.

@roy7 (Contributor) commented Apr 25, 2015

Ok, understood. :) I know public nodes have never been the intended use. Thanks for the quick response!

5 participants