The pool regularly disconnects #1539
Actually this problem has nothing to do with ethminer. It can be caused by any of these:
Thank you for your prompt reply. I have been mining for a long time; I like ethminer very much and would like to keep using it, so I would not ask such a question otherwise. Another miner gets only about 4 rejects per 2000 solutions and its hashrate matches what the pool reports, while ethminer gets regular disconnects and dozens of rejects, around 20 per 2000 solutions.
The cause of the rejects has to be investigated.
Highly unlikely.
I've run a batch for 30 minutes and got several disconnections too. I'd suggest getting in touch with the pool devs to inspect the problem.
In a long time, this is the first such strange case. I am very grateful for your help. I'll contact pool support.
Another farm with 4×RX588 + 1×RX578 shows a record of stability.
Still, I should note that the other farms with this problem have only Nvidia cards.
It happens when the pool doesn't send jobs often enough; usually small pools cause these disconnects. I switched to a smaller pool on purpose and saw the same behaviour. What Andrea Lanfranchi mentioned about "--response-timeout" did the trick: depending on how often the pool sends jobs, you need to increase the value within the 2-30 s range, or even beyond.

With this, the disconnects from the pool are gone.
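As an illustration, here is a minimal command line with the timeout raised. The wallet and pool host are placeholders, and the 60-second value is just an example; tune it to how often your pool actually sends jobs.

```shell
# Example only: raise --response-timeout (in seconds) so infrequent job
# delivery from the pool is not treated as a dead connection.
# WALLET.WORKER and pool.example.com:3002 are placeholders.
ethminer -P stratum://WALLET.WORKER@pool.example.com:3002 --response-timeout 60
```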
Two comments:
Why would changing --response-timeout or --work-timeout cause a "connection remotely closed" error to disappear? I think that error message comes from the network stack, so I don't see how client-side timeouts could be the cause. Or does boost::asio::error::eof not necessarily mean "connection remotely closed"? Apparently it does: "An error code of boost::asio::error::eof indicates that the connection was closed by the peer." Just checked: either of the --response-timeout or --work-timeout timers would have issued a specific error log on expiring.
@jean-m-cyr The issue has been resolved by increasing some timeouts; see #1539 (comment). I'm guessing that when one of the timeouts is hit, ethminer disconnects from the pool. If I'm right, then the log message "connection remotely closed" is incorrect, and users don't have a clue how to solve the problem. It would be much easier if the log said "disconnected from pool because of inactivity (response timeout)". I also wonder whether increasing the default values has any impact on overall performance. If not, we could raise the defaults so that ethminer works out of the box in this case.
I do not agree.
I strongly believe the problem depicted in this thread is strictly related to a weakness of the pool, which (I am only guessing) has suboptimal load-balancing techniques or a faulty pool implementation. Tests on Rate A pools have never shown such a situation.
The increase of the timeouts has, IMHO, produced an effect only coincidentally.
As @jean-m-cyr correctly underlined, if the disconnection were on our side, boost would have returned the "Operation Aborted" error code, which is also trapped and reported with a different output message.
No, it does not affect ethminer performance in any way. The only downside is that you may stay connected longer to a non-responsive pool.
@urpils and others: the opening post for this thread depicts a connection on port 3002, which causes problems. That said, if this statement of yours is true,
the pool is behaving pretty badly. Since MTP has an average block time of 24 seconds, in 60 seconds the pool should send at least 2-3 jobs (60 s / 24 s ≈ 2.5). If it doesn't, and it counts the missing jobs as idle time, well, blame the pool maintainers, not ethminer.
Closing.
Hello.
I'm using ethminer-0.16.0rc1 on Windows 10, and I have the same problem of regular disconnects and rejects on the ETP coin (dodopool or metaverse.farm). I run the following bat file:
@echo off
timeout /t 45
:bg
ethminer -U -P stratum://WALLET.NAME:x@nl.metaverse.farm:3002 --exit --tstop 72 --tstart 45 --noeval
timeout /t 10
goto :bg
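For what it's worth, the same batch file with the "--response-timeout" workaround discussed earlier in the thread applied. The 90-second value is only a guess for this pool, not a verified setting; adjust it to how often the pool sends jobs.

```shell
@echo off
timeout /t 45
:bg
rem Added --response-timeout (seconds) so slow job delivery from the pool
rem is not treated as a dead connection. 90 is an example value, not verified.
ethminer -U -P stratum://WALLET.NAME:x@nl.metaverse.farm:3002 --exit --tstop 72 --tstart 45 --noeval --response-timeout 90
timeout /t 10
goto :bg
```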
Thanks.