Review of SETI limits #3387
Comments
You can request more work from BOINC |
You can run multiple BOINC clients on a single machine |
But it takes a concerted effort to arrange multiple running clients, more than the casual user will want to do. It would be much better to have a configurable preference setting accessible from the Manager. |
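For anyone wanting to try the multi-client workaround mentioned above, a minimal sketch follows. `--allow_multiple_clients` and `--gui_rpc_port` are documented BOINC client options, but the data-directory layout and port number here are only illustrative assumptions; check them against your BOINC version.

```shell
# The first (default) client runs as usual from the standard data directory.
# Start a second client instance with its own data directory and its own
# GUI RPC port so the two instances don't collide. (Paths are illustrative.)
mkdir -p "$HOME/boinc2"
boinc --allow_multiple_clients \
      --dir "$HOME/boinc2" \
      --gui_rpc_port 31418 &

# Point boinccmd (or the Manager) at the second instance via its port:
boinccmd --host localhost:31418 --get_state
```

Each instance then attaches to projects independently, so each is subject to the per-host server limits separately, which is exactly why this workaround is effective but tedious.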
This is a SETI@home issue, not a BOINC issue. |
If it is thought not relevant to this thread, then which thread would be appropriate? |
SETI@Home has Message Boards |
The OP was requesting a feature in BOINC that allows more than 100 threads to be used. Is that not a limit in BOINC? Do projects other than SETI allow more than 100 threads to be active? The use case is mainstream processors with 128 threads or server processors with 256 threads. For these processors, is running multiple clients simultaneously the only solution? |
The 100 limit is not in the client.
It's a SETI@home configuration setting.
|
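For reference, the server-side setting being discussed here is, to the best of my knowledge, configured in the project's `config.xml`. The element names below match the documented BOINC scheduler options (in-progress limits multiplied by the host's CPU or GPU count), but the values shown are only illustrative, not SETI@home's actual settings.

```xml
<!-- In the project's config.xml (BOINC server scheduler options).
     Limits the number of tasks "in progress" on a host, scaled by
     device count. Values shown are illustrative only. -->
<config>
  <max_wus_in_progress>100</max_wus_in_progress>         <!-- per CPU -->
  <max_wus_in_progress_gpu>100</max_wus_in_progress_gpu> <!-- per GPU -->
</config>
```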
OK, thanks for clarifying that there is no underlying BOINC limit on active threads. How does the OP contact the people responsible for the SETI server configuration? I don't know of an equivalent to the github.com/BOINC/boinc/issues avenue for just SETI. |
Volunteers are welcome to post on the project's message boards: there is even a 'wish list' in the Questions and Answers area. But if @captainiom is willing to read before he posts, he will find that the same question has been asked many times before. And I expect that the answer will remain the same: project configurations are set for the benefit of the smooth running of the project as a whole, not to suit the private aspirations of any single member. |
All well and good. However, I doubt you can show me one single "wish list" item that has made it onto the BOINC/SETI Issues >> pull request >> Projects/ToDo list. Can you show me any sign the SETI developers even read any of the posts in the Wish List forum? The only way I know of to track a deficiency and make it actionable for SETI/BOINC is to get it logged into GitHub/BOINC. All the Wish List forum is good for is a place to vent your frustration. The SETI configuration has not kept up with the trend of CPUs having many more threads than the single-core CPUs of the era when SETI first started. A lot of crunching horsepower is being underutilized. |
I would echo Keith's comments. Moreover, I am aware that there has been concern in the past, but I feel that the current thrust by AMD, and to a lesser extent by Intel, for ever more cores and threads requires a review at this time. Hyperthreading will inevitably develop to 4 threads per core, and EPYC already has 128 threads available. |
There's been no official announcement from either of you, but the 'work in progress' limits were raised on Friday evening to 200 per CPU, 400 per GPU. I hope whoever did that is keeping a close eye on the database performance as the caches fill. |
Do these SETI limits impinge on BOINC operation? |
No - except if you personally abuse the freedom, and download too many tasks (your own machine might slow down), or if a large group of users abuse the freedom (in which case, the whole SETI project might encounter database problems). |
Using your JSM handle, you reported in your 'Ryzen' (CPU) thread on the SETI message boards that "the limits WILL be reviewed and if possible and practical will be increased slowly."
I am not yet sure that a four-fold increase in GPU limits counts as 'slowly', or addresses the specific Ryzen issue: that's why I'm urging caution and close monitoring, here and on the SETI boards. We had an intervention in 2013 (also at a weekend) which spiked the database to well over 10 million tasks, slowed everything to a crawl, and took at least a week to recover from. |
I was the one who increased the limits. I based it on database I/O rates which, it seemed, could handle the change. Long term the rates don't change, but the DB "in process" lookups take longer. If it becomes an issue I'll drop them.
|
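A sketch of the kind of monitoring described here, assuming direct access to the project's MySQL database. In the BOINC schema, `result.server_state = 4` means "in progress"; the query below is an illustration, not the script the project actually uses.

```sql
-- Count tasks currently "in progress" across all hosts.
-- BOINC result.server_state: 4 = RESULT_SERVER_STATE_IN_PROGRESS.
SELECT COUNT(*) AS in_progress
FROM result
WHERE server_state = 4;
```

Watching this count (the figure Richard quotes as "6.93 million" below) as the per-host limits change is what lets an admin decide whether to raise or drop them.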
Thanks - that's reassuring. Total in progress has reached 6.93 million, and still seems to be rising slowly, but under control. I'll pass on the news. |
I've dropped the GPU limit to 300 to keep us below 7M. We'll run it there for a while.
|
Thank you for airing this request.
Capt McKenzie
|
As AMD hardware develops with ever-increasing cores and hyperthreading, the SETI limits become increasingly outdated. The maximum CPU limit of 100 tasks is particularly restrictive: it constrains crunching with all threads, and when the SETI servers go offline, whether for maintenance or due to a problem, the backup tasks awaiting a start are quickly used up and dedicated machines simply sit idle, which is hardly optimal.
Thus I suggest that all of these limits which are holding back performance should be reviewed. If necessary there could be two sets of limits: the standard existing set and an enhanced set selectable in preferences.