Trial Factoring with gpuOwl #230
Two questions:
I would need an already factored exponent that is known to have factors to test...
Absolutely, it is very fast, faster than my RX Vega 64. I have also tested running both PRP and TF on the same GPU at the same time, and there was very little performance drop!
I found an exponent to test that is known to have factors, because I have already computed it myself; I will post the result here.
That is another factor; what does it mean?
Have a look here:
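For anyone following along: a reported factor can be checked independently of the program that found it, since q divides M(p) = 2^p - 1 exactly when 2^p ≡ 1 (mod q). A minimal sketch; the helper name is mine, not part of gpuOwl:

```python
# Hypothetical helper (not from gpuOwl): verify a trial-factoring result.
# q is a factor of the Mersenne number M(p) = 2^p - 1 iff 2^p ≡ 1 (mod q).
def is_mersenne_factor(p: int, q: int) -> bool:
    return pow(2, p, q) == 1

# Known small example: M(11) = 2047 = 23 * 89.
print(is_mersenne_factor(11, 23))  # True
print(is_mersenne_factor(11, 29))  # False
```

This is also how a second reported factor can be confirmed: both candidates can divide M(p) independently, so finding more than one is normal.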
So the program is working; it only needs the result line adapted so that Primenet accepts it...
OK, and primenet.py also needs the TF (2) worktype in order to fetch trial-factoring work.
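For reference, PrimeNet hands out trial-factoring assignments as worktodo lines of the form `Factor=<assignment id>,<exponent>,<bitlo>,<bithi>`, the same format mfaktc/mfakto consume. A sketch of the parsing primenet.py would need; the function name is illustrative, not an existing primenet.py function:

```python
# Hypothetical parser for a PrimeNet trial-factoring worktodo line,
# e.g. "Factor=ABCDEF0123456789,218812621,73,74".
# Not part of primenet.py; shown only to illustrate the line format.
def parse_tf_assignment(line: str) -> dict:
    prefix, _, rest = line.partition("=")
    if prefix.strip() != "Factor":
        raise ValueError(f"not a TF assignment: {line!r}")
    aid, exponent, bitlo, bithi = rest.split(",")
    return {
        "aid": aid,
        "exponent": int(exponent),
        "bitlo": int(bitlo),
        "bithi": int(bithi),
    }

print(parse_tf_assignment("Factor=ABCDEF0123456789,218812621,73,74"))
```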
Here are some benchmarks for trial factoring exponent 218812621, bit level 73-74:
Radeon VII:
Radeon Pro VII:
For comparison, a benchmark for mfakto on an RX Vega 64, same exponent, same bit level:
@preda Can you tell me where the code for the output line is, so that I can try to fix it?
Tired of waiting for your reply, I went ahead and found it.
which produces:
This allowed me to add the comma before bitlo. Now I only need to understand why the timestamp is not accepted by Primenet.
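Modern GIMPS clients report each result as one JSON object per line; pinning down exactly which fields and formats PrimeNet accepts for TF is the open problem here. A hedged sketch of building such a line; the field names and the timestamp format are my assumptions, not the verified PrimeNet format:

```python
import json
from datetime import datetime, timezone

# Illustrative only: field names and timestamp layout are assumptions,
# not the format PrimeNet is confirmed to accept for TF results.
def tf_result_line(exponent: int, bitlo: int, bithi: int, factors: list) -> str:
    result = {
        "exponent": exponent,
        "worktype": "TF",
        "bitlo": bitlo,
        "bithi": bithi,
        "factors": [str(f) for f in factors],
        # One plausible shape for a timestamp PrimeNet would parse:
        # UTC, "YYYY-MM-DD HH:MM:SS".
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S"),
    }
    return json.dumps(result)

print(tf_result_line(218812621, 73, 74, []))
```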
I will open a discussion for this topic. |
For everybody to know: the old TF branch of gpuOwl currently works under the following conditions:
However, the results can't be submitted to Primenet:
Problems in the result line: