
MANUAL - How to setup XMRig nVidia miner for newbies #100

Open
GrannyCryptomaster opened this issue Feb 3, 2018 · 48 comments

GrannyCryptomaster commented Feb 3, 2018

THIS GUIDE IS FOR VERSION 2.4.5. Newer versions have some minor differences in the config file.
If you find this useful, please consider a donation. This was written over many hours, after many days of learning and testing. :)) Thanks!
BTC: 1FP4Pm4cztjL7QGaj9VFXi1cScsW6UDx8W

See the Updates at the end of this post.
To see all the parameters, you can export them to a help.txt file with this help.cmd file:

@echo off
xmrig-nvidia.exe --help > help.txt

Before using XMRig on Windows 10 x64, you must install, in this order:
1. vc-redist, all versions from 2012 to present, both x86 and x64;
2. nVidia driver 388.71 - only the graphics driver, and maybe PhysX (I heard some miners use it); don't go with the 39x drivers, they cause problems with some miners (ethminer, etc.). This is the first version that supports CUDA 9.1.
3. CUDA 9.1 and all the patches (you don't need the Samples and Documentation, only the driver and runtime, and maybe the tools and other stuff).

At the first start of the miner, you will see something like:
"GeForce GTX 1060 3GB - 44x27 6x25 arch:61 SMX:9"
What does this mean? How do you tweak the settings for optimal mining?
Explanation:
44 threads x 27 blocks,
bfactor 6 x bsleep 25,
architecture 61,
9 multiprocessors (SMX).
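For reference, here is how that printout maps onto one entry of the "threads" array in the regenerated config.json (a hypothetical single-GPU entry; the field names are the same ones used in the config examples further down):

{
    "index": 0,
    "threads": 44,
    "blocks": 27,
    "bfactor": 6,
    "bsleep": 25,
    "affine_to_cpu": false
}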

The settings are made in the config.json file, opened with Notepad; it's a simple text file.
The miner ships with a basic config. After the first start, it will be replaced by the same file, but with settings suggested by the miner according to your rig's setup. In the regenerated file, you can then modify any settings you want: threads, blocks, wallet, pools, etc.
Best practice: before starting the miner for the first time, edit your config.json with the basic settings that will remain unaltered in the future, and save it in a BACKUP folder. When you add or replace a GPU, delete config.json and restore the backup, to let the miner configure the new GPU.
Here is an example:

{
    "algo": "cryptonight",
    "background": false,
    "colors": true,
    "donate-level": 5,
    "log-file": null,
    "print-time": 300,
    "retries": 90,
    "retry-pause": 10,
    "syslog": false,
    "threads": null,
    "pools": [
        {
            "url": "pool1:port",
            "user": "wallet.worker",
            "pass": "x",
            "keepalive": true,
            "nicehash": false
        },
        {
            "url": "pool2:port",
            "user": "wallet.worker",
            "pass": "x",
            "keepalive": true,
            "nicehash": false
        }
    ]
}

A small omission from the dev... if you want to add more pools, you must put a comma "," after the "}", so you will have "pools": [ {pool1}, {pool2}, ... ]. In the original file, the comma is missing.
BEFORE you start tuning the threads, set the other parameters to proper values in the regenerated config.json, after the first start.

The index is the number of the GPU (GPU 1, GPU 2, etc.), as they are identified by the system. The index in the miner starts at 0, while the numbering in Afterburner or GPU-Z starts at 1; so index 0 means GPU 1, index 1 means GPU 2, and so on. GPU 1 is the one in the 16x slot, GPU 2 is in the other 16x/8x/4x slot. GPU 3, 4, 5 and 6 are in the 1x slots, starting from the CPU. I don't know at what position a GPU connected to the M.2 slot will appear. BTW, in Windows you can have a maximum of 8 graphics processors: 8 GPUs, or 7 GPUs + 1 CPU HD graphics if you use the HDMI on your motherboard. If you have 6 PCIe slots, you can use the M.2 slots too; they are basically PCIe x4. In the BIOS you must set Above 4G Decoding to Enabled and the M.2 slot to PCIe mode.
You can't set smx or arch; they are automatically identified by the miner according to your GPU model.
Blocks must be SMX multiplied by 3.
Set bfactor 8, bsleep 100 for GUI systems (Windows), or leave the default 6x25 for others.

You can find the optimal threads value by starting with a low value, like 12, or 22, or whatever (it can be set in the initial config.json file, see help, and then modified in the regenerated config.json file) and starting the miner: if it crashes, lower the value by 2; if it doesn't crash, increase it by 8. When you find the maximum threads value, let the miner run for 5 min and note the hashrate; then lower the value by 2, or a multiple of 2, noting the hashrate each time. You will notice an increase in hashrate, then a decrease; fine-tune there with +/-2 until you get the maximum hashrate. Be aware that some pools may ban you for a short period if you connect and disconnect frequently (e.g. starting the miner a few times within 5 minutes).
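Here is that search written out as a small Python sketch. run_miner() is a HYPOTHETICAL stand-in for the manual steps (set "threads" in config.json, start the miner, let it run ~5 min, note the hashrate, or None on a crash); its fake response curve uses this guide's GTX 1060 3GB numbers purely for illustration:

def run_miner(threads):
    # HYPOTHETICAL stand-in: fake curve that crashes above 44 threads
    # and peaks at 32 (the GTX 1060 3GB numbers from this guide).
    if threads > 44:
        return None                      # miner crashed at start
    return 2863 - 8 * abs(threads - 32)  # pretend hashrate in H/s

def find_best_threads(start=12):
    t = start
    while run_miner(t) is not None:      # stable: push higher by +8
        t += 8
    while run_miner(t) is None:          # crashing: back off by -2
        t -= 2
    best_t, best_hr = t, run_miner(t)    # maximum stable threads value
    while t > 2:                         # now walk down in -2 steps
        t -= 2
        hr = run_miner(t)
        if hr > best_hr:
            best_t, best_hr = t, hr
        elif hr < best_hr:               # past the peak, stop
            break
    return best_t, best_hr

print(find_best_threads())               # -> (32, 2863)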
For example, the optimal threads setting for the GTX 1060 3GB is 32. With 6 GPUs at 44 threads (the maximum value), I got around 2650 H/s, but with 32 threads, my hashrate jumped to 2863 H/s.
So an optimal config for that GPU should give you this:
"GeForce GTX 1060 3GB - 32x27 8x100 arch:61 SMX:9" in Windows.

print-time 300: the hashrate and system health are printed once every 300 seconds.
retries 90, retry-pause 10: if the connection to the pool is lost, the miner waits 10 seconds, then retries the connection, up to 90 times (90 x 10 sec = 15 minutes). The pause must be set to no less than 5 sec.
The "variant" parameter should be:
1 for the new Cryptonight PoW (Monero v7 fork);
0 for the old Cryptonight PoW;
-1 for Cryptonight Heavy.
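For example, a pool entry for Monero after the v7 fork would carry the variant alongside the usual fields (pool URL and wallet are placeholders):

{
    "url": "pool:port",
    "user": "wallet.worker",
    "pass": "x",
    "keepalive": true,
    "nicehash": false,
    "variant": 1
}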


Update 1:
nVidia GPU mining in Windows 10:
GPU RAM used for cryptonights: Threads x Blocks x M = RAM MB

Blocks = SMX x 3 (maximum SMX x 8)
M = 2 MB for CN
M = 4 MB for CN-heavy
Windows 10 reserves up to 20% of video RAM (that's one reason why Linux is better).
Leave some RAM for padding (~100 MB).

From what I experimented with, and from some wiki descriptions of what threads and blocks mean, I think a good rule is to increase blocks rather than threads.
Threads should be 32, 16 or 8. Blocks should be SMX x 4 or more.
If you have enough memory, you should use Threads x Blocks = number of cores in the GPU.
One core takes care of one thread. The Threads value you specify is the number of threads per block.
The Blocks value is the total number of blocks. One multiprocessor (SM) takes care of 1 to 8 blocks, so the number of blocks should be a multiple of SMX (Blocks = SMX x1/x2/.../x8).
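As a tiny Python sanity check of that rule, using the GTX 1060 3GB figures that appear in Update 3 below (1152 CUDA cores, SMX 9):

SMX = 9
CORES = 1152                # CUDA cores in a GTX 1060 3GB
threads, blocks = 32, 36
assert blocks % SMX == 0 and blocks // SMX <= 8  # Blocks = SMX x1..x8
assert threads * blocks == CORES                 # one thread per core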

For CN-heavy, if you have low memory, like on 3GB cards, just use half the Threads value from CN.

Example: for the GTX 1060 3GB GPU, in Windows, you have 2490 MB available; SMX 9.
The formula goes like this:
CN: 44 x 27 x 2 = 2376 MB (< 2490 - 100)
CNH: 22 x 27 x 4 = 2376 MB (< 2490 - 100)
So the maximum Threads is 44 for CN (Monero) and 22 for CNH (Sumokoin), with Blocks 27.
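The same math as a small Python sketch; the constants are this guide's numbers (2490 MB available under Windows, 100 MB padding, Blocks = SMX x 3), not values queried from the card:

AVAILABLE_MB = 2490   # free video RAM on a GTX 1060 3GB under Windows 10
PADDING_MB = 100      # leave some RAM for padding
SMX = 9
BLOCKS = SMX * 3      # dev-recommended blocks -> 27

def max_threads(m_per_hash_mb):
    budget = AVAILABLE_MB - PADDING_MB
    t = budget // (BLOCKS * m_per_hash_mb)
    return t - (t % 2)          # keep it even; tuning moves in steps of 2

print(max_threads(2))   # CN:       44 -> 44 x 27 x 2 = 2376 MB
print(max_threads(4))   # CN-heavy: 22 -> 22 x 27 x 4 = 2376 MB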


Update 2:
Use the online config generator/editor https://config.xmrig.com/
...then tweak the Threads on each GPU until you get the best hashrates.


Update 3:
For the GTX 1060 3GB with Hynix memory, the best settings I have found so far in Windows 10 x64:
CN: th x bl, bf x bs: 32x36 8x100 or 8x144 8x100 (for me, they give the same hashrates).
32x36 = 1152 total threads (= the number of cores in this GPU). 1152 x 2 MB = 2304 MB of video RAM.
CN-heavy: 16x36 8x100 or 4x144 8x100.
16x36 = 576 total threads. 576 x 4 MB = 2304 MB of video RAM.

8x100 is imperative for WINDOWS!!! With 6x25 you get a lower and variable hashrate. With 8x100 the hashrate is very stable, and it is the highest. It doesn't matter whether the monitor is plugged into the mobo or the GPU. You must use 8x100 on all cards in the rig.


Update 4:
You can use a static diff, but if the pool is configured correctly, you don't need to, and many pools don't support static diff. The pool's vardiff algorithm can work just fine.

The static diff is easily calculated:
HR x TT = static diff

HR = your rig's hashrate (for example, I average 2839 H/s in CN-heavy; I use 2800 for the calculation).
TT = target share time in seconds (I understand this to be the time in which an accepted share occurs). It varies depending on the coin (blocktime, network hashrate, etc.). For CN, a TT of 30 to 60 seconds is recommended.
A calculator can be found at this address:
https://haven.miner.rocks/#static_diff_calc

So, as my example goes, with a rig that does 2800 H/s and a TT of 30 s, you get 2800 x 30 = 84000 static diff. If you want more accepted shares, but at a lower diff, lower the time. Anyway, setting a lower time than recommended, i.e. a lower static diff, will not increase your earnings; 2 shares at diff 25000 equal 1 share at diff 50000 in coin reward. Sending more shares at a lower diff will just choke the pool server; this technique is used by evil miners when they attack a pool/coin.
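The same calculation as a two-line Python sketch, with the example numbers used in this update:

hashrate = 2800                # rig average in H/s
target_time = 30               # seconds; 30-60 s recommended for CN
print(hashrate * target_time)  # static diff -> 84000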


Update 5:
Also see this thread for Threads and Blocks for the GTX series:
#166


Update 6:
It seems that CN-heavy (Sumo, Loki, Haven, etc.) likes a core OC too, more than CN (Monero), besides the mem OC; so try increasing the GPU core frequency, and maybe the power limit, to see the difference in hashrate.


Update 7:
After days of testing with a 7 x GTX 1060 3GB rig and an RX 580 PC, I can share the following findings:
- XMRig nVidia and XMRig AMD hash better than XMR-Stak, along with a 1% lower fee (30.05.2018):
gain +1.50%
- a static diff gives more hashes on the pool side than vardiff (see above how to calculate the static diff):
gain +0.65%
- lowering the static diff to half the pool's recommendation gives more hashes on the pool side (set for 15 s):
gain +1.97%
- CN-heavy is core and memory intensive, so OC both. It is not as power-hungry as Equihash, so you can lower the power limit. To OC the memory on an AMD card, use HWiNFO64 in sensors-only mode to see the GPU's memory errors, and use the Claymore ETH miner; you should have 0 memory errors in 30 min.

xmrig (Owner) commented Feb 17, 2018

Great manual! In addition, you can now use the online config generator/editor https://config.xmrig.com/
Thank you.

xmrig added the manual label Feb 17, 2018

ghost commented Feb 19, 2018

Pretty good manual, but it leaves me wondering: where did you get that "Blocks must be SMX multiplied by 3" from? I'm getting the best results using 32, which is SMX (4) x 8.


GrannyCryptomaster commented Feb 21, 2018

@tudde
That's the devs' recommendation; I read it somewhere in a post from xmrig and from xmr-stak. I really don't know what all the parameters mean. I just made a guide from various sources and from my experience, to save a beginner's time, to get mining and catch up quickly.
No 2 GPUs are identical, even from the same brand, so I don't think one absolutely perfect set of settings would apply to everyone's rig. Anyone can test and try all the combinations they want, if time is not an issue. But I think, if you follow this guide, you can start mining with a pretty good setup, with near-top hashrates.
For the GTX 1060 3GB, SMX is 9.

@kisakuAtYop

@GrannyCryptomaster
I am very interested in your nice post. As you mentioned, you use a "GeForce GTX 1060 3GB" for mining, and the hashrate can reach 28xx H/s. I'm a newbie at mining, and the benchmark website http://monerobenchmarks.info/searchGPU.php only shows about 5xx H/s. Is your model different from the one on the website? Please help clarify this, thanks.


GrannyCryptomaster commented Mar 15, 2018

2880 H/s for 6 GPUs... 480 H/s for 1... I mentioned 6 GPUs there...
I use power 65, temp 83, core -50 to -100 (depending on the model), mem +500. I set all cores to the same frequency; every card is unique, so they need different settings. I only have models with Hynix memory, which is the worst of all; the models with Samsung memory can go up to mem +950. This is in Afterburner and with "Force P2 state" ON. If you set this to OFF, it can go to the P0 state, but the OC settings must be lowered, because P0 is the top state (maximum power and performance). More on this can be found especially on Ethereum forums; search for nVidia Profile Inspector.

@kisakuAtYop

@GrannyCryptomaster Thanks for your information.

@TheHawkmaster

@GrannyCryptomaster thanks for the info. Question: are there certain values for threads and blocks to use that depend on the card's memory, etc.? Will adjusting the product of threads and blocks have any adverse effects on the cards (damaging them, etc.)?

I run a lot of GTX 1060 6GB cards, and I think when I start my miners the default settings are 40 blocks of 8 threads, and the SMX is 10. I will try tinkering with a block count of 20 and 30; do I just keep increasing the thread number until I see messages like "illegal memory access was encountered"?

@GrannyCryptomaster

I don't know too many technical details, but from my experience and from what I've read, maximum threads are not the best option. Ideally you should test every combination of threads and blocks for some time, but this would take forever... Yes, maximum threads is where you start getting memory errors. I think a good method is this: use the config generator from the link above, take the blocks value from it, and tweak the threads; go up with them until you get errors, then go down in steps of 2. You should see an increase in hashrate, then a decrease; use the best value. That's the theory.


TheHawkmaster commented Mar 16, 2018

@GrannyCryptomaster what's funny is that this particular rig would run the stock 40 blocks and 8 threads and still get that stupid error once in a while.

When modifying the launch parameters, will I see a noticeable increase in power consumption as well? Gonna get my Kill A Watt on this to check.

One last thing: I normally run the core at +150 and memory at +500. Are there any benefits to running the clock at -100, and should I boost the memory clock above +500, or is that going to bring on stability issues?


GrannyCryptomaster commented Mar 16, 2018

Try the blocks = SMX x 3 version; the devs recommend it. So use threads x blocks = T x 30. I don't know if it's the best, but at least for me it works. I got miner crashes in two cases: when I exceeded the max threads, the miner crashed at start; when I OC'd the mem too much, I got GPU errors and the miner froze.
Cryptonights don't need a core OC, only mem. I run with top, stable hashrates at core -25 to -100, depending on the card; I try to set the same core frequency on all. So decrease the power limit to 70-65, unlink it from temp, leave temp at the default 83 or whatever, put the core at -100, and the mem at +500. Read all the threads above. Underpowering and underclocking won't decrease hashrates too much, but the hashrates will be more stable, you will use less power, and the GPU will run cooler.
You can try different settings for the core to get the best hashrates; go from -100 and up, one GPU at a time. Mem depends on the manufacturer: Hynix can take +500, Micron and Samsung double that. First, leave the core at 0 and the power limit at 65, and tweak the mem until the hashrate drops, the miner crashes, or Windows freezes. Go back 50 and use that. If it is stable for a day, then that's it. Now tweak the core.

@TheHawkmaster

@GrannyCryptomaster awesome, thanks for those tips. I usually run my cards at 65% power, but I've lowered them down to 50% until I install some more dedicated circuits in my mining cave.

All this tinkering is fun, getting the maximum possible hashrate out of the cards, and better yet, the highest hash per watt consumed.

I will report some of my findings about threads x blocks, using blocks = 30. So far one of my miners likes 40x30, gets less hash with 48x30 and 60x30, and of course won't even run at 120x30 :)

@7urboPitt

Where in the hell are the instructions??? exe file, config file... nothing in GitHub's xmrig-nvidia-master zip archive. How do you work this software? You start it with what, where, and config what file, or GUI, or jinx dust and straw hay whatsoever... where the hell on the internet are the layman's instructions???

@7urboPitt

Not even a text readme???

@7urboPitt

I downloaded the file, so what the hell must I do now? I have a keyboard and a mouse ready!

@7urboPitt

A bat file???

@7urboPitt

What?

@7urboPitt

Fuck it... nerds, help!


GrannyCryptomaster commented Apr 2, 2018

Use this config generator: https://config.xmrig.com/
Download the release you need, from Code > Releases.
Put the config.json file generated earlier in the same folder as the exe file.
Double-click start.cmd.
You need to have CUDA 9.1 drivers and vc-redist 2017 (Visual C++) installed.

@GrannyCryptomaster

Read the updates in the first post!
Be careful with new nVidia Windows drivers; they may create problems. I use the newest 388.xx version that supports CUDA 9.1 (that is 388.71), and the CUDA 9.1 drivers; I found this very stable. Try to avoid the 39x.xx drivers.


psaux01 commented Apr 6, 2018

Can you help me with my nVidia 1050 Ti 4GB please? I need to get the maximum H/s with this card without overclocking.


psaux01 commented Apr 6, 2018

What pool port / diff do you recommend for this card?


GrannyCryptomaster commented Apr 6, 2018

Read the guide. Use the autoconfig from the link above. Tweak the threads +/-2 until you get the best H/s. Use the port with the lowest diff. It's not a constant, just the starting diff, from which the pool will recalculate the diff with each new block, according to the number of shares submitted and the time taken to submit them. After a few hours you will see that the diff no longer changes much (it is shown by the miner); that's the diff for you. You can use that to set a fixed diff, one that will not change, if the pool supports it. On CN (cryptonight) pools, the fixed diff can be set with "wallet+diff" in your miner's config file.

Example: wallet address = 12345aaa, desired diff = 15000; you will have "12345aaa+15000" as your wallet address. You will always receive work at diff 15000.
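In config.json, using the example values above (pool URL is a placeholder), the pool entry would look like:

{
    "url": "pool:port",
    "user": "12345aaa+15000",
    "pass": "x",
    "keepalive": true,
    "nicehash": false
}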

Why would you want a fixed diff? Because, if you use a miner with a "dev fee" like xmr-stak or xmrig, the miner switches from time to time to the dev's pool and the dev's wallet, and you lose the auto-recalculated diff from the pool; the pool will resend you work at the starting diff specified by the port and start auto-adjusting again. You get paid by the number of shares accepted by the pool and the diff at which they were produced. It's like a weight: if the shares are sent for a bigger diff, they weigh more and you get more coins as a reward. So if you send more shares at the lower diff until it gets readjusted, you are paid less. Another problem: when the diff is auto-adjusted, it goes up and down until it finds the optimum spot. When it goes up, it may surpass the processing power of your GPU, and the shares may not be submitted in time for the pool to accept them. You will see "new work received from the pool" many times, but no accepted shares between those "new work..." lines. In the long run, from what I experienced when I mined CN, it is better to use a fixed diff, even though many say the difference in rewards is not big (around 1%). Why is the difference not big? When the diff is low, you send more shares at a lower weight; when the diff is high, you send fewer shares, but at a greater weight. So it's like an equilibrium. That's the theory. In practice, as I said... set a fixed diff. ;)
And OC your cards; it is not a problem, they are made for extra load, especially the 1000 series. The good thing with CN is that you only have to OC the mem; you can reduce power to 70%, and the core can be left at 0 or reduced to as low as -100. The hashrate is not affected much. The power is needed by the core; the mem doesn't use that much. And the CN algo is memory intensive, not core intensive.

@TheHawkmaster

Hey Granny, do you have a guide on how to compile the miner in a Windows environment?

@GrannyCryptomaster

Nope, I didn't try to compile miners. Maybe the dev can help you.


psaux01 commented Apr 6, 2018

Thank you very much!!


woodaxed commented Apr 8, 2018

I've got a 12-card rig, 7 AMD cards and the rest GTX 1050 Ti. I cannot get it to run at all. I only want to mine XMR with the nVidia cards, and sorting out the config file is a nightmare, tbh.

@GrannyCryptomaster

Send me your config file so I can take a look. You can delete the wallet address.


woodaxed commented Apr 8, 2018

tbh I've given up and deleted the lot; wasted too much time with it. If people want us to use their programs, then they should make them more user friendly.


woodaxed commented Apr 8, 2018

I've tried 3 different guides and none of them work.

@GrannyCryptomaster

GUIDE UPDATED!

@Andyc001

Hi, I need some help please. I have set up the start batch file and the JSON config file. XMRig runs correctly, except I'm not getting 'use pool .......' and my hash rates are 0. Any help would be appreciated.

[screenshot of the miner console output]


Andyc001 commented Apr 17, 2018

The start.bat file text is below:

@echo off
REM %~dp0 expands to the folder this .bat file lives in, so the paths work from anywhere
start /low %~dp0\xmrig-nvidia.exe --config=%~dp0\config.json

and config.json is below:

{
    "algo": "cryptonight",
    "background": false,
    "colors": true,
    "donate-level": 1,
    "log-file": null,
    "print-time": 60,
    "retries": 5,
    "retry-pause": 5,
    "threads": [
        {
            "index": 0,
            "threads": 30,
            "blocks": 24,
            "bfactor": 6,
            "bsleep": 25,
            "affine_to_cpu": false
        },
        {
            "index": 1,
            "threads": 30,
            "blocks": 24,
            "bfactor": 6,
            "bsleep": 25,
            "affine_to_cpu": false
        }
    ],
    "pools": [
        {
            "url": "pool.xmr.pt:5555",
            "user": "4Aww4HM8Exudg5k4eAHSYB5cLVPDvEzgfV31X77wrSJ4MHRgrwxBqsLQgne1zQHknxNYUWGkQD7xVQmKpVTy2vm32tqfkME",
            "pass": "x",
            "keepalive": true,
            "nicehash": false,
            "variant": -1
        }
    ],
    "api": {
        "port": 0,
        "access-token": null,
        "worker-id": null
    }
}

Again, any advice much appreciated :-)


GrannyCryptomaster commented Apr 18, 2018

It seems you aren't logging in to the pool and you don't receive any work.
First check the firewall; maybe it is blocking your connections.
Try another pool, and choose the port with the lowest difficulty. I see you are on Windows; please use bfactor x bsleep: 8 x 100. Your hashrate will increase and be more stable. You don't need to edit start.bat; the one that comes in the zip file is OK, and the miner will use the config.json file automatically (you can specify a config file if you have more than one).
You should use the autoconfig generator and then modify it. Maybe you missed something.

@GrannyCryptomaster

Manual updated with info about static diff.


psaux01 commented Apr 30, 2018

Try using DDU, and afterwards reinstall the latest driver please, and let me know what happens. And install CUDA 8.

@RooiWillie

Settings for nVidia:
GTX 1050 Ti: 32x24 8x20, OC settings: +100 core, +725 memory, 80% power limit - 337 H/s
GTX 1050: 32x20 8x20, OC settings: +100 core, +525 memory, 80% power limit - 299 H/s


sureyea commented May 17, 2018

Here I found another guide for XMRig, one that explains CPU and AMD usage as well.

https://coinguides.org/xmrig-beginners-guide/

GrannyCryptomaster changed the title from "How to setup the XMRig nVidia miner for newbies" to "How to setup the XMRig nVidia miner for newbies - Manual" May 30, 2018
@GrannyCryptomaster

New updates to the manual in the first post.

GrannyCryptomaster changed the title from "How to setup the XMRig nVidia miner for newbies - Manual" to "MANUAL - How to setup XMRig nVidia miner for newbies" Jun 9, 2018
@hosseinAghahosseini

Will there be a release for CUDA 10?

@GrannyCryptomaster

It's working fine with the latest CUDA and nVidia drivers.
I have CUDA 10 and nVidia 416.34, and the hashrates are the same as with CUDA 9 and the old drivers.


semeion commented Jul 3, 2019

Excellent manual! Thanks!

It would be nice to have an explanation of how to overclock nVidia cards on Linux; every tutorial I found on the web needs to fire up X11 to set those coolbits. Maybe it's possible to OC without X11, using only nvidia-smi or some console tool...

Anyway, good job! :D

Spudz76 (Contributor) commented Jul 3, 2019

Nope, Linux nVidia clocking 101% REQUIRES Xorg, and nVidia has said there is no changing that.

HOWEVER, if you have their actual computation cards (real Teslas), then you CAN use nvidia-smi and its application-clocks feature to set clocks. But again, only on the professional AI compute-only GPU cards, or a GTX 970 flashed to look like a Tesla... Every consumer GPU will say app-clocks are unsupported (along with half of the other features of nvidia-smi).
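For the record, on cards that support it, that looks roughly like this (the clock values are placeholders; the first command lists what your card actually accepts, and consumer GPUs will simply answer that the setting is not supported):

nvidia-smi -q -d SUPPORTED_CLOCKS   # list the valid memory,graphics clock pairs
nvidia-smi -ac 2505,875             # set application clocks (memory,graphics in MHz)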


semeion commented Jul 3, 2019

Do you know if the GTX 1050 Ti can be overclocked? It seems like it can't...

Spudz76 (Contributor) commented Jul 3, 2019

Definitely it can, but maybe not in the P2 lock (which is permanent on Linux). They would probably clock fine in Windows with the P0 lock removed via Profile Inspector. Many cards have P2 set to not allow offsetting, some have it open; it's a dice roll which one you end up with. I have PNY cards that let me edit P2 (and their P2 and P0 clock identically anyway), but also some MSI cards that do not (and I was forced to run Windows, or take the 15% speed hit).

nVidia decided to force P2 with Pascal and up, since P0 could give errors, and if you are using CUDA for what CUDA is for (AI and image processing) then you can't have errors at all, because you can't double-check the result (like you can with mining, where you toss the invalid ones). Analysis of images or whatever (computer vision) has to be known correct; by limiting clocks it can literally never hit any bleeding-edge errors, as results cannot be rechecked (they must be trusted). If you ran computer vision with overclocking, it would screw up and not know that it screwed up (like running a miner with recheck turned off): basically a hallucinating image processor. So they locked everything to P2 (except in Windows there is a hole in the profiles where you can kill that lock and run "unsafe for compute" P0).

I have tried everything possible to locate and enable the same hidden profile option in the Linux drivers, but it is not there (and nVidia had said it wasn't there, but I double-checked anyway).

@Moschus88

Can someone please help me with the following settings:

"rx": [
    {
        "index": 0,
        "threads": 32,
        "blocks": 36,
        "bfactor": 8,
        "bsleep": 100,
        "affinity": -1,
        "dataset_host": true
    },

for a GTX 1060 3GB?
I always get:
thread #0 failed with error <cryptonight_extra_cpu_set_data>:330 "out of memory"

BR and thanks

@gputweaker

> Can someone please help me with the following settings: "rx": [ ... ] for a GTX 1060 3GB? I always get: thread #0 failed with error <cryptonight_extra_cpu_set_data>:330 "out of memory"

Edit your PC's virtual memory, set it to 16366 MB, and there will be no issue. I ran into the same errors with my GTX 1060 6GB, even after tweaking blocks and threads across any range of numbers... so I decided to edit the PC's virtual memory and restarted the PC. Ran XMRig again: beautiful!

@flavored101

Thanks for opening this thread, GrannyCryptomaster. Although it's fairly old, and I haven't mined since 2015, I found good-to-know information in every question/answer. Thanks for the thread :-)


kamadom commented Mar 11, 2022

Thanks for an excellent "Beginner Guide". Following your guide, and a few others, I am running XMRig on headless Raspberry Pis for fun and for the education of my grandsons. The test miners are up and running XMRig successfully, as verified by the console window at startup. I am accessing the headless miners using SSH or PuTTY.

I need to better understand the instructions about checking hashrates in XMRig. Supposedly I can check the hashrate, results and health of my CPU and GPU using two methods: one from the miner console window, and the other by using the API.

My questions are:

(1) How do I access the console window by command line on a "headless miner"?

(2) While XMRig supports an HTTP API via a built-in HTTP server, how do I set this up on a "headless miner"?

Note: the Raspbian operating system has a built-in VNC server that might help. If my Pi is headless (not plugged into a monitor) or not running a graphical desktop, VNC Server can still give me graphical remote access using a virtual desktop.
