Create fully optimised version of plot #4

Open · wants to merge 24 commits · 5 participants

Conversation

k06a commented Jun 8, 2017

No description provided.

k06a referenced this pull request in bhamon/gpuPlotGenerator Jun 13, 2017

Closed

Fully optimised plots #19

BobbyT commented Jun 13, 2017

Hi,

Did you test it under Linux? If so, would you tell me which distribution?

k06a commented Jun 14, 2017

  1. macOS
  2. CentOS 6.5

BobbyT commented Jun 14, 2017

Hmm, I didn't get it working on CentOS 7, Ubuntu 17.04, or Debian 8 (compilation warnings).
It started to allocate/buffer more RAM than specified in the -m parameter,
and the threads never started.

I will try CentOS 6.5

k06a commented Jun 14, 2017 edited

@BobbyT The Debian issue can be solved: #1
Can you show me your command? -m 4096 will allocate 1 GB of RAM.

BobbyT commented Jun 14, 2017

I think I tried to give it 32G and later 23G (I thought it might work asynchronously).

That's what I used last:
-s 0 -n 30052352 -m 94208 -t 24

k06a commented Jun 14, 2017

@BobbyT is there any error printed?

k06a commented Jun 14, 2017

@BobbyT it started to allocate the whole file, which is 7 TB; you need to wait a few minutes, I think. Can you watch the file size or the drive's free space to monitor progress?

BobbyT commented Jun 14, 2017

I will try it later this day again.

But I let it run last night, and this morning Ubuntu couldn't wake up. After a reboot, the file was still 0K in size.

k06a commented Jun 14, 2017

@BobbyT I would like the file to be allocated as fast as possible, without zeroing of course. Can you tell me which resize method was used? It should be printed at startup.

k06a commented Jun 14, 2017 edited

@BobbyT in the new version you don't really need a huge amount of RAM. I think there will be no difference between using 4 GB or 32 GB. It fills half of the RAM, starts writing it to the disk while filling the second half simultaneously, then waits for both the writing and the filling to finish. Then it writes the second half to the disk while filling the first half with new nonces, again waiting for both processes to finish, and so on.

And one more idea: a speed of 12,000 nonces per minute requires 50 MB/s of disk bandwidth, so for an average disk the writing will usually be slower than generating on the GPU. Can you tell me your disk speed and generation speed?

BobbyT commented Jun 15, 2017 edited

Ubuntu 17.04:
This time I started with 4GB of RAM
I don't know the nonces per minute, I never got to that point, but the original mdcct would give me around 11k.
Disk speed is around 160 MB/s.

sudo ./plot -k 123 -x 1 -d /media/drive/ -s 0 -n 30097408 -m 16384 -t 24
Using SSE4 core.
Total generation steps: 3674
Creating plots for nonces 0 to 30097408 (7895 GB) using 4096 MB memory and 24 threads
Using ftruncate to expand file size to 7348GB

RAM Usage:

free -m
              total        used        free      shared  buff/cache   available
Mem:          48281        1236         268          38       46775       46452
Swap:          2047           0        2047

Compiling :

--- Compiling for 64-bit arch ---
CFLAGS=-D LINUX -D AMD64 -O2 -Wall -D_FILE_OFFSET_BITS=64 -m64
gcc -Wall -m64 -c -o shabal64.o shabal64.s
gcc -Wall -m64 -c -O2 -march=native -o mshabal_sse4.o mshabal_sse4.c
gcc -Wall -m64 -c -O2 -march=native -mavx2 -o mshabal256_avx2.o mshabal256_avx2.c
gcc -D LINUX -D AMD64 -O2 -Wall -D_FILE_OFFSET_BITS=64 -m64 -c -o helper.o helper.c		
gcc -D LINUX -D AMD64 -O2 -Wall -D_FILE_OFFSET_BITS=64 -m64 -o plot plot.c shabal64.o mshabal_sse4.o mshabal256_avx2.o helper.o -lpthread -std=gnu99 -DAVX2
plot.c: In function ‘main’:
plot.c:659:2: warning: ignoring return value of ‘ftruncate’, declared with attribute warn_unused_result [-Wunused-result]
  ftruncate(ofd, file_size);
  ^~~~~~~~~~~~~~~~~~~~~~~~~
gcc -D LINUX -D AMD64 -O2 -Wall -D_FILE_OFFSET_BITS=64 -m64 -o optimize optimize.c helper.o
gcc -D LINUX -D AMD64 -O2 -Wall -D_FILE_OFFSET_BITS=64 -m64 -DSOLO -o mine mine.c shabal64.o helper.o -lpthread
gcc -D LINUX -D AMD64 -O2 -Wall -D_FILE_OFFSET_BITS=64 -m64 -DURAY_POOL -o mine_pool_all mine.c shabal64.o helper.o -lpthread
gcc -D LINUX -D AMD64 -O2 -Wall -D_FILE_OFFSET_BITS=64 -m64 -DSHARE_POOL -o mine_pool_share mine.c shabal64.o helper.o -lpthread

k06a commented Jun 15, 2017 edited

@BobbyT looks like it is still expanding the file. Can you track the free space on this disk?

It's strange that it is not using fallocate to expand the file.

BobbyT commented Jun 15, 2017 edited

I can give you iostat, but I can't check via the GUI or df.
If I try to open the GUI, it just freezes.
If I execute df, the prompt just blinks and nothing happens.

Question: Is it normal that my whole 48 GB of RAM is buffered/cached?

Results for about 50 minutes running:

iostat  /dev/sdb2 -dx
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0,29    0,01    0,54    7,07    0,00   92,09

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdb2
                272,61        41,79    176993,59     133780  566597200

k06a commented Jun 15, 2017

@BobbyT thanks for your feedback. It looks like the OS is trying to allocate the whole file with ZEROING, and virtual memory grows while zeroing. I should fix this behaviour.

k06a commented Jun 15, 2017

@BobbyT what file system are you using on this volume?

BobbyT commented Jun 15, 2017

Currently exFAT.
Plotting: HP Z800 workstation
Mining: Mac mini late 2012

Therefore I thought exFAT was the best solution.

BobbyT commented Jun 15, 2017

One additional piece of info: I can't kill the process once it has started.

k06a commented Jun 15, 2017

@BobbyT please try newest version I've just pushed.

BobbyT commented Jun 15, 2017 edited

just started.
current result:

sudo ./plot -k 123 -x 1 -d /media/drive/ -s 0 -n 30097408 -m 16384 -t 24
Using SSE4 core.
Total generation steps: 3674
Creating plots for nonces 0 to 30097408 (7895 GB) using 4096 MB memory and 24 threads
0.03% Percent done. 15777 nonces/minute, 31:47 left (can restore from step 0) 

24 threads started, then disappeared.
RAM usage looked good in the beginning, but now this again:
about 5 GB used, which is good, but about 42 GB buffered/cached.

free -m
              total        used        free      shared  buff/cache   available
Mem:          48281        5253         265          30       42761       42443
Swap:          2047           0        2047


The file size is now 100 GB after 10 minutes.
ls and df take a few seconds to execute.

If this is expected behavior, then I'll wait and see.

nmon output:

┌nmon─16f──────[H for help]───Hostname=HP-Z80Refresh= 2secs ───17:33.17──────────────────────────────────────────────────────────┐
│ Disk I/O ──/proc/diskstats────mostly in KB/s─────Warning:contains duplicates──────────────────────────────────────────────────────────│
│DiskName Busy  Read WriteMB|0          |25         |50          |75       100|                                                         │
│sda        0%    0.0    0.0|>                                                |                                                         │
│sda1       0%    0.0    0.0|>                                                |                                                         │
│sda2       0%    0.0    0.0|>                                                |                                                         │
│sdb      100%    0.0  189.6|RWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWW>                                                         │
│sdb1       0%    0.0    0.0|>                                                |                                                         │
│sdb2     100%    0.0  189.6|RWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWW>                                                         │
│sdc        0%    0.0    0.0|>                                                |                                                         │
│sdc1       0%    0.0    0.0|>                                                |                                                         │
│sdc2       0%    0.0    0.0|>                                                |                                                         │
│sdc3       0%    0.0    0.0|>                                                |                                                         │
│sdc5       0%    0.0    0.0|>                                                |                                                         │
│Totals Read-MB/s=0.0      Writes-MB/s=379.2    Transfers/sec=585.8                                                                     │
│────────────────────────────────────────────────────────────────────────────────

k06a commented Jun 15, 2017 edited

@BobbyT please try the new version; it uses fsync to avoid the cache growing. I am not sure 40 GB of cached memory is normal. It looks like unexpected caching.

BobbyT commented Jun 15, 2017 edited

First I plotted 10GB:

-s 0 -n 32768 -m 16384 -t 24
Using SSE4 core.
Total generation steps: 4
Creating plots for nonces 0 to 32768 (8 GB) using 4096 MB memory and 24 threads
100.00% Percent done. 16277 nonces/minute, 0:00 left (can restore from step 3)               
Finished plotting.

but it left buffered memory


free -m
              total        used        free      shared  buff/cache   available
Mem:          48281        5269       26063          26       16947       42432
Swap:          2047           0        2047

I cleansed with

echo 3 > /proc/sys/vm/drop_caches

now I'm plotting 100GB

-s 0 -n 409600 -m 16384 -t 24
Using SSE4 core.
Total generation steps: 50
Creating plots for nonces 0 to 409600 (107 GB) using 4096 MB memory and 24 threads
44.00% Percent done. 2656 nonces/minute, 1:29 left (can restore from step 21)  

And the nonces per minute are down from 15k to 2.5k. It also needs a lot more time than expected.
It's still running, but the RAM is fully buffered again.

free -m
              total        used        free      shared  buff/cache   available
Mem:          48281        5289         307          26       42684       42412
Swap:          2047           0        2047

I think the 10 GB plot couldn't fill the whole RAM with buffers because it simply didn't run long enough.

Disk speed also seems down to under 20 MB/s, sometimes under 10 MB/s.

k06a commented Jun 15, 2017

@BobbyT please try one more version with a possible fix for the unexpected buffering.

BobbyT commented Jun 15, 2017

Result 12 GB Plot with 4GB RAM

time sudo ./plot -k 123 -x 1 -d /media/drive/ -s 0 -n 49152 -m 16384 -t 24
Using SSE4 core.
Total generation steps: 6
Creating plots for nonces 0 to 49152 (12 GB) using 4096 MB memory and 24 threads
100.00% Percent done. 16219 nonces/minute, 0:00 left (can restore from step 5)               
Finished plotting.

real	3m21,350s
user	70m57,636s
sys	1m3,240s

free -m
              total        used        free      shared  buff/cache   available
Mem:          48281        1019       22035          35       25226       46674
Swap:          2047         137        1910

There is still buffered RAM left; is this okay or not?

Do you have any test cases? Maybe I'm doing something wrong?

k06a commented Jun 15, 2017

That may be unused cache. Can you try again to create a huge file and monitor what happens?

BobbyT commented Jun 15, 2017 edited

120 GB Plot with 44G RAM test

Commit: 87bed9a Add fsync to prevent file caching

time sudo ./plot -k 123 -x 1 -d /media/drive/ -s 0 -n 360448 -m 180224 -t 24
[sudo] password for malisa: 
Using SSE4 core.
Total generation steps: 4
Creating plots for nonces 0 to 360448 (94 GB) using 45056 MB memory and 24 threads
100.00% Percent done. 5685 nonces/minute, 0:15 left (can restore from step 3)               
Finished plotting.

real	55m16,485s
user	520m10,868s
sys	5m37,564s

Commit: 8b300a2 Disable buffering

time sudo ./plot -k 123 -x 1 -d /media/drive/ -s 0 -n 360448 -m 180224 -t 24
Using SSE4 core.
Total generation steps: 4
Creating plots for nonces 0 to 360448 (94 GB) using 45056 MB memory and 24 threads
100.00% Percent done. 15366 nonces/minute, 0:05 left (can restore from step 3)               
Finished plotting.

real	27m34,118s
user	489m4,772s
sys	6m24,408s

free -m
              total        used        free      shared  buff/cache   available
Mem:          48281         717       45025          34        2538       47055
Swap:          2047         386        1661

df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             24G     0   24G   0% /dev
tmpfs           4,8G  9,9M  4,8G   1% /run
/dev/sdc5       192G  6,5G  175G   4% /
tmpfs            24G  8,0K   24G   1% /dev/shm
tmpfs           5,0M  4,0K  5,0M   1% /run/lock
tmpfs            24G     0   24G   0% /sys/fs/cgroup
tmpfs           4,8G  136K  4,8G   1% /run/user/1000
/dev/sdb2       7,3T   89G  7,2T   2% /media/drive

I think there is an undeniable improvement in time.
I also think the RAM freed afterwards is due to the large stagger size; there is simply no space left for buffering.

Question: Is the optimized file also smaller?
Plotter: 94 GB, written 89 GB

Currently running the same test with only 4 GB of RAM.

BobbyT commented Jun 16, 2017 edited

120 GB Plot with 4GB RAM Test:

time sudo ./plot -k 123 -x 1 -d /media/drive/ -s 0 -n 491520 -m 16384 -t 24
Using SSE4 core.
Total generation steps: 60
Creating plots for nonces 0 to 491520 (128 GB) using 4096 MB memory and 24 threads
100.00% Percent done. 16272 nonces/minute, 0:00 left (can restore from step 59)               
Finished plotting.

real	30m41,668s
user	709m48,980s
sys	2m30,272s

free -m
              total        used        free      shared  buff/cache   available
Mem:          48281         848        4446          34       42985       46844
Swap:          2047         363        1684

Unfortunately there is still buffered RAM left.

(screenshot: buffered RAM)

k06a commented Jun 16, 2017

OK, it looks like 8b300a2 fixed the issues. I think the OS keeps part of the file buffered to avoid disk IO where possible, but it is not in active use. That is the system's responsibility, so this buffered RAM will be purged whenever any app needs it.

k06a commented Jun 16, 2017

@BobbyT thanks for your testing! A 128 GB file (491520 nonces) at a speed of 16k nonces/min should take about 30 minutes, and that matches your log: real 30m41,668s. So I think this is the best possible result; there are no IO delays. It may take up to 30 hours to fill 8 TB. Keep in mind to use the -r flag with an argument to restore plotting in case of any IO failure.

BobbyT commented Jun 16, 2017

Plotting 1024GB

time sudo ./plot -k 123 -x 1 -d /media/drive/ -s 0 -n 4128768 -m 172032 -t 24
Using SSE4 core.
Total generation steps: 48
Creating plots for nonces 0 to 4128768 (1083 GB) using 43008 MB memory and 24 threads
100.00% Percent done. 16310 nonces/minute, 0:05 left (can restore from step 47)               
Finished plotting.

real	254m56,675s
user	5963m13,276s
sys	21m29,068s

I used less memory because Ubuntu started swapping and I thought it must be caused by not having enough free RAM. I actually had to disable swapping (swappiness), which also made it a bit faster.

k06a commented Jun 16, 2017

@BobbyT I see you plotted 4128768 nonces at 16310 nonces/minute, which is:

4128768 / 16310 = 253 minutes

and you had:

real	254m56,675s

Looks like there is no HDD delay at all. It plots at full CPU speed.

BobbyT commented Jun 16, 2017

Currently I'm plotting the full 8 TB HDD. If I use more than 43G of RAM (48G installed), swapping starts and nmon shows activity on the system drive, too.

I had to restart the 1024 GB test, set swappiness=0, and decrease the stagger size in order to prevent swapping. Without that, the nonces/minute start dropping.
My system drive isn't an SSD; this is an old workstation. So maybe that's just me.

k06a commented Jun 16, 2017

@BobbyT I don't know how to avoid this buffering right now. I work on macOS, and I saw no buffering while plotting 1 TB with 8 GB of RAM out of 16 GB total.

BobbyT commented Jun 16, 2017

@k06a what kind of Mac do you use? What are your hardware specs, nonces per minute, and time results?

k06a commented Jun 16, 2017 edited

@BobbyT I've used a MacBook Pro 15" Retina 2015 with the AVX2 core and 8 threads: 12000 nonces/minute. Time was equal to nonces/speed. 12k was the top speed, and the Mac kept it until plotting finished. I saw several gigabytes of free RAM while plotting.

hheexx commented Jun 16, 2017

Error while file lseek (errno 22 - Invalid argument).

I receive this error after some time for plots > 2TB

k06a commented Jun 16, 2017

@hheexx what OS are you using?

hheexx commented Jun 16, 2017

@k06a Ubuntu server 16.04

k06a commented Jun 16, 2017 edited

@hheexx is it the 32-bit or 64-bit version? What file system are you using on the plotted drive? Try to recover with the -r option from the last successful step. Is it okay, or does it still fail with the error?

hheexx commented Jun 16, 2017 edited

@k06a
It crashes before the first recovery segment is completed.
It's ext4, a 6 TB partition. 64-bit, of course.

hheexx commented Jun 16, 2017

2TB file

root@Ubuntu-1604-xenial-64-minimal /mnt/m/plots # e4defrag -c 533994803042953104_21800000_7659520_7659520
now/best size/ext
533994803042953104_21800000_7659520_7659520
490558/14960 3997 KB

Total/best extents 490558/14960
Average size per extent 3997 KB
Fragmentation score 1
[0-30 no problem: 31-55 a little bit fragmented: 56- needs defrag]
This file (533994803042953104_21800000_7659520_7659520) does not need defragmentation.
Done.

zmeyc commented Jun 17, 2017 edited

@k06a Works on Ubuntu 16, but the resulting file is fragmented.
@hheexx The idea of plot file optimization is to write scoops of the same block sequentially to avoid seeking.
The plotter writes them in cycles, i.e. block 0 scoops, block 1 scoops, etc.
Then it seeks to the end of the previous block 0 scoops and writes more block 0 scoops.
Then it seeks to the end of the previous block 1 scoops and writes more block 1 scoops.
If the file is not completely pre-allocated, block 1 scoops end up physically after block 0 scoops, and the next chunk of block 0 scoops is not merged with the previous chunk.
So in the end the optimization is useless if the file is not pre-allocated properly.
I noticed that while plotting, df still shows free space available, so ftruncate() doesn't really allocate the disk space; it only sets the file size. Zeroing out the entire file before plotting would probably fix this problem.

Btw, the ext4 superblock is repeated multiple times in the middle of the disk, leading to additional fragmentation. Also, the inode count and some other parameters can be tweaked to improve fragmentation / disk space usage even further.

k06a commented Jun 17, 2017 edited

@zmeyc awesome! Thanks for xfs_io -f -c "fiemap -v" /mnt/test/file1, I'm trying to make it work under macOS. Let's discuss in Telegram: https://t.me/k06aa

k06a commented Jun 17, 2017

@zmeyc please try the newest version; I have added file preallocation in 4096 steps to avoid a system freeze...

BobbyT commented Jun 17, 2017 edited

8TB Plot

Ubuntu 17.04 64bit
exFat

Commit: 8b300a2 Disable buffering

time sudo ./plot -k 123 -x 1 -d /media/drive/ -s 0 -n 30105600 -m 172032 -t 24
Using SSE4 core.
Total generation steps: 350
Creating plots for nonces 0 to 30105600 (7897 GB) using 43008 MB memory and 24 threads
100.00% Percent done. 16312 nonces/minute, 0:05 left (can restore from step 349)               
Finished plotting.

real	1846m54,421s
user	43428m34,844s
sys	98m50,084s

 df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             24G     0   24G   0% /dev
tmpfs           4,8G  9,9M  4,8G   1% /run
/dev/sdc5       192G  6,9G  175G   4% /
tmpfs            24G   48K   24G   1% /dev/shm
tmpfs           5,0M  4,0K  5,0M   1% /run/lock
tmpfs            24G     0   24G   0% /sys/fs/cgroup
tmpfs           4,8G  144K  4,8G   1% /run/user/1000
/dev/sdb2       7,3T  7,2T  102G  99% /media/drive

BobbyT commented Jun 17, 2017

Current problem: I can only mount it as read-only; repair isn't possible.

k06a commented Jun 17, 2017

@BobbyT thanks for reporting!

Due to code issues, the created plot is fragmented on some file systems. But it looks like exFAT does not support sparse files: https://msdn.microsoft.com/ru-ru/library/windows/desktop/ee681827(v=vs.85).aspx. That means your plot should not be fragmented at all.

k06a commented Jun 17, 2017

@BobbyT what does that mean?

Current Problem: I can only mount it as readonly, repair isn't possible

Is it a new HDD or an old one?

k06a commented Jun 17, 2017 edited

I had 4 drives (1TB, 1TB, 1.5TB, 2TB), and 2 of them died while I was filling them with plots. These were very old hard drives, I think 5-7-year-old WD Greens.

zmeyc commented Jun 17, 2017

@BobbyT @k06a
An 8 TB drive should be pretty new. What does fsck say?
Btw, did you specify your account's numeric ID in -k? 123 is for testing only.

Cooling down HDDs is a must, especially when plotting; check their temperature & SMART info.

BobbyT commented Jun 17, 2017

It's a new one, a Seagate Archive.
Windows chkdsk helped and repaired it.
macOS couldn't repair it, and afaik Linux doesn't have repair tools for exFAT, or does it?

Now it mounts without issues in macOS.

@zmeyc I use -k numeric id, but before posting results here, I replace personal data.

zmeyc commented Jun 17, 2017

@BobbyT I had lots of issues with exFAT on OS X: it sometimes refused to mount; most of the time this could be fixed by plugging the HDD into a Windows PC, but sometimes it lost all data. I recommend using native filesystems, ext4 or HFS+, if possible. But ext4 needs to be tuned for storing large files; there are lots of details, and I'll probably write a post about it.

zmeyc commented Jun 17, 2017 edited

@k06a @BobbyT In particular (for ext4): check that the partition is properly aligned, increase inode_ratio, disable journaling (wins 10% extra disk space), disable file access time updates on mount, and disable HDD sleep (HDDs die faster if they park every minute, and Seagate does this, at least on external ones; you can hear clicking noises on every block start). Cool them down; a large room fan works best. :D I managed to overheat mine when the small internal fan died during plotting; the HDD is still alive but SMART shows errors.

BobbyT commented Jun 18, 2017 edited

This time I'm using HFS+ as the filesystem, without journaling.
Commit: 92cfbb7

When I started the process, the drive filled up to 99% (this took over 16 hours) and then plotting started.

That's the result: Error while file write (errno 28 - No space left on device).

time sudo ./plot -k 123 -x 1 -d /media/drive/  -s 30105600 -n 30105600 -m 172032 -t 24
Using SSE4 core.
Total generation steps: 350
Creating plots for nonces 30105600 to 60211200 (7897 GB) using 43008 MB memory and 24 threads
Using ftruncate to expand file size to 7350GB
Resizing file to 7892002406400 of 7892002406400 (100.00) Done!
1.43% Percent done. 16312 nonces/minute, 30:24 left (can restore from step 4)               Error while file write (errno 28 - No space left on device).

real	1090m38,665s
user	748m48,108s
sys	63m54,984s

k06a commented Jun 18, 2017

@BobbyT wow, huge allocation time! I just found and fixed a bug with allocation by ftruncate. Can you try the newest version and allocate the file from the beginning? Sorry for the inconvenience.

BobbyT commented Jun 18, 2017

@k06a Okay, I'm running it again. What's the expected allocation time?

k06a commented Jun 18, 2017

@BobbyT just pushed commit e52c650

k06a commented Jun 18, 2017

I think it should allocate 1 TB in a few minutes.

BobbyT commented Jun 18, 2017

Doesn't it depend on the speed of the hard drive?

k06a commented Jun 18, 2017

@BobbyT it depends; I gave you the estimated time for 50 MB/s HDDs.

BobbyT commented Jun 18, 2017 edited

OK, I stopped it because it took longer than you said it would.
After 24 minutes, 242G had been written,
although nmon constantly showed above 150 MB/s.

time sudo ./plot -k 123 -x 1 -d /media/drive/  -s 30105600 -n 30110000 -m 172032 -t 24
Using SSE4 core.
Total generation steps: 350
Adjusting total nonces to 30191616 to match stagger size
Creating plots for nonces 30105600 to 60297216 (7920 GB) using 43008 MB memory and 24 threads
Using ftruncate to expand file size to 7371GB
Resizing file to 258923298816 of 79145509^T^C4 (3.25)

real	24m20,076s
user	0m0,000s
sys	1m59,376s

 df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             24G     0   24G   0% /dev
tmpfs           4,8G  9,9M  4,8G   1% /run
/dev/sdb5       192G  7,0G  175G   4% /
tmpfs            24G   72K   24G   1% /dev/shm
tmpfs           5,0M  4,0K  5,0M   1% /run/lock
tmpfs            24G     0   24G   0% /sys/fs/cgroup
tmpfs           4,8G  148K  4,8G   1% /run/user/1000
/dev/sda2       7,3T  242G  7,1T   4% /media/drive

zmeyc commented Jun 18, 2017

@k06a @BobbyT Could you make a small file and compare it with the one made by the original plotter? It produces a different file for me.

./plot -k 123 -x 1 -d . -s 0 -n 500 -m 500 -t 7

Binary files can be compared with cmp.

zmeyc commented Jun 18, 2017

@k06a The latest commit didn't fix the problem; the file produced by the new plotter is twice as big as the original one: 262144000 bytes instead of 131072000.

BobbyT commented Jun 18, 2017

905e815 plot.c line 682, https://github.com/k06a/mjminer/blob/master/plot.c#L682
There is a little bug.

k06a commented Jun 19, 2017 edited

@BobbyT sad bug, fixed it.

hheexx commented Jun 19, 2017

plot.c:682:51: error: ‘FALLOC_FL_INSERT_RANGE’ undeclared (first use in this function)
int ret = fallocate(ofd, FALLOC_FL_ZERO_RANGE | FALLOC_FL_INSERT_RANGE, off, chunkSize);
^
plot.c:682:51: note: each undeclared identifier is reported only once for each function it appears in
plot.c:658:3: warning: ignoring return value of ‘ftruncate’, declared with attribute warn_unused_result [-Wunused-result]
ftruncate(ofd, current_file_size);
^
Makefile:44: recipe for target 'plot' failed
make: *** [plot] Error 1

k06a commented Jun 19, 2017

@hheexx sad, I just removed this flag.

hheexx commented Jun 19, 2017 edited

There's still a warning:
gcc -D LINUX -D AMD64 -O2 -Wall -D_FILE_OFFSET_BITS=64 -m64 -o plot plot.c shabal64.o mshabal_sse4.o mshabal256_avx2.o helper.o -lpthread -std=gnu99 -DAVX2
plot.c: In function ‘main’:
plot.c:658:3: warning: ignoring return value of ‘ftruncate’, declared with attribute warn_unused_result [-Wunused-result]
ftruncate(ofd, current_file_size);
^

But it compiles.

hheexx commented Jun 19, 2017

I am currently investigating a situation where plots written with this fork do not get confirmations.

Has anybody else tested this?

BobbyT commented Jun 19, 2017

@hheexx how do I know if it's confirmed or not?

k06a commented Jun 19, 2017

@hheexx is the plot size normal or doubled? How fast does the file allocate?

hheexx commented Jun 19, 2017 edited

@BobbyT
I use creepMiner.
In the log you can see which nonces are submitted and from which plot file.

A lot of the time, if a nonce comes from a plot file written by this plotter, it does not get a confirmation from the pool. There is no error, but there is no confirmation.

I am using the version after the "Disable buffering" commit.
I will try HEAD now.

hheexx commented Jun 19, 2017

@k06a
Creating plots for nonces 90465344 to 98124864 (2009 GB) using 45056 MB memory and 12 threads
Resizing file to 490209280 of 2007897210880 (0.02)Failed to expand file to size 490209280 (errno 95 - Operation not supported).

k06a commented Jun 19, 2017

@hheexx what file system are you using? Maybe try on ext4?

BobbyT commented Jun 19, 2017

@hheexx I'm using creepMiner too. But I'm new to this and I don't know what a confirmation looks like. I started it yesterday.

k06a commented Jun 19, 2017

I'll create the same plots and compare the md5 hashes.

hheexx commented Jun 19, 2017

@k06a it is ext4.

k06a commented Jun 19, 2017

Checking plots from new version:

$ sudo time ./plot -k 1893347256907199281 -x 2 -d ./ -n 32768 -s 9043968 -m 4096 -t 8
Using AVX2 core.
Total generation steps: 16
Creating plots for nonces 9043968 to 9076736 (8 GB) using 1024 MB memory and 8 threads
Using fcntl::F_SETSIZE to expand file size to 8GB
Resizing file to 8589934592 of 8589934592 (100.00) Done!
100.00% Percent done. 9865 nonces/minute, 0:00 left (can restore from step 15)               
Finished plotting.
      210.27 real      1527.01 user        13.18 sys

$ md5 1893347256907199281_9043968_32768_32768
MD5 (1893347256907199281_9043968_32768_32768) = ef1844af74c193858c42b08a7e5685e1

And old version:

$ sudo time ./plot -k 1893347256907199281 -x 2 -d ./ -n 32768 -s 9043968 -m 4096 -t 8
Using AVX2 core.
Creating plots for nonces 9043968 to 9076736 (8 GB) using 1024 MB memory and 8 threads
87 Percent done. 9113 nonces/minute, 0:00 left                
Finished plotting.
      220.06 real      1490.95 user        10.66 sys

$ ./optimize 1893347256907199281_9043968_32768_4096 
Reorganizing file 1893347256907199281_9043968_32768_4096 to file 1893347256907199281_9043968_32768_32768:
Processing 512 scoops at once (uses 1073 MB memory)
processing Scoop 3585 of 4096           
Done.
Replacing plot file

$ md5 1893347256907199281_9043968_32768_32768
MD5 (1893347256907199281_9043968_32768_32768) = ef1844af74c193858c42b08a7e5685e1

k06a commented Jun 19, 2017 edited

@hheexx can you try to preallocate the file with dd and tell me the allocation time?

sudo dd if=/dev/zero of=<DESTINATION_AND_FILENAME> bs=256k count=<NONCES>

Then you can fill the preallocated file with the ./plot app :)

hheexx commented Jun 19, 2017

don't know why. It should be ext4.

fstab:
/dev/sdh1 /mnt/h ext4 defaults,noatime,nodiratime 0 0

k06a commented Jun 19, 2017 edited

@hheexx try the new version; it uses dd as a fallocate fallback. It may be harder to abort because of the inner system() call; use kill if needed.

BobbyT commented Jun 19, 2017

@hheexx PlotsCheck says "checked - OK" on the file from this result: #4 (comment)

About your screenshot: I don't find such entries in my creepMiner log, neither submitted nor confirmed. But it's only been running for 24 h.

I think I have to restart plotting; @zmeyc was right about exFAT.

hheexx commented Jun 19, 2017

@k06a trying it just now. It returned 100% Done almost immediately; maybe you are measuring progress wrong.

Here are dd timings:

7659520+0 records in
7659520+0 records out
2007897210880 bytes (2.0 TB, 1.8 TiB) copied, 11527.6 s, 174 MB/s

real 192m9.440s
user 0m3.464s
sys 31m20.072s

@BobbyT Thanks! Maybe you have submission and confirmation times in your web UI?

zmeyc commented Jun 19, 2017 edited

@k06a I've tried to rerun the test (Ubuntu 16.04):

./plot -k 123 -x 1 -d . -s 0 -n 500 -m 500 -t 7

And it produced a file twice as big as the original plotter's.
It's filled with zeros at the start.

k06a commented Jun 19, 2017

@zmeyc please provide full output

zmeyc commented Jun 19, 2017 edited

@k06a

-rw-r--r-- 1 user group 261881856 Jun 19 16:31 123_0_500_500 <--- new
-rw-r--r-- 1 user group 131072000 Jun 19 16:27 123_0_500_500.old

Original one:

$ plot -k 123 -x 1 -d . -s 0 -n 500 -m 500 -t 7
Using SSE4 core.
Creating plots for nonces 0 to 500 (0 GB) using 125 MB memory and 7 threads
Writing plot to disk... position: 0
Writing plot to disk... position: 100000000
0 Percent done. 4795 nonces/minute, 0:00 left                
Finished plotting.
$ mv 123_0_500_500 123_0_500_500.old

New one:

$ plot -k 123 -x 1 -d . -s 0 -n 500 -m 500 -t 7
Using SSE4 core.
Total generation steps: 2
Creating plots for nonces 0 to 500 (0 GB) using 125 MB memory and 7 threads
Using fallocate to expand file size to 0GB
Resizing file to 32000 of 131072000 (0.02%)
Failed to expand file to size 32000 (errno 95 - Operation not supported).
Using dd to expand file size to 0GB
Resizing file to 1536000 of 131072000 (1.17%)1+0 records in
1+0 records out
262144 bytes (262 kB, 256 KiB) copied, 0,00126702 s, 207 MB/s
1+0 records in
1+0 records out
262144 bytes (262 kB, 256 KiB) copied, 0,000947488 s, 277 MB/s
...SKIPPED...
262144 bytes (262 kB, 256 KiB) copied, 0,00188257 s, 139 MB/s
1+0 records in
1+0 records out
262144 bytes (262 kB, 256 KiB) copied, 0,0140019 s, 18,7 MB/s
100.00% Percent done. 4519 nonces/minute, 0:00 left (can restore from step 1)               
Finished plotting.

k06a commented Jun 19, 2017

@zmeyc please truncate middle lines :)

k06a commented Jun 19, 2017 edited

@zmeyc can't reproduce this bug on macOS. Is the file allocated at twice the normal size right away, or does it double during the plotting process? This is easier to notice with a larger file.

zmeyc commented Jun 19, 2017

@k06a Done:
After allocation it's smaller than expected:

524025856 123_0_2000_2000
524288000 123_0_2000_2000.old

Then it grows during plotting:

1048313856 123_0_2000_2000
524288000 123_0_2000_2000.old

Btw, there's an error during allocation:

Failed to expand file to size 128000 (errno 95 - Operation not supported).

k06a commented Jun 19, 2017

@zmeyc please check with latest commit

BobbyT commented Jun 19, 2017

@k06a I've been running @zmeyc 's test,

With: 5b0a131 the file is 125 MB
With: d462574 the file is 2G

k06a commented Jun 19, 2017

@BobbyT just fixed this 16x bug :)

BobbyT commented Jun 19, 2017 edited

@k06a would you mind telling me what the difference is between the latest version and this commit: 8b300a2 "Disable buffering", please?

To me it seems that zeroing/expanding the file will take 15-16 hours, with plotting taking an additional 31 hours.
Whereas commit 8b300a2 only does the plotting, in about 31 hours, without zeroing.
Note: I'm plotting an 8TB file.

zmeyc commented Jun 19, 2017

@BobbyT The resulting file was fragmented (effectively unoptimized): #4 (comment)

k06a commented Jun 19, 2017 edited

@BobbyT the file was heavily fragmented and only looked optimized; it was not actually optimized for sequential reading by the miner. With preallocation it really is optimized for single-range reading per block.

BobbyT commented Jun 19, 2017

ok thx for the answer.

would fragmentation also appear if there is only one large file on the drive?

k06a commented Jun 19, 2017

@BobbyT yep :(

zmeyc commented Jun 19, 2017

@BobbyT If you have another HDD of the same size, you can move the file to it; it will be defragmented in the process.

hheexx commented Jun 19, 2017

Or you can defrag it :)

hheexx commented Jun 19, 2017

I changed the pool. Now I receive errors for all nonces in new plots:
https://i.gyazo.com/441718fd212ce8cc4fb539499dbc2065.png

k06a commented Jun 19, 2017

@hheexx which version of the miner did you use?

k06a commented Jun 19, 2017

The current version is verified with an md5 comparison.

hheexx commented Jun 19, 2017

1.6.0. I also tried burst-miner R4

btw, what is the purpose of the -m parameter if m == n for optimized plots?
I used an -m of 180224

BobbyT commented Jun 19, 2017

@zmeyc recommended using cmp to compare; could someone do this?

k06a commented Jun 19, 2017

If you used the plotter from this branch https://github.com/k06a/mjminer/tree/fix/optimize your output file will be fully optimized regardless of whether m == n. The output filename will have m == n.

k06a commented Jun 19, 2017

@BobbyT you can trust md5 as much as cmp: #4 (comment) It is practically impossible to hit an md5 collision %)
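For completeness, the cmp and md5 checks agree in practice; a minimal self-contained demo (the scratch files here stand in for the real plot files from this thread):

```shell
# Create two identical scratch files as stand-ins for a plot and its copy.
head -c 1048576 /dev/zero > plot_a
cp plot_a plot_b

# Byte-for-byte comparison: exits 0 when identical, reports the first difference otherwise.
cmp plot_a plot_b && echo "cmp: identical"

# Hash comparison (md5sum on Linux; on macOS use `md5` instead).
md5sum plot_a plot_b
```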

zmeyc commented Jun 19, 2017

-m is how much memory is used during plotting, measured in nonces: 20000 is about 5 GB. Half of it is used for plotting, half for writing. The bigger -m, the less seeking there will be during plotting.

There's still one unsolved problem: on Ubuntu the preallocated file is one block smaller than expected.
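The -m arithmetic can be sanity-checked directly; this sketch assumes each nonce occupies 262144 bytes (256 KiB) in the plot file, which matches the "125 MB memory" for -m 500 and the "1GB for -m 4096" figures quoted earlier in the thread:

```shell
# Each nonce is 262144 bytes (256 KiB) of plot data.
NONCE_SIZE=262144

# -m is given in nonces; total plotting memory is m * NONCE_SIZE bytes.
plot_memory_bytes() {
    echo $(( $1 * NONCE_SIZE ))
}

plot_memory_bytes 500     # 131072000 bytes = 125 MiB
plot_memory_bytes 20000   # 5242880000 bytes, roughly 5 GB
plot_memory_bytes 4096    # 1073741824 bytes = 1 GiB
```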

BobbyT commented Jun 19, 2017 edited

@zmeyc meaning I can cancel my currently running process?

zmeyc commented Jun 19, 2017 edited

@BobbyT I don't know if this bug will affect the final fragmentation; during the pre-allocation stage the last block of the file (262 kB) is not written to disk. But the file contents seem to match after plotting. :)
Maybe this won't have a noticeable effect.

hheexx commented Jun 19, 2017

@k06a I cloned master from https://github.com/k06a/mjminer
As far as I can see, there is no difference between the branch and master.

Since the output filename has m == n, does that mean the -m parameter does not matter, as it will be the same as n?

k06a commented Jun 19, 2017

@hheexx my master and my fix/optimize are already merged. Compare against the original repo I forked.

BobbyT commented Jun 19, 2017

@k06a would it somehow be possible to start zeroing multiple drives at once and then let the plotter pick up the files? And maybe then plot multiple drives at once, with the GPU on one drive and the CPU on another?

Also a general question: why isn't auto-discovery of nonces and stagger size possible?

k06a commented Jun 19, 2017

@BobbyT the stagger will be auto-selected if the -m argument is missing. You can preallocate the file with dd, or I can add an option for prealloc-only behaviour, for example -z
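A manual dd pre-allocation along these lines might look like the sketch below; the filename and sizes are illustrative (500 nonces of 262144 bytes each), not the plotter's exact invocation:

```shell
# Pre-allocate a 500-nonce plot file by writing zeros in nonce-sized
# (256 KiB) blocks. Works on any filesystem, but writes every byte.
dd if=/dev/zero of=123_0_500_500 bs=262144 count=500

# On filesystems that support it, fallocate reserves the space
# near-instantly without zeroing:
# fallocate -l 131072000 123_0_500_500
```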

BobbyT commented Jun 19, 2017

@k06a a prealloc-only mode, with later pickup of the pre-allocated file for plotting, would be a nice feature.

k06a commented Jun 19, 2017

@BobbyT just added -z option

zmeyc commented Jun 19, 2017 edited

@hheexx -m is the amount of memory the plotter uses; if the value is too high it'll use swap space: #4 (comment)

The start nonce can be any random value; just make sure the ranges don't intersect. It's easier to assign them sequentially. The plotter can't select it automatically because it doesn't know the nonce ranges on all the other HDDs.

k06a commented Jun 19, 2017

@BobbyT just fixed -z option to work properly

BobbyT commented Jun 19, 2017 edited

@k06a thx, but you are a bit too fast. I repeated @zmeyc's test with and without -z and both md5 hashes are equal.
So I think it didn't work.
And how would picking up the file for later plotting work? With -r?

hheexx commented Jun 20, 2017

@k06a my preliminary conclusion is that it's a problem with the stagger.
I used a stagger of 180224 and it looks like it produces corrupted plots.

Now I tried 80000 and it looks valid.

BobbyT commented Jun 20, 2017

My current status: threads are sometimes running, sometimes not.
nmon shows me a write rate of <20MB/s; zeroing was in a range between 90MB and 190MB per second.

Is this normal?

time sudo ./plot -k 123 -x 1 -d /media/drive/ -s 0 -n 30105600 -m 172032 -t 24
Using SSE4 core.
Total generation steps: 350
Creating plots for nonces 0 to 30105600 (7897 GB) using 43008 MB memory and 24 threads
Using fallocate to expand file size to 7350GB
Resizing file to 1926758400 of 7892002406400 (0.02%)
Failed to expand file to size 1926758400 (errno 95 - Operation not supported).
Using dd to expand file size to 7350GB
Resizing file to 7892002406400 of 7892002406400 (100.00%) Done!
1.71% Percent done. 3740 nonces/minute, 132:14 left (can restore from step 5)  

k06a commented Jun 20, 2017 edited

Threads are sometimes running, sometimes not.

This means the threads are waiting for the buffer to be written. Speed may be lower because each step is written with 4096 seeks. Your stagger is about 44GB, so the buffer is 22GB and it is written in 4096 parts of about 5MB each. So the speed may be lower than during zeroing.
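The buffer figures quoted here can be reproduced for the -m 172032 run above; this sketch assumes 262144 bytes per nonce and 4096 write chunks per half-buffer:

```shell
NONCE_SIZE=262144
M=172032                                   # stagger from the run above

STAGGER_BYTES=$(( M * NONCE_SIZE ))        # ~45 GB: the full stagger buffer
HALF_BUFFER=$(( STAGGER_BYTES / 2 ))       # ~22.5 GB written while the other half fills
CHUNK_BYTES=$(( HALF_BUFFER / 4096 ))      # ~5.5 MB per seek+write

echo "$STAGGER_BYTES $HALF_BUFFER $CHUNK_BYTES"
```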

k06a commented Jun 20, 2017 edited

@hheexx @BobbyT looks like there was a bug for stagger sizes greater than 8GB. Just fixed it and checked with a 10GB stagger:

$ md5 123_0_40960_40960
MD5 (123_0_40960_40960) = bb76288dd004ed7eb310f282da4bbbdc
$ md5 123_0_40960_40960.new 
MD5 (123_0_40960_40960.new) = bb76288dd004ed7eb310f282da4bbbdc

hheexx commented Jun 20, 2017

@k06a huh. Thanks!
Is there no way to fix corrupted plots? I have 20TB of them :(

btw, one more stagger size question:
Does the stagger influence reading performance or mining in any way for optimized plots?

k06a commented Jun 20, 2017 edited

Each stagger half is written to the plot with 4096 seeks. The greater the stagger you use, the fewer seeks will be performed overall. I don't think there will be a huge time difference between a 4GB stagger and 8GB, but there may be; you can test it for us :)

BobbyT commented Jun 20, 2017

@k06a do I have to replot?

k06a commented Jun 21, 2017

@BobbyT @hheexx sorry for the inconvenience. You need to replot if you used a stagger greater than 4GB. And thank you for your contributions; you helped to discover and fix bugs.

k06a commented Jun 21, 2017 edited

@Smit1237 wrote:
Hmm, strange, performance dropped a lot. On my USB 3.0 disks I can barely plot at 2k nonces/minute with an i7-7700K, and usbtop reports the USB bus is not fully loaded. On the previous version I could plot at 25k nonces/min.
./plot -k id -x 2 -d /burst4 -s 57344000 -n 4096000 -m 8192 -t 8
Using AVX2 core.
Total generation steps: 1000
Creating plots for nonces 57344000 to 61440000 (1074 GB) using 2048 MB memory and 8 threads
Using fallocate to expand file size to 1000GB
Resizing file to 1073741824000 of 1073741824000 (100.00%) Done!
8.30% Percent done. 2059 nonces/minute, 30:26 left (can restore from step 82)

@Smit1237 your hard drive writes a 256KB buffer 4096 times for each of the 1000 steps. This makes the HDD work slower than usual. Try increasing the stagger size (a stagger of 4096 means 1GB of RAM).
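The 256KB figure follows from the same buffer layout applied to the -m 8192 run being quoted (half the stagger buffer split into 4096 write chunks; 262144 bytes per nonce is assumed, as elsewhere in this thread):

```shell
NONCE_SIZE=262144
M=8192                                  # stagger from the quoted command line

# Half of the 2 GiB stagger buffer, split into 4096 write chunks:
CHUNK=$(( M * NONCE_SIZE / 2 / 4096 ))
echo "$CHUNK"                           # 262144 bytes = 256 KiB per write
```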

k06a commented Jun 21, 2017

@Smit1237 note: do not use the RESTORE feature with different stagger sizes.

Thanks, I already understood that, so I deleted my comment.

k06a commented Jun 21, 2017

@Smit1237 there is no need to delete comments. Somebody may find our conversation useful.

k06a commented Jun 21, 2017

@Smit1237 please tell us if the speed increased with a bigger stagger size. It is very interesting to us.

I'm a bit ashamed for asking a stupid question that was answered before. Yeah, I will test now and report results.

k06a commented Jun 21, 2017

I don't think we can improve the performance of this case.

Yep, speed increased, thanks for the advice: 19k nonces/min. Hope I can get up to 25k nonces per minute like in the older version.

k06a commented Jun 21, 2017

@Smit1237 at first the speed is faster than it will be in the middle. The second generation step and the first writing step are synchronised at the end. So the generation of the third step and the writing of the second begin simultaneously and are also synchronised. Are you sure your speed stays at the 19k level?

Smit1237 commented Jun 21, 2017 edited

Decreased to 800 nonces/min.

And slowly decreasing to 400; still not usable for me.

BobbyT commented Jun 21, 2017

@Smit1237 same here; for my 8TB plot it showed 150 hours remaining. What kind of HDD do you have? Do you also use a Seagate Archive?

@BobbyT an 8TB Seagate Backup Plus Hub (STEL8000200); inside it resides a Seagate Archive drive afaik.

BobbyT commented Jun 21, 2017

@Smit1237 hmm this might be it: https://youtu.be/wQS-IhjkBSA?t=3m4s (watch it from 3:04 to 4:30)

Smit1237 commented Jun 21, 2017 edited

Hmmm, maybe, but I can plot using the old version at the full 25k nonces, and with some other plotters at slightly slower speeds (the original mdcct or so). Even on Windows through ext2fsd I can achieve 11k nonces on a 6700 CPU.

BobbyT commented Jun 21, 2017

Yes, but they do not optimize, or do they?

XPlotter writes an optimized plot; the original does not.

zmeyc commented Jun 21, 2017

@BobbyT @Smit1237 Which stagger do you get these results with?

BobbyT commented Jun 21, 2017

@Smit1237 Currently testing XPlotter 0.7 with winetricks

@zmeyc I've always used 172032 which should be about ~43G

Smit1237 commented Jun 21, 2017 edited

Originally I used 4096; with the new version I switched to 16384.
P.S. Stabilized at ~3k nonces/min.

k06a commented Jun 21, 2017

@Smit1237 XPlotter uses the same algorithm. @BobbyT awesome video demystifying random write speeds.

Well, looks like I'm stuck with a bunch of very slow disks.

k06a commented Jun 21, 2017

@Smit1237 looks like they are slow for random writes but very fast for sequential reads. So your mining speed will be awesome as soon as you finish plotting :)

k06a commented Jun 21, 2017

@Smit1237 I think you can easily plot to all your disks simultaneously.

Smit1237 commented Jun 21, 2017 edited

@k06a yes, the read speed is very good, so I must wait; no other options. Good idea, I should plot all of them at once; the CPU is capable of doing this.
P.S. Lesson learned.

BobbyT commented Jun 21, 2017 edited

I just tested truncate under ext4 and HFS+ and realized that it is extremely slow under HFS+.
XPlotter starts plotting right away under ext4, but under HFS+ it behaves like dd.

Is it because HFS+ tries to avoid fragmentation? And if so, do I have to optimize at all? Couldn't I just skip zeroing and start plotting like in #4 (comment)?

Update:
HFS+ 8TB file allocation takes about 15 hours.
ext4 8TB file allocation takes seconds, maybe a minute.

Hmmm. XPlotter under Windows gives me a steady 11000 nonces/minute on an NTFS volume; looks like I did something wrong.

BobbyT commented Jun 21, 2017

@Smit1237 from XPlotter and @k06a's mjminer I'm getting nearly the same speed, 15-17k.
XPlotter v1.0 uses an NTFS stream, which only works with NTFS.
What is your target file system and mining system?

zmeyc commented Jun 21, 2017

Could it be that XPlotter doesn't do preallocation, producing fragmented files? That would explain the fast speed. I remember seeing a recommendation to defrag the disk after plotting, but I haven't used XPlotter myself nor checked its source code.

I tried on both systems: Linux + ext4 and Win10 + NTFS (obviously). On Linux my speed is constantly decreasing (starting at 25k nonces per minute). On Windows I get a steady 11000; I don't know what I'm doing wrong. I really want to mine on Linux since this is a dedicated mining machine. XPlotter produces optimized plots for sure, at least the 1.0 version. And yes, XPlotter preallocates space.

zmeyc commented Jun 21, 2017 edited

Just a thought: could it be ext4 journaling? Did you disable it? ext4 can also be fine-tuned for storing large files: https://unix.stackexchange.com/questions/43102/largefile-feature-at-creating-file-system
Barriers are also enabled by default and make writing slower: https://ext4.wiki.kernel.org/index.php/Ext4_Howto

Yeah, already digging into it, thanks for pointing me in the right direction.

BobbyT commented Jun 22, 2017

How many inodes do you need for one ~8TB file?
Does it make sense to create one inode per nonce?

Disabling the journal won't help; looks like I need to dig deeper.

BobbyT commented Jun 22, 2017

I'm currently running a test with one inode per nonce.

Inode count: 30523904

And I'm plotting 30110000 nonces. I've set the bytes-per-inode ratio to 256k, in bytes: 262144.
Before that I had 244 million inodes, after -T huge I had 122 million, now only 30 million.

Intermediate status: writing to the drive feels a bit faster.
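mke2fs sizes the inode table from a bytes-per-inode ratio (the -i flag), so inode counts like the one above follow from simple division; the ~8 TB filesystem size used here is illustrative:

```shell
FS_BYTES=8000000000000      # ~8 TB filesystem, illustrative

# Number of inodes mke2fs creates for a given bytes-per-inode ratio (-i):
ratio_to_inodes() {
    echo $(( FS_BYTES / $1 ))
}

ratio_to_inodes 262144      # one inode per nonce-sized (256 KiB) chunk: ~30.5M inodes
```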

Disabling barriers definitely helps, but I can't get past 2.5k nonces.

zmeyc commented Jun 22, 2017 edited

Is the partition aligned? You can check with fdisk; it should start on sector 2048 or 4096. I used to create partitions with gparted, but it produces misaligned partitions on 8 TB drives and misinforms you that the partition is aligned. :( Only the Ubuntu GUI "Disks" utility seems to work correctly.

My fs settings:

Add burst to /etc/mke2fs.conf:
burst = {
        features = extent,huge_file,flex_bg,uninit_bg,dir_nlink,extra_isize,^resize_inode,sparse_super2
        auto_64-bit_support = 1
        inode_size = 256
        blocksize = 4096
        inode_ratio = 1073741824
        reserved_ratio = 0
}

// BE CAREFUL, THIS WILL DESTROY ALL DATA ON PARTITION /dev/sda1:
mke2fs -T burst -L myhdd0 /dev/sda1
// Won't mount without this option when journaling is disabled:
tune2fs -o journal_data_writeback /dev/sda1
// Also needed:
fsck /dev/sda1

Add to /etc/fstab:
LABEL=myhdd0 /mnt/myhdd0 ext4 defaults,noatime,data=writeback,barrier=0,nobh,errors=remount-ro 0 0

sudo mount -a

Another interesting option is sparse_super2, which leaves only 2 superblock copies, at the beginning and the end of the drive. inode_ratio is set to the maximum possible.

I'm getting 2x slower read speed than HFS+, but maybe the filesystem is not to blame (it could be my motherboard's USB controller driver not working properly on Linux); still investigating.

zmeyc commented Jun 22, 2017

https://askubuntu.com/questions/50428/how-do-i-check-whether-partitions-on-my-ssd-are-properly-aligned

The optimal alignment uses information reported by the disk. That's not always aligned to the physical block size, as the hardware sometimes lies about its block size. Some hard disks have 4k blocks internally but report 512b blocks. An additional check is to see whether the start divides evenly by 4096 (and end+1 does as well).

Btw, the most upvoted method in many SO answers,

parted /dev/sda
align-check opt n

does NOT work for 8 TB drives. It reports that the drive is aligned while manual inspection shows it's not.
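A manual check that doesn't rely on parted's verdict is to test the start sector directly: fdisk -l prints partition start sectors in 512-byte units, so 4096-byte alignment means the sector number must divide evenly by 8. A small sketch:

```shell
# is_aligned SECTOR: true if a 512-byte sector number falls on a 4096-byte boundary.
is_aligned() {
    [ $(( $1 % 8 )) -eq 0 ]
}

# Typical start sectors as reported by `fdisk -l`:
is_aligned 2048 && echo "2048: aligned"      # common modern default
is_aligned 4096 && echo "4096: aligned"
is_aligned 63 || echo "63: misaligned"       # old CHS-era default
```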

An SSD as cache works flawlessly, plotting at full speed.
@k06a can you add an option to use an SSD as cache and then copy the finished plots to the slow disks? We can set up a bounty for that; I think this is a killer feature for plotting software.
P.S. My results on a 500GB WD Blue SSD:
https://pastebin.com/QezLSAYG

k06a commented Jun 22, 2017

@Smit1237 try the newest version. It may write a little bit faster, but it may decrease the number of nonces to align with the HDD sector size.

BobbyT commented Jun 23, 2017 edited

@Smit1237 how long does it take to copy the file from the SSD to the Seagate Archive?

Smit1237 commented Jun 23, 2017 edited

@k06a it helped a bit, but I need to test it more.
@BobbyT approx 35 min for a 400 GB plot file, copying almost at the full speed of the drive, so I'm satisfied enough. But this SSD will die relatively fast in such a scenario, I think.

k06a commented Jun 23, 2017

@Smit1237 you need to specify the disk sector size in bytes with the -b option.
