Create fully optimised version of plot #4
Conversation
k06a referenced this pull request on Jun 8, 2017: "Introduce files memory mapping to create fully optimized plots of any size" #3 (Closed)
k06a referenced this pull request in bhamon/gpuPlotGenerator on Jun 13, 2017: "Fully optimised plots" #19 (Closed)
BobbyT
commented
Jun 13, 2017
Hi, did you test it under Linux? If so, would you tell me which distribution?
k06a
commented
Jun 14, 2017
BobbyT
commented
Jun 14, 2017
Hmm, I didn't get it working on CentOS 7, Ubuntu 17.04, or Debian 8 (compilation warnings). I will try CentOS 6.5.
k06a
commented
Jun 14, 2017
BobbyT
commented
Jun 14, 2017
I think I tried to give it 32G and then later 23G (I thought it might work asynchronously). That's what I used last:
k06a
commented
Jun 14, 2017
@BobbyT is there any error printed?
k06a
commented
Jun 14, 2017
@BobbyT it started to allocate the whole file, which is 7TB; you need to wait a few minutes, I think. Can you watch the file size or the drive's free space to monitor progress?
BobbyT
commented
Jun 14, 2017
I will try it again later today. I let it run last night, and this morning Ubuntu couldn't wake up. After a reboot, the file was still 0K in size.
k06a
commented
Jun 14, 2017
@BobbyT I would like the file to be allocated as fast as possible, without zeroing of course. Can you tell me which of the resize methods was used? This should be printed at start.
k06a
commented
Jun 14, 2017
@BobbyT in the new version you don't really need a huge amount of RAM. I think there will be no difference between using 4GB or 32GB. It fills half of the RAM, then starts writing it to disk while filling the second half simultaneously, then waits for both the writing and filling processes to finish. Then it writes the second half to disk while filling the first half with new nonces, again waiting for both processes to finish, and so on. One more thought: a speed of 12,000 nonces per minute requires 50MB/s of disk throughput, so for an average disk, writing will usually be slower than generating on the GPU. Can you tell me your disk speed and generation speed?
BobbyT
commented
Jun 15, 2017
Ubuntu 17.04:
RAM usage:
Compiling:
k06a
commented
Jun 15, 2017
@BobbyT it looks like it is still expanding the file. Can you track free space on this disk? Strange that it is not using
BobbyT
commented
Jun 15, 2017
I can give you iostat, but I can't access it via the GUI. Question: is it normal that all 48GB of my RAM is buffered/cached? Results after about 50 minutes of running:
k06a
commented
Jun 15, 2017
@BobbyT thanks for your feedback. It looks like the OS is trying to allocate the whole file with ZEROING, and virtual memory grows while zeroing. I should fix this behaviour.
k06a
commented
Jun 15, 2017
@BobbyT what file system are you using on this volume?
BobbyT
commented
Jun 15, 2017
Currently exFAT. Therefore I thought exFAT was the best solution.
BobbyT
commented
Jun 15, 2017
Additional info: I can't kill the process once it has started.
k06a
commented
Jun 15, 2017
@BobbyT please try the newest version I've just pushed.
BobbyT
commented
Jun 15, 2017
Just started.
24 threads started, then disappeared.
The file size is now 100GB after 10 minutes. If this is expected behavior, then I'll wait and see. nmon output:
k06a
commented
Jun 15, 2017
@BobbyT please try the new version; it uses
BobbyT
commented
Jun 15, 2017
First I plotted 10GB:
but it left buffered memory
I cleaned it with
now I'm plotting 100GB
And the nonces per minute are down from 15k to 2.5k. It also needs a lot more time than expected.
I think the 10GB plot couldn't fill the whole RAM with buffers, because it simply didn't run long enough. Disk speed also seems to be down to <20MB/s and sometimes <10MB/s.
k06a
commented
Jun 15, 2017
@BobbyT please try one more version with a possible fix for the unexpected buffering.
BobbyT
commented
Jun 15, 2017
Result: 12GB plot with 4GB RAM
It still left buffered RAM; is this okay or not? Do you have any test cases? Maybe I'm doing something wrong?
k06a
commented
Jun 15, 2017
This may just be reclaimable cache. Can you try again to create a huge file and monitor what happens?
BobbyT
commented
Jun 15, 2017
120GB plot with 44GB RAM test. Commit 87bed9a "Add fsync to prevent file caching"
Commit 8b300a2 "Disable buffering"
I think there is an undeniable improvement in time. Question: is the optimized file also smaller? Currently running the same test with only 4GB RAM.
BobbyT
commented
Jun 16, 2017
120GB plot with 4GB RAM test:
unfortunately there is still buffered RAM left
k06a
commented
Jun 16, 2017
OK, it looks like 8b300a2 fixed the issues. I think the OS keeps part of the file buffered to avoid disk IO where possible, but that memory is not pinned. This is the system's responsibility, so this buffered RAM will be reclaimed whenever any app needs it.
k06a
commented
Jun 16, 2017
@BobbyT thanks for your testing! A 128GB file (491520 nonces) at a speed of 16k nonces/min should take about 30 minutes, as I see in your log:
BobbyT
commented
Jun 16, 2017
Plotting 1024GB
I used less memory because Ubuntu started swapping and I thought it must be caused by not having enough free RAM. In the end I had to disable swapping (swappiness), which also made it a bit faster.
k06a
commented
Jun 16, 2017
@BobbyT I see you plotted 4128768 nonces at a speed of 16310 nonces/minute, which is:
and you had:
It looks like there is no HDD delay at all; it plots entirely at your CPU speed.
BobbyT
commented
Jun 16, 2017
Currently I'm plotting the full 8TB HDD. If I use more than 43G of RAM (48G installed), swapping starts and nmon shows activity on the system drive too. I had to restart the 1024GB test, set swappiness=0, and decrease the stagger size to prevent swapping. Without that, nonces/minute starts dropping.
k06a
commented
Jun 16, 2017
@BobbyT I don't know how to avoid this buffering right now. I am working on macOS, and I saw no buffering while plotting 1TB with 8GB of RAM out of 16GB total.
BobbyT
commented
Jun 16, 2017
@k06a what kind of Mac do you use? What are your hardware specs, nonces per minute, and time results?
k06a
commented
Jun 16, 2017
@BobbyT I used a MacBook Pro 15" Retina 2015 with an AVX2 CPU and 8 threads: 12000 nonces/minute. The time was equal to nonces/speed. 12k was the top speed, and the Mac kept it until plotting finished. I saw several gigabytes of free RAM while plotting.
hheexx
commented
Jun 16, 2017
Error while file lseek (errno 22 - Invalid argument). I receive this error after some time for plots > 2TB.
k06a
commented
Jun 16, 2017
@hheexx what OS are you using?
hheexx
commented
Jun 16, 2017
@k06a Ubuntu Server 16.04
k06a
commented
Jun 16, 2017
@hheexx is it the 32-bit or 64-bit version? What file system are you using on the plotted drive? Try to recover with
hheexx
commented
Jun 16, 2017
@k06a
hheexx
commented
Jun 16, 2017
2TB file
zmeyc
commented
Jun 17, 2017
@k06a It works on Ubuntu 16, but the resulting file is fragmented. Btw, the ext4 superblock is repeated multiple times in the middle of the disk, leading to additional fragmentation. Also, the inode count and some other parameters can be tweaked to further improve fragmentation / disk space usage.
k06a
commented
Jun 17, 2017
@zmeyc awesome! Thanks for
k06a
commented
Jun 17, 2017
@zmeyc please try the newest version; I have added file preallocation in 4096 steps to avoid the system freeze...
BobbyT
commented
Jun 17, 2017
8TB plot, Ubuntu 17.04 64-bit, commit 8b300a2 "Disable buffering"
BobbyT
commented
Jun 17, 2017
Current problem: I can only mount it as read-only; repair isn't possible.
k06a
commented
Jun 17, 2017
@BobbyT thanks for reporting! Due to code issues, the created plot is fragmented on some file systems. But it looks like exFAT does not support sparse files: https://msdn.microsoft.com/ru-ru/library/windows/desktop/ee681827(v=vs.85).aspx – which means your plot should not be fragmented at all.
k06a
commented
Jun 17, 2017
@BobbyT what does that mean?
Is it a new HDD or an old one?
k06a
commented
Jun 17, 2017
I had 4 drives (1TB, 1TB, 1.5TB, 2TB) and 2 of them died while I was filling them with plots. They were very old hard drives, I think 5-7 year old WD Greens.
zmeyc
commented
Jun 17, 2017
BobbyT
commented
Jun 17, 2017
It's a new one, a Seagate Archive. Now it mounts without issues in macOS. @zmeyc I use -k numeric id, but before posting results here I redact personal data.
zmeyc
commented
Jun 17, 2017
@BobbyT I had lots of issues with exFAT on OS X: it sometimes refused to mount; most of the time that could be fixed by plugging the HDD into a Windows PC, but sometimes it lost all data. I recommend using native filesystems, ext4 or HFS+, if possible. But ext4 needs to be tuned for storing large files; there are lots of details, and I'll probably write a post about it.
zmeyc
commented
Jun 17, 2017
@k06a @BobbyT In particular (for ext4): check that the partition is properly aligned, increase inode_ratio, disable journaling (wins ~10% extra disk space), disable file access time updates on mount, and disable HDD sleep (HDDs will die faster if they park every minute, and Seagate does this, at least on external drives; you can hear clicking noises at every block start). Cool them down; a large room fan works best. :D I managed to overheat mine when the small internal fan died during plotting; the HDD is still alive but SMART shows errors.
BobbyT
commented
Jun 18, 2017
This time I'm using HFS+ as the filesystem, without journaling. When I started the process, the drive got filled up to 99% (which took over 16 hours) and then plotting started. That's the result: Error while file write (errno 28 - No space left on device).
k06a
commented
Jun 18, 2017
@BobbyT wow, huge allocation time! I just found and fixed a bug with allocation by
BobbyT
commented
Jun 18, 2017
@k06a Okay, I'm running it again. What's the expected allocation time?
k06a
commented
Jun 18, 2017
k06a
commented
Jun 18, 2017
I think it should allocate 1TB in a few minutes.
BobbyT
commented
Jun 18, 2017
Doesn't it depend on the speed of the hard drive?
k06a
commented
Jun 18, 2017
@BobbyT it does; I gave you the estimated time for 50MB/s HDDs.
BobbyT
commented
Jun 18, 2017
Ok, I stopped it, because it took longer than you said it would.
zmeyc
commented
Jun 18, 2017
zmeyc
commented
Jun 18, 2017
@k06a The latest commit didn't fix the problem; the file produced with the new plotter is twice as big as the original one: 262144000 bytes instead of 131072000.
BobbyT
commented
Jun 18, 2017
905e815 plot.c Line 682, https://github.com/k06a/mjminer/blob/master/plot.c#L682
k06a
commented
Jun 19, 2017
@BobbyT a sad bug; fixed it.
hheexx
commented
Jun 19, 2017
plot.c:682:51: error: ‘FALLOC_FL_INSERT_RANGE’ undeclared (first use in this function)
k06a
commented
Jun 19, 2017
@hheexx sad; I've just removed this flag.
hheexx
commented
Jun 19, 2017
The warning is still there: But it compiles.
hheexx
commented
Jun 19, 2017
I am currently investigating a situation where plots written using this fork do not get confirmations. Has anybody else tested this?
BobbyT
commented
Jun 19, 2017
@hheexx how do I know if it's confirmed or not?
k06a
commented
Jun 19, 2017
@hheexx is the plot size normal or doubled? How fast does the file allocate?
hheexx
commented
Jun 19, 2017
@BobbyT much of the time, if the nonce is from a plot file written by this plotter, it does not get a confirmation from the pool. There is no error, but there is no confirmation. I am using the version after the "Disable buffering" commit.
hheexx
commented
Jun 19, 2017
@k06a
k06a
commented
Jun 19, 2017
@hheexx what file system are you using? Maybe try ext4?
BobbyT
commented
Jun 19, 2017
@hheexx I'm using creepMiner too. But I'm new to this and I don't know what a confirmation looks like. I started it yesterday.
k06a
commented
Jun 19, 2017
I'll create the same plots and compare md5 checksums.
hheexx
commented
Jun 19, 2017
@k06a it is ext4.
hheexx
commented
Jun 19, 2017
Maybe somebody can try this if you have a Windows machine close by. I don't. @BobbyT
k06a
commented
Jun 19, 2017
Checking plots from the new version:
And the old version:
k06a
commented
Jun 19, 2017
@hheexx can you try to reallocate the file with
Then you can fill the preallocated file with
k06a
commented
Jun 19, 2017
hheexx
commented
Jun 19, 2017
I don't know why. It should be ext4. fstab:
k06a
commented
Jun 19, 2017
@hheexx try the new version; it uses
BobbyT
commented
Jun 19, 2017
@hheexx PlotsCheck says "checked - OK" on the file from this result: #4 (comment). About your screenshot: I don't find such entries in my creepMiner log, neither submitted nor confirmed. But it's only been running for 24h. I think I have to restart plotting; @zmeyc was right about exFAT.
hheexx
commented
Jun 19, 2017
@k06a trying it just now. It returned 100% Done almost immediately; maybe you are measuring progress wrong. Here are the dd timings:
@BobbyT Thanks! Maybe you have submission and confirmation times in your web UI?
zmeyc
commented
Jun 19, 2017
@k06a I've tried to rerun the test (Ubuntu 16.04):
And it produced a file twice as big as the original plotter's.
k06a
commented
Jun 19, 2017
@zmeyc please provide the full output.
zmeyc
commented
Jun 19, 2017
Original one:
New one:
k06a
commented
Jun 19, 2017
@zmeyc please truncate the middle lines :)
k06a
commented
Jun 19, 2017
@zmeyc I can't reproduce this bug on macOS. Is it allocated at twice the size right away, or allocated normally and doubled during the plotting process? You would notice this more easily on a bigger file.
zmeyc
commented
Jun 19, 2017
@k06a Done:
Then it grows during plotting:
Btw, there's an error during allocation:
k06a
commented
Jun 19, 2017
@zmeyc please check with the latest commit.
k06a
commented
Jun 19, 2017
@BobbyT just fixed this 16x bug :)
BobbyT
commented
Jun 19, 2017
@k06a would you mind telling me what the difference is between the latest version and commit 8b300a2 "Disable buffering", please? To me it seems that zeroing or expanding the file size will take 15-16 hours and plotting an additional 31 hours.
zmeyc
commented
Jun 19, 2017
@BobbyT The resulting file was fragmented (effectively unoptimized): #4 (comment)
k06a
commented
Jun 19, 2017
@BobbyT the file was very fragmented and only looked optimized; it was not really optimized for single-pass reading by the miner. With preallocation it is truly optimized for single-range reading per block.
BobbyT
commented
Jun 19, 2017
OK, thanks for the answer. Would fragmentation also appear if there is only one large file on the drive?
k06a
commented
Jun 19, 2017
@BobbyT yep :(
zmeyc
commented
Jun 19, 2017
@BobbyT If you have another HDD of the same size, you can move the file to it; it will be defragmented in the process.
hheexx
commented
Jun 19, 2017
Or you can defrag it :)
hheexx
commented
Jun 19, 2017
I changed the pool. Now I receive errors for all nonces in the new plots:
k06a
commented
Jun 19, 2017
@hheexx what version of the miner did you use?
k06a
commented
Jun 19, 2017
The current version is verified with an md5 comparison.
hheexx
commented
Jun 19, 2017
1.6.0. I also tried burst-miner R4. Btw, what is the purpose of the -m parameter if for optimized plots m == n?
BobbyT
commented
Jun 19, 2017
@zmeyc recommended using
k06a
commented
Jun 19, 2017
If you used the plotter from this branch https://github.com/k06a/mjminer/tree/fix/optimize your output file will be fully optimized no matter if
k06a
commented
Jun 19, 2017
@BobbyT you can trust
zmeyc
commented
Jun 19, 2017
-m is how much memory is used during plotting; 20000 is 5 GB. Half is used for plotting, half for writing. The bigger "m" is, the less seeking there will be during plotting. There's still one unsolved problem: on Ubuntu the preallocated file is one block smaller than expected.
BobbyT
commented
Jun 19, 2017
@zmeyc meaning I can cancel my currently running process?
zmeyc
commented
Jun 19, 2017
@BobbyT I don't know if this bug will affect final fragmentation; during the pre-allocation stage the last block of the file (262 kB) is not written to disk. But the file contents seem to match after plotting. :)
hheexx
commented
Jun 19, 2017
@k06a I cloned master from https://github.com/k06a/mjminer. As the output filename has m == n, does that mean the -m parameter does not matter, since it will be the same as n?
k06a
commented
Jun 19, 2017
@hheexx my
BobbyT
commented
Jun 19, 2017
@k06a would it somehow be possible to start zeroing multiple drives at once and then let the plotter pick up the files? And maybe then plot multiple drives at once, with the GPU on one drive and the CPU on the other? Also a general question: why isn't auto-discovery of nonces and stagger size possible?
k06a
commented
Jun 19, 2017
@BobbyT the stagger will be auto-selected if the argument is missing.
BobbyT
commented
Jun 19, 2017
@k06a creating the preallocation only and later picking up the pre-allocated file for plotting would be a nice feature.
k06a
commented
Jun 19, 2017
@BobbyT just added
zmeyc
commented
Jun 19, 2017
@hheexx -m is the amount of memory the plotter uses; if the value is too high it'll use swap space: #4 (comment). The start nonce can be any random value; just make sure the ranges don't intersect. It's easier to assign them sequentially. The plotter can't select it automatically because it doesn't know the nonce ranges on all HDDs.
k06a
commented
Jun 19, 2017
@BobbyT just fixed
BobbyT
commented
Jun 19, 2017
hheexx
commented
Jun 20, 2017
@k06a Preliminarily, I think it's a problem with the stagger. Now I tried 80,000 and it looks like it's valid.
BobbyT
commented
Jun 20, 2017
My current status: threads are sometimes running, sometimes not. Is this normal?
k06a
commented
Jun 20, 2017
This means the threads are waiting for the buffer to be written. Speed may be slower because each step is written with 4096 seeks. Your stagger is about 44GB, so the buffer is 22GB and is written in 4096 parts of about 5MB each. So speed may be lower than while zeroing.
k06a
commented
Jun 20, 2017
hheexx
commented
Jun 20, 2017
@k06a huh, thanks! Btw, one more stagger size question:
k06a
commented
Jun 20, 2017
Each stagger half is written to the plot with 4096 seeks. The greater the stagger you use, the fewer seeks will be performed. But I don't think there will be a huge time difference between a 4GB stagger and an 8GB one. There might be, though; you can test it for us :)
BobbyT
commented
Jun 20, 2017
@k06a do I have to replot?
k06a
commented
Jun 21, 2017
k06a
commented
Jun 21, 2017
@Smit1237 your hard drive writes a 256KB buffer 4096 times for each of the 1000 steps. That makes the HDD work slower than usual. Try increasing the stagger size (a 4096 stagger means 1GB of RAM).
k06a
commented
Jun 21, 2017
@Smit1237 note: do not use the RESTORE feature with different stagger sizes.
Smit1237
commented
Jun 21, 2017
Thanks, I already understood that, so I deleted my comment.
k06a
commented
Jun 21, 2017
@Smit1237 there is no need to delete comments. Somebody may find our conversation useful.
k06a
commented
Jun 21, 2017
@Smit1237 please tell us if the speed increased with a bigger stagger size. It is very interesting to us.
Smit1237
commented
Jun 21, 2017
I'm a bit ashamed for asking a stupid question that was answered before. Yeah, I will test now and report results.
k06a
commented
Jun 21, 2017
I don't think we can improve performance in this case.
Smit1237
commented
Jun 21, 2017
Yep, speed increased, thanks for the advice: 19k nonces/min. I hope I can increase the speed to 25k nonces per minute like in the older version.
k06a
commented
Jun 21, 2017
@Smit1237 at first the speed is faster than it will be in the middle. The second generation step and the first writing step are synchronised at the end, so generation of the third step and writing of the second begin simultaneously and are also synchronized. Are you sure your speed stays at the 19k level?
Smit1237
commented
Jun 21, 2017
It decreased to 800 nonces/min.
Smit1237
commented
Jun 21, 2017
And it's slowly decreasing to 400; still not usable for me.
BobbyT
commented
Jun 21, 2017
@Smit1237 same here; for my 8TB plot it showed 150 hours. What kind of HDD do you have? Do you also use a Seagate Archive?
Smit1237
commented
Jun 21, 2017
@BobbyT an 8TB Seagate Backup Plus Hub (STEL8000200); inside it resides a Seagate Archive, afaik.
BobbyT
commented
Jun 21, 2017
@Smit1237 hmm, this might be it: https://youtu.be/wQS-IhjkBSA?t=3m4s (watch it from 3:04 to 4:30)
Smit1237
commented
Jun 21, 2017
Hmmm, maybe, but I can plot using the old version at the full 25k nonces, and with some other plotters at slightly slower speeds (original mdcct or so). Even on Windows through Ext2Fsd I can achieve 11k nonces on a 6700 CPU.
BobbyT
commented
Jun 21, 2017
Yes, but they don't optimize, or do they?
Smit1237
commented
Jun 21, 2017
XPlotter writes an optimized plot; the original does not.
zmeyc
commented
Jun 21, 2017
BobbyT
commented
Jun 21, 2017
Smit1237
commented
Jun 21, 2017
Originally I used 4096; with the new version I switched to 16384.
k06a
commented
Jun 21, 2017
Smit1237
commented
Jun 21, 2017
Well, it looks like I'm stuck with a bunch of very slow disks.
k06a
commented
Jun 21, 2017
@Smit1237 it looks like they are slow at random writing but very fast at sequential reading. So your mining speed will be awesome as soon as you finish plotting :)
k06a
commented
Jun 21, 2017
@Smit1237 I think you can easily plot to all your disks simultaneously.
Smit1237
commented
Jun 21, 2017
@k06a yes, the read speed is very good, so I must wait, no other options. Good idea; I should plot all of them at once, the CPU is capable of doing this.
BobbyT
commented
Jun 21, 2017
I just tested
Is it because HFS+ tries to avoid fragmentation? And if so, do I have to optimize at all? Couldn't I just skip zeroing and start plotting like in #4 (comment)? Update:
Smit1237
commented
Jun 21, 2017
Hmmm, XPlotter under Windows gives me a steady 11000 nonces/minute on an NTFS volume; it looks like I did something wrong.
BobbyT
commented
Jun 21, 2017
zmeyc
commented
Jun 21, 2017
Could it be that XPlotter doesn't do preallocation, producing fragmented files? That would explain the fast speed. I remember seeing a recommendation to defrag the disk after plotting, but I haven't used XPlotter myself nor checked its source code.
Smit1237
commented
Jun 21, 2017
I tried both systems: Linux + ext4, and Win10 + NTFS (obviously). On Linux my speed is constantly decreasing (starting at 25k nonces per minute). On Windows I get a steady 11000; I don't know what I'm doing wrong. I really want to mine on Linux since this is a dedicated machine for mining. XPlotter produces optimized plots for sure, at least the 1.0 version. And yes, XPlotter preallocates space.
zmeyc
commented
Jun 21, 2017
Just a thought: could it be ext4 journaling; did you disable it? Also, ext4 can be fine-tuned for storing large files: https://unix.stackexchange.com/questions/43102/largefile-feature-at-creating-file-system
Smit1237
commented
Jun 22, 2017
Yeah, already digging into it, thanks for pointing me in the right direction.
BobbyT
commented
Jun 22, 2017
How many inodes do you need for one ~8TB file?
Smit1237
commented
Jun 22, 2017
Disabling the journal won't help; it looks like I need to dig deeper.
BobbyT
commented
Jun 22, 2017
I'm currently running a test with one inode per nonce.
And I'm plotting 30110000. I've set the inode size to 256k, in bytes: 262144. Intermediate status: writing to the drive feels a bit faster.
Smit1237
commented
Jun 22, 2017
Disabling barriers definitely helps, but I can't get past 2.5k nonces.
zmeyc
commented
Jun 22, 2017
Is the partition aligned? It can be checked with fdisk; it should start on sector 2048 or 4096. I used to create partitions with GParted, but it produces misaligned partitions for 8TB drives and misinforms you that the partition is aligned. :( Only the Ubuntu GUI "Disks" utility seems to work correctly. My fs settings:
Another interesting option is sparse_super2, which leaves only 2 superblock copies, at the beginning and at the end of the drive. inode_ratio is set to the maximum possible. I'm getting 2x slower read speed than HFS+, but maybe the filesystem is not to blame (it could be my motherboard's USB controller driver not working properly on Linux); still investigating.
zmeyc
commented
Jun 22, 2017
Btw, the most upvoted method in many SO answers
does NOT work for 8TB drives. It reports that the drive is aligned while on manual inspection it's not.
Smit1237
commented
Jun 22, 2017
An SSD as cache works flawlessly; plotting at full speed.
k06a
commented
Jun 22, 2017
@Smit1237 try the newest version. It may write a little bit faster, but it may decrease the number of nonces to align with the HDD sector size.
BobbyT
commented
Jun 23, 2017
@Smit1237 how long does it take to copy the file from the SSD to the Seagate Archive?
Smit1237
commented
Jun 23, 2017
k06a
commented
Jun 23, 2017
@Smit1237 you need to specify the disk sector size in bytes with
k06a commented Jun 8, 2017
No description provided.