Pi 5 HAT: Radxa Penta SATA HAT #615
Comments
It's on the site now: https://pipci.jeffgeerling.com/hats/radxa-penta-sata-hat.html I'll be testing and benchmarking soon!
Some usage notes:
One concern could be heating: the JMB585 SATA controller chip hit peaks of 60+°C in my testing. There is an official fan/OLED board, and that seems like a wise choice for this build. It also seems to require the case, which has been announced but isn't available anywhere right now. See: https://forum.radxa.com/t/penta-sata-hat-is-now-available/20378
I've also been monitoring IRQs and CPU affinity while doing network copies (the writes, specifically), and nothing really jumps out as a bottleneck there (I'm reminded of this old Raspberry Pi Linux issue). This was in the middle of a 50 GB folder copy to a ZFS array; it's averaging 70 MB/sec or so, which is a fair bit less than line speed over the gigabit connection :( @ThomasKaiser had suggested over in the Radxa forum there could be some affinity issues with networking on the Pi 5, but I don't see that via
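For anyone who wants to watch the same thing, the checks I'm referring to are nothing exotic; something along these lines (mpstat comes from the sysstat package):

```
# Which CPUs are servicing network / PCIe / SATA interrupts
cat /proc/interrupts

# Allowed-CPU list (affinity) for every IRQ
grep . /proc/irq/*/smp_affinity_list

# Per-core IRQ/softirq load while a copy is running
mpstat -P ALL 2
```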
What about
More stats on the share from the macOS client:
And Samba version on the Pi:
Do you only do Finder copies (AKA 'network + storage combined' plus various unknown 'optimization strategies'), or have you already tested network and storage individually? A quick

And for Samba performance I came up with these settings when I wrote the generic 'OMV on SBC' install routine over half a decade ago: https://github.com/armbian/build/blob/e83d1a0eabcc11815945453d58e1b9f4e201de43/config/templates/customize-image.sh.template#L122
I've tested iperf3 on this setup a few times: on the 1 Gbps port, I get 940 Mbps up and 940 Mbps down (doing

I set it to 50 GB to try to bypass more of the cached speed (since this is an 8 GB RAM Pi 5):
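For reference, the iperf3 runs are just the standard client/server invocation, something along these lines (the Pi's address below is a placeholder):

```
# On the Pi
iperf3 -s

# From the client: upload to the Pi, then download (reverse mode)
iperf3 -c 10.0.100.x
iperf3 -c 10.0.100.x -R
```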
Testing from Windows 11 on the same network, reads maxed out at 110 MB/sec, just like on the Mac. Writes... are getting a consistent 108 MB/sec. (It did get a little more up-and-down around the halfway point, where the below screenshot was taken, but still averages above 105 MB/sec.) Now I'm shaking my fist strongly at my Mac: why does Apple have to hate GPLv3 so much!? I'll try to see if there's a way to tell what's going on with macOS Finder. I've heard from @jrasamba that macOS might try using packet signing (see article), which could definitely result in different performance characteristics. Not sure about Windows 11's defaults. Maybe I have to ditch using my Mac as the 'real world performance' test bed... with other networking stuff it's not an issue. And I know Finder's terrible... I just didn't think it was that terrible. :P (@jrasamba also suggested watching this video on io_uring with some good general performance tips.)
You can check with

As for GPL or not: IIRC Apple always used an SMB client derived from *BSD. License issues only came into play for the SMB server component, when Apple replaced Samba with their own smbx. But they had another good reason, since starting with 10.8 or 10.9 we were able to transfer Mac files flawlessly between Macs via SMB, as all the HFS+ attributes were properly mapped via SMB, unlike with Samba. Tomorrow I'm about to set up a q&d local Netatalk instance on an RPi 5 to restore a TM backup for a colleague onto a MacBook that will be shipped to her. But since this sounds like fun, I might try the exercise with Samba instead and see whether the Samba tunables developed over half a decade ago are still important or not. No idea whether spare time allows or not...
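On the macOS side, one way to see what was actually negotiated for a mounted share is smbutil (shown here as an illustration; not necessarily the exact check meant above):

```
# Lists negotiated options (SMB version, signing, encryption, ...)
# for every currently mounted share
smbutil statshares -a
```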
@ThomasKaiser - see above (#615 (comment))
Yes. Sorry, I hadn't seen the whole comment.
I tried adding
Doesn't seem to make a difference in the file copy speed, nor in the

I also tried setting
I unmounted and re-mounted the share, and I'm still seeing the same performance. (So I set it back to

I'm going to reboot the Pi and Mac entirely and try again. (So far I had just unmounted the share, restarted smbd on the Pi, and re-mounted the share.)
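For context, the signing-related knob on the Samba side lives in smb.conf; it looks roughly like this (a generic illustration of the parameter, not necessarily the exact value I tried):

```
# [global] section of smb.conf
[global]
    # Valid values: default | auto | mandatory | disabled
    server signing = disabled
```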
Reboot changed nothing, so I had a gander at the SMB config documentation, and found the client signing variable might need to be disabled?
I re-mounted the share, but it's still showing as
One last little nugget: while I was debugging SMB via debug logging (easy enough to enable via OMV's UI), I noticed there are actually two log files being written to when I'm working on my Mac:
I wonder if there's any possibility of SMB doing some kind of internal thrashing when it sees my Mac as both IP 10.0.2.15 and local hostname
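The two files are probably just Samba's per-client logging rather than anything sinister; Debian's stock smb.conf uses the %m substitution, along these lines (not copied from my actual config):

```
# %m expands to the client's NetBIOS name (or its IP when that's all
# Samba has), so one Mac can easily produce two separate log files
log file = /var/log/samba/log.%m
max log size = 1000
```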
Signing could really be the culprit (I just searched through my OMV 'career'). Since I just replaced an M1 Pro MBP with an M3 Air, I checked the defaults (or what I believe the defaults are):
'Not required' doesn't mean disabled. Unfortunately I've only been on macOS 14 for a couple of days (I'm the lazy guy trying to skip every other macOS release) and am not into all the details yet...
Note that on my Mac, I didn't have anything in place in

Even with
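For anyone following along, the client-side SMB tunables on macOS live in /etc/nsmb.conf; a minimal sketch if you want to experiment with relaxing the signing requirement (whether that's wise on your network is a separate question):

```
# /etc/nsmb.conf (create it if it doesn't exist; applies to new mounts)
[default]
# Don't require SMB packet signing on the client side
signing_required=no
```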
This can and will harm SMB performance (I've been bitten by this several times). But I guess you also tried it with log settings set to
I only had it set to debug for about 3 minutes while I was replaying the copy, to get a snapshot of the log, then set it right back to 'None' (which is the default in OMV). None of the performance data I've posted in this issue was taken while any smbd logging was enabled.
Maybe Apple's most idiotic piece of software ever :) Back in the days when network/storage testing was a huge part of my day job, I always used Helios LanTest, since by being limited in some ways (explained here) it shows the performance differences/increases you're aiming for when debugging settings, while Windows Explorer and Finder do a lot under the hood that masks basic network setting mismatches, thanks to parallelism and automagically tuned settings like block sizes.
Ah, because I have OMV installed, I had to run the firstaid wizard; after running through it, I'm now getting an IP address and connection on the Plugable USB 2.5G adapter.
Testing with
And here's a file copy to the Pi 5 over the 2.5G connection from Windows 11 (average of 270 MB/sec):

And here's a file copy from the Pi 5 over the 2.5G connection to Windows 11 (average of 200 MB/sec):

It seemed like the write was not really bottlenecked, but the read was bottlenecked on disk IO, of all things, according to
Hopefully soon to be resolved: raspberrypi/linux#6077. And then someone needs to take the time/effort to develop sane IRQ affinity settings (much like I did for Armbian ages ago).
Maybe some kind of parity calculation?
It seems to be out of stock everywhere? Any ideas who might have them for sale?
I was told Arace had limited stock and is now out; hopefully they will get a new shipment in soon...
Someone had suggested also trying one of the video editing benchmarking tools, in this case AJA System Test Lite, running it with
The 'problem' with Finder is that it implements hidden optimization strategies (third time posting the same link in this issue: https://www.helios.de/web/EN/support/TI/157.html). As such, testing with LanTest using the 'Backbone networks, e.g. 40 Gigabit Ethernet' setting gives more reliable numbers.
@ThomasKaiser - I understand that, but from that doc, it seems to indicate Finder should be more optimized for the types of copies I'm performing, whereas my experience seems to indicate something is seriously wrong with the SMB implementation on the latest macOS releases... or with some server/client negotiation. It's crazy (to me) that iperf, SFTP, and other more direct methods get line speed with no issue, but Finder copies and cp/rsync/etc. using the SMB mount are so shaky and slower. In this case, I'm actually less interested in the theoretical, and more interested in what's causing the inconsistency with real-world use cases (I do a lot of project copying, and faster total copy time for 30-90 GB folders is better for me).
To investigate the Finder problem from another angle, maybe something like macOS's equivalent to strace? https://www.shogan.co.uk/devops/dtrace-dtruss-an-alternative-to-strace-on-macos/
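For reference, the tool that article covers is dtruss; a minimal way to try it would be something like this (on recent macOS, System Integrity Protection usually has to be relaxed before it can attach to processes):

```
# Attach to the running Finder process and log its system calls
sudo dtruss -p "$(pgrep -x Finder | head -n 1)"
```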
Thank you!
Thank you, @geerlingguy!
This is exciting. It looks like the JMB585 supports SATA port multipliers. Do you have any to test with? 20-drive arrays seem possible. I could imagine a mini-ITX NAS case with 5.25" drive cages/backplanes all wired up. You'd just need a power supply with a physical switch and molex+SATA power connectors. And some sort of mini-ITX Pi mounting bracket, of course.
Maybe check with an eSATA port? But I think the speed will be low... maybe OK for magnetic HDDs, but not SSDs.
Sad. I hope this happens soon; this seems like the perfect project for me. I've already got a Pi 5, so the Penta SATA HAT would be next. Even just as a device to copy and move files between disks and whatnot, this would be incredibly useful.
Arace have it in stock now. I just ordered two from https://arace.tech/products/radxa-penta-sata-hat-up-to-5x-sata-disks-hat-for-raspberry-pi-5
Question: is this HAT compatible with the OrangePi 5?
@geerlingguy have you experimented with hibernation at all?
@geerlingguy since we were talking about Finder weirdness, and I'm currently testing SMB multichannel between a MacBook and a Rock 5 ITX... I just did a quick test with three files 2.3 GB in size on a Samba 4.13 share with

Samba -> Mac: a constant 500+ MB/s:

Mac -> Samba: very flaky numbers, short bursts at 450+ MB/s but mostly nothing, with the Finder waiting for whatever:

But note that the network setup is somewhat broken anyway in the direction towards the Rock 5 ITX, so my Finder investigations need to be revisited once that is resolved.
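For context, the server side of an SMB multichannel test like this is mostly a single smb.conf toggle; roughly (a sketch only, and the optional interface speed hint below is a placeholder, not my actual config):

```
[global]
    # Enable SMB3 multichannel (available since Samba 4.4)
    server multi channel support = yes
    # Optionally advertise per-interface speed so clients choose sensibly
    interfaces = "eth0;speed=2500000000"
```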
@ThomasKaiser - Thanks for posting that, and that is definitely my experience (though usually not that big of a blip where there's no writing). Definitely something weird, and watching the Console log on the Mac is almost useless :P
Hi Jeff,

Regards
He's using https://raw.githubusercontent.com/geerlingguy/pi-cluster/master/benchmarks/disk-benchmark.sh, called as explained in any of his sbc-review issues, e.g. this

There are at least three problems with this script, one being a major one:
To talk about disk performance, a switch to

Quick test on a Rock 5 ITX with a 256 GB EVO Plus A2 SD card, comparing three different settings:
In contrast, Radxa's defaults since 2022 and Armbian's defaults until 2024:
We see small drops in performance everywhere, and also a bit of variation in results, since 2940 KB/s with

Retesting with
Compared to

But what these synthetic benchmarks don't tell you anyway: real-world storage performance, which is easily halved by the switch to

At least it should be obvious that

One might argue that using 'OS defaults' would be the right thing, since that's what ships and what users have to live with, but as someone who only does 'active reviews' (not just reporting numbers but improving them) I couldn't disagree more, since the best idea is to run the test in both modes: OS image defaults vs. [1]
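If anyone wants to repeat that 'both modes' comparison, switching the cpufreq governor for a benchmark run is a one-liner (a generic sketch; some images ship services that reset the governor at boot):

```
# Show the current governor on every core
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

# Switch to the performance governor for the duration of the test
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```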
@ThomasKaiser - To properly benchmark storage solutions, you need to do a lot more than I think either of us does in a typical review cycle for a storage device. In my case, when it actually matters, I will test across different OSes with 100+ GB files and with folders containing 5,000+ small files, and average the eventual total time for the copy back and forth. The disk-benchmark.sh script is a quick way to get a 'with the default OS image, in ideal circumstances, with smaller files, here's the kind of performance one can expect' number. There are huge differences depending on whether you use ext3/ext4, ZFS, Btrfs, Debian, Ubuntu, a board vendor's custom distro,

And I do think it's useful not to sit there tweaking and tuning the default distro image for best performance, because I want my tests to reflect what most users will see. If they buy a Raspberry Pi, they will go to the docs and see they should flash Pi OS to the card using Imager. The docs don't mention setting
@geerlingguy that doesn't change anything w.r.t. the different testing methodology for sequential reads and writes. In case you accept geerlingguy/pi-cluster#12, this will become obvious with future testing, and then you might decide to adjust your reporting, or not :)
True; honestly my main concern is to have a few different tests since I know many people just throw hdparm at it and call it a day. I like |
Correct, that's the garbage the majority of 'Linux tech YouTubers' rely on. Ignoring (or not knowing) that
Both are storage performance tests unlike
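As a concrete contrast to hdparm, a direct-I/O sequential pass with fio looks roughly like this (block size, file size, and target path are arbitrary illustrations, not a recommended methodology):

```
# Sequential read and write, bypassing the page cache via O_DIRECT
fio --name=seqread  --filename=/mnt/array/fio.tmp --rw=read  \
    --bs=1M --size=4g --ioengine=libaio --iodepth=8 --direct=1
fio --name=seqwrite --filename=/mnt/array/fio.tmp --rw=write \
    --bs=1M --size=4g --ioengine=libaio --iodepth=8 --direct=1
```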
Simple solution: avoid ZFS for benchmarks and try to educate your target audience about the ZFS benefits (spoiler alert: they don't want this content ;) ) |
re: ZFS: Avoiding it is impossible if you want to show people what kind of performance you get on a modern NAS, since it seems like half the homelab world is focused on ZFS, and the other half is split between old-school RAID (mdadm), Btrfs, and all the weird unholy things proprietary vendors cobble together (like Unraid). Also, if you don't mention ZFS when talking about storage, you end up with so many random comments about 'why not ZFS'; it's the modern homelab equivalent of 'btw I use Arch' or 'why don't you use [vim|emacs|nano]?' :D Unavoidable, unfortunately! Anyway, I plan on deploying this HAT as a replica target for my main ZFS array... we'll see how that works out! Still looking to find a case for it. Too lazy to CAD my own, heh
If you still have this on hand and would be willing to make a few measurements... how thick of a 2.5" drive can be mounted directly to the HAT? Modern 2.5" SSDs are typically 7 mm thick, while (high-capacity) 2.5" HDDs (lower speed but much cheaper per TB) usually come in at 15 mm thick. Are those too fat to stack in all 4 slots?
I know this is not the correct place for this question, but... is anyone willing to sell me an FPC cable for the Pi 5? I thought Arace had them in stock, but my order has been on back-order for almost a month now, and they cannot confirm when a new batch will be available. Please help me out 🙏
Radxa sells an updated version of their Penta SATA HAT for $45, and it includes four SATA drive connectors, plus one edge connector for a 5th drive, 12V power inputs (Molex or barrel jack) to power both the drives and the Pi 5 via GPIO, a cable for the 5th drive, an FFC cable to connect the HAT to the Pi 5, and screws for mounting.
It looks like the SATA controller is a JMB585 PCIe Gen 3 x2 SATA controller, so it could benefit from running the Pi 5's PCIe lane at Gen 3.0 speeds (setting dtparam=pciex1_gen=3 in /boot/firmware/config.txt). Radxa sent me a unit for testing.
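After setting that and rebooting, the negotiated link speed can be double-checked with lspci (a generic check; the exact device address will differ per system):

```
# LnkSta should report 8GT/s (Gen 3) once the override is active
sudo lspci -vv | grep -i 'LnkSta:'
```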