
Radxa X4 #48

Open
geerlingguy opened this issue Jul 20, 2024 · 23 comments

geerlingguy commented Jul 20, 2024

radxa-x4

Basic information

  • Board URL (official): https://radxa.com/products/x/x4 (docs)
  • Board purchased from: Arace
  • Board purchase date: July 19, 2024
  • Board specs (as tested): 8GB
  • Board price (as tested): $79.90 (+ $14.90 heatsink+fan)

Linux/system information

# output of `screenfetch`
                          ./+o+-       jgeerling@radxa-x4
                  yyyyy- -yyyyyy+      OS: Ubuntu 24.04 noble
               ://+//////-yyyyyyo      Kernel: x86_64 Linux 6.8.0-39-generic
           .++ .:/++++++/-.+sss/`      Uptime: 3m
         .:++o:  /++++++++/:--:/-      Packages: 1594
        o:+o+:++.`..```.-/oo+++++/     Shell: bash 5.2.21
       .:+o:+o/.          `+sssoo+/    Disk: 9.3G / 938G (2%)
  .++/+:+oo+o:`             /sssooo.   CPU: Intel N100 @ 4x 3.4GHz [49.0°C]
 /+++//+:`oo+o               /::--:.   GPU: UHD Graphics
 \+/+o+++`o++o               ++////.   RAM: 1399MiB / 7716MiB
  .++.o+++oo+:`             /dddhhh.  
       .+.o+oo:.          `oddhhhh+   
        \+.++o+o``-````.:ohdhhhhh+    
         `:o+++ `ohhhhhhhhyo++os:     
           .o:`.syhhhhhhh/.oo++o`     
               /osyyyyyyo++ooo+++/    
                   ````` +oo+++o\:    
                          `oo++.   

# output of `uname -a`
Linux radxa-x4 6.8.0-39-generic #39-Ubuntu SMP PREEMPT_DYNAMIC Fri Jul  5 21:49:14 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

Benchmark results

CPU

Power

  • Idle power draw (at wall): 9.1 W
  • Maximum simulated power draw (stress-ng --matrix 0): 18.5 W
  • During Geekbench multicore benchmark: 21.2 W
  • During top500 HPL benchmark: 16 W

Disk

Kioxia BG6 2230 1TB NVMe SSD (KBG40ZNS1T02)

| Benchmark | Result |
| --- | --- |
| iozone 4K random read | 60.17 MB/s |
| iozone 4K random write | 227.78 MB/s |
| iozone 1M random read | 1679.58 MB/s |
| iozone 1M random write | 1506.50 MB/s |
| iozone 1M sequential read | 1701.32 MB/s |
| iozone 1M sequential write | 1527.16 MB/s |

wget https://raw.githubusercontent.com/geerlingguy/pi-cluster/master/benchmarks/disk-benchmark.sh
chmod +x disk-benchmark.sh
sudo MOUNT_PATH=/ TEST_SIZE=1g ./disk-benchmark.sh

Run benchmark on any attached storage device (e.g. eMMC, microSD, NVMe, SATA) and add results under an additional heading.

Also consider running PiBenchmarks.com script.

Network

iperf3 results:

Built-in 2.5 Gbps Ethernet (Intel I226-V)

  • iperf3 -c $SERVER_IP: 2.35 Gbps
  • iperf3 -c $SERVER_IP --reverse: 1.77 Gbps
  • iperf3 -c $SERVER_IP --bidir: 2.34 Gbps up, 1.77 Gbps down
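
For context, the forward-direction result is close to the practical ceiling for 2.5GbE. A quick check, assuming the nominal 2.5 Gbps line rate and ignoring protocol overhead:

```python
# How close the I226-V gets to its nominal line rate; real TCP goodput
# tops out a bit below line rate due to Ethernet/IP/TCP framing overhead.
line_rate_gbps = 2.5
measured_gbps = 2.35  # `iperf3 -c $SERVER_IP` result above

efficiency = measured_gbps / line_rate_gbps
print(f"TCP goodput is {efficiency:.0%} of line rate")  # 94%
```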

Built-in Realtek WiFi 6 (RTL8852BE)

  • iperf3 -c $SERVER_IP: 651 Mbps
  • iperf3 -c $SERVER_IP --reverse: 319 Mbps
  • iperf3 -c $SERVER_IP --bidir: 596 Mbps up, 88 Mbps down

GPU

glmark2-es2 results:

=======================================================
    glmark2 2023.01
=======================================================
    OpenGL Information
    GL_VENDOR:      Intel
    GL_RENDERER:    Mesa Intel(R) Graphics (ADL-N)
    GL_VERSION:     OpenGL ES 3.2 Mesa 24.0.9-0ubuntu0.1
    Surface Config: buf=32 r=8 g=8 b=8 a=8 depth=24 stencil=0 samples=0
    Surface Size:   800x600 windowed
=======================================================
[build] use-vbo=false: FPS: 2512 FrameTime: 0.398 ms
[build] use-vbo=true: FPS: 2623 FrameTime: 0.381 ms
[texture] texture-filter=nearest: FPS: 2532 FrameTime: 0.395 ms
[texture] texture-filter=linear: FPS: 2495 FrameTime: 0.401 ms
[texture] texture-filter=mipmap: FPS: 2507 FrameTime: 0.399 ms
[shading] shading=gouraud: FPS: 1958 FrameTime: 0.511 ms
[shading] shading=blinn-phong-inf: FPS: 1939 FrameTime: 0.516 ms
[shading] shading=phong: FPS: 3638 FrameTime: 0.275 ms
[shading] shading=cel: FPS: 3646 FrameTime: 0.274 ms
[bump] bump-render=high-poly: FPS: 2344 FrameTime: 0.427 ms
[bump] bump-render=normals: FPS: 4118 FrameTime: 0.243 ms
[bump] bump-render=height: FPS: 2491 FrameTime: 0.402 ms
[effect2d] kernel=0,1,0;1,-4,1;0,1,0;: FPS: 2842 FrameTime: 0.352 ms
[effect2d] kernel=1,1,1,1,1;1,1,1,1,1;1,1,1,1,1;: FPS: 1348 FrameTime: 0.742 ms
[pulsar] light=false:quads=5:texture=false: FPS: 4415 FrameTime: 0.227 ms
[desktop] blur-radius=5:effect=blur:passes=1:separable=true:windows=4: FPS: 1216 FrameTime: 0.823 ms
[desktop] effect=shadow:windows=4: FPS: 2335 FrameTime: 0.428 ms
[buffer] columns=200:interleave=false:update-dispersion=0.9:update-fraction=0.5:update-method=map: FPS: 655 FrameTime: 1.528 ms
[buffer] columns=200:interleave=false:update-dispersion=0.9:update-fraction=0.5:update-method=subdata: FPS: 1187 FrameTime: 0.843 ms
[buffer] columns=200:interleave=true:update-dispersion=0.9:update-fraction=0.5:update-method=map: FPS: 621 FrameTime: 1.610 ms
[ideas] speed=duration: FPS: 2410 FrameTime: 0.415 ms
[jellyfish] <default>: FPS: 2134 FrameTime: 0.469 ms
[terrain] <default>: FPS: 204 FrameTime: 4.914 ms
[shadow] <default>: FPS: 3225 FrameTime: 0.310 ms
[refract] <default>: FPS: 508 FrameTime: 1.969 ms
[conditionals] fragment-steps=0:vertex-steps=0: FPS: 3842 FrameTime: 0.260 ms
[conditionals] fragment-steps=5:vertex-steps=0: FPS: 3708 FrameTime: 0.270 ms
[conditionals] fragment-steps=0:vertex-steps=5: FPS: 3697 FrameTime: 0.271 ms
[function] fragment-complexity=low:fragment-steps=5: FPS: 3447 FrameTime: 0.290 ms
[function] fragment-complexity=medium:fragment-steps=5: FPS: 3307 FrameTime: 0.302 ms
[loop] fragment-loop=false:fragment-steps=5:vertex-steps=5: FPS: 3047 FrameTime: 0.328 ms
[loop] fragment-steps=5:fragment-uniform=false:vertex-steps=5: FPS: 3273 FrameTime: 0.306 ms
[loop] fragment-steps=5:fragment-uniform=true:vertex-steps=5: FPS: 3566 FrameTime: 0.280 ms
=======================================================
                                  glmark2 Score: 2538 
=======================================================

Note: This benchmark requires an active display on the device. Not all devices may be able to run glmark2-es2, so in that case, make a note and move on!

TODO: See this issue for discussion about a full suite of standardized GPU benchmarks.

Memory

tinymembench results:

Click to expand memory benchmark result
tinymembench v0.4.10 (simple benchmark for memory throughput and latency)

==========================================================================
== Memory bandwidth tests                                               ==
==                                                                      ==
== Note 1: 1MB = 1000000 bytes                                          ==
== Note 2: Results for 'copy' tests show how many bytes can be          ==
==         copied per second (adding together read and writen           ==
==         bytes would have provided twice higher numbers)              ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
==         to first fetch data into it, and only then write it to the   ==
==         destination (source -> L1 cache, L1 cache -> destination)    ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in    ==
==         brackets                                                     ==
==========================================================================

 C copy backwards                                     :   5373.3 MB/s (0.2%)
 C copy backwards (32 byte blocks)                    :   5372.2 MB/s
 C copy backwards (64 byte blocks)                    :   5372.9 MB/s
 C copy                                               :   5456.5 MB/s
 C copy prefetched (32 bytes step)                    :   4020.8 MB/s
 C copy prefetched (64 bytes step)                    :   4143.6 MB/s (0.2%)
 C 2-pass copy                                        :   5217.0 MB/s (0.1%)
 C 2-pass copy prefetched (32 bytes step)             :   3174.1 MB/s
 C 2-pass copy prefetched (64 bytes step)             :   3175.5 MB/s (0.1%)
 C fill                                               :   7946.8 MB/s
 C fill (shuffle within 16 byte blocks)               :   7943.5 MB/s
 C fill (shuffle within 32 byte blocks)               :   7946.5 MB/s
 C fill (shuffle within 64 byte blocks)               :   7944.3 MB/s (0.2%)
 ---
 standard memcpy                                      :   8138.6 MB/s
 standard memset                                      :   7968.4 MB/s
 ---
 MOVSB copy                                           :   5436.5 MB/s
 MOVSD copy                                           :   5275.4 MB/s (2.6%)
 SSE2 copy                                            :   4847.0 MB/s
 SSE2 nontemporal copy                                :   7660.0 MB/s (0.8%)
 SSE2 copy prefetched (32 bytes step)                 :   4677.2 MB/s (0.3%)
 SSE2 copy prefetched (64 bytes step)                 :   4690.3 MB/s (0.2%)
 SSE2 nontemporal copy prefetched (32 bytes step)     :   6329.2 MB/s (0.3%)
 SSE2 nontemporal copy prefetched (64 bytes step)     :   6597.4 MB/s
 SSE2 2-pass copy                                     :   4644.6 MB/s (1.5%)
 SSE2 2-pass copy prefetched (32 bytes step)          :   3697.0 MB/s (0.7%)
 SSE2 2-pass copy prefetched (64 bytes step)          :   3767.6 MB/s
 SSE2 2-pass nontemporal copy                         :   2722.4 MB/s (1.2%)
 SSE2 fill                                            :   7329.0 MB/s (0.5%)
 SSE2 nontemporal fill                                :  16293.2 MB/s

==========================================================================
== Memory latency test                                                  ==
==                                                                      ==
== Average time is measured for random memory accesses in the buffers   ==
== of different sizes. The larger is the buffer, the more significant   ==
== are relative contributions of TLB, L1/L2 cache misses and SDRAM      ==
== accesses. For extremely large buffer sizes we are expecting to see   ==
== page table walk with several requests to SDRAM for almost every      ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest).                                         ==
==                                                                      ==
== Note 1: All the numbers are representing extra time, which needs to  ==
==         be added to L1 cache latency. The cycle timings for L1 cache ==
==         latency can be usually found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
==         two independent memory accesses at a time. In the case if    ==
==         the memory subsystem can't handle multiple outstanding       ==
==         requests, dual random read has the same timings as two       ==
==         single reads performed one after another.                    ==
==========================================================================

block size : single random read / dual random read, [MADV_NOHUGEPAGE]
      1024 :    0.0 ns          /     0.0 ns 
      2048 :    0.0 ns          /     0.0 ns 
      4096 :    0.0 ns          /     0.0 ns 
      8192 :    0.0 ns          /     0.0 ns 
     16384 :    0.0 ns          /     0.0 ns 
     32768 :    0.0 ns          /     0.0 ns 
     65536 :    2.5 ns          /     3.7 ns 
    131072 :    3.8 ns          /     4.7 ns 
    262144 :    5.1 ns          /     6.0 ns 
    524288 :    6.4 ns          /     7.2 ns 
   1048576 :    7.1 ns          /     7.5 ns 
   2097152 :    8.7 ns          /     9.8 ns 
   4194304 :   14.9 ns          /    18.8 ns 
   8388608 :   34.2 ns          /    50.9 ns 
  16777216 :   89.3 ns          /   129.8 ns 
  33554432 :  125.3 ns          /   165.9 ns 
  67108864 :  145.4 ns          /   181.9 ns 

block size : single random read / dual random read, [MADV_HUGEPAGE]
      1024 :    0.0 ns          /     0.0 ns 
      2048 :    0.0 ns          /     0.0 ns 
      4096 :    0.0 ns          /     0.0 ns 
      8192 :    0.0 ns          /     0.0 ns 
     16384 :    0.0 ns          /     0.0 ns 
     32768 :    0.0 ns          /     0.0 ns 
     65536 :    2.5 ns          /     3.7 ns 
    131072 :    3.8 ns          /     4.7 ns 
    262144 :    4.4 ns          /     4.9 ns 
    524288 :    4.7 ns          /     5.0 ns 
   1048576 :    4.9 ns          /     5.0 ns 
   2097152 :    5.2 ns          /     5.3 ns 
   4194304 :   11.3 ns          /    14.4 ns 
   8388608 :   28.9 ns          /    43.7 ns 
  16777216 :   80.1 ns          /   114.8 ns 
  33554432 :  109.6 ns          /   140.2 ns 
  67108864 :  125.3 ns          /   148.9 ns 
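
The main difference between the two latency tables is TLB pressure; at the largest buffer size, the hugepage benefit works out to roughly 14%. A quick calculation from the numbers above:

```python
# Compare 64 MiB single-random-read latency between the two runs above:
# MADV_HUGEPAGE mostly removes page-table-walk overhead at large sizes.
nohuge_ns = 145.4  # [MADV_NOHUGEPAGE], 67108864-byte block
huge_ns = 125.3    # [MADV_HUGEPAGE], same block size

improvement = (nohuge_ns - huge_ns) / nohuge_ns
print(f"Huge pages cut 64 MiB random-read latency by {improvement:.1%}")
```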

sbc-bench results

https://0x0.st/XO8j.bin

  * memcpy: 7292.7 MB/s, memchr: 10791.5 MB/s, memset: 7153.0 MB/s
  * 16M latency: 145.3 126.3 146.2 126.9 145.9 126.7 119.4 126.5 
  * 128M latency: 160.6 144.3 162.0 144.5 161.0 171.1 139.2 141.3 
  * 7-zip MIPS (3 consecutive runs): 7919, 7956, 7971 (7950 avg), single-threaded: 3459
  * `aes-256-cbc     891938.02k  1178187.61k  1216253.87k  1227214.51k  1230935.38k  1230744.23k`
  * `aes-256-cbc     876402.26k  1132943.79k  1217788.42k  1228350.12k  1230916.27k  1230236.33k`

Phoronix Test Suite

Results from pi-general-benchmark.sh:

  • pts/encode-mp3: 9.461 sec
  • pts/x264 4K: 4.43 fps
  • pts/x264 1080p: 20.22 fps
  • pts/phpbench: 706245
  • pts/build-linux-kernel (defconfig): 1018.833 sec

geerlingguy commented Jul 20, 2024

Some coverage: CNX, Bret.dk, Hackaday, Tao of Mac, makerbymistake.

It's the first time an Intel-based SBC has challenged Raspberry Pi / RK3588 Arm boards on pricing. Running Windows with 4/8 GB of RAM would be a bit of a pain, but Linux on it could be interesting.

Just because it's x86 doesn't mean it will be perfect, though (as I experienced with the LattePanda Mu). It is a very intriguing board; Intel must have a massive stock of N100 SoCs, given how many devices are integrating the things!


geerlingguy commented Jul 30, 2024

A few notes from my assembly:

  • The heatsink case requires removal of the 'bottom feet' to get at the screws on the X4; if they had cutouts in the metal for a screwdriver, or if the cut was different, they could avoid users having to remove and replace 6 extra screws to get at the board.
  • The included thermal pad was definitely crusty. Bret.dk also mentioned this; it seems like at least in the batch we were shipped, the thermal pads may be expired, or were stored somewhere so dry they rotted out. I'll attach a picture in a bit illustrating the problem. I think the crusty texture also resulted in incomplete contact, as the smaller die was only contacting the thermal pad about 50%.
  • I replaced the included thermal pad with my own, cut from a 1mm thick set.
  • The fan cable from the fan on the included heatsink is not very long—I had to re-remove the X4 to attach the little 2-pin JST connector, then re-attach the X4, stretching the fan cable to its limit. There's no way to route the fan cable differently—it could definitely use an extra cm or so of length.
  • The antennas and CMOS battery came pre-attached—and kind of just stick out from the case like antennae. I used a tiny piece of VHB tape to stick the CMOS battery onto the 'leg' of the heatsink case, and stuck the WiFi/BT antennas to the other side. There's not really a place to route these little wires, so they just stick out the open sides.
  • There was no M.2 mounting screw included, so I had to fish one out of my little M.2 screw baggie.
  • There's no thermal interface between the memory and the heatsink, so it doesn't get the benefit of the giant heatsink hovering just a couple mm or so above it.

radxa-x4-1


geerlingguy commented Jul 30, 2024

Things I would like to check on with attached RP2040's GPIO:

  • How is the RP2040 attached to the system? (I didn't see it using lsusb)
  • Is there any way to get HATs to work?
  • Do any Pi libraries work with the X4, or will they need rewriting to work through the RP2040?
  • See question: Using GPIO through the RP2040 (it looks like their docs only mention using a tool called Zadig to get it to work in Windows, but even there, it seems like the USB device would need to be attached?)
Screenshot 2024-07-30 at 4 49 36 PM

It says it's connected to USB 2.0 / UART on the N100 itself... but I definitely don't see it on USB:

lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 002: ID 1997:2466 Shenzhen Riitek Technology Co., Ltd Wireless Receiver
Bus 003 Device 003: ID 13d3:3572 IMC Networks Bluetooth Radio
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub

sudo lsusb -t
/:  Bus 001.Port 001: Dev 001, Class=root_hub, Driver=xhci_hcd/1p, 480M
/:  Bus 002.Port 001: Dev 001, Class=root_hub, Driver=xhci_hcd/1p, 20000M/x2
/:  Bus 003.Port 001: Dev 001, Class=root_hub, Driver=xhci_hcd/12p, 480M
    |__ Port 003: Dev 002, If 0, Class=Human Interface Device, Driver=usbhid, 12M
    |__ Port 003: Dev 002, If 1, Class=Human Interface Device, Driver=usbhid, 12M
    |__ Port 007: Dev 003, If 0, Class=Wireless, Driver=btusb, 12M
    |__ Port 007: Dev 003, If 1, Class=Wireless, Driver=btusb, 12M
/:  Bus 004.Port 001: Dev 001, Class=root_hub, Driver=xhci_hcd/4p, 10000M
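
Since the RP2040's USB vendor ID is Raspberry Pi's 2e8a (it shows up as 2e8a:0003, "RP2 Boot", when BOOTSEL is held), a small script can confirm whether it ever enumerates. This is just a sketch that scans lsusb-style text:

```python
RPI_USB_VENDOR = "2e8a"  # Raspberry Pi's USB vendor ID

def rp2040_on_usb(lsusb_output: str) -> bool:
    """True if any enumerated USB device carries Raspberry Pi's vendor ID."""
    return any(f"ID {RPI_USB_VENDOR}:" in line
               for line in lsusb_output.splitlines())

# The capture above shows only root hubs, a wireless receiver, and the
# Bluetooth radio -- no 2e8a device, so the RP2040 isn't on the bus:
sample = """\
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 003: ID 13d3:3572 IMC Networks Bluetooth Radio"""
print(rp2040_on_usb(sample))  # False
```

In practice you'd feed it the output of `subprocess.run(["lsusb"], capture_output=True, text=True).stdout`.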

@geerlingguy

Power levels between 9-22W, while running Geekbench:

Screenshot 2024-07-30 at 4 26 51 PM

I'm guessing I should use a higher power USB-C adapter (currently using Argon's 27W PWR GaN...). They recommend their PD 30W PSU, or anything that "supports 12V and the current >= 2.5A under 12V".


geerlingguy commented Jul 30, 2024

I'm seeing some very high temps:

coretemp-isa-0000
Adapter: ISA adapter
Package id 0:  +95.0°C  (high = +105.0°C, crit = +105.0°C)
Core 0:        +95.0°C  (high = +105.0°C, crit = +105.0°C)
Core 1:        +95.0°C  (high = +105.0°C, crit = +105.0°C)
Core 2:        +95.0°C  (high = +105.0°C, crit = +105.0°C)
Core 3:        +95.0°C  (high = +105.0°C, crit = +105.0°C)

acpitz-acpi-0
Adapter: ACPI interface
temp1:        +27.8°C

nvme-pci-0300
Adapter: PCI adapter
Composite:    +72.8°C  (low  = -273.1°C, high = +81.8°C)
                       (crit = +85.8°C)
Sensor 1:     +72.8°C  (low  = -273.1°C, high = +65261.8°C)

NVMe stays up around that temperature at all times. The CPU spikes to 95°C when I start hammering it, then it seems to throttle down to settle in at 74°C or so. It idles at 45°C.

Using i7z while running stress-ng:

Cpu speed from cpuinfo 806.00Mhz
cpuinfo might be wrong if cpufreq is enabled. To guess correctly try estimating via tsc
Linux's inbuilt cpu_khz code emulated now
True Frequency (without accounting Turbo) 806 MHz
  CPU Multiplier 8x || Bus clock frequency (BCLK) 100.75 MHz

Socket [0] - [physical cores=4, logical cores=4, max online cores ever=4]
  TURBO ENABLED on 4 Cores, Hyper Threading OFF
  Max Frequency without considering Turbo 906.75 MHz (100.75 x [9])
  Max TURBO Multiplier (if Enabled) with 1/2/3/4 Cores is  34x/34x/31x/29x
  Real Current Frequency 2877.43 MHz [100.75 x 28.56] (Max of below)
        Core [core-id]  :Actual Freq (Mult.)      C0%   Halt(C1)%  C3 %   C6 %  Temp      VCore
        Core 1 [0]:       2877.43 (28.56x)       100       0       0       0    95      1.0508
        Core 2 [1]:       2877.41 (28.56x)       100       0       0       0    95      1.0516
        Core 3 [2]:       2877.39 (28.56x)       100       0       0       0    95      1.0516
        Core 4 [3]:       2877.42 (28.56x)       100       0       0       0    95      1.0516



C0 = Processor running without halting
C1 = Processor running with halts (States >C0 are power saver modes with cores idling)
C3 = Cores running with PLL turned off and core cache turned off
C6, C7 = Everything in C3 + core state saved to last level cache, C7 is deeper than C6
  Above values in table are in percentage over the last 1 sec
[core-id] refers to core-id number in /proc/cpuinfo
'Garbage Values' message printed when garbage values are read
  Ctrl+C to exit

After about 10 seconds:

  Real Current Frequency 2221.42 MHz [100.75 x 22.05] (Max of below)
        Core [core-id]  :Actual Freq (Mult.)      C0%   Halt(C1)%  C3 %   C6 %  Temp      VCore
        Core 1 [0]:       2221.37 (22.05x)       100       0       0       0    76      0.9174
        Core 2 [1]:       2221.41 (22.05x)       100       0       0       0    76      0.9174
        Core 3 [2]:       2221.41 (22.05x)       100       0       0       0    76      0.9174
        Core 4 [3]:       2221.42 (22.05x)       100       0       0       0    76      0.9175
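
i7z reports frequency as BCLK x multiplier, so the thermal throttle is easy to quantify from the two snapshots above:

```python
# i7z frequency = BCLK x multiplier; reproduce both snapshots above
# and quantify the thermal throttle.
bclk_mhz = 100.75

burst_mult = 28.56   # all-core multiplier at 95 degrees C
steady_mult = 22.05  # settled multiplier after ~10 s, 76 degrees C

burst_mhz = bclk_mhz * burst_mult    # ~2877 MHz
steady_mhz = bclk_mhz * steady_mult  # ~2222 MHz

drop = 1 - steady_mhz / burst_mhz
print(f"{burst_mhz:.0f} MHz -> {steady_mhz:.0f} MHz ({drop:.0%} throttle)")
```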


geerlingguy commented Jul 30, 2024

Initial conclusion (assuming there isn't something massively wrong with my heatsink case): you need a lot more cooling and a better thermal interface than on a Raspberry Pi / Arm chip to cool the N100 adequately. It's odd, because the similar-size heatsink on the LattePanda Mu seems to do an adequate job of keeping it cool, at least for benchmarking runs.

I may re-paste the thing with higher performance thermal paste... but the gap is big enough I don't know if that's a good idea. A pad is really necessary.

@geerlingguy

PiBenchmarks.com result:

Alias (leave blank for Anonymous): geerlingguy
Result submitted successfully and will appear live on https://pibenchmarks.com within a couple of minutes.

     Category                  Test                      Result      
HDParm                    Disk Read                 2837.27 MB/sec           
HDParm                    Cached Disk Read          1644.80 MB/sec           
DD                        Disk Write                732 MB/s                 
FIO                       4k random read            187889 IOPS (751559 KB/s)
FIO                       4k random write           84979 IOPS (339917 KB/s) 
IOZone                    4k read                   103342 KB/s              
IOZone                    4k write                  140635 KB/s              
IOZone                    4k random read            61937 KB/s               
IOZone                    4k random write           235508 KB/s              

                          Score: 51977                                       
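
As a sanity check, the FIO throughput column is just IOPS times the 4 KB block size; the figures above line up to within rounding:

```python
# FIO KB/s = IOPS x 4 KB block size; verify against the table above
# (the benchmark's own rounding accounts for the few-KB differences).
read_iops, read_kbs = 187889, 751559
write_iops, write_kbs = 84979, 339917

for iops, kbs in [(read_iops, read_kbs), (write_iops, write_kbs)]:
    assert abs(iops * 4 - kbs) <= 4, (iops, kbs)
print("FIO IOPS and KB/s columns agree")
```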

@geerlingguy

For GPIO use, see Les Pounder's video on using GPIO — you press the button on the board (near the RADXA logo, unmarked), and it resets the RP2040 and holds down BOOTSEL at the same time, so it appears as a USB device:

Bus 003 Device 005: ID 2e8a:0003 Raspberry Pi RP2 Boot

Under Ubuntu Desktop, it automatically mounts the volume:

sda           8:0    1   128M  0 disk 
└─sda1        8:1    1   128M  0 part /media/jgeerling/RPI-RP2

There, you can drag a UF2 file on it (like MicroPython), and use Thonny to interact with the RP2040, upload code to it, etc.

I have yet to find a way to virtually 'press the button'; some people mention the access is the same as on the X2L, but I tried using their code to manually reset the RP2040 with gpiod installed and got:

jgeerling@radxa-x4:~/Downloads$ source usb.sh
gpioset: error setting the GPIO line values: No such file or directory
gpioset: error setting the GPIO line values: No such file or directory
gpioset: error setting the GPIO line values: No such file or directory
gpioset: error setting the GPIO line values: No such file or directory


geerlingguy commented Jul 31, 2024

I've squished in some Noctua thermal paste in place of the thermal pads, to see if it conducts the heat away any better.

image

Re-running Geekbench 6, I'm getting: https://browser.geekbench.com/v6/cpu/7138811 (very slightly faster than before).

And re-running HPL, I'm getting 37.531 Gflops at 15W, for 2.50 Gflops/W — so speed marginally better, efficiency slightly better (maybe just running a slight bit cooler).
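
The Gflops/W figure is just HPL throughput divided by wall power during the run:

```python
# Efficiency for the HPL re-run above: Gflops per watt at the wall.
hpl_gflops = 37.531
wall_watts = 15.0

print(f"{hpl_gflops / wall_watts:.2f} Gflops/W")  # 2.50
```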

Finally, here's a re-run of sbc-bench: https://0x0.st/XObB.bin

@camerahacks

Hey Jeff, this is André from dphacks/rpilocator.

I just got mine today! I was able to reroute the fan cable by removing the 3 screws on the fan and repositioning it so the wires are closer to the connector.

@geerlingguy

@camerahacks ah, that would definitely help! Maybe a QC issue and they can fix it on the production line for that heatsink fan.


rcarmo commented Aug 4, 2024

The ideal thing would be for the fan wire to go down to the board via a notch (I have that in my “improvements” section). That way I could 3D print dust covers for the front and back that didn’t leave a little turn of wire on the outside…


alansill commented Aug 4, 2024

Has anyone tried the PoE+ HAT with this? Our interest, as usual, is in building SBC clusters for outreach and training. We have been able to cobble together PoE for the 12V needed by the X2L boards (combining PoE-to-12V barrel adapters with automotive 12V-to-USB-C adapters that support 12V output, by soldering barrel-connector leads onto them), but we'd prefer an integrated solution.

@camerahacks

@alansill , Radxa is offering a PoE+ HAT specific to the X4. The HAT has the power pins that line up with the X4's power pins.


alansill commented Aug 4, 2024

> @alansill , Radxa is offering a PoE+ HAT specific to the X4. The HAT has the power pins that line up with the X4's power pins.

Hi, thanks! I'm definitely aware of this, and I'm asking whether anyone has any experience with using it yet.

@geerlingguy

@alansill IIRC @bretmlw has one? He has his first thoughts here: https://bret.dk/intel-n100-radxa-x4-first-thoughts/


alansill commented Aug 5, 2024

Thanks. Now it's down to comparing good cheap PoE+ 2.5 GbE switches, I guess.

The only reason I want this feature is for remote hard power-cycling control. I wish I could find a nice way to do this directly. The other option we're exploring is a simple opto-isolator on each node's conventional power, controlled by the GPIO of the head node.

@willgryan

Is there any mention of in-band ECC in the BIOS? I'm wondering because both the LattePanda Mu and ODROID-H4 use (LP)DDR5 and offer IBECC.


bretmlw commented Aug 5, 2024

> @alansill , Radxa is offering a PoE+ HAT specific to the X4. The HAT has the power pins that line up with the X4's power pins.
>
> HI, thanks! I'm definitely aware of this and asking if anyone has any experience with using it yet.

Thanks for the pong @geerlingguy. Yup, @alansill, I have it, and it seems to be fine overall. You need to enable the fan on the HAT via the RP2040. Some of my cables that were fine for PoE on other devices were a little flaky on my cheap 2.5G PoE switch, but I don't know if that's down to the HAT or just the much higher power draw compared to other devices I've used: with the PL2 set to the default 25 W (or 15 W), the board would just reset when a heavy benchmark started. The same cable on my UniFi PoE Lite switch didn't do this, but then it was only 1GbE, which made me sad :D With PL2 set to 6 W it seemed fine on either switch, and at a quick glance it didn't impact performance by much (if at all). With this cooling you're not sustaining those higher power levels for more than a split second (if that!), so it's pointless anyway.

If you have both the case fan and the PoE fan running, it may make things a little nicer for your NVMe, but honestly, the overall cooling seems extremely lacklustre for such a device. I'd much rather they had gone "Pico-ITX" like the ROCK 5B or something, to give themselves a bit more space for both cooling and a larger form-factor NVMe drive!


webash commented Aug 29, 2024

Has anyone had a chance to benchmark, or even just stat-dump, the eMMC? Otherwise, does anyone have a view on whether it'll be suitable for the OS install of a Docker Swarm cluster? It will likely be fairly stateless, as I'll be externalising most logs to a Loki/Prom instance, so there should be limited continuous writes. I can't find any information about the eMMC on Radxa's website.

EDIT: Mine hasn't arrived yet, so I was hoping to get ahead of things if anyone else had already tested it.


rcarmo commented Aug 29, 2024

I don't recall anyone having mentioned receiving a board with the eMMC soldered on. Mine came with the BGA location blank, and I don't think they're currently selling SKUs with it.

@willgryan

I have an X4 8GB model w/ 64GB eMMC. Below are fio results from running YetAnotherBenchScript on Debian 12.5. Would you like to see anything else?

fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/mmcblk0p2):
---------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ---- 
Read       | 22.38 MB/s    (5.5k) | 77.02 MB/s    (1.2k)
Write      | 22.39 MB/s    (5.5k) | 77.43 MB/s    (1.2k)
Total      | 44.77 MB/s   (11.1k) | 154.46 MB/s   (2.4k)
           |                      |                     
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ---- 
Read       | 119.52 MB/s    (233) | 117.75 MB/s    (114)
Write      | 125.87 MB/s    (245) | 125.59 MB/s    (122)
Total      | 245.39 MB/s    (478) | 243.34 MB/s    (236)


schtschenok commented Sep 3, 2024

Hey there! I just got my X4 (8GB), and my temps seem to be way better: roughly 55°C under load (with all cores running constantly at 2900 MHz) and 25°C idle. I tweaked something in the BIOS (something to do with PL1/PL2) so the cores never lower their frequency under load from 2900 MHz. If the load is asymmetrical, some cores can go up to 3.4GHz at roughly 65°C for a moment, but never 90°C or anything like that. These thermals were the same for me both during synthetic stress tests (s-tui) and during an actual FFmpeg compile, so it's probably not a testing methodology issue.

While assembling it, I noticed that the supplied thermal pad was just way too thin (and crumbly) and didn't make the full contact between the die and the radiator. They put two pieces of the same pad in the box, so I, without much hesitation, just slapped one on top of another, and it worked for me!

So my GeekBench results with two crumbly thermal pads slapped onto each other (and not even covering the whole chip's PCB area - only the shiny dies and some space around them) are even better than @geerlingguy's thermal paste results. Which is kind of weird, honestly.
https://browser.geekbench.com/v6/cpu/7615536

UPD: Without me checking the temps in a second SSH session while benchmarking, it did even better: https://browser.geekbench.com/v6/cpu/7615765
