Samsung Chromebook XE303C12 (Exynos5250)

Processor       : ARMv7 Processor rev 4 (v7l)
processor       : 0
BogoMIPS        : 1694.10

processor       : 1
BogoMIPS        : 1694.10

Features        : swp half thumb fastmult vfp edsp thumbee neon vfpv3 tls vfpv4 idiva idivt 
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xc0f
CPU revision    : 4

Hardware        : SAMSUNG EXYNOS5 (Flattened Device Tree)
Revision        : 0000
Serial          : 0000000000000000
tinymembench v0.4 (simple benchmark for memory throughput and latency)

==========================================================================
== Memory bandwidth tests                                               ==
==                                                                      ==
== Note 1: 1MB = 1000000 bytes                                          ==
== Note 2: Results for 'copy' tests show how many bytes can be          ==
==         copied per second (adding together read and written          ==
==         bytes would have doubled the numbers)                        ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
==         to first fetch data into it, and only then write it to the   ==
==         destination (source -> L1 cache, L1 cache -> destination)    ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in    ==
==         brackets                                                     ==
==========================================================================

 C copy backwards                                     :   1415.1 MB/s (0.7%)
 C copy backwards (32 byte blocks)                    :   1463.5 MB/s
 C copy backwards (64 byte blocks)                    :   2547.7 MB/s (0.5%)
 C copy                                               :   2856.3 MB/s (0.2%)
 C copy prefetched (32 bytes step)                    :   3140.5 MB/s
 C copy prefetched (64 bytes step)                    :   3245.3 MB/s
 C 2-pass copy                                        :   1586.7 MB/s
 C 2-pass copy prefetched (32 bytes step)             :   1883.0 MB/s
 C 2-pass copy prefetched (64 bytes step)             :   1947.1 MB/s
 C fill                                               :   6055.9 MB/s (0.4%)
 C fill (shuffle within 16 byte blocks)               :   2074.8 MB/s
 C fill (shuffle within 32 byte blocks)               :   2076.0 MB/s
 C fill (shuffle within 64 byte blocks)               :   2140.5 MB/s
 ---
 standard memcpy                                      :   2535.8 MB/s
 standard memset                                      :   6049.0 MB/s (0.4%)
 ---
 NEON read                                            :   3862.4 MB/s
 NEON read prefetched (32 bytes step)                 :   4618.3 MB/s
 NEON read prefetched (64 bytes step)                 :   4819.5 MB/s
 NEON read 2 data streams                             :   3937.7 MB/s
 NEON read 2 data streams prefetched (32 bytes step)  :   4601.5 MB/s
 NEON read 2 data streams prefetched (64 bytes step)  :   4865.7 MB/s
 NEON copy                                            :   2893.4 MB/s
 NEON copy prefetched (32 bytes step)                 :   3240.7 MB/s
 NEON copy prefetched (64 bytes step)                 :   3254.8 MB/s
 NEON unrolled copy                                   :   2588.8 MB/s
 NEON unrolled copy prefetched (32 bytes step)        :   3476.4 MB/s
 NEON unrolled copy prefetched (64 bytes step)        :   3516.2 MB/s
 NEON copy backwards                                  :   1479.6 MB/s
 NEON copy backwards prefetched (32 bytes step)       :   1641.4 MB/s
 NEON copy backwards prefetched (64 bytes step)       :   1649.4 MB/s
 NEON 2-pass copy                                     :   2354.0 MB/s
 NEON 2-pass copy prefetched (32 bytes step)          :   2647.0 MB/s
 NEON 2-pass copy prefetched (64 bytes step)          :   2663.0 MB/s
 NEON unrolled 2-pass copy                            :   1529.9 MB/s
 NEON unrolled 2-pass copy prefetched (32 bytes step) :   1898.6 MB/s
 NEON unrolled 2-pass copy prefetched (64 bytes step) :   1937.9 MB/s
 NEON fill                                            :   6058.0 MB/s (0.3%)
 NEON fill backwards                                  :   2099.7 MB/s
 VFP copy                                             :   2731.3 MB/s
 VFP 2-pass copy                                      :   1506.3 MB/s
 ARM fill (STRD)                                      :   6059.6 MB/s (0.4%)
 ARM fill (STM with 8 registers)                      :   6050.2 MB/s
 ARM fill (STM with 4 registers)                      :   6060.9 MB/s (0.1%)
 ARM copy prefetched (incr pld)                       :   3180.8 MB/s
 ARM copy prefetched (wrap pld)                       :   3061.7 MB/s
 ARM 2-pass copy prefetched (incr pld)                :   1815.2 MB/s
 ARM 2-pass copy prefetched (wrap pld)                :   1766.1 MB/s
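
For reference, here is a minimal C sketch of the 2-pass copy described in Note 3 above. It is an illustration only, not tinymembench's actual implementation, and the 8 KiB staging-buffer size is an assumption chosen to fit comfortably in L1 cache:

```c
#include <string.h>

/* 2-pass copy (Note 3): stage data in a small L1-resident buffer first,
 * then write it out (source -> L1 cache, L1 cache -> destination). */
static void two_pass_copy(char *dst, const char *src, size_t size)
{
    static char tmp[8192];              /* small temporary buffer (assumed size) */
    while (size > 0) {
        size_t chunk = size < sizeof(tmp) ? size : sizeof(tmp);
        memcpy(tmp, src, chunk);        /* pass 1: source -> L1 cache      */
        memcpy(dst, tmp, chunk);        /* pass 2: L1 cache -> destination */
        src += chunk; dst += chunk; size -= chunk;
    }
}
```

The extra pass explains why the 2-pass results above trail the plain copies: every byte is moved through the CPU twice.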

==========================================================================
== Framebuffer read tests.                                              ==
==                                                                      ==
== Many ARM devices use a part of the system memory as the framebuffer, ==
== typically mapped as uncached but with write-combining enabled.       ==
== Writes to such framebuffers are quite fast, but reads are much       ==
== slower and very sensitive to the alignment and the selection of      ==
== CPU instructions which are used for accessing memory.                ==
==                                                                      ==
== Many x86 systems allocate the framebuffer in the GPU memory,         ==
== accessible to the CPU via a relatively slow PCI-E bus. Moreover,     ==
== PCI-E is asymmetric and handles reads a lot worse than writes.       ==
==                                                                      ==
== If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
== or preferably >300 MB/s), then using the shadow framebuffer layer    ==
== is not necessary in Xorg DDX drivers, resulting in a nice overall    ==
== performance improvement. For example, the xf86-video-fbturbo DDX     ==
== uses this trick.                                                     ==
==========================================================================

 NEON read (from framebuffer)                         :    324.1 MB/s
 NEON copy (from framebuffer)                         :    319.9 MB/s
 NEON 2-pass copy (from framebuffer)                  :    295.1 MB/s
 NEON unrolled copy (from framebuffer)                :    298.1 MB/s
 NEON 2-pass unrolled copy (from framebuffer)         :    287.2 MB/s
 VFP copy (from framebuffer)                          :    521.7 MB/s
 VFP 2-pass copy (from framebuffer)                   :    470.1 MB/s
 ARM copy (from framebuffer)                          :    284.8 MB/s
 ARM 2-pass copy (from framebuffer)                   :    260.9 MB/s
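
A rough sketch of how such an uncached framebuffer read test can be set up on Linux. The /dev/fb0 path and the 8 MiB mapping size are assumptions, and the timing loop is omitted; tinymembench queries the real framebuffer geometry instead:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = 8 << 20;                       /* assumed mapping size */
    int fd = open("/dev/fb0", O_RDWR);          /* assumed fbdev node   */
    if (fd < 0) { perror("open"); return 1; }

    /* The mapping is typically uncached + write-combined, which is why
     * reads through it are so much slower than normal memory reads. */
    unsigned char *fb = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); return 1; }

    static unsigned char buf[8 << 20];
    memcpy(buf, fb, len);       /* the "copy (from framebuffer)" step,
                                 * which the real benchmark times */

    munmap(fb, len);
    close(fd);
    return 0;
}
```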

==========================================================================
== Memory latency test                                                  ==
==                                                                      ==
== Average time is measured for random memory accesses in the buffers   ==
== of different sizes. The larger the buffer, the more significant      ==
== the relative contributions of TLB, L1/L2 cache misses and SDRAM      ==
== accesses. For extremely large buffer sizes we expect to see a        ==
== page table walk with several requests to SDRAM for almost every      ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest).                                         ==
==                                                                      ==
== Note 1: All the numbers represent extra time, which needs to be      ==
==         added to L1 cache latency. The cycle timings for L1 cache    ==
==         latency can usually be found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
==         two independent memory accesses at a time. If the memory     ==
==         subsystem can't handle multiple outstanding requests, dual   ==
==         random read has the same timings as two single reads         ==
==         performed one after another.                                  ==
==========================================================================

block size : single random read / dual random read
      1024 :    0.0 ns          /     0.0 ns 
      2048 :    0.0 ns          /     0.0 ns 
      4096 :    0.0 ns          /     0.0 ns 
      8192 :    0.0 ns          /     0.0 ns 
     16384 :    0.0 ns          /     0.0 ns 
     32768 :    0.0 ns          /     0.0 ns 
     65536 :    5.2 ns          /     7.8 ns 
    131072 :    8.0 ns          /    10.5 ns 
    262144 :   12.5 ns          /    15.6 ns 
    524288 :   14.7 ns          /    18.2 ns 
   1048576 :   21.4 ns          /    29.0 ns 
   2097152 :   74.9 ns          /   109.4 ns 
   4194304 :  109.5 ns          /   149.4 ns 
   8388608 :  127.2 ns          /   169.8 ns 
  16777216 :  136.9 ns          /   182.6 ns 
  33554432 :  150.0 ns          /   204.6 ns 
  67108864 :  159.3 ns          /   221.3 ns 
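
The latency figures come from a dependent-load ("pointer chasing") pattern: each read supplies the address of the next, so accesses cannot overlap. A simplified C sketch of the idea, not tinymembench's exact code (Sattolo's algorithm is used here to build a single random cycle):

```c
#include <stdlib.h>

/* Walk a random cyclic permutation of pointers: every load depends on
 * the previous one, so average time per step approximates latency. */
static void *chase(void **buf, size_t count, size_t steps)
{
    for (size_t i = 0; i < count; i++)
        buf[i] = &buf[i];
    /* Sattolo's algorithm: picking j < i guarantees a single cycle */
    for (size_t i = count - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        void *t = buf[i]; buf[i] = buf[j]; buf[j] = t;
    }
    void *p = buf[0];
    while (steps--)
        p = *(void **)p;     /* dependent load: cannot be overlapped */
    return p;                /* keeps the compiler from deleting the loop */
}
```

For the "dual random read" column, two such independent chains are walked in the same loop, so a memory subsystem that supports multiple outstanding misses can overlap them (see Note 2 above).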

Kernel: 4.9.140-tegra #1 SMP PREEMPT Wed Mar 13 00:32:22 PDT 2019 aarch64 GNU/Linux. Run under Xorg with no compositor active and no browser or other CPU hogs.

tinymembench v0.4.9 (simple benchmark for memory throughput and latency)

==========================================================================
== Memory bandwidth tests                                               ==
==                                                                      ==
== Note 1: 1MB = 1000000 bytes                                          ==
== Note 2: Results for 'copy' tests show how many bytes can be          ==
==         copied per second (adding together read and written          ==
==         bytes would have doubled the numbers)                        ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
==         to first fetch data into it, and only then write it to the   ==
==         destination (source -> L1 cache, L1 cache -> destination)    ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in    ==
==         brackets                                                     ==
==========================================================================

 C copy backwards                                     :   2949.7 MB/s (3.8%)
 C copy backwards (32 byte blocks)                    :   3011.8 MB/s
 C copy backwards (64 byte blocks)                    :   3029.2 MB/s
 C copy                                               :   3642.2 MB/s (4.1%)
 C copy prefetched (32 bytes step)                    :   3824.4 MB/s (0.3%)
 C copy prefetched (64 bytes step)                    :   3825.3 MB/s (0.4%)
 C 2-pass copy                                        :   2726.2 MB/s
 C 2-pass copy prefetched (32 bytes step)             :   2902.6 MB/s (2.5%)
 C 2-pass copy prefetched (64 bytes step)             :   2928.3 MB/s (0.3%)
 C fill                                               :   8541.0 MB/s (0.2%)
 C fill (shuffle within 16 byte blocks)               :   8518.5 MB/s (2.1%)
 C fill (shuffle within 32 byte blocks)               :   8537.1 MB/s (0.1%)
 C fill (shuffle within 64 byte blocks)               :   8528.7 MB/s (0.2%)
 ---
 standard memcpy                                      :   3558.8 MB/s
 standard memset                                      :   8520.2 MB/s
 ---
 NEON LDP/STP copy                                    :   3633.9 MB/s (4.2%)
 NEON LDP/STP copy pldl2strm (32 bytes step)          :   1451.0 MB/s (0.3%)
 NEON LDP/STP copy pldl2strm (64 bytes step)          :   1450.9 MB/s (0.5%)
 NEON LDP/STP copy pldl1keep (32 bytes step)          :   3882.5 MB/s (3.9%)
 NEON LDP/STP copy pldl1keep (64 bytes step)          :   3884.0 MB/s (0.4%)
 NEON LD1/ST1 copy                                    :   3630.8 MB/s (0.3%)
 NEON STP fill                                        :   8537.8 MB/s
 NEON STNP fill                                       :   8544.9 MB/s (1.7%)
 ARM LDP/STP copy                                     :   3635.8 MB/s (0.3%)
 ARM STP fill                                         :   8544.8 MB/s (0.1%)
 ARM STNP fill                                        :   8549.2 MB/s (1.0%)
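
The AArch64 copy variants above differ only in the load/store instructions and prefetch hints used. As an illustrative sketch (not tinymembench's exact routine), an ARM LDP/STP copy with a pldl1keep hint might look like the following; the 256-byte prefetch distance is an assumption, and size must be a multiple of 32:

```c
#include <stddef.h>

/* AArch64-only sketch: copy 32 bytes per iteration with LDP/STP pairs,
 * prefetching ahead into L1 with a temporal ("keep") hint. The
 * pldl2strm variants instead target L2 with a streaming hint. */
static void ldp_stp_copy_pldl1keep(char *dst, const char *src, size_t size)
{
    for (size_t i = 0; i < size; i += 32) {
        __asm__ volatile(
            "prfm pldl1keep, [%1, #256]\n\t"   /* assumed prefetch distance */
            "ldp  x2, x3, [%1]\n\t"
            "ldp  x4, x5, [%1, #16]\n\t"
            "stp  x2, x3, [%0]\n\t"
            "stp  x4, x5, [%0, #16]"
            :
            : "r"(dst + i), "r"(src + i)
            : "x2", "x3", "x4", "x5", "memory");
    }
}
```

Note how the pldl2strm hint hurts badly on this platform (about 1451 vs 3884 MB/s): data streamed toward L2 bypasses L1, where the copy loop actually wants it.
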
==========================================================================
== Framebuffer read tests.                                              ==
==                                                                      ==
== Many ARM devices use a part of the system memory as the framebuffer, ==
== typically mapped as uncached but with write-combining enabled.       ==
== Writes to such framebuffers are quite fast, but reads are much       ==
== slower and very sensitive to the alignment and the selection of      ==
== CPU instructions which are used for accessing memory.                ==
==                                                                      ==
== Many x86 systems allocate the framebuffer in the GPU memory,         ==
== accessible to the CPU via a relatively slow PCI-E bus. Moreover,     ==
== PCI-E is asymmetric and handles reads a lot worse than writes.       ==
==                                                                      ==
== If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
== or preferably >300 MB/s), then using the shadow framebuffer layer    ==
== is not necessary in Xorg DDX drivers, resulting in a nice overall    ==
== performance improvement. For example, the xf86-video-fbturbo DDX     ==
== uses this trick.                                                     ==
==========================================================================

 NEON LDP/STP copy (from framebuffer)                 :    766.0 MB/s
 NEON LDP/STP 2-pass copy (from framebuffer)          :    688.8 MB/s
 NEON LD1/ST1 copy (from framebuffer)                 :    770.6 MB/s (0.1%)
 NEON LD1/ST1 2-pass copy (from framebuffer)          :    681.3 MB/s (0.3%)
 ARM LDP/STP copy (from framebuffer)                  :    766.1 MB/s
 ARM LDP/STP 2-pass copy (from framebuffer)           :    689.1 MB/s

==========================================================================
== Memory latency test                                                  ==
==                                                                      ==
== Average time is measured for random memory accesses in the buffers   ==
== of different sizes. The larger the buffer, the more significant      ==
== the relative contributions of TLB, L1/L2 cache misses and SDRAM      ==
== accesses. For extremely large buffer sizes we expect to see a        ==
== page table walk with several requests to SDRAM for almost every      ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest).                                         ==
==                                                                      ==
== Note 1: All the numbers represent extra time, which needs to be      ==
==         added to L1 cache latency. The cycle timings for L1 cache    ==
==         latency can usually be found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
==         two independent memory accesses at a time. If the memory     ==
==         subsystem can't handle multiple outstanding requests, dual   ==
==         random read has the same timings as two single reads         ==
==         performed one after another.                                  ==
==========================================================================

block size : single random read / dual random read, [MADV_NOHUGEPAGE]
      1024 :    0.0 ns          /     0.1 ns 
      2048 :    0.0 ns          /     0.1 ns 
      4096 :    0.0 ns          /     0.1 ns 
      8192 :    0.0 ns          /     0.1 ns 
     16384 :    0.1 ns          /     0.1 ns 
     32768 :    1.7 ns          /     2.9 ns 
     65536 :    6.4 ns          /     9.5 ns 
    131072 :    9.6 ns          /    12.3 ns 
    262144 :   13.7 ns          /    17.0 ns 
    524288 :   15.8 ns          /    19.7 ns 
   1048576 :   17.3 ns          /    22.1 ns 
   2097152 :   42.1 ns          /    64.2 ns 
   4194304 :   98.5 ns          /   138.1 ns 
   8388608 :  143.9 ns          /   186.3 ns 
  16777216 :  167.2 ns          /   211.2 ns 
  33554432 :  180.1 ns          /   227.1 ns 
  67108864 :  200.0 ns          /   260.2 ns 
block size : single random read / dual random read, [MADV_HUGEPAGE]
      1024 :    0.0 ns          /     0.0 ns 
      2048 :    0.0 ns          /     0.0 ns 
      4096 :    0.0 ns          /     0.0 ns 
      8192 :    0.0 ns          /     0.0 ns 
     16384 :    0.0 ns          /     0.0 ns 
     32768 :    0.0 ns          /     0.0 ns 
     65536 :    6.4 ns          /     9.4 ns 
    131072 :    9.5 ns          /    12.2 ns 
    262144 :   11.2 ns          /    13.1 ns 
    524288 :   12.1 ns          /    13.5 ns 
   1048576 :   12.8 ns          /    13.6 ns 
   2097152 :   27.0 ns          /    33.0 ns 
   4194304 :   90.6 ns          /   127.8 ns 
   8388608 :  123.9 ns          /   153.8 ns 
  16777216 :  139.5 ns          /   161.2 ns 
  33554432 :  147.2 ns          /   163.6 ns 
  67108864 :  154.0 ns          /   167.6 ns 
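
The [MADV_HUGEPAGE] run is faster at large buffer sizes because transparent huge pages reduce TLB misses and shorten page-table walks. A minimal Linux-specific sketch of flagging a buffer this way (error handling trimmed; the buffer size matches the largest block tested above):

```c
#define _DEFAULT_SOURCE          /* exposes MADV_HUGEPAGE in <sys/mman.h> */
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 64 << 20;       /* 64 MiB, the largest block size tested */
    void *buf;
    /* 2 MiB alignment lets the kernel back the region with huge pages */
    if (posix_memalign(&buf, 2 << 20, len) != 0)
        return 1;
    /* Opt in to transparent huge pages; MADV_NOHUGEPAGE opts out instead */
    madvise(buf, len, MADV_HUGEPAGE);
    /* ... the latency test then chases pointers inside buf ... */
    free(buf);
    return 0;
}
```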