Here are the main ideas behind this implementation proposal:
- Process file in parallel using several threads (configurable, with `-t=16` by default);
- Feed each thread with 64MB chunks of input (because thread scheduling is unfair, it is inefficient to pre-divide the whole input file among the threads);
- Each thread manages its own data, so there is no lock until the thread is finished and data is consolidated;
- Each station's information (name hash and values) is packed into a record of exactly 16 bytes, with no external pointer/string, so that four entries fit into a single 64-byte CPU L1 cache line for efficiency;
- Use a dedicated hash table for the name lookup, with crc32c as a perfect hash function - no name comparison or storage is needed;
- Store values as 16-bit or 32-bit integers (i.e. temperature multiplied by 10);
- Parse temperatures with dedicated code (expecting values with a single decimal digit) - see the sketch after this list;
- No memory allocation (e.g. no transient `string` or `TBytes`) and no syscall is made during the parsing process, to reduce contention and ensure the process is only CPU-bound and RAM-bound (we checked this with `strace` on Linux);
- Pascal code was tuned to generate the best possible asm output on FPC x86_64 (which is our target);
- Some dedicated x86_64 asm has been written to replace the *mORMot* `crc32c` and `MemCmp` general-purpose functions and gain a last few percent (nice to have);
- Can optionally output timing statistics and resultset hash value on the console to debug and refine settings (with the `-v` command line switch);
- Can optionally set each thread affinity to a single core (with the `-a` command line switch).
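
To make the 16-byte record and the temperature parsing above concrete, here is a minimal FPC sketch; `TStation` and `ParseTemp` are illustrative names only, not the entry's actual code:

```pascal
{$mode objfpc}
program parse16;

type
  // 16 bytes per station: four entries share one 64-byte L1 cache line
  TStation = packed record
    NameHash: cardinal;  // crc32c of the station name (acts as a perfect hash)
    Min, Max: smallint;  // temperature * 10, within -32768..32767
    Count: cardinal;     // number of measurements for this station
    Sum: integer;        // sum of all temperature * 10 values
  end;

// parse '-12.3' into -123, assuming exactly one decimal digit (as in the input)
function ParseTemp(var p: PAnsiChar): PtrInt;
var
  neg: boolean;
begin
  neg := (p^ = '-');
  if neg then
    inc(p);
  result := 0;
  while p^ <> '.' do
  begin
    result := result * 10 + ord(p^) - ord('0');
    inc(p);
  end;
  result := result * 10 + ord(p[1]) - ord('0'); // the single decimal digit
  inc(p, 2);                                    // skip '.' and the digit
  if neg then
    result := -result;
end;

var
  s: PAnsiChar = '-12.3';
begin
  writeln(SizeOf(TStation), ' bytes, value=', ParseTemp(s)); // 16 bytes, -123
end.
```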

## Why L1 Cache Matters
The "64 bytes cache line" trick is quite unique among all implementations of this challenge.

The L1 cache is well known in the performance-hacking literature to be the main bottleneck of any efficient in-memory process. If you want things to go fast, you should flatter your CPU L1 cache.

Min/max values will be reduced to 16-bit smallint - resulting in a temperature range of -3276.8..+3276.7, which seems fair on our planet according to the IPCC. ;)

In our first attempt, we stored the name itself in the `Station[]` array, so that each entry was exactly 64 bytes long. But since `crc32c` turns out to be a perfect hash function for our dataset, we can just store the 32-bit hash instead, for higher performance. On Intel/AMD/AARCH64 CPUs, we use hardware opcodes for this crc32c computation.

See https://en.wikipedia.org/wiki/Perfect_hash_function for reference.
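
To illustrate how such a perfect hash avoids storing the names, here is a hedged sketch of an open-addressing lookup over 32-bit hashes; this is not the entry's actual code, and it assumes the crc32c values have been verified to be collision-free for all station names of the dataset:

```pascal
{$mode objfpc}
program lookup;

const
  HASH_BITS = 18;                       // 262,144 slots for ~41,343 stations
  HASH_MASK = (1 shl HASH_BITS) - 1;

type
  TStation = packed record
    NameHash: cardinal;
    Min, Max: smallint;
    Count: cardinal;
    Sum: integer;
  end;

var
  Station: array of TStation;
  StationCount: PtrInt = 0;
  Slot: array[0..HASH_MASK] of integer; // 0 = void, else 1 + Station[] index

// return the Station[] index of a given crc32c value, adding it if needed
function LookupOrAdd(hash: cardinal): PtrInt;
var
  i: cardinal;
begin
  i := hash and HASH_MASK;
  repeat
    result := Slot[i];
    if result = 0 then
    begin
      result := StationCount;          // new station: register its hash
      inc(StationCount);
      Station[result].NameHash := hash;
      Slot[i] := result + 1;
      exit;
    end;
    dec(result);
    if Station[result].NameHash = hash then
      exit;                            // found: no name comparison needed
    i := (i + 1) and HASH_MASK;        // linear probing on (rare) slot clash
  until false;
end;

begin
  SetLength(Station, 50000);
  writeln(LookupOrAdd($12345678)); // 0
  writeln(LookupOrAdd($9ABCDEF0)); // 1
  writeln(LookupOrAdd($12345678)); // 0 again
end.
```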

## Usage

On my PC, it takes less than 5 seconds to process the 16GB file with 8/10 threads.
Let's compare `abouchez` with a solid multi-threaded entry using file buffer reads and no memory map (like `sbalazs`), using the `time` command on Linux:

```
ab@dev:~/dev/github/1brc-ObjectPascal/bin$ time ./abouchez measurements.txt -t=20 >resmormot.txt

real 0m2,350s
user 0m40,165s
sys 0m0,888s

ab@dev:~/dev/github/1brc-ObjectPascal/bin$ time ./sbalazs measurements.txt 20 >ressb.txt

real 0m25,330s
user 6m44,853s
sys 0m31,167s
```
We used 20 threads for both executables, because this setting gave the best results for each program on our PC.

Apart from the obvious global "wall" time reduction (`real` numbers), the raw parsing and data gathering in the threads match the number of threads and the running time (`user` numbers), and `abouchez` makes no syscalls at all thanks to the memory mapping of the whole file (`sys` numbers, which account only for memory page faults).
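
For reference, this is roughly how a whole file can be memory-mapped read-only on Linux with the FPC RTL; it illustrates the general technique, not the exact *mORMot* wrapper used by the entry (error handling omitted for brevity):

```pascal
{$mode objfpc}
program mapfile;

uses
  BaseUnix;

var
  fd: cint;
  size: TOff;
  data: pointer;
begin
  fd := FpOpen('measurements.txt', O_RDONLY);
  size := FpLseek(fd, 0, Seek_End);            // file length in bytes
  // map the whole file: parsing then triggers page faults, not read() syscalls
  data := FpMmap(nil, size, PROT_READ, MAP_SHARED, fd, 0);
  // ... parse the size bytes at data^ here ...
  FpMunmap(data, size);
  FpClose(fd);
end.
```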

The memory mapping of the file makes the initial/cold `abouchez` call slower, because it needs to cache all the measurements data from the file into RAM (I have 32GB of RAM, so the whole data file will remain in memory, as on the benchmark hardware):
```
ab@dev:~/dev/github/1brc-ObjectPascal/bin$ time ./abouchez measurements.txt -t=20 >resmormot.txt

real 0m6,042s
user 0m53,699s
sys 0m2,941s
```
This is the expected behavior, and is fine for the benchmark challenge, which ignores the minimum and maximum timings of its 10 runs. So the first run will just warm up the file into memory.

On my Intel 13th gen processor with E-cores and P-cores, forcing thread-to-core affinity does not make any huge difference (we are within the error margin):
```
ab@dev:~/dev/github/1brc-ObjectPascal/bin$ ./abouchez measurements.txt -t=20 -v
Processing measurements.txt with 20 threads and affinity=false
result hash=8A6B746A, result length=1139418, stations count=41343, valid utf8=1
done in 2.36s 6.6 GB/s

ab@dev:~/dev/github/1brc-ObjectPascal/bin$ ./abouchez measurements.txt -t=20 -v -a
Processing measurements.txt with 20 threads and affinity=true
result hash=8A6B746A, result length=1139418, stations count=41343, valid utf8=1
done in 2.44s 6.4 GB/s
```
Affinity may help on a Ryzen 9, because its Zen 3 architecture is made of 16 identical cores with 32 threads, not this Intel E/P cores mess. But we will validate that on the real hardware - no premature guess!
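
For the record, here is a hedged sketch of how thread-to-core affinity can be forced on Linux by binding to libc directly; the entry itself relies on *mORMot*'s cross-platform wrappers, so take this only as an illustration of the underlying syscall:

```pascal
{$mode objfpc}
program pincore;

uses
  ctypes;

// direct libc binding - pid=0 means "the calling thread"
function sched_setaffinity(pid: cint; cpusetsize: csize_t;
  mask: pointer): cint; cdecl; external 'c';

procedure PinCurrentThreadToCore(core: cardinal);
var
  mask: array[0..127] of byte; // 1024-bit cpu_set_t, as glibc defines it
begin
  FillChar(mask, SizeOf(mask), 0);
  mask[core shr 3] := 1 shl (core and 7); // set the single bit for this core
  sched_setaffinity(0, SizeOf(mask), @mask);
end;

begin
  PinCurrentThreadToCore(0); // pin the main thread to core #0
end.
```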

This `-t=1` run is for fun: it will run the process in a single thread.

Our proposal has been run on the benchmark hardware, using the full automation.

TO BE COMPLETED - NUMBERS BELOW ARE FOR THE OLD VERSION:

With 30 threads (on a busy system):
```
-- SSD --
```

It may be as expected:
- The Ryzen CPU has 16 cores with 32 threads, and it makes sense that using only the "real" cores with CPU+RAM intensive work is enough to saturate them;
- It is a known fact from experiments that forcing thread affinity is not a good idea: it is always much better to let a modern Linux kernel schedule the threads over the CPU cores, because it has a much better knowledge of the actual system load and status - even on a "fair" CPU architecture like AMD Zen.

## Old Version

TO BE DETAILED (WITH NUMBERS?)

You could disable our tuned asm in the project source code, and lose about 10% by using the general-purpose *mORMot* `crc32c()` and `CompareMem()` functions, which already run SSE2/SSE4.2 tuned assembly.
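
As an illustration of that fallback path - assuming the *mORMot 2* `mormot.core.base` unit, where `crc32c()` is a function variable redirected at startup to the fastest available implementation:

```pascal
{$mode objfpc}
program namehash;

uses
  mormot.core.base;

var
  name: RawUtf8;
begin
  name := 'Kuala Lumpur';
  // crc32c points to the SSE4.2/ARMv8 hardware version when available,
  // and to a fast pascal/SSE2 version otherwise
  writeln(crc32c(0, pointer(name), length(name)));
end.
```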
