ADD fixes and tests
ts-thomas committed Oct 17, 2019
1 parent 3c796e6 commit 22944a3
Showing 30 changed files with 1,650 additions and 840 deletions.
56 changes: 30 additions & 26 deletions README.md
@@ -302,20 +302,20 @@
<table>
<tr></tr>
<tr>
<td><sup>Library</sup></td>
<td align=center><sup>KB</sup></td>
<td align=center><sup>RAM</sup></td>
<td align=center><sup>Create</sup></td>
<td align=center><sup>Replace</sup></td>
<td align=center><sup>Update</sup></td>
<td align=center><sup>Order</sup></td>
<td align=center><sup>Repaint</sup></td>
<td align=center><sup>Append</sup></td>
<td align=center><sup>Remove</sup></td>
<td align=center><sup>Toggle</sup></td>
<td align=center><sup>Clear</sup></td>
<td align=center><sup>Index</sup></td>
<td align=center><sup>Score</sup></td>
<td><sub>Library</sub></td>
<td align=center><sub>KB</sub></td>
<td align=center><sub>RAM</sub></td>
<td align=center><sub>Create</sub></td>
<td align=center><sub>Replace</sub></td>
<td align=center><sub>Update</sub></td>
<td align=center><sub>Order</sub></td>
<td align=center><sub>Repaint</sub></td>
<td align=center><sub>Append</sub></td>
<td align=center><sub>Remove</sub></td>
<td align=center><sub>Toggle</sub></td>
<td align=center><sub>Clear</sub></td>
<td align=center><sub>Index</sub></td>
<td align=center><sub>Score</sub></td>
</tr>
<tr>
<td><sub>mikado</sub></td>
@@ -324,14 +324,14 @@
<td align=right><sub>18850</sub></td>
<td align=right><sub>7611</sub></td>
<td align=right><sub>38162</sub></td>
<td align=right><sub>12531</sub></td>
<td align=right><sub>27155</sub></td>
<td align=right><sub>248338</sub></td>
<td align=right><sub>32852</sub></td>
<td align=right><sub>26501</sub></td>
<td align=right><sub>33436</sub></td>
<td align=right><sub>26448</sub></td>
<td align=right><b><sub>999</sub></b></td>
<td align=right><b><sub>20731</sub></b></td>
<td align=right><b><sub>21597</sub></b></td>
</tr>
<tr></tr>
<tr>
@@ -347,7 +347,7 @@
<td align=right><sub>8159</sub></td>
<td align=right><sub>1623</sub></td>
<td align=right><sub>4729</sub></td>
<td align=right><b><sub>268</sub></b></td>
<td align=right><b><sub>247</sub></b></td>
<td align=right><b><sub>3572</sub></b></td>
</tr>
<tr></tr>
@@ -364,7 +364,7 @@
<td align=right><sub>5929</sub></td>
<td align=right><sub>2187</sub></td>
<td align=right><sub>12255</sub></td>
<td align=right><b><sub>212</sub></b></td>
<td align=right><b><sub>196</sub></b></td>
<td align=right><b><sub>1601</sub></b></td>
</tr>
<tr></tr>
@@ -381,7 +381,7 @@
<td align=right><sub>6448</sub></td>
<td align=right><sub>1921</sub></td>
<td align=right><sub>10784</sub></td>
<td align=right><b><sub>207</sub></b></td>
<td align=right><b><sub>199</sub></b></td>
<td align=right><b><sub>1501</sub></b></td>
</tr>
<tr></tr>
@@ -398,7 +398,7 @@
<td align=right><sub>6145</sub></td>
<td align=right><sub>1455</sub></td>
<td align=right><sub>11965</sub></td>
<td align=right><b><sub>245</sub></b></td>
<td align=right><b><sub>239</sub></b></td>
<td align=right><b><sub>1496</sub></b></td>
</tr>
<tr></tr>
@@ -432,7 +432,7 @@
<td align=right><sub>2869</sub></td>
<td align=right><sub>605</sub></td>
<td align=right><sub>1693</sub></td>
<td align=right><b><sub>97</sub></b></td>
<td align=right><b><sub>90</sub></b></td>
<td align=right><b><sub>977</sub></b></td>
</tr>
<tr>
@@ -448,7 +448,7 @@
<td align=right><sub>1582</sub></td>
<td align=right><sub>1128</sub></td>
<td align=right><sub>26836</sub></td>
<td align=right><b><sub>166</sub></b></td>
<td align=right><b><sub>162</sub></b></td>
<td align=right><b><sub>916</sub></b></td>
</tr>
<tr>
@@ -464,7 +464,7 @@
<td align=right><sub>1530</sub></td>
<td align=right><sub>1074</sub></td>
<td align=right><sub>23952</sub></td>
<td align=right><b><sub>173</sub></b></td>
<td align=right><b><sub>172</sub></b></td>
<td align=right><b><sub>832</sub></b></td>
</tr>
<tr></tr>
@@ -498,7 +498,7 @@
<td align=right><sub>1091</sub></td>
<td align=right><sub>831</sub></td>
<td align=right><sub>5370</sub></td>
<td align=right><b><sub>91</sub></b></td>
<td align=right><b><sub>90</sub></b></td>
<td align=right><b><sub>573</sub></b></td>
</tr>
<tr></tr>
@@ -515,7 +515,7 @@
<td align=right><sub>122</sub></td>
<td align=right><sub>97</sub></td>
<td align=right><sub>1115</sub></td>
<td align=right><b><sub>46</sub></b></td>
<td align=right><b><sub>48</sub></b></td>
<td align=right><b><sub>147</sub></b></td>
</tr>
</table>
@@ -564,7 +564,7 @@ Instance methods:
Instance methods (not included in mikado.light.js):
- <a href="#view.refresh">view.__refresh__(\<node | index\>, \<payload\>)</a>
- <a href="#view.sync">view.__sync__(\<uncache?\>)</a>
- <a href="#view.purge">view.__purge__(\<template\>)</a>
- <a href="#view.purge">view.__purge__()</a>
- <a href="#view.find">view.__find__(data)</a>
- <a href="#view.search">view.__search__(data)</a>
- <a href="#view.where">view.__where__(payload)</a>
@@ -1569,10 +1569,12 @@ Purge all shared pools (factory pool and template pool):
```js
view.purge();
```

<!--
Purge shared pools from a specific template:
```js
view.purge(template);
```
-->

<a name="helpers"></a>
### Useful Helpers
@@ -2587,6 +2589,7 @@ Clear shared pools of the current template:
```js
view.purge();
```

<!--
Clear shared pools of all templates:
```js
Mikado.purge();
```
@@ -2596,6 +2599,7 @@ Clear shared pools of a specific template:
```js
Mikado.purge(template);
```
-->

Clear cache:
```js
```
40 changes: 18 additions & 22 deletions bench/README.md
@@ -1,9 +1,5 @@
# Benchmark of Template Rendering

The most important of all rules: don't trust any benchmark, they may be wrong. This is probably (!) the best benchmark comparison of raw template rendering performance you will find on the web. Please read <a href="#details">the sections below</a>, which explain why.

If you see any possible improvement, whether it seems minor or not, please open an issue (after you have fully read this page). I want to make sure to provide a realistic and fair situation in all cases.

Run the benchmark (non-keyed):<br>
<a href="https://raw.githack.com/nextapps-de/mikado/master/bench/">https://raw.githack.com/nextapps-de/mikado/master/bench/</a><br>

@@ -26,14 +22,12 @@ Run the benchmark (internal/data-driven):<br>
<tr></tr>
<tr>
<td>internal/data-driven</td>
<td>This mode runs through the same internal pool of data (same references, no new data from external sources or by creation) and compares the performance of the data-driven paradigm when internal state changes.</td>
<td>This mode runs through the same internal pool of data (same references, no new data from external sources or by creation) and compares the performance of the data-driven paradigm on internal state changes.</td>
</tr>
</table>

Coming soon: a new test section which measures changes to internal data instead of processing newly created or externally fetched data.

#### Test goal
This stress test focuses on a real-life use case where new data comes from a source (e.g. from a server or by creation during runtime) and should be rendered through a template. The difference to other benchmark implementations is that the given task is not known before the data is available. It measures the workload of a real use case.
This stress test focuses on a real-life use case where new data comes from a source to be rendered through a template (e.g. from a server or by creation during runtime). The difference to other benchmark implementations is that the given task is not known before the data is available.

This test measures raw rendering performance. If you are looking for a benchmark which covers more aspects, go here:<br>
https://krausest.github.io/js-framework-benchmark/current.html
@@ -60,7 +54,7 @@ The score is calculated in relation to the median value of each test.
The file size and memory get less relevance by applying the square root of these values.
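
One plausible reading of this scoring scheme, sketched in JavaScript; the argument shapes and the exact damping are assumptions for illustration, not the benchmark's actual code:
```js
// Ops relative to the per-test median; file size (KB) and memory (RAM)
// are damped via square root (smaller is better, so use the inverse).
function score(opsByTest, medianByTest, kb, ram){
    let sum = 0, count = 0;
    for(const test in opsByTest){
        sum += opsByTest[test] / medianByTest[test];
        count++;
    }
    return (sum / count) / Math.sqrt(kb * ram);
}
```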

#### Index
The score index is a very stable, non-relational representation where each score references a specific place in a ranking table. The maximum possible score, and also the best place, is 1000, which requires a library to be best in each category (regardless of how much better the factor is; that's the difference to the score value).
The score index is a very stable representation where each score points to a specific place in a ranking table. The maximum possible score, and also the best place, is 1000, which requires a library to be best in each category (regardless of how much better the factor is; that's the difference to the score value).

<code>Index = Sum<sub>test</sub>(lib_ops / max_ops) / test_count * 1000</code>
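
A minimal sketch of this formula (illustrative only; `results` holds this library's ops per test and `maxOps` the best ops of any library, both assumed shapes):
```js
function scoreIndex(results, maxOps){
    const tests = Object.keys(results);
    const sum = tests.reduce((acc, t) => acc + results[t] / maxOps[t], 0);
    return Math.round(sum / tests.length * 1000);
}

// A library that is best in every category reaches the maximum place:
// scoreIndex({create: 200, update: 50}, {create: 200, update: 50}) === 1000
```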

@@ -100,7 +94,7 @@
<tr></tr>
<tr>
<td>Arrange</td>
<td>Toggles between:<br>1. move the first item to the end<br>2. move the last item to the start<br>3. swap the second and fore-last items<br>4. re-arrange (4 ordered groups)</td>
<td>Toggles between:<br>1. swap the second and fore-last items<br>2. re-arrange (4 shifted groups)</td>
</tr>
<tr></tr>
<tr>
@@ -120,7 +114,7 @@
<tr></tr>
<tr>
<td>Toggle</td>
<td>Toggles between "Remove" and "Append" (test for optimizations like: pagination, folding, resizing).</td>
<td>Toggles between "Remove" and "Append" (test for optimizations like: pagination, content folding, list resizing).</td>
</tr>
<tr></tr>
<tr>
@@ -155,15 +149,13 @@ Regardless of what the function is doing, every test has to run through the same logic.

#### Random item factory

The items are created by a random factory. To be fair, the items come from a pre-filled pool (5 slots of 100 items each), so that keyed libraries get a chance to match the same IDs. That also comes closer to real use cases, because an application normally does not load unlimited new data.
The items are created by a random factory. They come from a pre-filled pool (5 slots of 100 items each), so that keyed libraries get a chance to match the same IDs.

The items also have some fields which aren't consumed by the template. That is important, because in real life this situation is the most common. Most other benchmarks just provide data which is fully consumed by the template.
The items also have some fields which aren't consumed by the template. That is important, because this situation is the most common. Most other benchmarks just provide data which is fully consumed by the template.
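
A hypothetical sketch of such a pool (the factory and the field names are assumptions for illustration, not the benchmark's actual code):
```js
// 5 slots of 100 items each; IDs repeat across slots so keyed libraries
// get a chance to match them. `date` and `flag` are never consumed
// by the template.
const pool = Array.from({ length: 5 }, () =>
    Array.from({ length: 100 }, (_, i) => ({
        id: i + 1,
        title: "item-" + ((Math.random() * 1000) | 0),
        date: new Date().toISOString(), // unused by the template
        flag: Math.random() > 0.5       // unused by the template
    }))
);
```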

#### Mimic data from a server or created during runtime

Since this benchmark tries to get close to a real-life situation, the items are cloned before every test to mimic a fresh fetch from the server or the creation of new items during runtime. Each test has a very specific job, so there are tests where new data comes from the outside but the content stays unchanged, or the IDs remain (only important for keyed modes).

Many other tests around do not fully clone objects, and those benchmarks produce really wrong results when they try to compare the performance of incoming data (most of them just compare the processing of already existing data).

The items are cloned before every test to mimic a fresh fetch from the server or the creation of new items during runtime. The "data-driven" mode disables cloning and performs changes over and over on the same data.
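
A minimal sketch of that cloning step (illustrative only; the benchmark's own helper may differ):
```js
// Deep-copy a slot of flat item objects so every test run sees fresh
// references, as if the data had just arrived from a server.
function cloneItems(items){
    return items.map(item => Object.assign({}, item));
}
```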

#### Dedicated sandbox

@@ -179,24 +171,28 @@ You may see benchmarks which draw the rendering visibly on the user's screen.

#### About requirements for tested libraries
1. Each library should at least provide its own features to change the DOM. A test implementation should not be forced to implement something like `node.nodeValue = "..."` or `node.className = "..."` by hand.
This test benchmarks library performance, not the performance of a developer's hand-written implementation. That is probably the biggest difference to other benchmark tests.
The goal is to benchmark library performance, not the performance of a developer's hand-written implementation. That is probably the biggest difference to other benchmark tests.

2. Asynchronous/scheduled rendering is also not allowed.

3. The keyed test requires a largely non-reusing paradigm. When a new item comes from the outside, the library must not reuse nodes (for different IDs).
3. The keyed test requires a largely non-reusing paradigm. When a new item comes from the outside, the library must not reuse nodes (for different keys/IDs).

#### About the test environment

This test also covers the runtime optimizations of each library, which is very important to produce meaningful benchmark results. The goal is to get as close as possible to a real environment.
This test also covers the runtime optimizations of each library, which is very important to produce meaningful results.

That's also a difference to other tests. Other benchmarks may reload the whole page/application after every single test loop. This would be a big step away from a real environment and also could not cover one of the biggest strengths of Mikado, which relies on several runtime optimizations.
<!--
Other benchmarks may reload the whole page/application after every single test loop. This would be a big step away from a real environment and also could not cover one of the biggest strengths of Mikado, which relies on several runtime optimizations.
-->

#### About median values
Using the median value is a very common way to normalize the spread of results in a statistical manner. But when using the median over benchmark samples, especially when code runs through a VM, the risk is high that the median returns a false result. One factor which is often overlooked is the garbage collector, which costs a significant amount of time and runs randomly. A benchmark which is based on median results will effectively cut out the garbage collector and may produce wrong results. A benchmark based on a best run will absolutely cut off the garbage collector.
Using the median value is a very common way to normalize the spread of results in a statistical manner. But when using the median over benchmark samples, especially when code runs through a VM, the risk is high that the median returns a false result. One thing that is often overlooked is the garbage collector, which has a significant cost and runs randomly. A benchmark which is based on median results will effectively cut out the garbage collector and may produce wrong results. A benchmark based on a best run will absolutely cut off the garbage collector.

This test implementation just uses a median to map the results into a normalized scoring index. The results are based on the full computation time, including full runs of the garbage collector. That also comes closest to a real environment.
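
A small median helper, shown only to make this concrete (the benchmark's own implementation may differ):
```js
// Median of an array of timings; sorts a copy so the input stays untouched.
function median(values){
    const sorted = [...values].sort((a, b) => a - b);
    const mid = sorted.length >> 1;
    return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}
```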

#### About benchmark precision
It is not possible to provide absolutely stable browser measurements. There are so many factors which have an impact on benchmarking that it makes no sense to apply "synthetic" fixes to things that cannot be fixed. Every synthetic change may also lead to wrong results and false interpretations. In my personal view, the best benchmark just uses the browser without any cosmetics. That comes closest to the environment of a user who is using an application.

That all becomes more complex when doing micro-benchmarking. Luckily this workload is by far big enough to produce stable results. Tests are shuffled before the start, so you can verify for yourself that the values of course differ from one run to another, but still produce very stable results. Especially the ___index___ row provides one of the most stable ranking indexes (the more stable a test is, the more meaningful it is). There is absolutely no need to use benchmark.js for this workload; it also does not fit a real-life environment at all. No place for statistical nonsense; this isn't politics.
<!--
That all becomes more complex when doing micro-benchmarking. Luckily this workload is by far big enough to produce stable results. Tests are shuffled before the start, so you can verify for yourself that the values of course differ from one run to another, but still produce very stable results. Especially the ___index___ row provides one of the most stable ranking indexes (the more stable a test is, the more meaningful it is). There is absolutely no need to use benchmark.js for this workload; it also does not fit a real-life environment at all. No place for statistical nonsense; this isn't politics.
-->
10 changes: 2 additions & 8 deletions bench/bench.js
@@ -107,17 +107,11 @@ queue.push({
test: null,
start: null,
prepare: function(index){
if(index % 4 === 0){ // swap
if(index % 2){ // swap
items.splice(1, 0, items.splice(items.length - 2, 1)[0]);
items.splice(items.length - 2, 0, items.splice(2, 1)[0]);
}
if(index % 3 === 0){ // move down
items.push(items.shift());
}
if(index % 2 === 0){ // move up
items.unshift(items.pop());
}
else{ // simple re-order
else{ // re-order
for(let i = 80; i < 90; i++) items.splice(i, 0, items.splice(10, 1)[0]);
for(let i = 30; i < 40; i++) items.splice(i, 0, items.splice(60, 1)[0]);
}
