# implement stack-based interpreter to improve performance #31
I just found an available register-based JS VM, continuum. It did a great job, but unbelievably, its performance is even worse than Sval's. That makes me reconsider the performance of a stack-based interpreter written in JS.

Test case:

```html
<!DOCTYPE html>
<html>
<body>
<script src="https://unpkg.com/sval"></script>
<script src="https://unpkg.com/continuum@0.3.5/continuum.js"></script>
<script>
const code = 'for(var i=0;i<100000;i++){}'
for (let i = 0; i < 5; i++) {
  const interpreter = new Sval()
  console.time(`sval ${i + 1}`)
  interpreter.run(code)
  console.timeEnd(`sval ${i + 1}`)

  const realm = continuum.createRealm()
  console.time(`continuum ${i + 1}`)
  realm.evaluate(code)
  console.timeEnd(`continuum ${i + 1}`)
  console.log('')
}
</script>
</body>
</html>
```

Result in Chrome:

---
I don't think it's a good idea to create a VM in JavaScript. It just emits more code, needs more allocations, and results in more GC. Sval is great and easy. First, we should try regular optimizations. Take the case below for example:

```js
const interpreter = new Sval();
interpreter.run(`
  var x = 0;
  function functionOne() {
    x++;
  }
  var iterations = 100000;
  console.time('Function #1');
  for (var i = 0; i < iterations; i++) {
    functionOne()
  };
  console.timeEnd('Function #1')
  console.log(x);
`);
```

When the function call is inlined by hand, the time cost is cut roughly in half:

```js
const interpreter = new Sval();
interpreter.run(`
  var x = 0;
  function functionOne() {
    x++;
  }
  var iterations = 100000;
  console.time('Function #1');
  for (var i = 0; i < iterations; i++) {
    x++;
  };
  console.timeEnd('Function #1')
  console.log(x);
`);
```

By the way, I ran the ES6 version of Sval; it's 20% ~ 25% faster than the ES5 version.

---
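As a contrived illustration of why the interpreted call costs so much (the names `interpretCall`, `globalScope`, and `functionOne` are made up for this sketch, not Sval internals): a tree-walking interpreter typically allocates a fresh scope object for every call, while the inlined version only touches the variable itself.

```javascript
// Toy model of a tree-walking interpreter's function call: every call
// allocates a new scope object chained to its parent (a hypothetical
// structure, not Sval's actual one).
function interpretCall(body, outerScope) {
  const scope = { parent: outerScope, vars: {} }; // one allocation per call
  body(scope);
}

const globalScope = { parent: null, vars: { x: 0 } };
const functionOne = scope => { scope.parent.vars.x++; };

for (let i = 0; i < 100000; i++) {
  interpretCall(functionOne, globalScope); // 100000 short-lived scope objects
}
// The inlined form would just do globalScope.vars.x++ in the loop,
// with no per-call allocation for the GC to collect afterwards.
```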
Agreed. I will think it over and try to cache some things. Actually, I don't mean to implement an entire VM, just to flatten the AST by compiling it into IR or bytecode, in order to reduce the performance loss caused by maintaining scopes.

---
The top costs in the 5/6 entries above are property getters/setters; it seems V8 doesn't have great property-access performance. It should be faster, right? Let's take a look at the test case below, accessing properties with string keys versus numeric keys:

```js
const map = {};
const strKeys = new Array(100000);
const numKeys = new Array(100000);
for (var i = 0; i < 100000; i++) {
  var key = 'Key' + i;
  strKeys[i] = key;
  numKeys[i] = i;
  map[key] = i;
  map[i] = i;
}

console.time('StrKey Lookup #1');
for (var i = 0; i < 100000; i++) {
  var key = strKeys[i];
  var v = map[key];
  v = v + 1;
}
console.timeEnd('StrKey Lookup #1');

console.time('NumKey Lookup #1');
for (var i = 0; i < 100000; i++) {
  var key = numKeys[i];
  var v = map[key];
  v = v + 1;
}
console.timeEnd('NumKey Lookup #1');
```

Surprised? Of course not; the V8 blog gives us the explanation. For Sval, we can build a data structure to deal with this, or use an array instead of an object to store data and convert all string keys (such as `type`) to numbers. And for those dynamic property setters, try to predefine the properties to reduce hidden-class generation. Say:

```js
var x = {}
x.a = 5;
x.b = 6
```

becomes

```js
var x = {
  a: undefined,
  b: undefined
}
x.a = 5;
x.b = 6
```

---
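The predefinition trick above can be wrapped in a tiny helper; a minimal sketch, assuming V8's hidden-class behavior as described in the V8 blog (the helper name `makeShaped` is hypothetical, not part of Sval):

```javascript
// Create an object with all expected properties up front, so later writes
// only change values and never add new properties. Objects built this way
// keep one hidden class instead of transitioning through a chain of them.
function makeShaped(keys) {
  const obj = {};
  for (const key of keys) obj[key] = undefined;
  return obj;
}

const x = makeShaped(['a', 'b']);
x.a = 5; // no shape change: 'a' already exists
x.b = 6; // no shape change: 'b' already exists
```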
Cool. Linear integer keys allow the engine to allocate one contiguous block of memory and access an item by offsetting a pointer, but string keys need a hash table to map keys to indexes before the memory block can be reached. That's why integer keys perform better. So to store a dict, we actually need a hash table plus an array.

If we store the data in an array, we still need a map/hash table/object to maintain the mapping between keys and indexes, which takes the same time as storing directly in an object. To avoid this problem and use only an array to store data, we need a stack-based interpreter instead of a register-based or tree-walking one.

As you say, predefining dynamic properties is similar in spirit to hoisting variables and functions. I am working on it now: predefining the variables and functions when creating a scope, instead of mounting them on the scope context after it's created.

---
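A minimal sketch of that idea (a toy opcode set, not Sval's actual IR): the bytecode is a flat array, and locals live in an integer-indexed array, so the hot loop never performs a string-keyed lookup.

```javascript
// Toy stack-based interpreter: flat bytecode array, an operand stack, and
// integer-indexed local slots (no string-keyed scope lookups in the loop).
const OP = { PUSH: 0, LOAD: 1, STORE: 2, ADD: 3, LT: 4, JMPF: 5, JMP: 6, HALT: 7 };

// Bytecode for: var i = 0; while (i < 100000) { i = i + 1 }
const program = [
  OP.PUSH, 0, OP.STORE, 0,        //  0: i = 0
  OP.LOAD, 0, OP.PUSH, 100000,    //  4: push i, push 100000
  OP.LT, OP.JMPF, 20,             //  8: if !(i < 100000) jump to 20
  OP.LOAD, 0, OP.PUSH, 1, OP.ADD, // 11: i + 1
  OP.STORE, 0,                    // 16: i = ...
  OP.JMP, 4,                      // 18: back to the loop test
  OP.HALT,                        // 20
];

function run(code) {
  const stack = [];
  const locals = []; // slot 0 is "i"
  let pc = 0;
  for (;;) {
    switch (code[pc++]) {
      case OP.PUSH: stack.push(code[pc++]); break;
      case OP.LOAD: stack.push(locals[code[pc++]]); break;
      case OP.STORE: locals[code[pc++]] = stack.pop(); break;
      case OP.ADD: { const b = stack.pop(); stack.push(stack.pop() + b); break; }
      case OP.LT: { const b = stack.pop(); stack.push(stack.pop() < b); break; }
      case OP.JMPF: { const target = code[pc++]; if (!stack.pop()) pc = target; break; }
      case OP.JMP: pc = code[pc]; break;
      case OP.HALT: return locals;
    }
  }
}
```

After `run(program)`, slot 0 holds 100000, the final value of `i`.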
Fengari did a great job of running Lua over JS. Marking it here for reference:

```html
<!-- Fengari -->
<script src="./fengari-web.js"></script>
<script type="application/lua">
local start = os.time()
for i=1, 10000000 do
end
print(os.time() - start) -- 0~1s, cool :)
</script>

<script src="https://unpkg.com/sval"></script>
<script>
const interpreter = new Sval()
interpreter.run(`
  var start = Date.now()
  for (var i = 0; i < 10000000; i++) {}
  console.log((Date.now() - start) / 1000) // around 6s :(
`)
</script>
```

---
A stack-based interpreter has now been roughly completed. A benchmark demo can be found in #3b660d4, and the interpreter shows a great performance improvement, lol.

```js
const interpreter = new Sval()
interpreter.run(`
  var start = Date.now()
  for (var i = 0; i < 10000000; i++) {}
  console.log((Date.now() - start) / 1000) // 3.45s now :D
`)
```

---
When will these performance improvements reach the master branch? Any ETA? I'm looking into moving from Interpreter-JS to Sval. PS: Quite a nice job.

---
I will try to complete it in August.

---
Hi :) Sorry for the double post; I would love to use Sval too! Can you confirm end of August as a possible release date for the performance improvements? :) Thank you, and great job!!!

---
Nope. I don't have time to handle it this month, and I'm not sure when I can finish. :(

---
That's fine anyway, thank you! :) Looking forward to seeing it completed!

---
Any update? What's left to do? :)

---
Currently Sval is a tree-walking interpreter with ordinary performance. The plan is to compile the code into some kind of stack-based intermediate representation to improve performance. I have a demo as follows; the migration may be difficult and take some time.
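The "compile into IR" step described above can be sketched like this (toy estree-style nodes and a made-up instruction format, not the actual demo): the recursive walk over the AST happens once at compile time, and evaluation becomes a flat loop over an instruction array.

```javascript
// Compile a tiny estree-like expression tree into a flat postfix (stack)
// instruction list, so evaluation is a loop over an array instead of a
// recursive tree walk.
function compile(node, out = []) {
  switch (node.type) {
    case 'Literal':
      out.push({ op: 'PUSH', value: node.value });
      break;
    case 'BinaryExpression':
      compile(node.left, out);
      compile(node.right, out);
      out.push({ op: node.operator }); // '+', '*', '<', ...
      break;
    default:
      throw new Error('unsupported node: ' + node.type);
  }
  return out;
}

function evaluate(instrs) {
  const stack = [];
  for (const ins of instrs) {
    switch (ins.op) {
      case 'PUSH': stack.push(ins.value); break;
      case '+': { const b = stack.pop(); stack.push(stack.pop() + b); break; }
      case '*': { const b = stack.pop(); stack.push(stack.pop() * b); break; }
      case '<': { const b = stack.pop(); stack.push(stack.pop() < b); break; }
    }
  }
  return stack.pop();
}

// (1 + 2) * 3 as an estree-style tree
const ast = {
  type: 'BinaryExpression', operator: '*',
  left: {
    type: 'BinaryExpression', operator: '+',
    left: { type: 'Literal', value: 1 },
    right: { type: 'Literal', value: 2 },
  },
  right: { type: 'Literal', value: 3 },
};
```

Here `evaluate(compile(ast))` yields 9.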