Mustache findVariableInStack slow? #255

Open
patrick-radius opened this Issue Mar 26, 2015 · 13 comments

Hi,
we are converting a web application from Smarty to Mustache step by step, but after converting some templates we see that findVariableInStack appears at the top of our heavy-hitters list when profiling.
We already use caching and have cache_lambda_templates set to true.
Is there anything else we can do to get this down?

[Screenshot: profiler output, 2015-03-26 14:58:39]
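
For reference, the setup described above looks roughly like this (a sketch, not our actual configuration; the cache path is a placeholder):

    <?php

    // Compiled-template caching plus lambda template caching, as mentioned above.
    $mustache = new Mustache_Engine(array(
        'cache'                  => '/tmp/mustache-cache', // placeholder path for the compiled template cache
        'cache_lambda_templates' => true,                  // also cache templates returned from lambda sections
    ));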

Owner

bobthecow commented Mar 26, 2015

This has nothing to do with caching, as that deals with the compilation phase, while findVariableInStack happens at runtime.

(As a side note: depending on what you return from lambda sections, you might not want to enable cache_lambda_templates. That's only a good idea if you regularly return the exact same template from a lambda section.)

It's intense by design, because of the way the Mustache spec requires the context stack to work. This is exacerbated if you have a crazy (deep) rendering context, or really deeply nested sections, simply because that means there's a lot of context stack to look through whenever a lookup misses.

You can optimize it on your end by using a proper ViewModel, i.e. a class that provides close to a 1:1 mapping between the data for your view and the tags in your template. If you're simply passing in your data model, you'll be doing lots of context traversal looking for the values your view wants. Also, using a ViewModel is more in line with The Mustache Way™ :)

If you can find a way to optimize the context traversal code itself more, that would be awesome, and more than welcome.
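
As a minimal sketch of the ViewModel approach described above (not from the thread; the class name, template, and data are made up, and Mustache.php is assumed to be loaded):

    <?php

    // Each public method corresponds 1:1 to a tag in the template, so every
    // lookup resolves on the first context frame instead of walking the stack.
    class ProductView
    {
        private $product;

        public function __construct(array $product)
        {
            $this->product = $product;
        }

        public function name()
        {
            return $this->product['name'];
        }

        public function price()
        {
            return number_format($this->product['price'], 2);
        }
    }

    $mustache = new Mustache_Engine();
    echo $mustache->render('{{ name }}: {{ price }}', new ProductView(array('name' => 'Widget', 'price' => 9.5)));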

patrick-radius commented Mar 27, 2015

Thanks for your reply, I appreciate it very much.
We have specifically been working with 'ViewModels', or in our case 'Presenters', to create as close to a 1:1 mapping between template and data as possible. But I'm afraid it's just the sheer number of variables to look up that's biting us now.

Our context is at most 2 levels deep.

Owner

bobthecow commented Mar 27, 2015

Those things are probably the low hanging fruit. It's great that you've already done them, but it does mean it'll be a bit harder to find the next place to optimize. Can you tell me more about your data and your Presenters?

patrick-radius commented Mar 28, 2015

The use case here is a search results page where we are presenting filters/facets (besides the actual search results, of course).

A presenter, in our case, is just a simple class that provides an API to the template:

    // Class name and constructor added here for completeness; they weren't part
    // of the original excerpt. The real class also implements ArrayAccess (see below).
    class FacetPresenter
    {
        /** @var array */
        private $item;

        public function __construct(array $item)
        {
            $this->item = $item;
        }

        /**
         * @return string
         */
        public function url()
        {
            return urldecode($this->item['url']);
        }

        /**
         * @return string
         */
        public function id()
        {
            return $this->item['range'];
        }

        /**
         * @return string
         */
        public function label()
        {
            return $this->item['range'];
        }

        /**
         * @return bool
         */
        public function hascount()
        {
            return isset($this->item['count']);
        }

        /**
         * @return string
         */
        public function count()
        {
            if ($this->hascount()) {
                return $this->item['count'];
            }

            return '';
        }

        /**
         * @return bool
         */
        public function isselected()
        {
            return isset($this->item['selected']) && $this->item['selected'];
        }
    }

The only thing is, the class also implements the ArrayAccess interface to keep it compatible with some older Smarty templates that haven't been converted yet. Could that be an issue?

Owner

bobthecow commented Apr 1, 2015

It tests for methods first, so ArrayAccess shouldn't be a problem.
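
As a rough illustration of that lookup order (a simplified sketch, not the actual Mustache.php source; the function name is made up):

    <?php

    // For each frame on the context stack, object methods are checked before
    // array / ArrayAccess offsets, so a presenter method always wins.
    function findInFrame($frame, $name)
    {
        if (is_object($frame)) {
            if (method_exists($frame, $name)) {
                return $frame->$name();   // method first, ArrayAccess never consulted
            }
            if (isset($frame->$name)) {
                return $frame->$name;     // then public properties
            }
        }
        if (($frame instanceof ArrayAccess || is_array($frame)) && isset($frame[$name])) {
            return $frame[$name];         // arrays and ArrayAccess come last
        }

        return null;
    }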

Owner

bobthecow commented Apr 1, 2015

And 👍 for using a Presenter :)

Owner

bobthecow commented Apr 1, 2015

Can you give me a sense of how slow this is for you? Is it taking up a significant part of the request? Is it slower than when you used Smarty? Or is it just that it's slower than other parts of Mustache rendering? If that's the case, it's totally understandable, because other than the context lookup, a precompiled template does almost nothing :)

patrick-radius commented Apr 1, 2015

As you can see from the profiling screenshot, findVariableInStack is taking 345ms, and a significant part of that (the grey area in the callers section at the bottom) is spent in the method itself.

On a request that takes around 1.5 seconds in total, I would say this is a considerable bottleneck, and it is much slower than the Smarty counterpart (which didn't even show up in the profiling at all).

mxdpeep commented Oct 8, 2015

Maybe a demo example would be nice, so Bob can test it?

patrick-radius commented Oct 9, 2015

I would be happy to provide an example if I knew how to set such a thing up...
Unfortunately it is not as easy as in the JS community, with their gists and their CodePens.
Any pointers on a good way to provide such an example?

Owner

bobthecow commented Oct 12, 2015

The easiest would probably be a gist with a PHP file I can run and play around with. It doesn't need to run on the web; I've got a copy of PHP handy ;)
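
For instance, such a standalone file could look roughly like this (a sketch only, not from the thread: the template, sample data, and loop count are made up, and Composer autoloading is assumed):

    <?php

    require 'vendor/autoload.php';

    $mustache = new Mustache_Engine();
    $template = $mustache->loadTemplate('{{# items }}{{ label }} ({{ count }}) {{/ items }}');

    // Made-up data shaped roughly like the facet presenters discussed above.
    $data = array('items' => array_fill(0, 1000, array('label' => 'Facet', 'count' => 42)));

    $start = microtime(true);
    for ($i = 0; $i < 100; $i++) {
        $template->render($data);
    }
    printf("Rendered 100 times in %.3f s\n", microtime(true) - $start);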

nebulousGirl commented Oct 27, 2015

I am having the same issue. Everything works fine on my computer using a WAMP server, but when I deploy to my production server, Mustache gets really slow (3-4 times slower) while the rest of the codebase is faster (2-3 times faster).

On my Windows computer: application boot takes 100ms and Mustache 110ms.
On my production server: application boot takes 40ms and Mustache 380ms.

I am using some dynamic context data, so I will test in a purer environment to see if I still have the issue.

Any idea what could cause findVariableInStack to go haywire?

nebulousGirl commented Oct 27, 2015

Through testing, I found my bottleneck in offsetExists in my data class. I was using uniqid() and forgot to set the second parameter ($more_entropy) to true, so it was slow on the Unix server but not on Windows.

I don't think this will help for the other case, though.

Have you tried commenting out data until you find which class is slow?
It could be a recursion problem, as Mustache looks up keys in parent contexts too. Maybe providing sample data along with the view that causes the problem would help with debugging.
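
For anyone hitting the same thing, the uniqid() difference in question looks roughly like this (an illustration, not code from this project):

    <?php

    // Without $more_entropy, uniqid() can briefly sleep on Unix-like systems to
    // guarantee uniqueness, which adds up when it runs on every context lookup.
    $slow = uniqid();
    $fast = uniqid('', true); // adds extra entropy instead of waiting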
