Maximum execution time exceeded on every page save #19

Closed · drnasin opened this issue Jan 7, 2018 · 11 comments

@drnasin (Contributor) commented Jan 7, 2018

I have circa 70 pages indexed. Every time I save a page, the content is saved, but the request then hangs and ends with a maximum execution time error.

Grav reports the problem in tntsearch\vendor\teamtnt\tntsearch\src\Indexer\TNTIndexer.php, in the saveDoclist function, around here:

            try {
                $stmt->execute();
            } catch (\Exception $e) {
                //we have a duplicate
                echo $e->getMessage();
            }

Not sure what to do other than disabling the plugin...
UPDATE: once the plugin was disabled, saving a page became much faster...
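
A minimal sketch of that workaround, assuming the standard Grav plugin config location; enabled is the generic Grav plugin toggle, nothing specific to this issue:

# user/config/plugins/tntsearch.yaml  (standard plugin config path, assumed)
enabled: false   # disables the plugin entirely (search and indexing) until re-enabled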

@leotiger commented

Similar issue here: especially in the context of page routes created on demand, my Grav site runs into execution timeout issues. I will dig a bit deeper, as this may also be due to specific settings.

@rhukster (Member) commented

I've added options that let you disable the page events and just use the manual indexer process. However, this is not a long-term solution. The problem is that I can't replicate this issue; I need a site that has this problem that I could install locally and see what's going on.
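
A sketch of the kind of setting this describes; the key name below is an assumption, not necessarily the plugin's real option name, so check the plugin's blueprint/README for the actual one:

# user/config/plugins/tntsearch.yaml  (path assumed)
enabled: true
# Hypothetical option name: stop re-indexing on page save/delete events and rely on
# the manual indexer instead.
enable_page_events: false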

@gamahachaa commented

After upgrading to 2.0.4, I stumbled upon this while indexing:

PHP Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 130968 bytes) in /home/qook/app/qook/system/src/Grav/Common/Page/Page.php on line 254

Increasing the memory limit to the maximum didn't help. Changing the config filter to:

filter:
  items:
    taxonomy@:
      category:
        - doc

instead of

filter:
  items:
    - root@.descendants

did help.
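
For context, a sketch of where that narrowed filter would sit, assuming it lives in the plugin's config file (the path is an assumption; the filter itself is the one quoted above):

# user/config/plugins/tntsearch.yaml  (path assumed)
# Restricting the collection to a taxonomy keeps the indexer from loading every
# descendant of the root page at once, which appears to be what exhausts the
# memory limit with root@.descendants.
filter:
  items:
    taxonomy@:
      category:
        - doc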

@rhukster (Member) commented

Can you please try the latest version in the GitHub develop branch?

@gamahachaa commented

Tried it. The error is now triggered at the start of the indexing. The indexer then logs that it is adding pages and stops after ~65 pages (same behavior, same page, as with the master branch, except there the error was triggered at the end).

I went back to the taxonomy filter. Indexing went OK, though "libpng warning: Interlace handling should be turned on when using png_read_image" is output several times before indexing starts adding pages (~700).

@rhukster (Member) commented

Is it possible to get a dump of your site?

@gamahachaa commented

I will try to narrow it down and get back to you (by PM?)

@gamahachaa commented Mar 1, 2019

I narrowed it down.
The error is triggered each time an "empty" folder (one with no .md file) is found.
Setting system.pages.hide_empty_folders: true did not help.
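
For reference, the setting tried above is Grav's system-level option for hiding folders that contain no markdown file; a sketch of where it lives (standard config path, assumed):

# user/config/system.yaml  (path assumed)
pages:
  hide_empty_folders: true   # set here during the test; did not prevent the indexer error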

@gamahachaa commented

Hello Andy, is this normal behavior? We need to be able to keep empty folders (e.g. for media).

@rhukster (Member) commented Mar 4, 2019

This is not normal; it sounds like a bug. I'll take a look when I get a moment, just super busy with client work at the mo.

rhukster added the bug label and removed the investigating label Mar 4, 2019
@rhukster (Member) commented Mar 5, 2019

This seems to already be fixed in the TNTSearch 3.0 beta, so I must have found and fixed it at some point.
