Replies: 2 comments
-
Will leave this open so someone can address it. Not sure what the root cause is, but it likely has something to do with statistics.ts (see here). I was able to work around this by adding crawler.stats.reset() after every call to crawler.run, so my new main function looks like this:
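The original snippet was not captured above; a minimal sketch of that workaround, assuming a shared PlaywrightCrawler that is re-run per batch of URLs (the handler and main wiring are illustrative, not the original code):

```ts
import { PlaywrightCrawler } from 'crawlee';

const crawler = new PlaywrightCrawler({
    requestHandler: async ({ request, page }) => {
        // ... process the page (illustrative placeholder) ...
    },
});

// Hypothetical main function: run the shared crawler, then clear its stats.
async function main(urls: string[]) {
    await crawler.run(urls);
    // Workaround from the comment above: reset accumulated statistics
    // (including the retry histogram) after every run, so repeated runs on
    // the same instance do not keep growing memory.
    crawler.stats.reset();
}
```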
-
The commit you linked is exactly the one I was referring to when we discussed this; I think you were testing an older version that didn't have this fix. Note that this should never happen: the histogram should never reach more than 4 elements (or in general, max request retries + 1 elements), since each position corresponds to a number of retries. That commit just works around one perf issue which resulted in exceeding the stack size, so instead of
You should not reuse the crawler instance.
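A sketch of that advice, assuming one fresh crawler per run instead of a long-lived shared instance (names and handler are illustrative):

```ts
import { PlaywrightCrawler } from 'crawlee';

// Construct a new crawler for each run so per-instance state
// (statistics, retry histogram, sessions) starts from scratch.
async function crawl(urls: string[]) {
    const crawler = new PlaywrightCrawler({
        requestHandler: async ({ request, page }) => {
            // ... process the page (illustrative placeholder) ...
        },
    });
    await crawler.run(urls);
}
```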
-
Which package is this bug report for? If unsure which one to select, leave blank
None
Issue description
After running the crawler for some period of time, the retry histogram gets extremely long and I get an out of memory crash.
Crawler logs:
My main.ts file runs the crawler as a Node server which responds to requests.
Code sample
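The original code sample was not captured here; a hypothetical sketch of the described setup (a Node HTTP server that re-runs a single shared crawler on demand) might look like this:

```ts
import http from 'node:http';
import { PlaywrightCrawler } from 'crawlee';

// Illustrative only: one long-lived crawler instance reused across requests,
// matching the setup described in the issue.
const crawler = new PlaywrightCrawler({
    requestHandler: async ({ request, page }) => {
        // ... scrape the page ...
    },
});

http.createServer(async (req, res) => {
    // Hypothetical API: ?url=<target> triggers a crawl of that single URL.
    const target = new URL(req.url ?? '/', 'http://localhost').searchParams.get('url');
    if (target) {
        await crawler.run([target]);
    }
    res.end('done');
}).listen(3000);
```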
Package version
Docker playwright chrome 16
Node.js version
Docker playwright chrome 16
Operating system
No response
Apify platform
I have tested this on the next release
No response
Other context
No response