Should queued task take care about closing the page? #34
Comments
It is done automatically. As soon as your task function finishes (or a timeout happens), the opened resources will be closed. You do not need to worry about that. If you want to see it in action, you can run puppeteer locally and disable headless mode.
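To see this behavior for yourself, headless mode can be disabled through the cluster's launch options. This is a sketch, not code from the thread: the `puppeteerOptions` object is forwarded to `puppeteer.launch`, and the concurrency mode, timeout, and URL here are arbitrary choices for illustration.

```javascript
// Sketch: launch the cluster with a visible browser so you can watch
// pages being opened and then closed automatically after each task.
const { Cluster } = require('puppeteer-cluster');

(async () => {
  const cluster = await Cluster.launch({
    concurrency: Cluster.CONCURRENCY_CONTEXT, // one incognito context per worker
    maxConcurrency: 2,
    timeout: 30000, // tasks are aborted (and their pages closed) after 30s
    puppeteerOptions: { headless: false }, // show the browser window
  });

  await cluster.task(async ({ page, data: url }) => {
    await page.goto(url);
    // No page.close() here: the page is closed once the task returns.
  });

  cluster.queue('https://example.com');

  await cluster.idle();
  await cluster.close();
})();
```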
Thanks for the response, @thomasdondorf. Another question came to my mind: is the
Another question: can I control page termination myself instead of relying on the automation? It seems that, for now, the page gets closed (and the context is lost) before all operations are complete.
The page will be closed as soon as your task function has finished executing. But you can use a Promise to wait until you are done:

```js
await cluster.task(async ({ data, page }) => {
  await new Promise((resolve) => {
    // do some asynchronous stuff
    // maybe call an async function like setTimeout?
    setTimeout(() => {
      // do more stuff...
      // When we are done we call resolve() to resolve the promise
      resolve(); // this will finish the task function and the page will be closed
    }, 3000);
  });
});
```

Be careful with asynchronous functions though. Asynchronously thrown errors cannot be caught by the library, so don't forget try-catch blocks where necessary.
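To make that warning concrete, here is a minimal, puppeteer-free sketch of the safe pattern: route asynchronous failures through the Promise's `reject()` so they surface to the awaiting caller, where an ordinary try-catch can handle them. The `delayedWork` helper and its timings are made up for illustration.

```javascript
// Sketch: an error thrown inside a bare setTimeout callback would escape
// any surrounding try-catch. Routing it through reject() instead makes it
// catchable with await.
function delayedWork(shouldFail) {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      if (shouldFail) {
        // Route the failure through reject() so the awaiting caller sees it.
        reject(new Error('work failed'));
      } else {
        resolve('done');
      }
    }, 10);
  });
}

async function main() {
  try {
    console.log(await delayedWork(false)); // prints "done"
    await delayedWork(true);
  } catch (err) {
    // Because the error went through reject(), this catch block runs.
    console.log('caught:', err.message); // prints "caught: work failed"
  }
}

main();
```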
My use case is the following: create a cluster with `Cluster.CONCURRENCY_BROWSER` and never close it. As you can see above, the cluster's queue gets filled once RabbitMQ sends something. This means the process is essentially a daemon and shouldn't be stopped. I'm worried about whether the pages the cluster creates should be closed manually (`await page.close()` after `const metadata = await crawler.crawl(resource, page);`) once they are not needed anymore, or whether that is done automatically.
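For completeness, the daemon setup described here might look roughly like the following sketch. It assumes `amqplib` as the RabbitMQ client; the queue name, connection URL, and the `crawler.crawl(resource, page)` helper are placeholders taken from the description above, not real code from this project. Note that no `page.close()` appears: per the earlier comment, the cluster closes each page once the task function returns.

```javascript
// Hypothetical sketch: a long-lived process feeding a never-closed cluster
// from a RabbitMQ queue. amqplib, the queue name, and ./crawler are assumptions.
const amqp = require('amqplib');
const { Cluster } = require('puppeteer-cluster');
const crawler = require('./crawler'); // hypothetical module exposing crawl()

(async () => {
  const cluster = await Cluster.launch({
    concurrency: Cluster.CONCURRENCY_BROWSER, // one browser per worker, as in the use case
    maxConcurrency: 4,
  });

  await cluster.task(async ({ page, data: resource }) => {
    // The page is closed automatically when this function returns,
    // so no explicit page.close() is needed.
    const metadata = await crawler.crawl(resource, page);
    console.log(metadata);
  });

  const conn = await amqp.connect('amqp://localhost');
  const channel = await conn.createChannel();
  await channel.assertQueue('resources'); // placeholder queue name

  // Never call cluster.close(): the process runs as a daemon,
  // queueing a task for every message RabbitMQ delivers.
  channel.consume('resources', (msg) => {
    if (msg !== null) {
      cluster.queue(msg.content.toString());
      channel.ack(msg);
    }
  });
})();
```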