Phantomjs, performance #458
Deploy multiple phantomjs instances with a load-balancing frontend in front of them before connecting to one fetcher, e.g. this docker-compose structure.
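A minimal sketch of such a setup, assuming the `binux/pyspider` image, phantomjs listening on port 25555, and a hypothetical `haproxy.cfg` that round-robins across the phantomjs instances (all names and ports here are illustrative, not a confirmed configuration):

```yaml
# docker-compose sketch: N phantomjs containers behind haproxy,
# with a single fetcher pointed at the haproxy frontend.
phantomjs:
  image: binux/pyspider:latest
  command: phantomjs
  # scale with: docker-compose scale phantomjs=3

haproxy:
  image: haproxy:latest
  links:
    - phantomjs
  volumes:
    - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg

fetcher:
  image: binux/pyspider:latest
  links:
    - haproxy
  command: fetcher --phantomjs-proxy haproxy:25555
```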
That's a cool idea with haproxy, thank you for sharing. I see in the phantomjs logs that there are very long requests, about 70 seconds, and I don't know why :( Could it be because of js_script? And what will happen if js_script fails for some reason?
In most cases rendering a page is slow; it waits until every resource on the page has loaded.
I mean, what happens if an error occurs? Does the fetch fail, or what? I see that plenty of my fetches are in the active state. I don't care why it occurs; I understand there can be plenty of reasons. Before migrating to pyspider I had phantomjs driven by selenium, which hammered the page with a js script again and again with a defined timeout.
Nothing happens; it's just as if it hadn't executed. You should detect that in your script with some assert, and let the task retry.
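The assert-and-retry pattern can be sketched as a small helper; the marker id is a hypothetical value — use whatever element your own js_script actually injects into the page:

```python
def check_rendered(html, marker="js-rendered-marker"):
    """Raise AssertionError when the marker that js_script was supposed
    to inject is missing. Inside a pyspider callback, a failed assert
    marks the task as failed so the scheduler retries it later.

    `marker` is an assumed id, not anything pyspider defines."""
    assert marker in html, "js_script did not run"
    return html
```

In a real handler you would apply the same idea directly, e.g. `assert response.doc('#js-rendered-marker')` at the top of the callback.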
Hello! I tried to make a configuration with docker-compose, so I made such a configuration file.
The problem is that everything seems to work, but I can't start a task in the dashboard and can't see current progress; everywhere I see a "connect to scheduler" error. My configuration file looks like this:
What am I missing?
I ran a little test to ensure that the scheduler is accessible from the webui: root@ee2a19090d65:/home/ubuntu/conf# root@ee2a19090d65:/home/ubuntu/conf#
OK, I figured out that I need to use --scheduler-rpc. Currently I don't see any errors and the dashboard is working well. But now my problem is that the spider just doesn't work when I start it. This is what I see in the log: How the tasks look in the dashboard: Source code of the spider:
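For reference, a docker-compose fragment for that fix might look like this (the image name, link name, and port are assumptions; 23333 is pyspider's default scheduler XML-RPC port):

```yaml
# Point webui at the scheduler container's XML-RPC endpoint
# instead of localhost.
webui:
  image: binux/pyspider:latest
  links:
    - scheduler
  command: webui --scheduler-rpc http://scheduler:23333/
```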
The command of the processor doesn't contain
You know, you are awesome, mate; I was just stuck on this and going crazy trying to set it up properly. But now everything is working OK! Don't know how I missed this. Is there any chance you could cross-post such posts in English too? http://blog.binux.me/2016/05/deployment-of-demopyspiderorg/ I noticed this just now and it looks very useful, but run through a translator the post becomes very broken. Can I support you somehow? I don't have much, but 20 bucks is 20 bucks :) I have PayPal and can pay by credit card directly.
You can find the clue in the log: the scheduler had selected (dispatched) the task, and the fetcher had received it, but nothing after that. And on the dashboard you should find pending messages between the fetcher and the processor. Yes, I will translate that post and put it into docs.pyspider.org
OK, I got it. I have tried this configuration in the field. I see that over time phantomjs answers slower and slower, and at the end I see this error: I tried with --poolsize 10; nothing changed. I am not swapping or anything; I have enough RAM. Phantomjs memory consumption is very high; it easily passes the 1 GB mark. Can't believe it is real.
Rendering a page is very slow and heavy, and all of the current headless browser implementations have memory leak issues. I want to port the js render to splash and to an electron-based implementation, but that wouldn't solve the problem, just give you another choice. I restart phantomjs instances frequently to avoid the memory leak issue.
I checked: if I load the link I am interested in in phantomjs, it consumes around 200 MB of RAM. How hard would it be to write something that handles one request with one instance of phantomjs, after which the phantomjs process just dies? I think maybe I can modify phantomjs_fetcher.js to just exit after a single request, and balance the possible errors with haproxy? I also looked at splash; it is very interesting.
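If each phantomjs instance exits after a single request, haproxy would need to redispatch failed connections to another backend. A hypothetical `haproxy.cfg` fragment for that (backend hostnames and the port are assumptions):

```
defaults
    mode http
    retries 3
    option redispatch      # retry on a different server after a failure
    timeout connect 5s
    timeout client  90s    # allow for slow js rendering
    timeout server  90s

frontend phantomjs_in
    bind *:25555
    default_backend phantomjs_pool

backend phantomjs_pool
    balance roundrobin
    server p1 phantomjs1:25555 check
    server p2 phantomjs2:25555 check
    server p3 phantomjs3:25555 check
```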
It's very easy to kill a render server. We are running splash in production, and the whole instance can easily be killed by certain web pages.
I can't get what you mean. Do you mean that there are pages in the wild that can kill a render server, or what? I want phantomjs to execute only one request, so I can reduce possible memory leaks to a minimum.
I mean it's not easy to implement a "reliable" render server. Yes, executing and re-forking for each request can somewhat isolate failures between requests (though it needs more resources to fork). But there are some web pages where a single request can kill the render service, so you still need to monitor the processes and kill them when needed.
I tried a single request per phantomjs instance. Now I get stable render times and no noticeable memory leaks. But sometimes I still see 500 errors, which I expected; without a control service that dispatches queries properly, I don't think there will be error-free behaviour. I tried splash too: I ran ab against it with my task, 10 concurrency and 1000 requests, and it failed at 200 with a qt error.
Could this error be because a phantomjs instance died due to phantom.exit();? {
No idea; I'd need more logs from when it happened.
I found it pretty hard to scale the performance of my spider when I need js rendering. What I understand is that I need to decrease the poolsize for the fetcher, am I right? So I start 3 phantomjs+fetcher pairs and set poolsize 10 for them. Am I right? Maybe you have something to add?