Description
Expected behaviour
When running as a Node.js module, the service should give each PhantomJS worker enough time to finish exporting its chart.
Actual behaviour
lib/phantompool.js has a hard-coded timeoutThreshold of 3.5 seconds. When new work is passed to the export server, it restarts/terminates any existing workers that have been working for longer than this threshold.
While I understand the purpose of this mechanism, the timeout is too restrictive for large time series/line charts. When a chart contains over 100,000 discrete points across 30+ series, the workers routinely take 4-6 seconds to finish the export operation. When charts contain 500,000+ points and 200+ series, the export can take 10+ seconds.
Reproduction steps
- Run the export server as a Node.js module
- Attempt to export several time series/line charts in sequence, about one every 5 seconds, so that each new submission triggers the kill routine that checks for workers running longer than 3.5s. Make sure each chart contains at least 100,000 data points and 10+ series (the more data, the more likely the problem is to occur); see the reproduction sketch below
- Periodically, you will see the following error:

```
0x04 error when performing chart generation: please check your input data
```
If you add additional logging/debugging, you will notice that the root cause is this block on lines 348-350 in lib/phantompool.js:
```js
if (worker.working && t - worker.workStartTime > settings.timeoutThreshold) {
  worker.restart(true);
}
```

This mechanism restarts any worker whose run time exceeds timeoutThreshold, which causes the export of the chart in progress to fail.
Proposed solution
Allow timeoutThreshold to be a configurable parameter when initializing the worker pool in Node.js mode (I don't think this applies to the CLI/HTTP server, as I did not find a reference to it there). The default 3.5s value is likely sufficient for most use cases, but for large charts it is problematic.
I have taken a stab at this in the following pull request: #172