Speeding up the prediction #2030
Comments
@orenmatar and @tcuongd, do you know if anyone has made an attempt at this yet? Otherwise I might take a try at integrating that code.
@winedarksea I don't know of any attempt... and would love to see one. If you have any questions about the code, I'd be happy to help.
@orenmatar thanks, hoping to get this done in the next week or two (perhaps a foolish hope). First I have to figure out the new cmdstanpy backend for the develop branch.
some questions for you @orenmatar

For flat growth I used:

```python
elif prophet_obj.growth == "flat":
    sample_trends = np.zeros((k, len(forecast_df)))
```

I also played around with this variation, based on the flat_trend function, but it is wrong, not being centered around 0:

```python
elif prophet_obj.growth == "flat":
    sample_trends = np.repeat(
        prophet_obj.params['m'] * np.ones_like(future_time_series),
        k, axis=0
    )
```
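For context on why the two variations behave differently, here is a minimal standalone sketch (the shapes `k` and `n` and the value of `m` are stand-ins, not taken from an actual fitted model). `np.zeros` produces sampled trend *deviations* centered at 0, while repeating `m * ones` produces the trend *level* `m`, which is why the second variation is not centered around 0. Note also that `np.repeat` on a 1-D array repeats elements rather than stacking rows, so a row-wise broadcast is used below to get the intended `(k, n)` shape:

```python
import numpy as np

k = 4   # number of uncertainty samples (assumed)
n = 5   # length of the forecast horizon (assumed)

# Variation 1: flat-trend samples as deviations, centered around 0.
sample_trends_zero = np.zeros((k, n))   # note: shape is passed as a tuple

# Variation 2: repeating the fitted offset m gives the trend *level*,
# not a zero-centered deviation.
m = 2.5                          # stand-in for prophet_obj.params['m']
future_time_series = np.ones(n)  # stand-in future regressor/time array
sample_trends_level = np.repeat(
    (m * np.ones_like(future_time_series))[np.newaxis, :], k, axis=0
)

print(sample_trends_zero.mean())   # 0.0 -- centered at zero
print(sample_trends_level.mean())  # 2.5 -- offset by m
```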
Good luck! Looking forward to seeing this implemented!
I've been away on vacation in Italy, but I'm back! I have a question that is really for the Prophet maintainers, if they will come to the party. I am FB internal, and from what I can tell Ben Letham has entirely moved on to other projects; for example, in Workspaces for Prophet he ignored call-outs from others when there was a bug in the internal build. So a request to @dmitryvinn and @tcuongd: whom should I coordinate this work with? My basic question is how many internal functions supporting prediction need to be maintained. The simplest way to integrate these speed-ups would be to cut the number of functions, because most of them are nested in for loops that will no longer be needed.

OLD/CURRENT WAY (screenshot)

New Way (screenshot; the functions above would be obsolete)

Alternatives: A possible compromise would be to use the old way when mcmc_samples is set, which users already expect to be slow, and the new way (with a trimmed-down set of supporting functions) for all other cases.
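The compromise described above could be dispatched along these lines; this is only a sketch with hypothetical function names (`vectorized_fn`, `legacy_fn`), not the actual Prophet API. In Prophet, `mcmc_samples` is an integer, with values greater than 0 meaning full MCMC was requested:

```python
from types import SimpleNamespace

def predictive_samples(model, df, vectorized_fn, legacy_fn):
    # mcmc_samples > 0 means the user requested full MCMC, which they
    # already expect to be slow, so keep the old loop-based path there.
    if model.mcmc_samples > 0:
        return legacy_fn(model, df)
    # MAP fit: use the new vectorized fast path.
    return vectorized_fn(model, df)

# Dummy models standing in for fitted Prophet objects:
map_model = SimpleNamespace(mcmc_samples=0)
mcmc_model = SimpleNamespace(mcmc_samples=300)

print(predictive_samples(map_model, None,
                         lambda m, d: "fast", lambda m, d: "legacy"))   # fast
print(predictive_samples(mcmc_model, None,
                         lambda m, d: "fast", lambda m, d: "legacy"))   # legacy
```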
@winedarksea How is it going with the project?
@orenmatar I messaged Ben Letham internally, and his current priority is updating the internal platforming for Prophet at Meta, which I may be helping with. But he did look at your work and liked it, so I think we will get this in, although that's probably a matter of months. Neural Prophet (which is from a different team) uses PyTorch, and I think having a PyTorch alternative backend would be great since many people already have some familiarity with it. Although 60 ms is hardly slow to begin with!
@winedarksea I think 60 ms is slow compared to most linear regressions, which is what Prophet ultimately is. Some datasets really do contain hundreds of thousands of items, if not millions, so these gains could be meaningful. But for sure it is more important to optimize the predict method... Thanks for the update!
@orenmatar do you have a fork where you implemented the changes? Thanks!
@nicolaerosia I don't... I only implemented it as a side function to be called separately, and @winedarksea started working on a fork. It's not too difficult to replace the existing confidence-interval computation with the new one. I'll be happy to help if there's a real chance it will be incorporated into the main package.
@orenmatar I have hopes for inclusion; it seems like @tcuongd is actively maintaining!
@nicolaerosia sure thing.
@winedarksea I think it's a good idea. Do you have a branch I can take a look at? Thanks a lot!
Sure, I need to publish my changes; I'll try to do that tomorrow.
I was working on scaling up Prophet as a forecast engine for hundreds of thousands of items for a project, and realized that predicting with Prophet takes the vast majority of the running time.
After some research I figured out a way to speed it up significantly, and wrote the following post:
https://towardsdatascience.com/how-to-run-facebook-prophet-predict-x100-faster-cce0282ca77d#4b11-31bf17c6772
I think it would be great if something like this were implemented.
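The core of the speed-up is replacing a per-sample Python loop with a single vectorized draw. A minimal illustration of the idea, using made-up numbers (`n`, `k`, `sigma_obs`, and the point forecast are stand-ins, not Prophet's actual internals):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 365           # forecast horizon length (assumed)
k = 1000          # number of uncertainty samples (assumed)
sigma_obs = 0.1   # observation noise scale (stand-in for the fitted value)

yhat = np.linspace(10.0, 12.0, n)  # point forecast (stand-in)

# Loop version (roughly what drawing one sample at a time looks like):
#   samples = np.array([yhat + rng.normal(0, sigma_obs, n) for _ in range(k)])

# Vectorized version: one (k, n) draw, then quantiles along the sample axis.
samples = yhat + rng.normal(0.0, sigma_obs, size=(k, n))
lower, upper = np.quantile(samples, [0.1, 0.9], axis=0)

print(samples.shape)   # (1000, 365)
```

The intervals come out the same either way; only the number of Python-level iterations changes, which is where the large constant factor lives.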