With the status quo, you only have to modify predict_function and add your logic there; it could be just random predictions. This makes predictoor easy to onboard and easy to use, yet customizable.
Yet we do want to show how to work with models, because it improves UX for what predictoors will actually do, i.e. use models.
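For concreteness, here is a minimal sketch of what a random-prediction predict_function could look like. The name, signature, and return shape are illustrative assumptions, not the actual pdr-backend API; the real callback lives in pdr_backend/predictoor/predict.py.

```python
import random

def predict_function(topic, estimated_time):
    """Minimal stub: ignore the inputs and return a random direction.
    Signature and return values are assumptions for illustration --
    check pdr_backend/predictoor/predict.py for the real interface."""
    predicted_value = random.random() < 0.5   # True = price goes up
    predicted_confidence = 0.5                # coin flip, so 50% confidence
    return predicted_value, predicted_confidence
```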
The predict-eth / challenge DF README's default / main instructions show the simplest-possible end-to-end flow with just random predictions, and then give progressively more complex flows. Quoting it directly (an illustrative sketch of these flows follows the list):
Simple: To-the-point example, with simple input data (just ETH price) and simple model (linear dynamical model)
Model optimization: Same as Simple with added optimization using cross-validation to select best hyperparameters.
Compare models: Build models that predict 1-12 hours ahead in one shot. Compare linear, SVM, RF, and NN models.
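To make the quoted progression concrete, here is an illustrative sketch of the "Model optimization" and "Compare models" ideas using synthetic price data and scikit-learn. This is not the actual predict-eth code; the data, features, and model choices are assumptions, and the NN variant is omitted for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

# Toy "ETH price" series; features = last 6 returns, label = next return up/down
rng = np.random.default_rng(0)
prices = 1800 + np.cumsum(rng.normal(0, 5, 500))
rets = np.diff(prices)
X = np.stack([rets[i : i + 6] for i in range(len(rets) - 6)])
y = (rets[6:] > 0).astype(int)

# "Model optimization": cross-validated hyperparameter search for one family
svm = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5).fit(X, y)

# "Compare models": side-by-side cross-validation scores
for name, model in [
    ("linear", LogisticRegression(max_iter=1000)),
    ("SVM", svm.best_estimator_),
    ("RF", RandomForestClassifier(n_estimators=100, random_state=0)),
]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```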
Strategy here / discussion between Berkay & Trent
T: For predictoor let's follow the same recipe, tuned appropriately. That is:
flow, with random values
flow, with predictions from Richard or Jaime models (see the sketch after this list)
(space for more flows)
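A sketch of that second flow: the same callback, backed by a trained model instead of random values. The model path, feature construction, and signature here are hypothetical placeholders (the Richard / Jaime models are internal), shown only to make the shape of the flow concrete.

```python
import pickle

import numpy as np

# Hypothetical: a previously-trained classifier pickled to disk, standing in
# for the internal Richard / Jaime models.
with open("examples/predictoor/model.pkl", "rb") as f:
    MODEL = pickle.load(f)

def predict_function(topic, estimated_time):
    """Same interface as the random version, but model-backed.
    Feature construction is a placeholder -- real code would fetch
    recent price data for `topic`."""
    features = np.zeros((1, 6))                    # placeholder features
    prob_up = MODEL.predict_proba(features)[0, 1]
    predicted_value = bool(prob_up > 0.5)
    predicted_confidence = abs(prob_up - 0.5) * 2  # map 0.5..1.0 -> 0..1
    return predicted_value, predicted_confidence
```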
B: Sounds good to me, should we create an examples folder and move these changes there?
T: Yes, sure. But it's all gotta be linked from the pdr-backend/pdr_backend/predictoor README, so that would-be predictoors first play with the simple flow, then become aware of the other ones and can graduate into them.
So overall the README would add complexity in two dimensions:
local network --> remote testnet --> remote production net
simple model --> more complex sample model --> user's model
The README should give a reasonable path for the user to traverse these dimensions, starting at {local network, simple model} --> ending at {remote production net, user's model}
Towards #54 "For predictoor stakeholder, enable (a) simple flow with random predictions, yet (b) examples of using actual models", and oceanprotocol/pdr-private#6 "algovera predictoor mvp"
(The only failing pytest is unrelated to this PR. It's reported in #69. So ignore it here. All other tests pass)
[Part of epic #50 "Ship MVP"]
Background / motivation
Inspiration
The predict-eth / challenge DF README has a useful pattern.
Strategy here / discussion between Berkay & Trent
B: Do they need to go through using the example model? We can describe the callback function and the parameters:
https://github.com/oceanprotocol/pdr-backend/blob/main/pdr_backend/predictoor/predict.py#L50
and let them know that there's an examples/predictoor folder. Then the example can have its own README.
T: Yeah, that's probably enough, as long as it's obvious how to hook it in. (And it keeps our code simpler and easier to maintain)
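A sketch of the hook-up B & T land on: the predictoor README describes the callback and its parameters, and an examples/predictoor folder (with its own README) supplies drop-in implementations. The module name and callback shape below are assumptions, not the actual repo layout; see predict.py#L50 for the real signature.

```python
import random

# Hypothetical import: a model-backed predict function shipped under an
# examples/predictoor folder, as discussed above.
from examples.predictoor.model_predictoor import predict_function as model_predict

def predict_function(topic, estimated_time):
    """Callback handed to the predictoor agent. Delegates to the example
    model, falling back to a coin flip so the simple flow still works
    end-to-end."""
    try:
        return model_predict(topic, estimated_time)
    except Exception:
        return random.random() < 0.5, 0.5
```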
TODOs
Implement what is described in the tail end of the B & T "strategy" discussion above.
Notes
This was inspired by having to reconcile PR #49 "migrate real-time prediction loop from pdr-model-experiments"