Move calcrewards.py into its own pypi lib. #382
Comments
Do we really need ocean.py & Brownie? So Cand 5 looks fine to me.
To be clear: this library would be used by
The library itself would not depend on ocean.py, and probably not on Brownie either. It won't have subgraph queries. It will be a dead-simple Python script. The point is that it will live in exactly one place, so we don't risk DRY violations. And yes, therefore Cand 5 is the way to go. I plan to implement it, but we need to ship the reward-function updates first. (The PR is there.)
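To make the "dead-simple, dependency-free" idea concrete, here is a hypothetical sketch of what the core of such a standalone calc-rewards module could look like. All names and inputs (`calc_rewards`, the stake/volume dicts, the pro-rata scoring) are illustrative assumptions, not the actual DF Reward Function.

```python
# Hypothetical sketch of a standalone calc-rewards module: pure Python,
# no ocean.py, no Brownie, no subgraph queries. Illustrative only.

def calc_rewards(stakes: dict, vols: dict, rewards_avail: float) -> dict:
    """Split rewards_avail across LPs pro-rata to (stake * consume volume).

    stakes: {LP_addr: veOCEAN stake}     (illustrative inputs)
    vols:   {LP_addr: consume volume}
    """
    # Score each LP; LPs with no recorded volume score zero.
    scores = {lp: stakes[lp] * vols.get(lp, 0.0) for lp in stakes}
    total = sum(scores.values())
    if total == 0:
        return {lp: 0.0 for lp in stakes}
    return {lp: rewards_avail * s / total for lp, s in scores.items()}
```

Because the module is pure Python with no chain or subgraph access, callers (ocean.py, df-py, or a DFer's own script) would fetch the inputs themselves and pass them in, keeping the reward math in exactly one place.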
Given Predictoor, the DF Challenge, and DF Predictoor: this is super-low priority, and it will look different when revisited. We also have way too many issues. So, closing this.
Why: unlock DFers' ability to optimize on it
Background
We recently released df.md in ocean.py: a script to lock OCEAN for veOCEAN, publish assets, point veOCEAN to those assets, and fake-consume them.
We want to help data farmers calculate active rewards more precisely, in Python. This is useful for wash consume or other mechanisms around building value flows.
Yet so far, df.md is fairly simplistic. It doesn't see the Reward Function (RF), which means sub-optimal rewards:
For df.md to have more optimal allocation, it needs to see the RF.
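If the Reward Function were importable, a df.md-style script could evaluate candidate veOCEAN allocations against it and pick the best one, rather than allocating blindly. A minimal sketch, assuming a hypothetical `rf` callable that maps an allocation to per-address rewards (not the real RF):

```python
# Hypothetical: given an importable Reward Function rf(alloc) -> rewards
# dict, choose the candidate allocation that maximizes one's own reward.
# best_allocation and its inputs are illustrative, not df.md's real API.

def best_allocation(candidates: list, rf, my_addr: str) -> dict:
    """Return the candidate allocation maximizing my_addr's reward under rf."""
    def my_reward(alloc: dict) -> float:
        return rf(alloc).get(my_addr, 0.0)
    return max(candidates, key=my_reward)
```

This brute-force comparison is only viable because the RF lives in one importable place; without it, the script cannot "see" how allocations map to rewards.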
Datapoints:
Goals
Candidate approaches
Analysis
Cands 1, 2, 3: cons are too severe.
Cand 4 might be fine. But cand 5 is cleaner yet. Therefore, recommend Cand 5.
TODOs
Basically, implement Cand 5.