
Move calcrewards.py into its own pypi lib. #382

Closed
3 tasks
trentmc opened this issue Nov 30, 2022 · 3 comments
Labels
Priority: Low Status: WontFix This will not be worked on

Comments

@trentmc
Member

trentmc commented Nov 30, 2022

Why: unlock DFers optimizing on it

Background

We recently released df.md in ocean.py, as a script to lock OCEAN for veOCEAN, publish assets, point veOCEAN to assets, and fake-consume the assets.

We want to help data farmers calculate active rewards more precisely, in python. This is useful for wash-consume or other mechanisms for building value flows.

Yet so far, df.md is fairly simplistic. It doesn't see the Reward Function (RF), which means sub-optimal rewards:

  • It's sub-optimal for wash-consume. It simply has the wash-consumer publish an asset, then max out their consumes on that asset. But that's not quite optimal: e.g. >1 actors running df.md will compete with each other, and it becomes less optimal still as the Data Farming Reward Function (RF) gets more sophisticated.
  • It's sub-optimal for non-wash-consume cases. Some actors won't want to wash-consume now; and when wash-consume becomes unprofitable (by DF29), we want to make it easy for people to optimally allocate their veOCEAN (and publish) to maximize rewards.

For df.md to have more optimal allocation, it needs to see the RF.

Datapoints:

  • ocean.py repo is meant for external use. It exposes DF/VE smart contracts. It includes the df.md script.
  • df-py repo isn't meant for external use; it's just for use by OPF to distribute rewards, and to power df-webapp.
  • RF is implemented inside df-py's calcrewards.py module (its rough shape is sketched after this list).
  • We have a separate project to "decentralize & automate df-py"

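For reference, the core of calcrewards.py is conceptually a pure function from on-chain data to per-address rewards. Below is a minimal sketch of that shape; the names, arguments, and weighting are illustrative assumptions, not df-py's actual implementation.

```python
# Hypothetical sketch only -- df-py's real calcrewards.py has its own
# signature and weighting; this just shows the "pure function" shape of the RF.
from typing import Dict

def calc_rewards(
    stakes: Dict[str, Dict[str, float]],   # asset -> LP address -> veOCEAN stake
    consume_vols: Dict[str, float],        # asset -> consume volume (OCEAN)
    ocean_avail: float,                    # total OCEAN rewards available this round
) -> Dict[str, float]:                     # LP address -> OCEAN reward
    # score each (asset, LP) pair; here simply stake * asset consume volume
    scores = {
        (asset, lp): stake * consume_vols.get(asset, 0.0)
        for asset, lp_stakes in stakes.items()
        for lp, stake in lp_stakes.items()
    }
    total = sum(scores.values())
    if total == 0:
        return {}
    # pro-rate the available OCEAN across LPs by score
    rewards: Dict[str, float] = {}
    for (asset, lp), score in scores.items():
        rewards[lp] = rewards.get(lp, 0.0) + ocean_avail * score / total
    return rewards
```

If the module stays a pure function like this, extracting it is cheap: callers fetch the data (via subgraph or otherwise), and the lib only does the math.
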
Goals

  • Key goal: df.md can see RF
  • Key goal: no DRY violations on RF. While DRY is important for sw eng in general, it's super-critical for RF, because having the RF in >1 place will cause many problems that could lead to stakers getting incorrect rewards (a big no-no).
  • Goal: keep df-py repo for internal use only
  • Goal: the solution here isn't counterproductive to work on "decentralize & automate df-py"

Candidate approaches

  1. Copy & paste calcrewards.py into ocean.py repo, expose it as new DF-related methods. Con: DRY violation
  2. New repo that imports ocean.py as a lib, has copy-and-paste of calcrewards.py, and then expands on df.md. Con: DRY violation
  3. Move all df-py code into ocean.py. Con: bloats ocean.py, confused use of internal-vs-external, slower progress on df-py
  4. Make a pypi lib of df-py, have df.md import both ocean.py and df-py. Con: (small/med) most of df-py isn't meant for external use so it complicates things.
  5. Move df-py's calcrewards.py into its own repo with its own pypi lib; have df-py import it, and have df.md import it (usage sketched after this list). Con: (small) one more repo & pypi lib to think about

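To make Cand 5 concrete, here's a rough sketch of how both consumers would use the shared lib. The package name df_calcrewards and the calc_rewards interface are placeholders, not an existing package.

```python
# Hypothetical sketch of Cand 5 -- both consumers import the same package,
# so the RF lives in exactly one place. Names are placeholders.
#
#   # in df-py (internal reward distribution):
#   from df_calcrewards import calc_rewards
#
#   # in df.md / a DFer's own script (external optimization):
#   from df_calcrewards import calc_rewards

from typing import Dict

def my_reward(
    calc_rewards,                         # the shared RF, e.g. df_calcrewards.calc_rewards
    my_allocation: Dict[str, float],      # asset -> extra veOCEAN I'd allocate
    stakes: Dict[str, Dict[str, float]],  # asset -> LP address -> existing veOCEAN stake
    consume_vols: Dict[str, float],       # asset -> consume volume (OCEAN)
    ocean_avail: float,
    my_address: str,
) -> float:
    """Evaluate the RF for one candidate allocation ("objective function in the loop")."""
    stakes_with_me = {asset: dict(lps) for asset, lps in stakes.items()}
    for asset, amt in my_allocation.items():
        lps = stakes_with_me.setdefault(asset, {})
        lps[my_address] = lps.get(my_address, 0.0) + amt
    return calc_rewards(stakes_with_me, consume_vols, ocean_avail).get(my_address, 0.0)
```

A DFer riffing on df.md could then sweep candidate allocations and keep the one that maximizes my_reward; that's the "objective function in the loop" idea referenced in the TODOs below.
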
Analysis

Cands 1, 2, 3: cons are too severe.

Cand 4 might be fine, but cand 5 is cleaner still. Therefore, recommend Cand 5.

TODOs

Basically, implement Cand 5.

  • Create a new repo for calcrewards. Copy over calcrewards.py and its tests. Ship as a pypi library (minimal packaging sketch after this list).
  • Refactor df-py to use the new pypi lib
  • In df.md, add a comment "use repo __ to have the objective function in the loop". (Then, the follow-up issue df-py#392 handles more fancy use of this in df.md)
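On the "ship as pypi library" step: a minimal packaging sketch, assuming setuptools and a placeholder package name (the actual repo/package name isn't decided in this issue).

```python
# setup.py -- minimal packaging sketch for the hypothetical new repo.
# Package name, version, and metadata are placeholders.
from setuptools import setup, find_packages

setup(
    name="df-calcrewards",            # placeholder PyPI name
    version="0.1.0",
    description="Data Farming reward function (RF), standalone",
    packages=find_packages(),
    python_requires=">=3.8",
    install_requires=[],              # pure-python: no ocean.py / brownie / subgraph deps
)
```

An empty install_requires matches the later comments in this thread: the lib itself wouldn't pull in ocean.py, Brownie, or subgraph queries.
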
@trentmc trentmc changed the title Enable df.md to see RF without DRY violations, to unlock more optimal wash-consume. How: move calcrewards.py into its own pypi lib Move calcrewards.py into its own pypi lib. Why: unlock DFers optimizing on it Dec 10, 2022
@trentmc trentmc self-assigned this Dec 10, 2022
@idiom-bytes idiom-bytes changed the title Move calcrewards.py into its own pypi lib. Why: unlock DFers optimizing on it Move calcrewards.py into its own pypi lib. Dec 16, 2022
@alexcos20
Member

alexcos20 commented Jan 10, 2023

Do we really need ocean.py & brownie?
For the moment, calcRewards does some subgraph queries and calculates rewards.
This can be a simple python script.

So cand 5 looks fine to me

@trentmc
Member Author

trentmc commented Jan 10, 2023

To be clear: this library would be used by

  • df-py
  • ocean.py readme df.md
  • anyone who riffs on df.md for their own automated approach to DF

The library itself would not have ocean.py and probably not Brownie.

It won't have subgraph queries either.

It will be a dead-simple python script. The point is that it will be in exactly one place, so that we don't risk DRY violations.

And yes, therefore cand 5 is the way to go.

I plan to implement it. But we need to ship the reward function updates first. (PR is there.)

@trentmc
Member Author

trentmc commented May 4, 2023

Given Predictoor, DF Challenge, DF Predictoor: this is super-low priority, and will look different when revisited. And we have way too many issues. So closing this.

@trentmc trentmc closed this as completed May 4, 2023
@trentmc trentmc added the Status: WontFix This will not be worked on label May 4, 2023
@trentmc trentmc removed their assignment May 4, 2023