roadmapping #71

Open · 1 of 24 tasks · Tracked by #70

serapath commented Feb 17, 2021

@todo milestone2: 13.07.2020 - ????


previous roadmap:

future roadmapping:

milestones from below:

milestones

Month 1

  • no economics
  • no UI
  • substrate logic (node)
  • js service that reacts to the node

Month 2

  • add UI
  • implement basic economic model
  • community test

Month 3

  • improving economic model
  • documentation

Development Roadmap

Milestone 1 (Month 1)

Implement basic JS & Substrate logic

  1. We will use the SRML balances and sudo modules and write our own module, dat-verify (= the substrate node logic)
    • it will verify hypercores inside the substrate runtime
    • we will use datrs (or something inspired by it) to make that work
    • details:
      • randomly selects dat archives and emits events
      • verifies data coming in from the service
      • make the node run on docker
      • implement structs for Proof and Node
      • implement and harden randomness
      • implement timing logic
      • add on-initialize logic
      • add register_backup to add and count users
      • implement logic to submit dat addresses for pinning
      • implement unregistering and initial challenge-response
  2. We will implement adapter.js (= the JS service that reacts to the node)
    • it will use the polkadot.js API and the hypercore JS libraries to encode and decode hypercores
    • we will use the dat sdk and/or dat-store's service.js to communicate with adapter.js
    • details:
      • listening for events on the node
      • submitting data to the node (proofs and archives), i.e. responding to challenges with the merkle proofs from the dat archives
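
A minimal sketch of what adapter.js could look like, assuming a local node on the default websocket port, the classic callback-style hypercore API, and a hypothetical datVerify module that emits a Challenge event and accepts a submitProof call (none of these names are final):

```js
// adapter.js -- rough sketch; module/event/extrinsic names (datVerify, Challenge,
// submitProof) are assumptions about the dat-verify interface, not the final API
const { ApiPromise, WsProvider } = require('@polkadot/api')
const hypercore = require('hypercore')

async function main () {
  // connect to the locally running substrate node
  const api = await ApiPromise.create({ provider: new WsProvider('ws://127.0.0.1:9944') })

  // listen for events on the node and react to challenges from dat-verify
  api.query.system.events(events => {
    events.forEach(({ event }) => {
      if (event.section === 'datVerify' && event.method === 'Challenge') {
        const [feedKey, index] = event.data
        respondToChallenge(feedKey.toString(), index.toNumber())
      }
    })
  })
}

// look up the challenged block in the local hypercore and answer with its merkle proof
function respondToChallenge (feedKey, index) {
  const key = Buffer.from(feedKey.replace(/^0x/, ''), 'hex')
  const feed = hypercore('./storage/' + key.toString('hex'), key)
  feed.ready(() => {
    feed.proof(index, (err, proof) => {
      if (err) return console.error(err)
      // here adapter.js would submit the proof back to the node,
      // e.g. via a datVerify.submitProof extrinsic (hypothetical)
      console.log('proof for', key.toString('hex'), index, proof)
    })
  })
}

main().catch(console.error)
```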

Deliverables:

  1. We will deliver a working SRML module
  2. We will create a docker container that runs a substrate node using the module
  3. We will deliver a basic javascript module as a helper to interact with the node
  4. We will record a screencast that explains how a user can spin up one of those Substrate nodes.
  5. Once the node is up, it will be possible to send test transactions that show how the new functionality works, and we will create a screencast which shows step by step how this works
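
For illustration, sending such a test transaction from JavaScript could look roughly like the sketch below, assuming a local dev chain with the built-in //Alice account and a hypothetical datVerify.registerData extrinsic for submitting a hypercore key for pinning:

```js
// send-test-tx.js -- sketch only; the extrinsic name (datVerify.registerData)
// and its arguments are assumptions about the module's eventual interface
const { ApiPromise, WsProvider, Keyring } = require('@polkadot/api')

async function main () {
  const api = await ApiPromise.create({ provider: new WsProvider('ws://127.0.0.1:9944') })
  const keyring = new Keyring({ type: 'sr25519' })
  const alice = keyring.addFromUri('//Alice') // built-in dev account on a local test node

  // submit a dat/hypercore public key for pinning
  const feedKey = '0x' + 'ab'.repeat(32)
  const hash = await api.tx.datVerify.registerData(feedKey).signAndSend(alice)
  console.log('test transaction submitted:', hash.toHex())
}

main().catch(console.error)
```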

Milestone 2 (Month 2)

Implement basic economics & UI logic

  1. We will use the balances module to:
    • create a simple credit-based system
    • by pinning you mint credits; by having your archive pinned, you burn credits
    • the mint amount should be greater than the burn amount to solve bootstrapping (the burn amount should be defined by a market)
    • when you submit the dat you also set the price you're willing to pay for the service
    • priority service: users who pin more have priority to get their data pinned first (see the sketch after this list)
    • details:
      • Write a basic module that calls balances to mint and burn balances based on the outcomes of dat-verify
      • implement minting tokens if you are seeding (earning) and successfully solve challenges
      • implement burning creators' tokens when their data is pinned (payment)
  2. We will implement a rough, basic UI for expert users to try out the system as a whole
  3. We will run a small closed alpha (community) test and monitor and analyse usage to improve the economic model
  4. We will write detailed documentation and create a screencast to show how to use it
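
A toy sketch of the credit flow described in point 1 above, to show why minting more than is burned helps bootstrapping; the concrete amounts are placeholders, not the parameters we will end up with:

```js
// economics-sketch.js -- illustrative only; MINT and BURN are placeholder values
const MINT = 12 // credits minted per successfully solved challenge (seeder side)
const BURN = 10 // credits burned per pinned archive (creator side)

const users = [
  { name: 'seeder', balance: 0, pinnedForOthers: 0 },
  { name: 'creator', balance: 100, pinnedForOthers: 0 }
]

// one round: the seeder pins the creator's archive and solves a challenge
function round (seeder, creator) {
  seeder.balance += MINT        // seeding earns newly minted credits
  seeder.pinnedForOthers += 1
  creator.balance -= BURN       // having your data pinned burns credits
}

for (let i = 0; i < 5; i++) round(users[0], users[1])

// priority service: users who pin more for others get their own data pinned first
const queue = [...users].sort((a, b) => b.pinnedForOthers - a.pinnedForOthers)
console.log(users, 'pin priority:', queue.map(u => u.name))
// because MINT > BURN the total supply grows, so a new seeder can earn credits
// before ever needing to pay for pinning themselves (bootstrapping)
```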

Deliverables:

  1. We will deliver a refined working SRML module
  2. We will deliver a refined javascript module for interacting with the node
  3. We will deliver a basic web UI which works with a locally running substrate node
  4. We will create a docker container that runs all of this
  5. We will record a screencast that explains how a user can spin up the docker and use it

Milestone 3 (Month 3)

Implement refined economics and UI, and write documentation

  1. We will run a public beta and monitor and analyse usage to improve the economic model
  2. We will implement a convenient UI/UX
    • it will use and wrap the work from previous milestones to make it easy for each of the user roles:
      1. pinners (seeders)
        • register to become pinners
        • get random dats to pin
        • they get paid for their work
      2. dat creators (requestors)
        • they submit dats to be pinned to keep their data available while their devices are offline
      3. node operators (should be seeders)
        • run substrate node
        • have to have enough disk space
        • reliable connection
        • get paid only when they seed (proof = successful challenge)
        • paid in tokens (minted whenever a payment needs to happen)
      4. data consumers (public)
        • reads the data
    • we will use electron to build a desktop task bar application (see the sketch after this list)
    • details:
      • registering availability and requesting pinning
  3. We will write detailed documentation and create video workshops for users to understand how to use it
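
As a rough starting point, the task bar part of the electron app could look like the sketch below; the menu entries only log for now and would later call into adapter.js and the local substrate node:

```js
// main.js -- minimal electron task bar (tray) application sketch
const { app, Tray, Menu } = require('electron')

let tray = null // keep a reference so the tray icon is not garbage collected

app.whenReady().then(() => {
  tray = new Tray('icon.png') // tray icon shipped with the app
  tray.setToolTip('datdot')
  tray.setContextMenu(Menu.buildFromTemplate([
    { label: 'Register as pinner', click: () => console.log('register availability') },
    { label: 'Submit dat for pinning', click: () => console.log('request pinning') },
    { type: 'separator' },
    { label: 'Quit', click: () => app.quit() }
  ]))
})
```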

Deliverables:

  1. We will deliver a working electron task bar application to run the substrate node and UI
  2. We will write a small report with the results from the analysis of our public beta
  3. We will refine and describe the economic model we are using
  4. We will record a screencast to show how to install and use the electron app to pin your data or let it be pinned
  5. We will write detailed documentation which explains all features and how to use them

Future Milestones

We plan to further improve the electron app, the substrate node and the economics around it to make datdot work reliably in production.
This might require further grant applications, and eventually we might be able to become self-sustainable, but that depends on the economic model
we end up using. One big motivation for us is to use this as a reliable building block for future and past projects where people need to manage their personal data.
