This repository has been archived by the owner on Aug 13, 2021. It is now read-only.

[User-Story] 100MB upload and availability #16

Closed
8 tasks done
holisticode opened this issue Jun 5, 2019 · 9 comments

Comments

@holisticode
Contributor

holisticode commented Jun 5, 2019

Rationale

In order to fulfill #2, several intermediate milestones need to be reached.

This user story is about reaching the 100MB milestone.

User-Story

A Dapp developer is able to upload and download 100MB files, with a high degree of consistency and success, to and from Swarm through a dedicated Swarm node using the Swarm public API.
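As a rough illustration of the round trip the story describes, a minimal client sketch against a node's HTTP API is below. This is an assumption-laden sketch, not the test code used here: the port 8500 and the `bzz:` endpoint reflect Swarm 0.4.x defaults, and `bzz_url`, `upload`, and `download` are hypothetical helpers.

```python
import urllib.request

def bzz_url(host="localhost", port=8500, ref=""):
    """Build a bzz endpoint URL; 8500 is the default Swarm HTTP port."""
    return f"http://{host}:{port}/bzz:/{ref}"

def upload(data: bytes, host="localhost") -> str:
    """POST raw bytes to the node; the response body is the content reference."""
    req = urllib.request.Request(
        bzz_url(host), data=data, method="POST",
        headers={"Content-Type": "application/octet-stream"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode().strip()

def download(ref: str, host="localhost") -> bytes:
    """GET the content back by its reference."""
    with urllib.request.urlopen(bzz_url(host, ref=ref + "/")) as resp:
        return resp.read()

if __name__ == "__main__":
    # A 100MB payload, as in the story (requires a running node on localhost)
    payload = bytes(100 * 1024 * 1024)
    ref = upload(payload)
    assert download(ref) == payload
```

The smoke tests in this issue run a variant of this loop on a schedule and record the timings in Grafana.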

Epic links

#2

Acceptance criteria

  • Upload of 100MB every 3min.
  • xxxxxx - Git commit or tag of Swarm codebase under test
  • xxxxxx - Git commit or tag of the test setup infrastructure (helm charts, yaml configuration, etc.) under test
  • pic - Grafana screenshot of Smoke tests dashboard over the last 24 hours
  • X - Failure/Timeout rate <1% for Smoke tests runs
  • X - Average upload measurements: 150sec. (or 666 KBps)
  • X - Average download measurements: 125sec. (or 800 KBps)
  • ... - Short text explanation of the Smoke tests dashboard, so that it is clear what the reviewer is looking at.
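The two throughput figures above follow directly from the 100MB payload size and the target transfer times (using decimal KB, as in the dashboard legend). A quick sanity check of the arithmetic:

```python
# The 100MB payload in decimal KB, matching the dashboard legend (100000kb)
SIZE_KB = 100_000

def throughput_kbps(size_kb, seconds):
    """Average throughput implied by transferring size_kb in `seconds`."""
    return size_kb / seconds

upload_kbps = throughput_kbps(SIZE_KB, 150)    # ~666 KBps target
download_kbps = throughput_kbps(SIZE_KB, 125)  # 800 KBps target
```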

Environment configuration

  1. Storage resources:

    • size: 3Gi
    • class: gp2
  2. Requests:

    • memory: 512Mi
    • cpu: 0.5
  3. Limits:

    • memory: 1024Mi
    • cpu: 1
  4. Number of Swarm nodes: 50

  5. Deployment running on on-demand instances.

  6. Smoke test pod affinity - must be on the same node as Uploader node (by default node-0).

This is the current environment configuration we run Swarm with on test deployments; it is subject to change.
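Points 1–3 and 6 would translate into Kubernetes manifest stanzas roughly like the following. This is a sketch only, not the deployed helm chart: the `swarm-0` pod name and the `statefulset.kubernetes.io/pod-name` label selector are assumptions about how the uploader node (node-0) is addressed.

```yaml
# Per-node storage (point 1)
volumeClaimTemplates:
  - spec:
      storageClassName: gp2
      resources:
        requests:
          storage: 3Gi
---
# Per-container requests and limits (points 2 and 3)
resources:
  requests:
    memory: 512Mi
    cpu: "0.5"
  limits:
    memory: 1024Mi
    cpu: "1"
---
# Smoke test pod affinity (point 6): schedule on the same host as the uploader
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            statefulset.kubernetes.io/pod-name: swarm-0
        topologyKey: kubernetes.io/hostname
```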

Planned milestone

0.4.2

Related Issues

  1. simplify netstore / delivery / fetcher swarm#1309
@nolash

nolash commented Jun 7, 2019

@nonsense @holisticode average measurements are measured on lo?

@nonsense

nonsense commented Jun 7, 2019

@nolash could you be more specific about what lo means? I don't understand your question.

@nolash

nolash commented Jun 7, 2019

Sorry, loopback interface - that is, upload is done from the same host as the running swarm node.

@nonsense

nonsense commented Jun 7, 2019

@nolash yes, this is point 6, but we actually don't have it implemented right now.

@nonsense

Given the Acceptance criteria, I think this story is mostly complete, and it has highlighted a few issues we should work on in the future.

Acceptance criteria

  • Upload of 100MB every 3min.
  • Git commit or tag of Swarm codebase under test
  • Git commit or tag of the test setup infrastructure (helm charts, yaml configuration, etc.) under test
  • Grafana screenshot of Smoke tests dashboard over the last 24 hours
  • Failure/Timeout rate <1% for Smoke tests runs
  • Average upload measurements: 150sec. (or 666 KBps)
  • Average download measurements: 125sec. (or 800 KBps)
  • Short text explanation of the Smoke tests dashboard, so that it is clear what the reviewer is looking at.

Screenshot 2019-06-19 at 10 49 40


On the Fetchers dashboard we see the average lifetime of a fetcher for a given chunk, and see that the 95%ile is around 10ms-15ms, whereas the max is around 400ms. Fetchers during syncing have a much higher lifetime, around 500ms, and this is something we should address with the refactor of the pull syncing protocol.

On the Timeouts dashboard we see a few Global timeouts and a few individual Search timeouts. I don't expect to see any global timeouts, as the test waits for syncing to complete before trying to download content, so this is something we should understand better in the future.

Overall this seems to be an improvement compared to previous versions of Swarm.

@acud
Member

acud commented Jun 19, 2019

Awesome work @nonsense 👏 🙏

@zelig
Member

zelig commented Jun 20, 2019

@nonsense @acud 👏 🙇 ❤️

@FantasticoFox
Contributor

How do we get confirmation to close? It would be great if this were verified by a third party to confirm success :)

@holisticode
Contributor Author

@FantasticoFox in my opinion the above screenshots, config links, etc. are the confirmation. For example, the screenshot shows that the upload happened with 100000kb (100MB); look at the legend below the first widget on the upper left.

The other widgets show that there were no failures (second widget, upper right), and you can see that it is a print of the last 24 hours (header, upper right).

You have in the second row the upload times (left) and download times (right), with their respective average values in the avg column in the legend below.

@acud acud closed this as completed Jul 4, 2019