
Traffic shaping for bitswap plan #352

Merged 1 commit into master on Jan 17, 2020

Conversation

@dirkmc (Contributor) commented Jan 17, 2020

Allows client to set latency and bandwidth as test parameters

@dirkmc mentioned this pull request on Jan 17, 2020
@Stebalien merged commit 8b806e0 into master on Jan 17, 2020
@Stebalien deleted the feat/bitswap-traffic-shaping branch on January 17, 2020 at 21:08
@raulk (Contributor) left a comment

I see that @Stebalien approved and merged while I was reviewing, but here's my feedback. (I was going to approve too).


latency := time.Duration(runenv.IntParam("latency_ms")) * time.Millisecond
bandwidth := runenv.IntParam("bandwidth_mb")
writer.Write(sync.NetworkSubtree(hostname), &sync.NetworkConfig{
@raulk (Contributor):

At some point, we'll probably want some API sugar for the network operations, in the same vein as WaitNetworkInitialized().
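
For context, here is a rough sketch of how the quoted parameters might feed into the network config write, wrapped in the kind of helper this "API sugar" comment hints at. The helper name, the *sync.Writer parameter, the NetworkConfig/LinkShape field names, and the MiB conversion are all assumptions for illustration, not code from this diff:

// applyTrafficShaping is a hypothetical helper, not part of this PR.
// It reads the latency/bandwidth test parameters and writes a network
// config for this instance's subtree.
func applyTrafficShaping(writer *sync.Writer, runenv *runtime.RunEnv, hostname string) error {
	latency := time.Duration(runenv.IntParam("latency_ms")) * time.Millisecond
	bandwidth := runenv.IntParam("bandwidth_mb")

	// Field names below are assumptions about the sync API of the time.
	_, err := writer.Write(sync.NetworkSubtree(hostname), &sync.NetworkConfig{
		Network: "default",
		Enable:  true,
		Default: sync.LinkShape{
			Latency:   latency,
			Bandwidth: uint64(bandwidth) * 1024 * 1024, // treating bandwidth_mb as MiB/s (assumption)
		},
	})
	return err
}

A helper along these lines, called once per instance after the network is initialized, is roughly the shape of sugar being suggested.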

@@ -170,11 +185,12 @@ func Transfer(runenv *runtime.RunEnv) {
 	runenv.OK()
 }

-func setupSeed(ctx context.Context, node *utils.Node, fileSize int) (cid.Cid, error) {
+func setupSeed(ctx context.Context, node *utils.Node, fileSize int, runenv *runtime.RunEnv) (cid.Cid, error) {
@raulk (Contributor):

Do we intend to use the runenv in this function going forward?

@dirkmc (Contributor, Author) replied:

Good catch, I'll remove it in the next PR I do.

@dirkmc (Contributor, Author) replied:

Addressed in #353

return err
}

// TODO: just put the hostname inside the runenv?
@raulk (Contributor):

We can certainly do that with the local runners, but I'm not sure how this would work with k8s. @nonsense?

@dirkmc (Contributor, Author) replied:

By the way, I copied this from the DHT testplan, along with a bunch of the code in this utils package. It would be good to put it all in a central location somewhere.

@nonsense (Member) commented Jan 20, 2020:

This will be problematic if we run the same testplan concurrently in multiple namespaces in the future. Pod names are unique within a namespace, but right now we have a sidecar running per host, and os.Hostname returns the pod name in k8s by default (https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-hostname-and-subdomain-fields). I wouldn't worry too much about it now, but maybe change the comment to say "unique testplan id" rather than "hostname": as far as I understand, we just need a unique testplan identifier here so that we get a unique subtree, and we happen to use the hostname as such.
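
To illustrate the point, a minimal sketch of the pattern under discussion, as it would appear inside a function returning an error (writer and cfg stand in for the sync writer and network config built earlier in the plan):

// os.Hostname() is used here purely as a unique per-instance key for the
// sync subtree; in k8s it resolves to the pod name. Any unique
// testplan/instance identifier would serve equally well.
hostname, err := os.Hostname()
if err != nil {
	return err
}
if _, err := writer.Write(sync.NetworkSubtree(hostname), cfg); err != nil {
	return err
}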

@dirkmc (Contributor, Author) replied:

Addressed in #353
