
Conversation

black-adder
Contributor

#2

@coveralls

Coverage Status

Coverage remained the same at 100.0% when pulling 66d84a7 on support_tag_metricsv2 into 69ff5f9 on master.

metrics/local.go Outdated
// "name|tag1=value1|...|tagN=valueN", where tag names are
// sorted alphabetically.
func getKey(name string, tags map[string]string) string {
	var keys []string
Member

pre-allocate

@coveralls

Coverage Status

Coverage remained the same at 100.0% when pulling 421d63d on support_tag_metricsv2 into 69ff5f9 on master.

metrics/local.go Outdated
	for k := range tags {
-		keys = append(keys, k)
+		keys[i] = k
+		i++
Member

you can still use append, you just need to allocate with initial size: make([]string, 0, len(tags))

Contributor Author

haha remember we discussed this before? I thought the same but after running benchmarks, append is slower than direct indexing.

Member

That case was different: there you already had an index variable incremented as part of the loop, so direct indexing made the code both cleaner and faster. Here I don't think it's faster, just uglier, because i lives in the outer scope.

Anyway, fine either way; this isn't a performance-critical piece.
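
For reference, a minimal sketch of getKey using the pre-allocated append approach suggested above. It assumes the "name|tag1=value1|...|tagN=valueN" format from the doc comment in the diff; it is not necessarily the exact code that was merged.

package metrics

import "sort"

// getKey builds a map key of the form "name|tag1=value1|...|tagN=valueN",
// with tag names sorted alphabetically.
func getKey(name string, tags map[string]string) string {
	// Pre-allocating with capacity len(tags) lets append fill the backing
	// array without re-allocating, which was the point of the review comment.
	keys := make([]string, 0, len(tags))
	for k := range tags {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	key := name
	for _, k := range keys {
		key += "|" + k + "=" + tags[k]
	}
	return key
}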

Contributor Author

@black-adder Feb 3, 2017

sure, I'll fix it here because I agree it does look cleaner. I'll also fix the flaky tests as part of this PR.

@yurishkuro
Member

The 1.6 build is still failing

@coveralls

coveralls commented Feb 3, 2017

Coverage Status

Coverage remained the same at 100.0% when pulling 82b5a24 on support_tag_metricsv2 into 69ff5f9 on master.

	numGoroutines := runtime.NumGoroutine()
	defer func() {
		assert.Equal(t, numGoroutines, runtime.NumGoroutine(), "Leaked at least one goroutine.")
	}()
Member

why do you think this is a source of flakiness?

Contributor Author

@black-adder Feb 3, 2017

My guess is that the deferred localbackend.stop() and this deferred func have a race condition. Of course localbackend.stop() runs first, but the goroutine it is supposed to stop may not have actually exited by the time this check runs. I can't reproduce it locally, so it's just a guess.

Member

Looks like you're correct. The stop() function waits until the background routine that's being stopped writes to the stopped channel, and then stop() exits. But the background routine may still linger after that write and trip the count.

We could do the test differently: instead of testing that the goroutine died, test only that the loop function has exited, e.g. by flipping an atomic var or using a waitgroup.
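
One possible shape for that suggestion, as a sketch with hypothetical names (backend, quit, and newBackend are illustrative, not the actual identifiers in metrics/local.go): the background loop signals a sync.WaitGroup when it returns, and Stop() waits on it, so callers know the loop has exited without counting goroutines.

package metrics

import (
	"sync"
	"time"
)

// backend is a hypothetical stand-in for the local metrics backend.
type backend struct {
	quit chan struct{}
	wg   sync.WaitGroup
}

func newBackend() *backend {
	b := &backend{quit: make(chan struct{})}
	b.wg.Add(1)
	go func() {
		defer b.wg.Done() // marks that the loop function has exited
		ticker := time.NewTicker(100 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				// periodic flush work would go here
			case <-b.quit:
				return
			}
		}
	}()
	return b
}

// Stop blocks until the background loop has returned.
func (b *backend) Stop() {
	close(b.quit)
	b.wg.Wait()
}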

Contributor Author

Do you think there's value in doing this, i.e. adding an atomic var/waitgroup just to ensure that the function exited?

Member

Well, I think it is useful to test that when you call stop() the backend actually stops. Since you're removing the go-routine count validation, you are removing that functional test, without replacement.
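
Building on the hypothetical backend sketch above, a replacement test could assert that Stop() returns (and therefore that the loop exited) instead of counting goroutines:

package metrics

import (
	"testing"
	"time"
)

func TestBackendStops(t *testing.T) {
	b := newBackend()
	done := make(chan struct{})
	go func() {
		b.Stop() // blocks until the background loop has returned
		close(done)
	}()
	select {
	case <-done:
		// Stop() returned, so the loop goroutine must have exited.
	case <-time.After(time.Second):
		t.Fatal("Stop() did not return; the background loop may still be running")
	}
}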

@coveralls

Coverage Status

Coverage remained the same at 100.0% when pulling 5f6f8ee on support_tag_metricsv2 into 69ff5f9 on master.

@yurishkuro
Member

I think with the waitgroup in place you do want to remove the check for go-routines, because it is inherently prone to race conditions.

@coveralls

coveralls commented Feb 6, 2017

Coverage Status

Coverage remained the same at 100.0% when pulling 4ef00f3 on support_tag_metricsv2 into 69ff5f9 on master.

@black-adder merged commit 6ac3e6f into master Feb 6, 2017
@black-adder deleted the support_tag_metricsv2 branch February 6, 2017 18:15