fix: Improve request / response handling #117
Conversation
This reverts commit 7b53854.
…to mike/pairing # Conflicts: # main.go
Kicked the tires a bit on this, and so far looks good in my testing, but curious to see what others find. I'm also seeing non-negative durations!
This is great! I now have no events missing a request or response pair, and no negative durations 🎉
With Debug enabled, I do still see errors in the logs like malformed HTTP status code "\xbc\xac0\xf3\x15\xc1[mٱ\xaf", but we can look further into that as a follow-up.
I've added a question on the swap from sync.Map to a regular Go map, and a typo suggestion.
	response          *http.Response
	responseTimestamp time.Time
}

func newRequestResponseMatcher() httpMatcher {
	return httpMatcher{
-		messages: &sync.Map{},
+		messages: make(map[string]entry),
I admittedly have no experience using sync.Map outside of this, but I'm curious why we chose to swap it out for a regular Go map. Reviewing the docs, it says that while most code should use a plain Go map:
The Map type is optimized for two common use cases: (1) when the entry for a given key is only ever written once but read many times, as in caches that only grow, or (2) when multiple goroutines read, write, and overwrite entries for disjoint sets of keys. In these two cases, use of a Map may significantly reduce lock contention compared to a Go map paired with a separate Mutex or RWMutex.
I'm not sure if we fall into either category and would be curious to see results in dogfooding. I guess sync.Map may be a premature optimization at the expense of type safety and common usage? This blog notes:
The Go team identified situations in the standard lib where performance wasn’t great. There were cases where items were fetched from data structures wrapped in a sync.RWMutex, under high read scenarios while deployed on very high multi-core setups and performance suffered considerably.
The flip side is that using sync.Map for the wrong use case can result in lowered performance, so I'm not positive where our agent stands in these scenarios and am curious for your take.
So we need to do a few operations:
- check if the key exists
- if yes:
  - update it with the request/response half we have
  - remove the key from the map
  - return the entry
- if no:
  - create a new entry with the request/response half we have
  - add it to the map
  - return nil
Checking whether the key exists and creating a new entry if not needs to be thread-safe: we could receive the request and the response at the same time, and both goroutines would do the lookup, find no entry, create one, and then race to set it in the map. The updated logic means we can do it all under one lock.
sync.Map may be able to replace what we have but, as you pointed out, in certain circumstances it's slower, and I'm not sure it's clearer to use. I think the separate map and mutex is simple and easy to understand.
Which problem is this PR solving?
After updating the http matcher to use a normal map in #117, we didn't remove entry keys from the map, which caused it to grow continually.
Short description of the changes
- Delete entries from the map after finding a match
How to verify that this has the expected result
Memory consumption will be more consistent as the map does not hold items forever.
Which problem is this PR solving?
We're seeing inconsistent request / response pairs being reported by the agent. After investigation, this came down to two problems:
Short description of the changes
How to verify that this has the expected result
Events should be more consistent in matching request & response pairs, meaning more events being dispatched.