feat(method): add a method type resolution server to improve performance #71
visualfc merged 2 commits into goplus:main
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces two distinct caching layers to optimize method type resolution and function pointer creation within the reflection system. By storing previously computed results, the changes aim to reduce redundant work and improve the overall performance of operations involving method introspection and invocation, particularly in scenarios where the same method types or function pointers are frequently requested.
Code Review
This pull request aims to improve performance by introducing caching for parserMethodType and createMethod functions. However, a critical vulnerability exists due to the use of unsynchronized global Go maps, which can lead to thread-safety issues, runtime panics, and potential Denial of Service (DoS). Additionally, the parserMethodTypeCache lacks a clearing mechanism, posing a risk of memory accumulation.
```go
)

// parserMethodType result cache
var parserMethodTypeCache = make(map[reflect.Type]*parserMethodTypeResult)
```
The global map parserMethodTypeCache is accessed without proper synchronization, leading to thread-safety issues and potential runtime panics under concurrent access, which could result in a Denial of Service (DoS). Furthermore, this cache lacks a clearing mechanism, which can cause unbounded memory growth and memory leaks. It is essential to implement thread-safe access (e.g., using sync.RWMutex or sync.Map) and consider adding a reset function for parserMethodTypeCache, similar to how globalIfnCache and globalPtfnCache are handled in methodof.go's resetAll function.
```go
)

var globalIfnCache = make(map[ifnKey]*ifnValue)
var globalPtfnCache = make(map[ptfnKey]textOff)
```
The global map globalPtfnCache is accessed for both reading and writing without any synchronization mechanism. Concurrent access to this map will lead to a runtime panic, resulting in a Denial of Service (DoS). This occurs in createMethod when caching the ptfn result and in resetAll when re-initializing the cache.
Pull request overview
Adds caching around method-type parsing and pointer-method function text resolution to reduce repeated reflection work and improve runtime performance.
Changes:
- Introduced a global cache for pointer-receiver trampoline function (`ptfn`) resolution in `createMethod`.
- Added a global cache for `parserMethodType` results when `rmap` is nil.
- Updated the reset logic to clear the new `ptfn` cache.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| methodof.go | Adds a global ptfn cache and uses it in createMethod to avoid rebuilding reflect.MakeFunc trampolines. |
| method.go | Adds parserMethodType result caching to avoid recomputing struct/function type metadata. |
```go
)

var globalIfnCache = make(map[ifnKey]*ifnValue)
var globalPtfnCache = make(map[ptfnKey]textOff)
```
Both reads and writes to globalPtfnCache are unsynchronized. In Go, concurrent map access can panic at runtime. If createMethod can be called from multiple goroutines, protect this cache with a sync.RWMutex, use sync.Map, or otherwise ensure all accesses are serialized.
```go
key := ptfnKey{ctyp: ctyp, index: index, variadic: variadic}
if cached, ok := globalPtfnCache[key]; ok {
	ptfn = cached
} else {
```
Both reads and writes to globalPtfnCache are unsynchronized. In Go, concurrent map access can panic at runtime. If createMethod can be called from multiple goroutines, protect this cache with a sync.RWMutex, use sync.Map, or otherwise ensure all accesses are serialized.
```go
	ptfn = resolveReflectText(tovalue(&cv).ptr)
	globalPtfnCache[key] = ptfn
}
```
Both reads and writes to globalPtfnCache are unsynchronized. In Go, concurrent map access can panic at runtime. If createMethod can be called from multiple goroutines, protect this cache with a sync.RWMutex, use sync.Map, or otherwise ensure all accesses are serialized.
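As an alternative to a mutex, the check-then-store pattern above maps naturally onto `sync.Map`. A sketch under simplified assumptions — the key fields and the `textOff` stand-in below are illustrative, not the PR's exact types (the real key also carries a `reflect.Type`, which is likewise comparable and fine as a map key):

```go
package main

import (
	"fmt"
	"sync"
)

// ptfnKey is a simplified stand-in for the PR's cache key;
// textOff stands in for the real text-offset type.
type ptfnKey struct {
	index    int
	variadic bool
}
type textOff int32

var ptfnCache sync.Map // ptfnKey -> textOff

// resolve returns the cached offset for key, computing it on a miss.
// Under a race two goroutines may both run compute, but LoadOrStore
// guarantees every caller observes the same stored value and the map
// itself can never be corrupted.
func resolve(key ptfnKey, compute func() textOff) textOff {
	if v, ok := ptfnCache.Load(key); ok {
		return v.(textOff)
	}
	v, _ := ptfnCache.LoadOrStore(key, compute())
	return v.(textOff)
}

func main() {
	calls := 0
	k := ptfnKey{index: 3, variadic: false}
	f := func() textOff { calls++; return 42 }
	fmt.Println(resolve(k, f)) // 42 (computed)
	fmt.Println(resolve(k, f)) // 42 (cached)
	fmt.Println(calls)         // 1
}
```

`sync.Map` is a good fit here because the workload is append-mostly: keys are written once and read many times, exactly the case its documentation recommends it for.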
```go
	"strings"
)

// parserMethodType result cache
```
parserMethodTypeCache is a package-level map that is read/written without synchronization. If parserMethodType is reachable concurrently, this can cause a 'concurrent map read and map write' panic. Consider guarding with a sync.RWMutex or switching to sync.Map.
Suggested change:

```diff
 	"strings"
+	"sync"
 )

 // parserMethodType result cache
+var parserMethodTypeCacheMu sync.RWMutex
```
```go
if rmap == nil {
	if cached, ok := parserMethodTypeCache[mtyp]; ok {
		return cached.in, cached.out, cached.ntyp, cached.inTyp, cached.outTyp
	}
}
```
parserMethodTypeCache is a package-level map that is read/written without synchronization. If parserMethodType is reachable concurrently, this can cause a 'concurrent map read and map write' panic. Consider guarding with a sync.RWMutex or switching to sync.Map.
```go
if rmap == nil {
	parserMethodTypeCache[mtyp] = &parserMethodTypeResult{
		in:     in,
		out:    out,
		ntyp:   ntyp,
		inTyp:  inTyp,
		outTyp: outTyp,
	}
}
```
parserMethodTypeCache is a package-level map that is read/written without synchronization. If parserMethodType is reachable concurrently, this can cause a 'concurrent map read and map write' panic. Consider guarding with a sync.RWMutex or switching to sync.Map.
```go
// Cache ptfn creation based on ctyp, index, and variadic
key := ptfnKey{ctyp: ctyp, index: index, variadic: variadic}
```
The ptfn cache key includes ctyp produced by reflect.FuncOf(...). reflect.FuncOf can create distinct unnamed function types for identical signatures, which would reduce cache hit rate (and potentially make this cache grow more than expected). Consider keying by a more stable signature (e.g., receiver type ptyp + method type m.Type/tfn + index + variadic, or a canonical hash over in/out types) rather than the newly constructed ctyp.
Suggested change:

```diff
-// Cache ptfn creation based on ctyp, index, and variadic
-key := ptfnKey{ctyp: ctyp, index: index, variadic: variadic}
+// Cache ptfn creation based on a stable method type, index, and variadic
+key := ptfnKey{ctyp: m.Type, index: index, variadic: variadic}
```
```go
if cached, ok := parserMethodTypeCache[mtyp]; ok {
	return cached.in, cached.out, cached.ntyp, cached.inTyp, cached.outTyp
}
```
The cache returns the stored in/out slices directly. If any caller later appends to or mutates these slices, it can corrupt the cached value for future callers. To make the cache safe, store immutable copies (copy when inserting) and/or return copies when serving from cache.
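The hazard described here can be reproduced in miniature: a slice served straight out of a map shares its backing array with the caller, so a mutation through the returned slice silently rewrites the cached value. The toy cache below is illustrative, not the PR's code:

```go
package main

import "fmt"

func main() {
	cache := map[string][]int{"k": {1, 2, 3}}

	// Serving the cached slice directly aliases the backing array.
	got := cache["k"]
	got[0] = 99
	fmt.Println(cache["k"][0]) // 99 — the cached value was silently corrupted

	// A defensive copy (on insert or on read) breaks the aliasing.
	safe := append([]int(nil), cache["k"]...)
	safe[1] = -1
	fmt.Println(cache["k"][1]) // 2 — the cache is unaffected
}
```

`append([]T(nil), s...)` is the idiomatic one-line deep copy of a slice's elements and is the pattern the review suggests applying when inserting into the cache.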
The caching approach is sound, but there are two significant correctness issues: both new caches are unprotected plain Go maps (so concurrent access can panic at runtime), and `parserMethodTypeCache` is never cleared by `resetAll()`.
```diff
@@ -9,6 +9,18 @@ import (
 	"strings"
```
parserMethodTypeCache is a plain map with no mutex. Concurrent calls to parserMethodType (e.g. when SetMethodSet is called from multiple goroutines) will data-race on this map. Consider using sync.Map or protecting it with a sync.RWMutex.
```go
outTyp = reflect.StructOf(outFields)

// Cache the result when rmap is nil
if rmap == nil {
```
parserMethodTypeCache is never cleared in resetAll(), but globalPtfnCache is. After a ResetAll() call, globalPtfnCache is wiped while stale inTyp/outTyp structs from the previous generation remain in parserMethodTypeCache. The two caches become inconsistent. Add parserMethodTypeCache = make(map[reflect.Type]*parserMethodTypeResult) to resetAll() in methodof.go.
```diff
@@ -31,6 +32,12 @@ type ifnValue struct {
 	ctxs map[*Context]struct{}
 }
```
The variadic field is redundant. ctyp is constructed as reflect.FuncOf(..., variadic), so ctyp.IsVariadic() == variadic always holds — two keys with the same ctyp and index can never differ on variadic. Remove the field to simplify the key and avoid misleading readers.
Suggested change:

```go
}

type ptfnKey struct {
	ctyp  reflect.Type
	index int
}
```
```go
outTyp = reflect.StructOf(outFields)

// Cache the result when rmap is nil
if rmap == nil {
```
The in and out slices stored in the cache share their backing array with the values returned to callers. Currently createMethod only uses append([]reflect.Type{ptyp}, in...) (always allocates a new array) so this is safe today, but any future caller that does a mutating append on the returned slice could silently corrupt the cache. Store copies:
```go
inCopy := append([]reflect.Type(nil), in...)
outCopy := append([]reflect.Type(nil), out...)
parserMethodTypeCache[mtyp] = &parserMethodTypeResult{
	in: inCopy, out: outCopy, ntyp: ntyp, inTyp: inTyp, outTyp: outTyp,
}
```