adapter: store redis complex types as proto #390
Summary of Changes (Gemini Code Assist): This pull request changes how Redis complex data types are stored, moving from JSON serialization to Protocol Buffers. The primary goal is to improve storage efficiency and consistency. The implementation includes a migration strategy: existing JSON-serialized data is read transparently and converted to the new Protobuf format on subsequent write operations, preserving backward compatibility without requiring a separate data migration step.
Code Review
This pull request is a solid improvement, migrating the storage format for Redis complex types from JSON to Protobuf. This change should bring performance and storage efficiency benefits. The implementation is well-executed, featuring a backward-compatible unmarshaling path that supports both Protobuf and the legacy JSON format, ensuring a smooth migration. The accompanying tests are thorough, covering the new format, the legacy fallback, and the read-old/write-new migration path. I have a couple of suggestions to refactor small pieces of duplicated code in the new unmarshaling functions to enhance maintainability. Overall, this is a high-quality contribution.
```go
func unmarshalSetValue(raw []byte) (redisSetValue, error) {
	if len(raw) == 0 {
		return redisSetValue{}, nil
	}
	if hasStoredRedisPrefix(raw, storedRedisSetProtoPrefix) {
		msg := &pb.RedisSetValue{}
		if err := gproto.Unmarshal(raw[len(storedRedisSetProtoPrefix):], msg); err != nil {
			return redisSetValue{}, errors.WithStack(err)
		}
		out := redisSetValueFromProto(msg)
		sortStrings(out.Members)
		return out, nil
	}

	var out redisSetValue
	if err := json.Unmarshal(raw, &out); err != nil {
		return redisSetValue{}, errors.WithStack(err)
	}
	sortStrings(out.Members)
	return out, nil
}
```
The sortStrings(out.Members) call is duplicated in both the protobuf and JSON unmarshaling paths. You can refactor this to avoid duplication by moving the sort call to after the if/else block that handles the two formats. This will make the code cleaner and easier to maintain.
```go
func unmarshalSetValue(raw []byte) (redisSetValue, error) {
	if len(raw) == 0 {
		return redisSetValue{}, nil
	}
	var out redisSetValue
	if hasStoredRedisPrefix(raw, storedRedisSetProtoPrefix) {
		msg := &pb.RedisSetValue{}
		if err := gproto.Unmarshal(raw[len(storedRedisSetProtoPrefix):], msg); err != nil {
			return redisSetValue{}, errors.WithStack(err)
		}
		out = redisSetValueFromProto(msg)
	} else {
		if err := json.Unmarshal(raw, &out); err != nil {
			return redisSetValue{}, errors.WithStack(err)
		}
	}
	sortStrings(out.Members)
	return out, nil
}
```

```go
func unmarshalZSetValue(raw []byte) (redisZSetValue, error) {
	if len(raw) == 0 {
		return redisZSetValue{}, nil
	}
	if hasStoredRedisPrefix(raw, storedRedisZSetProtoPrefix) {
		msg := &pb.RedisZSetValue{}
		if err := gproto.Unmarshal(raw[len(storedRedisZSetProtoPrefix):], msg); err != nil {
			return redisZSetValue{}, errors.WithStack(err)
		}
		out := redisZSetValueFromProto(msg)
		sortZSetEntries(out.Entries)
		return out, nil
	}

	var out redisZSetValue
	if err := json.Unmarshal(raw, &out); err != nil {
		return redisZSetValue{}, errors.WithStack(err)
	}
	sortZSetEntries(out.Entries)
	return out, nil
}
```
Similar to unmarshalSetValue, the sortZSetEntries(out.Entries) call is duplicated across the protobuf and JSON decoding paths. This can be refactored to have a single sort call after decoding from either format, which will improve code clarity and reduce redundancy.
```go
func unmarshalZSetValue(raw []byte) (redisZSetValue, error) {
	if len(raw) == 0 {
		return redisZSetValue{}, nil
	}
	var out redisZSetValue
	if hasStoredRedisPrefix(raw, storedRedisZSetProtoPrefix) {
		msg := &pb.RedisZSetValue{}
		if err := gproto.Unmarshal(raw[len(storedRedisZSetProtoPrefix):], msg); err != nil {
			return redisZSetValue{}, errors.WithStack(err)
		}
		out = redisZSetValueFromProto(msg)
	} else {
		if err := json.Unmarshal(raw, &out); err != nil {
			return redisZSetValue{}, errors.WithStack(err)
		}
	}
	sortZSetEntries(out.Entries)
	return out, nil
}
```
Pull request overview
This PR moves Redis adapter storage for complex Redis types (hash/set/zset/stream) from legacy JSON blobs to a deterministic, versioned protobuf payload stored in the underlying KV engine, while preserving backwards compatibility via JSON fallback.
Changes:
- Add `redis_internal.proto` (and generated Go bindings) defining protobuf payloads for Redis complex types.
- Introduce a new Redis storage codec that prefixes and deterministically marshals protobuf payloads, and falls back to legacy JSON decoding.
- Add codec and migration tests to validate legacy-read/no-rewrite behavior and rewrite-on-next-mutation behavior; update the stream unmarshal benchmark accordingly.
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| proto/redis_internal.proto | Defines protobuf messages for Redis complex value payloads stored in KV. |
| proto/redis_internal.pb.go | Generated Go types for the new Redis internal proto schema. |
| proto/Makefile | Adds redis_internal.proto to the protobuf generation target. |
| adapter/redis_storage_codec.go | Implements prefixed deterministic proto encoding + legacy JSON decoding for stored Redis values. |
| adapter/redis_storage_codec_test.go | Unit tests for proto round-trips and legacy JSON fallback decoding. |
| adapter/redis_storage_migration_test.go | Integration-style tests ensuring legacy JSON is not rewritten on read but is rewritten to proto on subsequent writes. |
| adapter/redis_compat_types_benchmark_test.go | Updates benchmark to measure proto-based marshalStreamValue/unmarshalStreamValue. |
| adapter/redis_compat_types.go | Removes legacy JSON marshal/unmarshal implementations now replaced by the new codec. |