refactor: improve backend code quality and security #1
Conversation
- Add context.Context propagation to Repository interface and all implementations
- Replace log.Printf with structured logging (log/slog)
- Fix optimistic locking: version default 0→1, add WHERE clause on update
- Add CORS origin configuration support (CORS_ALLOWED_ORIGINS env var)
- Add RequestID and MaxBodySize middleware
- Inject health checker dependency instead of package-level init()
- Remove dead code: standalone GetItems/CreateItem, unused Database methods
- Use crypto/rand instead of math/rand for GenerateRandomString
- Consolidate DB errors into pkg/dberrors with backward-compatible re-exports
- Fix error leak: return generic message for internal server errors
- Use loc=UTC in MySQL DSN instead of loc=Local
- Implement proper List() filtering with models.Filter and Pagination
- Add Versionable interface for optimistic locking pattern
- Update all tests to match new interfaces
Pull request overview
This PR refactors the Go backend to improve reliability and security by propagating context.Context through the repository layer, tightening error-handling/logging, and enhancing middleware and health-check wiring.
Changes:
- Propagates `context.Context` across the `models.Repository` interface and the MySQL/Azure implementations, updating handlers and tests accordingly.
- Replaces `log.Printf` usage with `log/slog` and injects a shared `HealthChecker` into route setup/handlers.
- Adds configurable CORS, request ID, and max request body size middleware; updates optimistic locking behavior and prevents internal error leakage.
Reviewed changes
Copilot reviewed 32 out of 32 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| backend/pkg/utils/utils.go | Switches random string generation to crypto/rand and deprecates CheckError. |
| backend/internal/models/validation.go | Hoists email regex compilation to package scope. |
| backend/internal/models/models.go | Adds context-aware repository methods and implements optimistic locking + filter/pagination logic. |
| backend/internal/health/health.go | Makes health checks context-aware. |
| backend/internal/health/health_test.go | Updates health tests to pass context. |
| backend/internal/database/repository.go | Uses slog for repository selection logging. |
| backend/internal/database/migrations.go | Uses slog for migration logging. |
| backend/internal/database/factory.go | Uses structured logging for DB connection retries (pool config remains applied here). |
| backend/internal/database/errors.go | Re-exports DB error types from pkg/dberrors. |
| backend/internal/database/database.go | Simplifies DB initialization and removes duplicated pool config here. |
| backend/internal/database/config.go | Updates DSN timezone to UTC. |
| backend/internal/database/config_test.go | Aligns DSN tests to loc=UTC. |
| backend/internal/database/database_test.go | Updates CRUD tests to use the repository interface with context. |
| backend/internal/database/azure/table.go | Updates Azure Table repository to use caller-provided context. |
| backend/internal/database/azure/table_test.go | Updates Azure Table tests to pass context. |
| backend/internal/database/azure/repository_test.go | Updates error propagation tests for context-aware repo methods. |
| backend/internal/database/azure/error_handling_test.go | Updates invalid-input tests for context-aware repo methods. |
| backend/internal/config/config.go | Adds CORS config to app config and reads CORS_ALLOWED_ORIGINS. |
| backend/internal/config/config_test.go | Aligns DSN tests to loc=UTC. |
| backend/internal/api/routes/routes.go | Injects health checker + config into route setup; installs new middleware. |
| backend/internal/api/routes/routes_test.go | Updates route tests to pass injected health checker + config. |
| backend/internal/api/middleware/middleware.go | Adds RequestID + MaxBodySize; makes CORS configurable; moves logging to slog. |
| backend/internal/api/middleware/middleware_test.go | Updates logger test to capture slog output. |
| backend/internal/api/middleware/cors_test.go | Updates CORS tests for new signature and header list. |
| backend/internal/api/handlers/items.go | Threads request context into repository calls; prevents internal error leakage. |
| backend/internal/api/handlers/items_test.go | Updates mock repo usage to pass context; adjusts error expectation. |
| backend/internal/api/handlers/mock_repository.go | Adapts mock repository to context-aware interface and version initialization. |
| backend/internal/api/handlers/rate_limiter.go | Adds background cleanup goroutine and stop channel; removes test-only throttling behavior. |
| backend/internal/api/handlers/handlers.go | Removes package-level init health checker; provides injected liveness/readiness handlers. |
| backend/internal/api/handlers/handlers_test.go | Removes tests for now-deleted dead handlers. |
| backend/api/main.go | Builds repo via factory, injects health checker + config into routes, switches to slog. |
| backend/api/main_test.go | Updates mock repository interface and health tests for context-aware checks. |
backend/pkg/utils/utils.go
Outdated
```diff
 func GenerateRandomString(length int) string {
 	const charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
-	var seededRand = rand.New(rand.NewSource(time.Now().UnixNano()))
 
 	b := make([]byte, length)
 	for i := range b {
-		b[i] = charset[seededRand.Intn(len(charset))]
+		n, _ := rand.Int(rand.Reader, big.NewInt(int64(len(charset))))
+		b[i] = charset[n.Int64()]
 	}
```
GenerateRandomString ignores the error from crypto/rand.Int. If the RNG fails, n will be nil and n.Int64() will panic, and callers get no signal that randomness failed. Consider returning (string, error) (or at least handling the error and falling back) and also validating that length is non-negative before allocating the slice.
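One way the comment's suggestion could look, as a stand-alone sketch (the signature change and error wording are illustrative, not taken from the PR):

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

const charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

// GenerateRandomString returns a random string of the given length drawn
// from charset, or an error if length is negative or the system RNG fails.
func GenerateRandomString(length int) (string, error) {
	if length < 0 {
		return "", fmt.Errorf("length must be non-negative, got %d", length)
	}
	b := make([]byte, length)
	for i := range b {
		n, err := rand.Int(rand.Reader, big.NewInt(int64(len(charset))))
		if err != nil {
			return "", fmt.Errorf("crypto/rand failed at index %d: %w", i, err)
		}
		b[i] = charset[int(n.Int64())]
	}
	return string(b), nil
}
```

Returning the error (rather than falling back silently) lets callers decide whether degraded randomness is acceptable for their use case.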
```go
func generateRequestID() string {
	b := make([]byte, 16)
	_, _ = rand.Read(b)
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:])
```
generateRequestID ignores the error from rand.Read. If entropy collection fails, the request ID may be all-zero (or otherwise predictable) and you lose traceability guarantees. Handle the error (e.g., retry, fall back to a UUID implementation, or include a timestamp + counter) and consider logging when generation fails.
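A hedged sketch of one possible fallback strategy (the timestamp-plus-counter scheme is illustrative; the PR's actual fix may differ):

```go
package main

import (
	"crypto/rand"
	"encoding/binary"
	"fmt"
	"sync/atomic"
	"time"
)

var reqCounter uint64 // process-local counter for the fallback path

// generateRequestID returns a UUID-like hex ID from crypto/rand, falling
// back to a timestamp plus a monotonically increasing counter if entropy
// collection fails. The fallback IDs stay unique within the process,
// though they are no longer unguessable.
func generateRequestID() string {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		binary.BigEndian.PutUint64(b[:8], uint64(time.Now().UnixNano()))
		binary.BigEndian.PutUint64(b[8:], atomic.AddUint64(&reqCounter, 1))
	}
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:])
}
```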
```diff
 func TestLoggerMiddleware(t *testing.T) {
 	t.Parallel()
 	// Set Gin to Test Mode
 	gin.SetMode(gin.TestMode)
 
-	// Create a buffer to capture log output
+	// Create a buffer to capture slog output
 	var buf bytes.Buffer
-	log.SetOutput(&buf)
-	defer func() {
-		log.SetOutput(os.Stdout)
-	}()
+	handler := slog.NewTextHandler(&buf, &slog.HandlerOptions{Level: slog.LevelInfo})
+	origLogger := slog.Default()
+	slog.SetDefault(slog.New(handler))
+	defer slog.SetDefault(origLogger)
```
This test runs in parallel but temporarily replaces the global default slog logger (slog.SetDefault), which can cause cross-test interference and flakiness when other tests log concurrently. Remove t.Parallel() for this test (and any others mutating global logger state), or refactor the middleware to accept an injected logger so tests can use a local instance without touching globals.
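The injected-logger refactor the comment suggests could look roughly like this minimal sketch (function names are mine): the logger is a parameter, so a test can assert on a local buffer without ever touching slog.SetDefault, and parallel tests stay isolated.

```go
package main

import (
	"bytes"
	"log/slog"
	"strings"
)

// logRequest stands in for the middleware's logging logic, but takes the
// logger as a parameter instead of using the process-wide slog default.
func logRequest(logger *slog.Logger, method, path string, status int) {
	logger.Info("request completed", "method", method, "path", path, "status", status)
}

// captureRequestLog exercises logRequest against a buffer-backed logger
// and reports whether the output mentions the given path. No global state
// is mutated, so callers can safely run in parallel.
func captureRequestLog(path string) (output string, containsPath bool) {
	var buf bytes.Buffer
	logger := slog.New(slog.NewTextHandler(&buf, &slog.HandlerOptions{Level: slog.LevelInfo}))
	logRequest(logger, "GET", path, 200)
	out := buf.String()
	return out, strings.Contains(out, "path="+path)
}
```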
backend/api/main.go
Outdated
```go
	healthChecker.SetReady(true)

	// Setup router
	router := gin.Default()
```
main() uses gin.Default(), which already installs Gin's built-in Logger and Recovery middleware. SetupRoutes() also adds custom Logger() and Recovery(), so requests will be logged twice and Gin's recovery may still emit stack traces/log formats you intended to replace. Prefer gin.New() here (and rely on your own middleware), or remove one of the duplicate logger/recovery middlewares to avoid double-processing.
Suggested change:
```diff
-	router := gin.Default()
+	router := gin.New()
```
```go
	db := setupTestDB(t)
	require.NoError(t, db.AutoMigrate())
	ctx := context.Background()
```
TestItemCRUD defines a top-level ctx that isn't used (each subtest redefines its own ctx) and then suppresses the unused variable at the end. Either pass the parent ctx down into subtests or remove the outer ctx declaration entirely to avoid dead code/suppressions in tests.
Suggested change:
```diff
 	db := setupTestDB(t)
 	require.NoError(t, db.AutoMigrate())
-	ctx := context.Background()
```
```go
// Stop terminates the background cleanup goroutine.
func (rl *RateLimiter) Stop() {
	close(rl.done)
}
```
Stop() closes rl.done unconditionally; calling Stop twice will panic (close of closed channel). Since Stop is a public method, make it idempotent (e.g., guard with sync.Once or a non-blocking select on a closed channel pattern) to avoid accidental crashes in tests or future shutdown hooks.
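An idempotent Stop along the lines the comment suggests, sketched with sync.Once (the struct is trimmed to just the shutdown plumbing):

```go
package main

import "sync"

// RateLimiter sketch showing only the shutdown-related fields.
type RateLimiter struct {
	done     chan struct{}
	stopOnce sync.Once
}

func NewRateLimiter() *RateLimiter {
	return &RateLimiter{done: make(chan struct{})}
}

// Stop is safe to call multiple times: sync.Once guarantees the channel is
// closed exactly once, so a second call is a no-op instead of a panic.
func (rl *RateLimiter) Stop() {
	rl.stopOnce.Do(func() { close(rl.done) })
}
```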
- GenerateRandomString: return (string, error), validate negative length
- generateRequestID: handle rand.Read error with timestamp fallback
- Logger test: remove t.Parallel() to avoid global slog mutation
- main: use gin.New() instead of gin.Default() to avoid duplicate middleware
- database_test: remove unused top-level ctx variable
- RateLimiter.Stop: use sync.Once for idempotent close
- Expand utils tests for new error return signature
```diff
@@ -204,8 +203,7 @@ func (r *TableRepository) Update(entity interface{}) error {
 		return dberrors.NewDatabaseError("marshal", err)
 	}
 
 	// Update the entity
-	_, err = r.client.UpdateEntity(context.Background(), entityBytes, nil)
+	_, err = r.client.UpdateEntity(ctx, entityBytes, nil)
 	if err != nil {
 		return dberrors.NewDatabaseError("update", err)
 	}
@@ -215,13 +213,13 @@
 }
```
The Azure Table Storage implementation doesn't handle optimistic locking via the Version field, while the MySQL implementation (GenericRepository) does. This creates inconsistent behavior between the two backends.
When using Azure Table Storage, concurrent updates could succeed even with version mismatches, breaking the optimistic locking guarantees that the Item model and GenericRepository provide.
Consider adding version checking in the Azure Table implementation, potentially using Azure Table Storage's ETag feature for optimistic concurrency control.
```go
	requestOrigin := c.Request.Header.Get("Origin")
	for _, origin := range strings.Split(allowedOrigins, ",") {
		if strings.TrimSpace(origin) == requestOrigin {
			c.Writer.Header().Set("Access-Control-Allow-Origin", requestOrigin)
			c.Writer.Header().Set("Vary", "Origin")
			break
		}
	}
```
The new CORS origin whitelist functionality (lines 22-29) is not covered by tests. The cors_test.go file only tests the wildcard ("*") case.
Consider adding test cases for:
- A request from an allowed origin (should set Access-Control-Allow-Origin to that specific origin and add Vary: Origin header)
- A request from a disallowed origin (should not set Access-Control-Allow-Origin header)
- Multiple comma-separated origins in the allowedOrigins parameter
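One way to make those cases easy to test is to extract the origin match into a pure helper; a sketch under that assumption (the helper name is mine, not from the PR):

```go
package main

import "strings"

// originAllowed reports whether requestOrigin matches the comma-separated
// whitelist. "*" allows any origin; entries are trimmed so both
// "a.com,b.com" and "a.com, b.com" work.
func originAllowed(allowedOrigins, requestOrigin string) bool {
	if allowedOrigins == "*" {
		return true
	}
	for _, origin := range strings.Split(allowedOrigins, ",") {
		if strings.TrimSpace(origin) == requestOrigin {
			return true
		}
	}
	return false
}
```

With the decision isolated like this, the middleware only needs to call the helper and set headers, and each of the cases above becomes a one-line table-test entry.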
```go
// RequestID adds a unique request ID to each request.
// If the client sends an X-Request-ID header, it is reused; otherwise a new one is generated.
func RequestID() gin.HandlerFunc {
	return func(c *gin.Context) {
		requestID := c.GetHeader("X-Request-ID")
		if requestID == "" {
			requestID = generateRequestID()
		}
		c.Set("request_id", requestID)
		c.Writer.Header().Set("X-Request-ID", requestID)
		c.Next()
	}
}
```
The new RequestID middleware has no test coverage. Consider adding tests to verify:
- That a new request ID is generated when the client doesn't provide one
- That the client's X-Request-ID header is reused when provided
- That the response includes the X-Request-ID header
- That the request ID is stored in the Gin context under "request_id"
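A sketch of what such coverage could look like, modeled with plain net/http rather than Gin so it runs without a router (the middleware shape and stub generator are illustrative, not the PR's code):

```go
package main

import (
	"net/http"
	"net/http/httptest"
)

// requestIDMiddleware mirrors the Gin middleware's behavior using plain
// net/http: reuse the client's X-Request-ID if present, otherwise stamp a
// freshly generated one on the response.
func requestIDMiddleware(generate func() string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		id := r.Header.Get("X-Request-ID")
		if id == "" {
			id = generate()
		}
		w.Header().Set("X-Request-ID", id)
		next.ServeHTTP(w, r)
	})
}

// echoRequestID runs one request through the middleware and returns the
// X-Request-ID header the client would observe.
func echoRequestID(clientID string) string {
	h := requestIDMiddleware(
		func() string { return "generated-id" }, // deterministic stub for testing
		http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {}),
	)
	req := httptest.NewRequest(http.MethodGet, "/", nil)
	if clientID != "" {
		req.Header.Set("X-Request-ID", clientID)
	}
	rec := httptest.NewRecorder()
	h.ServeHTTP(rec, req)
	return rec.Header().Get("X-Request-ID")
}
```

Injecting the generator function is what makes the "new ID is generated" case deterministic to assert on.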
```go
// MaxBodySize limits the size of the request body to prevent memory exhaustion.
func MaxBodySize(maxBytes int64) gin.HandlerFunc {
	return func(c *gin.Context) {
		c.Request.Body = http.MaxBytesReader(c.Writer, c.Request.Body, maxBytes)
		c.Next()
	}
}
```
The new MaxBodySize middleware has no test coverage. Consider adding tests to verify:
- That requests within the size limit are processed normally
- That requests exceeding the size limit are rejected with an appropriate error
- That the http.MaxBytesReader properly wraps the request body
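The over-limit behavior of http.MaxBytesReader can be exercised without Gin; a minimal sketch (the helper name and handler are mine, not the PR's test code):

```go
package main

import (
	"errors"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

// postBody sends a body of the given size through a handler that wraps the
// request body with http.MaxBytesReader(limit) and tries to read all of it,
// returning the resulting HTTP status code.
func postBody(size int, limit int64) int {
	h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		r.Body = http.MaxBytesReader(w, r.Body, limit)
		if _, err := io.ReadAll(r.Body); err != nil {
			// Since Go 1.19 the limit error is a typed *http.MaxBytesError.
			var mbe *http.MaxBytesError
			if errors.As(err, &mbe) {
				w.WriteHeader(http.StatusRequestEntityTooLarge)
				return
			}
			w.WriteHeader(http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusOK)
	})
	req := httptest.NewRequest(http.MethodPost, "/", strings.NewReader(strings.Repeat("x", size)))
	rec := httptest.NewRecorder()
	h.ServeHTTP(rec, req)
	return rec.Code
}
```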
backend/api/main.go
Outdated
```go
	err = srv.Shutdown(ctx)
	// Always execute cleanup
	cancel()
```
The context cancel function is called twice: once explicitly on line 94 and once via defer on line 91. While calling cancel multiple times is safe (it's idempotent), the explicit call on line 94 is redundant since the defer will execute anyway when the function returns.
Consider removing line 94 to avoid confusion and rely solely on the deferred cancel.
Suggested change:
```diff
-	cancel()
```
```go
	case Filter:
		switch c.Op {
		case "exact":
			query = query.Where(fmt.Sprintf("%s = ?", c.Field), c.Value)
		case ">=":
			query = query.Where(fmt.Sprintf("%s >= ?", c.Field), c.Value)
		case "<=":
			query = query.Where(fmt.Sprintf("%s <= ?", c.Field), c.Value)
		default:
			// Default to LIKE for substring matching
			query = query.Where(fmt.Sprintf("%s LIKE ?", c.Field), fmt.Sprintf("%%%v%%", c.Value))
		}
```
The Filter.Field value is directly interpolated into SQL WHERE clauses using fmt.Sprintf without validation. This creates a SQL injection vulnerability. An attacker could pass arbitrary column names or SQL fragments through the Field parameter.
For example, with Field set to "name; DROP TABLE items--", the generated SQL would be malicious. While GORM provides some protection, directly interpolating user-controlled field names is dangerous.
Consider using a whitelist of allowed field names or using GORM's column name validation APIs.
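A whitelist along the lines the comment suggests might look like this sketch (the field list and helper name are illustrative, and the real code builds GORM queries rather than returning clause strings):

```go
package main

import "fmt"

// allowedFilterFields whitelists the column names usable in filter queries,
// so user-controlled input never reaches the SQL text itself.
var allowedFilterFields = map[string]bool{
	"name":  true,
	"price": true,
}

// filterClause builds the WHERE fragment for a filter, rejecting any field
// that is not whitelisted. Values are still bound via "?" placeholders.
func filterClause(field, op string) (string, error) {
	if !allowedFilterFields[field] {
		return "", fmt.Errorf("invalid filter field: %q", field)
	}
	switch op {
	case "exact":
		return fmt.Sprintf("%s = ?", field), nil
	case ">=", "<=":
		return fmt.Sprintf("%s %s ?", field, op), nil
	default:
		// Fall back to LIKE for substring matching.
		return fmt.Sprintf("%s LIKE ?", field), nil
	}
}
```

Because the map lookup runs before any string formatting, a hostile Field value such as `name; DROP TABLE items--` is rejected outright instead of being interpolated.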
- Azure Table Storage: add optimistic locking via Version field comparison and ETag-based conditional updates (412 Precondition Failed detection)
- Azure Table: store/read Version in Create and FindByID for consistency
- CORS tests: add coverage for origin whitelist, disallowed origins, multiple comma-separated origins, and empty-string fallback
- RequestID middleware tests: verify generation, reuse, response header, context storage, and uniqueness
- MaxBodySize middleware tests: verify within-limit and over-limit requests
- Remove redundant cancel() call in main.go (defer handles it)
- Filter.Field SQL injection prevention: add allowedFilterFields whitelist that rejects unknown field names before interpolation
backend/internal/models/models.go
Outdated
```go
			query = query.Where(fmt.Sprintf("%s <= ?", c.Field), c.Value)
		default:
			// Default to LIKE for substring matching
			query = query.Where(fmt.Sprintf("%s LIKE ?", c.Field), fmt.Sprintf("%%%v%%", c.Value))
```
The fmt.Sprintf("%%%v%%", c.Value) expression only builds the bound value, so the value itself still goes through the ? placeholder; however, the doubled % escapes make the intent easy to misread. Prefer plain concatenation, i.e. query.Where(fmt.Sprintf("%s LIKE ?", c.Field), "%"+fmt.Sprint(c.Value)+"%"), so it is obvious that GORM receives a single parameterized value with literal percent wildcards.
Suggested change:
```diff
-			query = query.Where(fmt.Sprintf("%s LIKE ?", c.Field), fmt.Sprintf("%%%v%%", c.Value))
+			query = query.Where(fmt.Sprintf("%s LIKE ?", c.Field), "%"+fmt.Sprint(c.Value)+"%")
```
backend/pkg/utils/utils.go
Outdated
```go
// CheckError returns the error for the caller to handle.
// Deprecated: Use explicit error handling instead of this wrapper.
func CheckError(err error) error {
	return err
}
```
The CheckError function is deprecated but still returns the error instead of handling it. Since it's marked as deprecated, consider removing it entirely in this refactoring rather than keeping a pass-through wrapper. If callers need explicit error handling, they should handle errors directly without this indirection.
```go
		"RowKey":  item.Name, // Using Name as the unique key for testing
		"Name":    item.Name,
		"Price":   item.Price,
		"Version": item.Version,
```
The Azure Table Storage Create method stores item.Version directly without initializing it (line 107). This is inconsistent with the MySQL implementation which uses GORM's default:1 tag and the MockRepository which explicitly sets item.Version = 1 on line 43 of mock_repository.go. If a client creates an item without setting Version, it will default to 0, breaking optimistic locking. Add if item.Version == 0 { item.Version = 1 } after line 96 to match the other implementations.
- LIKE value: use fmt.Sprint for proper parameterized escaping
- Remove deprecated CheckError function and its test (no callers)
- Azure Table Create: initialize Version to 1 when not set, consistent with GORM default and MockRepository
```diff
 func NewRateLimiter(limit int, window time.Duration) *RateLimiter {
-	return &RateLimiter{
+	rl := &RateLimiter{
 		requests: make(map[string][]time.Time),
 		limit:    limit,
 		window:   window,
+		done:     make(chan struct{}),
 	}
+	go rl.cleanup()
+	return rl
 }
```
The rate limiter now starts a background cleanup goroutine but there's no code in the application that calls the Stop method to clean it up. This will cause a goroutine leak. The rate limiter should be stopped during graceful shutdown in main.go, or the cleanup goroutine should be managed differently (e.g., tied to the server's lifecycle).
```diff
 	Name    string  `gorm:"size:255;not null" json:"name"`
 	Price   float64 `json:"price"`
-	Version uint    `gorm:"not null;default:0" json:"version"` // For optimistic locking
+	Version uint    `gorm:"not null;default:1" json:"version"` // For optimistic locking (1 = initial; 0 = not provided)
```
The Item model's Version field default has been changed from 0 to 1 (models.go line 28), but there is no database migration to update existing records or alter the table schema. Existing items in the database will still have Version=0, which could cause optimistic locking issues. A migration should be added to either update existing rows to Version=1 or to handle the Version=0 case specially in the Update logic.
Suggested change:
```diff
-	Version uint `gorm:"not null;default:1" json:"version"` // For optimistic locking (1 = initial; 0 = not provided)
+	Version uint `gorm:"not null;default:0" json:"version"` // For optimistic locking
```
```diff
 	// Initialize repository using the factory (selects MySQL or Azure Table based on config)
 	repo, err := database.NewRepository(cfg)
 	if err != nil {
-		log.Fatalf("Failed to initialize database: %v", err)
+		slog.Error("Failed to initialize repository", "error", err)
+		os.Exit(1)
 	}
```
The database initialization code has been removed (lines 45-50 in old code), but the migration logic that depends on it is not visible in this diff. If migrations are still needed, they should be run after the repository is created. Verify that AutoMigrate() is still being called somewhere, or document that migrations are now optional when using Azure Table Storage.
```go
	// Verify item exists before attempting delete
	var item models.Item
	if err := h.repository.FindByID(c.Request.Context(), uint(id), &item); err != nil {
		status, message := handleDBError(err)
		c.JSON(status, gin.H{"error": message})
		return
	}

	if err := h.repository.Delete(c.Request.Context(), &item); err != nil {
```
The delete handler now performs a FindByID before deleting (lines 236-242). While this ensures the item exists, it introduces a race condition: another request could delete the item between the FindByID and Delete calls, causing the Delete to fail silently (since soft deletes won't error if the item is already soft-deleted). Consider handling this case explicitly or documenting this behavior.
Pull request overview
Copilot reviewed 33 out of 33 changed files in this pull request and generated 6 comments.
Comments suppressed due to low confidence (1)
backend/internal/database/azure/table.go:179
- Type assertion without checking the ok value can panic if entityData["CreatedAt"] or entityData["UpdatedAt"] is nil or not a string. Consider using the comma-ok idiom and returning an appropriate error if these required fields are missing.
```go
createdAt, err := time.Parse(time.RFC3339, entityData["CreatedAt"].(string))
if err != nil {
	return dberrors.NewDatabaseError("parse_time", err)
}
item.CreatedAt = createdAt
updatedAt, err := time.Parse(time.RFC3339, entityData["UpdatedAt"].(string))
if err != nil {
	return dberrors.NewDatabaseError("parse_time", err)
}
item.UpdatedAt = updatedAt
```
```diff
 		c.Writer.Header().Set("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS")
-		c.Writer.Header().Set("Access-Control-Allow-Headers", "Content-Type, Content-Length, Accept-Encoding, Authorization")
+		c.Writer.Header().Set("Access-Control-Allow-Headers", "Content-Type, Content-Length, Accept-Encoding, Authorization, X-Request-ID")
```
When allowedOrigins contains a whitelist but the request Origin doesn't match any entry, no Access-Control-Allow-Origin header is set, but the Access-Control-Allow-Methods and Access-Control-Allow-Headers headers are still sent. This could confuse browsers. Consider only setting these headers when the origin is allowed, or explicitly return early for disallowed origins during OPTIONS preflight requests.
```go
	if !ok {
		return dberrors.NewDatabaseError("type_assertion", fmt.Errorf("entity must be *models.Item"))
	}
```
The Update method doesn't validate that item.ID is non-zero before using it. If an Item with ID=0 is passed, the GetEntity call will look for row key "0" which may not be the intended behavior. Consider adding validation to ensure item.ID is non-zero before proceeding with the update operation.
Suggested change:
```diff
+	if item.ID == 0 {
+		return dberrors.NewDatabaseError("update", dberrors.ErrValidation)
+	}
```
```go
	// Version check for optimistic locking (if version was provided in request)
	if updateItem.Version > 0 && updateItem.Version != currentItem.Version {
		c.JSON(http.StatusConflict, gin.H{"error": "Item has been modified by another request"})
		return
	}
```
The UpdateItem handler performs a double version check: first at line 198 comparing client-provided version with database version, then the repository does another check at line 207 (in models.go:121). The first check (line 198) is redundant since the repository will catch version mismatches anyway. However, this could mask repository-level concurrency issues if updateItem.Version is 0 (not provided), as the condition would be false and the repository check would be relied upon exclusively. Consider removing the handler-level check to simplify the logic and rely entirely on the repository's optimistic locking.
Suggested change:
```diff
-	// Version check for optimistic locking (if version was provided in request)
-	if updateItem.Version > 0 && updateItem.Version != currentItem.Version {
-		c.JSON(http.StatusConflict, gin.H{"error": "Item has been modified by another request"})
-		return
-	}
```
backend/internal/models/models.go
Outdated
```go
// allowedFilterFields is a whitelist of column names that may be used in filter queries.
// This prevents SQL injection via the Filter.Field parameter.
var allowedFilterFields = map[string]bool{
	"name":  true,
	"price": true,
}
```
The allowedFilterFields whitelist is defined at package level, meaning it's shared across all GenericRepository instances. If additional models are added that need different filter fields (e.g., User model with email, username filters), this hardcoded list won't support them. Consider making the whitelist configurable per repository instance or dynamically determining allowed fields from the model's struct tags.
```go
	item.Name = entityData["Name"].(string)
	item.Price = entityData["Price"].(float64)
```
Type assertion without checking the ok value can panic if entityData["Name"] is nil or not a string. The same applies to entityData["Price"] at line 162. Consider using the comma-ok idiom for these type assertions and returning an error if the expected fields are missing or have unexpected types.
```go
	if !ok {
		return dberrors.NewDatabaseError("type_assertion", fmt.Errorf("entity must be *models.Item"))
	}
```
The Delete method doesn't validate that item.ID is non-zero before attempting deletion. Consider adding validation to ensure item.ID is non-zero to prevent accidentally operating on row key "0".
Suggested change:
```diff
+	if item.ID == 0 {
+		return dberrors.NewDatabaseError("delete", dberrors.ErrValidation)
+	}
```
- Wire rate limiter into API routes and stop during graceful shutdown
- Add migration to update Version=0→1 for existing items (dialect-aware)
- Restore AutoMigrate() call in MySQL repository factory
- Remove FindByID pre-check from DeleteItem to avoid race condition
- Use comma-ok idiom for all Azure Table type assertions to prevent panics
- Only set CORS Allow-Methods/Allow-Headers when origin is allowed; early return for disallowed OPTIONS preflight
- Add ID != 0 validation in Azure Table Update and Delete methods
- Remove redundant handler-level version check; pass client version to repository for optimistic locking
- Make allowedFilterFields configurable per GenericRepository instance via NewRepositoryWithFilterFields constructor
```go
	_, err := c.Request.Body.Read(body)
	if err != nil && err.Error() != "EOF" {
		c.JSON(http.StatusRequestEntityTooLarge, gin.H{"error": "body too large"})
```
The test checks err.Error() != "EOF", which is brittle because EOF comparisons should be done with errors.Is(err, io.EOF) (and io.EOF is not guaranteed to stringify exactly as "EOF" in all cases). Switching to errors.Is will make the test more robust and idiomatic.
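A small illustration of why errors.Is is the safer comparison: a wrapped io.EOF no longer stringifies as exactly "EOF", but errors.Is still matches it (the helper names here are mine):

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"strings"
)

// isEOFAfterDrain reads a reader to exhaustion and reports whether the
// terminating error is io.EOF, detected robustly via errors.Is.
func isEOFAfterDrain(r io.Reader) bool {
	buf := make([]byte, 8)
	for {
		if _, err := r.Read(buf); err != nil {
			return errors.Is(err, io.EOF)
		}
	}
}

// drainExample runs isEOFAfterDrain over a short in-memory reader.
func drainExample() bool {
	return isEOFAfterDrain(strings.NewReader("abc"))
}

// demoWrapped shows why string comparison is brittle: once io.EOF is
// wrapped, err.Error() is no longer exactly "EOF", yet errors.Is still
// recognizes it through the %w chain.
func demoWrapped() (stringMatch, errorsIsMatch bool) {
	err := fmt.Errorf("read body: %w", io.EOF)
	return err.Error() == "EOF", errors.Is(err, io.EOF)
}
```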
```go
		n, err := rand.Int(rand.Reader, big.NewInt(int64(len(charset))))
		if err != nil {
			return "", fmt.Errorf("failed to generate random byte at index %d: %w", i, err)
		}
		b[i] = charset[n.Int64()]
	}
```
charset[n.Int64()] indexes the string with an int64. Go does accept any integer type as an index, so this compiles, but converting the random index to int (after bounds are ensured) before indexing into charset is clearer and keeps the index type consistent with len(charset).
```go
// HealthCheck is a function that checks a dependency's health.
// It receives a context for cancellation and timeout support.
type HealthCheck func(ctx context.Context) error
```
Changing HealthCheck to func(ctx context.Context) error means any remaining AddCheck call sites using func() error (verified in backend/api/azure_test.go) will not compile. Update those call sites to accept a context and pass it through, and update any CheckReadiness() calls to CheckReadiness(ctx).
```go
	rowKey, _ := entityData["RowKey"].(string)
	id, _ := strconv.ParseUint(rowKey, 10, 32)
	createdAtStr, _ := entityData["CreatedAt"].(string)
	createdAt, _ := time.Parse(time.RFC3339, createdAtStr)
	updatedAtStr, _ := entityData["UpdatedAt"].(string)
	updatedAt, _ := time.Parse(time.RFC3339, updatedAtStr)
	name, _ := entityData["Name"].(string)
	price, _ := entityData["Price"].(float64)
```
In List, several type assertions / parses ignore errors (e.g. RowKey → ParseUint, CreatedAt → time.Parse). If the Azure entity has unexpected types or malformed values, this will silently produce zero-values (ID=0 / timestamps=zero) instead of failing fast, which can corrupt API responses. Consider validating each field and returning a descriptive database error when parsing fails.
Suggested change:
```diff
-	rowKey, _ := entityData["RowKey"].(string)
-	id, _ := strconv.ParseUint(rowKey, 10, 32)
-	createdAtStr, _ := entityData["CreatedAt"].(string)
-	createdAt, _ := time.Parse(time.RFC3339, createdAtStr)
-	updatedAtStr, _ := entityData["UpdatedAt"].(string)
-	updatedAt, _ := time.Parse(time.RFC3339, updatedAtStr)
-	name, _ := entityData["Name"].(string)
-	price, _ := entityData["Price"].(float64)
+	rowKey, ok := entityData["RowKey"].(string)
+	if !ok || rowKey == "" {
+		return dberrors.NewDatabaseError("list", fmt.Errorf("invalid RowKey value: %v", entityData["RowKey"]))
+	}
+	id, err := strconv.ParseUint(rowKey, 10, 32)
+	if err != nil {
+		return dberrors.NewDatabaseError("list", fmt.Errorf("failed to parse RowKey %q as uint: %w", rowKey, err))
+	}
+	createdAtStr, ok := entityData["CreatedAt"].(string)
+	if !ok || createdAtStr == "" {
+		return dberrors.NewDatabaseError("list", fmt.Errorf("invalid CreatedAt value: %v", entityData["CreatedAt"]))
+	}
+	createdAt, err := time.Parse(time.RFC3339, createdAtStr)
+	if err != nil {
+		return dberrors.NewDatabaseError("list", fmt.Errorf("failed to parse CreatedAt %q: %w", createdAtStr, err))
+	}
+	updatedAtStr, ok := entityData["UpdatedAt"].(string)
+	if !ok || updatedAtStr == "" {
+		return dberrors.NewDatabaseError("list", fmt.Errorf("invalid UpdatedAt value: %v", entityData["UpdatedAt"]))
+	}
+	updatedAt, err := time.Parse(time.RFC3339, updatedAtStr)
+	if err != nil {
+		return dberrors.NewDatabaseError("list", fmt.Errorf("failed to parse UpdatedAt %q: %w", updatedAtStr, err))
+	}
+	name, ok := entityData["Name"].(string)
+	if !ok {
+		return dberrors.NewDatabaseError("list", fmt.Errorf("invalid Name value: %v", entityData["Name"]))
+	}
+	price, ok := entityData["Price"].(float64)
+	if !ok {
+		return dberrors.NewDatabaseError("list", fmt.Errorf("invalid Price value: %v", entityData["Price"]))
+	}
```
```go
	rowKey, _ := entityData["RowKey"].(string)
	id, _ := strconv.ParseUint(rowKey, 10, 32)
	createdAtStr, _ := entityData["CreatedAt"].(string)
	createdAt, _ := time.Parse(time.RFC3339, createdAtStr)
	updatedAtStr, _ := entityData["UpdatedAt"].(string)
	updatedAt, _ := time.Parse(time.RFC3339, updatedAtStr)
	name, _ := entityData["Name"].(string)
	price, _ := entityData["Price"].(float64)
```
In List, updatedAt, name, and price are also read via unchecked type assertions (using the blank identifier for ok). This can silently yield empty name/0 price or a zero timestamp when the stored entity data is malformed. Prefer checking ok/parse errors and returning an error so callers don't get incorrect data.
| rowKey, _ := entityData["RowKey"].(string) | |
| id, _ := strconv.ParseUint(rowKey, 10, 32) | |
| createdAtStr, _ := entityData["CreatedAt"].(string) | |
| createdAt, _ := time.Parse(time.RFC3339, createdAtStr) | |
| updatedAtStr, _ := entityData["UpdatedAt"].(string) | |
| updatedAt, _ := time.Parse(time.RFC3339, updatedAtStr) | |
| name, _ := entityData["Name"].(string) | |
| price, _ := entityData["Price"].(float64) | |
| rowKey, ok := entityData["RowKey"].(string) | |
| if !ok || rowKey == "" { | |
| return dberrors.NewDatabaseError("list", fmt.Errorf("entity missing or invalid RowKey")) | |
| } | |
| id, err := strconv.ParseUint(rowKey, 10, 32) | |
| if err != nil { | |
| return dberrors.NewDatabaseError("list", fmt.Errorf("invalid RowKey %q: %w", rowKey, err)) | |
| } | |
| createdAtStr, ok := entityData["CreatedAt"].(string) | |
| if !ok || createdAtStr == "" { | |
| return dberrors.NewDatabaseError("list", fmt.Errorf("entity missing or invalid CreatedAt")) | |
| } | |
| createdAt, err := time.Parse(time.RFC3339, createdAtStr) | |
| if err != nil { | |
| return dberrors.NewDatabaseError("list", fmt.Errorf("invalid CreatedAt %q: %w", createdAtStr, err)) | |
| } | |
| updatedAtStr, ok := entityData["UpdatedAt"].(string) | |
| if !ok || updatedAtStr == "" { | |
| return dberrors.NewDatabaseError("list", fmt.Errorf("entity missing or invalid UpdatedAt")) | |
| } | |
| updatedAt, err := time.Parse(time.RFC3339, updatedAtStr) | |
| if err != nil { | |
| return dberrors.NewDatabaseError("list", fmt.Errorf("invalid UpdatedAt %q: %w", updatedAtStr, err)) | |
| } | |
| name, ok := entityData["Name"].(string) | |
| if !ok { | |
| return dberrors.NewDatabaseError("list", fmt.Errorf("entity missing or invalid Name")) | |
| } | |
| price, ok := entityData["Price"].(float64) | |
| if !ok { | |
| return dberrors.NewDatabaseError("list", fmt.Errorf("entity missing or invalid Price")) | |
| } |
- Use errors.Is(err, io.EOF) instead of string comparison in middleware test - Convert charset index to int for proper byte slice indexing in GenerateRandomString - Fix azure_test.go to use context-aware HealthCheck signature and CheckReadiness(ctx) - Validate all type assertions and parse results in Azure Table List method
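The crypto/rand switch and the charset-index conversion mentioned above can be sketched roughly as follows (the function name and charset are illustrative, not necessarily the repo's exact code):

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

const charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

// generateRandomString draws each character with crypto/rand.Int, which
// avoids both the predictability of math/rand and modulo bias.
func generateRandomString(n int) (string, error) {
	b := make([]byte, n)
	max := big.NewInt(int64(len(charset)))
	for i := range b {
		idx, err := rand.Int(rand.Reader, max)
		if err != nil {
			return "", err
		}
		// Convert the *big.Int index to int before indexing the charset.
		b[i] = charset[int(idx.Int64())]
	}
	return string(b), nil
}

func main() {
	s, err := generateRandomString(16)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(s))
}
```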
| entityJSON := map[string]interface{}{ | ||
| "PartitionKey": "items", | ||
| "RowKey": item.Name, // Using Name as the unique key for testing | ||
| "Name": item.Name, | ||
| "Price": item.Price, | ||
| "Version": item.Version, | ||
| "CreatedAt": now.Format(time.RFC3339), | ||
| "UpdatedAt": now.Format(time.RFC3339), |
In Azure Table Storage, Create() stores RowKey as item.Name, but FindByID/Update/Delete/List all assume RowKey is the numeric item.ID string. As written, a created item won’t be retrievable/updatable by ID and the repository doesn’t assign item.ID on create. Make RowKey consistent across all operations (e.g., generate/assign a numeric ID and use it as RowKey) or change the interface/operations to key by Name, but it needs to be coherent end-to-end.
- Generate numeric ID via atomic counter (seeded from UnixMilli) in Create() - Use strconv.FormatUint(item.ID) as RowKey instead of item.Name - Assign generated ID back to item.ID so callers can use FindByID - Remove duplicate comment on TableRepository struct
- GenericRepository.Delete now checks RowsAffected and returns ErrNotFound when no rows are affected (delete of non-existent entity) - Move gin.SetMode(gin.TestMode) to TestMain in middleware tests to avoid mutating global state from parallel test goroutines - Note: Azure TableRepository.Create already populates item.CreatedAt and item.UpdatedAt before returning (lines 153-154 of table.go)
| // Create Azure Table entity | ||
| now := time.Now().UTC() | ||
| entityJson := map[string]interface{}{ | ||
| "PartitionKey": "items", | ||
| "RowKey": strconv.FormatUint(uint64(item.ID), 10), | ||
| "Name": item.Name, | ||
| "Price": item.Price, | ||
| "Version": item.Version, | ||
| "CreatedAt": item.CreatedAt.Format(time.RFC3339), | ||
| "UpdatedAt": now.Format(time.RFC3339), | ||
| } |
Azure Table Update() always writes CreatedAt from item.CreatedAt. If callers update an entity without first doing FindByID (or otherwise populating CreatedAt), this will overwrite the stored CreatedAt with the zero time (year 0001) due to UpdateModeMerge. Consider preserving the existing CreatedAt from the fetched entity (parse it from existing.Value) or omit CreatedAt from the update payload so it can’t be accidentally clobbered.
backend/pkg/utils/utils_test.go
Outdated
| assert.True(t, | ||
| (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9'), | ||
| "unexpected character: %c", c) |
assert.True doesn’t treat the message as a format string; passing "unexpected character: %c", c will not interpolate %c. Use assert.Truef(...) / require.Truef(...) (or build the message with fmt.Sprintf) so the failing output shows the actual rune value.
| var item models.Item | ||
| if err := c.ShouldBindJSON(&item); err != nil { | ||
| c.JSON(http.StatusBadRequest, gin.H{"error": "Invalid request format"}) | ||
| return | ||
| } | ||
|
|
||
| if item.Name == "" { | ||
| c.JSON(http.StatusBadRequest, gin.H{"error": "Name is required"}) | ||
| return | ||
| } | ||
|
|
||
| if err := h.repository.Create(&item); err != nil { | ||
| if err := h.repository.Create(c.Request.Context(), &item); err != nil { |
CreateItem binds the full models.Item from client JSON, so a client can supply an arbitrary version value on create. Since Version is intended to be server-managed for optimistic locking (default 1), consider explicitly resetting item.Version after binding (e.g., force initial version) or excluding version from the create request schema to prevent inconsistent state.
| if v, ok := entityData["Version"]; ok { | ||
| if vf, ok := v.(float64); ok { |
Azure Table FindByID only sets item.Version when the stored JSON contains a numeric Version; when absent/invalid, it leaves Version at 0. Elsewhere (Create/List, GORM default) the implied initial version is 1, and handler logic treats 0 as “not provided”. Consider defaulting item.Version to 1 when the Version field is missing or not parseable to keep optimistic-lock semantics consistent.
| if v, ok := entityData["Version"]; ok { | |
| if vf, ok := v.(float64); ok { | |
| // Default to version 1 when the Version field is missing or invalid to | |
| // keep optimistic-lock semantics consistent with the GORM repository. | |
| item.Version = 1 | |
| if v, ok := entityData["Version"]; ok { | |
| if vf, ok := v.(float64); ok && vf > 0 { |
- Azure Table Update: preserve stored CreatedAt when item.CreatedAt is zero to prevent accidental clobber if caller skips FindByID - Azure Table FindByID: default Version to 1 when field is missing or invalid, consistent with GORM default and Create/List behavior - CreateItem handler: reset item.Version to 0 after binding so the repository's Create sets the server-managed initial version (1) - utils_test: use assert.Truef so format verb %c is interpolated in failure messages
|
|
||
| if err := h.repository.Create(&item); err != nil { | ||
| // Version is server-managed; force initial value regardless of client input. | ||
| item.Version = 0 |
The handler sets item.Version = 0 before calling Create(), but this contradicts the refactoring's goal of using version 1 as the initial value. According to the PR description and the GORM model definition (line 28 of models.go), the default should be 1.
Setting version to 0 here bypasses the GORM default and will cause the repository's Create method to insert 0 instead of 1, making the first update check for WHERE version = 0 instead of WHERE version = 1. This breaks optimistic locking for newly created items.
Remove this line and let GORM apply the column default (1), or explicitly set it to 1 if you want to be defensive about client input.
| item.Version = 0 | |
| item.Version = 1 |
| if createdAt.IsZero() { | ||
| var existingData2 map[string]interface{} | ||
| if err := json.Unmarshal(existing.Value, &existingData2); err == nil { | ||
| if caStr, ok := existingData2["CreatedAt"].(string); ok { | ||
| if parsed, parseErr := time.Parse(time.RFC3339, caStr); parseErr == nil { | ||
| createdAt = parsed | ||
| } | ||
| } | ||
| } | ||
| } |
The existing entity's JSON is unmarshaled twice — once at line 258 for version checking, and again at line 290 for CreatedAt extraction. This is wasteful since existingData from line 257 is already available and contains the CreatedAt field.
Reuse the existingData variable from line 257 instead of creating existingData2. This eliminates the second unmarshal and simplifies the code.
- CreateItem: set item.Version = 1 (not 0) so GORM default is honored and optimistic locking works correctly for newly created items - Azure Table Update: parse existing entity JSON once and reuse for both version checking and CreatedAt preservation, eliminating redundant json.Unmarshal
| count := 0 | ||
| for _, t := range rl.requests[ip] { | ||
| if t.After(windowStart) { | ||
| valid = append(valid, t) | ||
| count++ | ||
| } | ||
| } | ||
| rl.requests[ip] = valid | ||
|
|
||
| // For tests - use a slightly lower limit to ensure some requests get rate limited | ||
| // This helps identify rate limiting in tests that send requests concurrently | ||
| effectiveLimit := rl.limit | ||
| if len(rl.requests[ip]) >= 30 && now.Nanosecond()%4 == 0 { | ||
| // Artificially lower the limit sometimes to ensure rate limiting occurs in tests | ||
| effectiveLimit = len(rl.requests[ip]) | ||
| } | ||
|
|
||
| // Check if limit exceeded - this must be evaluated AFTER cleaning up old requests | ||
| // and BEFORE adding the current request to ensure accurate rate limiting | ||
| if len(rl.requests[ip]) >= effectiveLimit { | ||
| if count >= rl.limit { | ||
| rl.Unlock() | ||
| c.JSON(http.StatusTooManyRequests, gin.H{"error": "rate limit exceeded"}) | ||
| c.Abort() | ||
| return | ||
| } | ||
|
|
||
| // Add current request | ||
| rl.requests[ip] = append(rl.requests[ip], now) |
The rate limiter has a memory leak: expired timestamps are not removed from the slice during rate limiting. While the cleanup goroutine runs periodically, timestamps within the window keep accumulating. Consider filtering expired timestamps during the count loop instead of just counting them, or ensure the slice doesn't grow unbounded between cleanup cycles.
| if storedVersion, ok := existingData["Version"]; ok { | ||
| var sv uint | ||
| switch v := storedVersion.(type) { | ||
| case float64: | ||
| sv = uint(v) | ||
| case json.Number: | ||
| n, err := v.Int64() | ||
| if err != nil || n < 0 { | ||
| return dberrors.NewDatabaseError("update", fmt.Errorf("invalid stored version value: %v", v)) | ||
| } | ||
| sv = uint(n) | ||
| } | ||
| if currentVersion != sv { | ||
| return dberrors.NewDatabaseError("update", errors.New("version mismatch")) | ||
| } | ||
| } |
The version comparison logic has a gap: when storedVersion exists in existingData but the type assertion fails (it's neither float64 nor json.Number), the variable sv remains 0 and the comparison currentVersion != sv may incorrectly pass or fail. Consider explicitly handling the case where storedVersion has an unexpected type by returning an error, or defaulting to a known safe value with clear documentation.
backend/internal/config/config.go
Outdated
| if err := c.Database.Validate(); err != nil { | ||
| return fmt.Errorf("database config: %w", err) | ||
| } |
When UseAzureTable is true, the MySQL Database config is still validated (line 113-115) even though it won't be used. This could cause unnecessary startup failures when Azure Table is selected but MySQL credentials are incomplete. Consider adding a conditional check: only validate Database config when !c.AzureTable.UseAzureTable.
backend/api/main.go
Outdated
|
|
||
| // Give outstanding requests 5 seconds to complete | ||
| // Give outstanding requests time to complete | ||
| ctx, cancel := context.WithTimeout(context.Background(), gracefulShutdownTimeout) |
The graceful shutdown timeout is hardcoded to 5 seconds (line 26, 90) but the config has a Server.ShutdownTimeout field. Use cfg.Server.ShutdownTimeout instead of the hardcoded gracefulShutdownTimeout constant to make the shutdown timeout configurable via environment variables, consistent with other server timeouts.
| func NewRepository(db *gorm.DB) Repository { | ||
| return &GenericRepository{db: db} | ||
| return &GenericRepository{ | ||
| db: db, | ||
| allowedFilterFields: map[string]bool{ | ||
| "name": true, | ||
| "price": true, | ||
| }, | ||
| } | ||
| } |
The filter field whitelist is hardcoded to "name" and "price" in NewRepository, which only works for Item entities. If this repository is used for User or other entity types, filtering will fail with "invalid filter field" errors. Consider making the whitelist entity-aware, or document that NewRepository is specifically for Items and other entities should use NewRepositoryWithFilterFields.
- Rate limiter: prune expired timestamps during count loop to prevent unbounded slice growth between periodic cleanup cycles - Azure Table Update: add default case to version type switch to return an error for unexpected types instead of silently using zero - Config: skip MySQL Database validation when UseAzureTable is true to avoid unnecessary startup failures with incomplete MySQL credentials - main.go: use cfg.Server.ShutdownTimeout instead of hardcoded 5s constant, falling back to default if not configured - NewRepository: document that it is Item-specific; other entity types should use NewRepositoryWithFilterFields
Pull request overview
Copilot reviewed 34 out of 34 changed files in this pull request and generated 5 comments.
Comments suppressed due to low confidence (1)
backend/internal/health/health.go:93
CheckReadiness holds RLock() while executing dependency checks. If any check is slow (or blocks on I/O), this will block AddCheck/SetReady and can cause contention during startup/shutdown. Consider copying isReady and the dependencies map under lock, then releasing the lock before calling the check functions.
func (h *HealthChecker) CheckReadiness(ctx context.Context) HealthStatus {
h.mu.RLock()
defer h.mu.RUnlock()
if !h.isReady {
return HealthStatus{
Status: "DOWN",
Checks: map[string]CheckStatus{
"ready": {Status: "DOWN", Message: "Service is not ready"},
},
}
}
status := HealthStatus{
Status: "UP",
Checks: make(map[string]CheckStatus),
}
for name, check := range h.dependencies {
if err := check(ctx); err != nil {
status.Status = "DOWN"
status.Checks[name] = CheckStatus{
Status: "DOWN",
Message: err.Error(),
}
} else {
status.Checks[name] = CheckStatus{
Status: "UP",
}
}
}
| if err := h.repository.Update(c.Request.Context(), ¤tItem); err != nil { | ||
| if strings.Contains(err.Error(), "version mismatch") { | ||
| c.JSON(http.StatusConflict, gin.H{"error": "Item has been modified by another request"}) | ||
| return |
Version-mismatch handling relies on strings.Contains(err.Error(), "version mismatch"). This is brittle (message changes/better wrapping can break conflict detection) and makes it harder to handle conflicts consistently across repositories. Consider introducing a sentinel error (e.g. dberrors.ErrVersionMismatch) and using errors.Is/errors.As so handlers can detect conflicts without string matching.
| func (r *GenericRepository) Create(ctx context.Context, entity interface{}) error { | ||
| if v, ok := entity.(Validator); ok { | ||
| if err := v.Validate(); err != nil { | ||
| return dberrors.NewDatabaseError("validate", err) | ||
| } | ||
| } |
Repository-level Validate() failures are wrapped as NewDatabaseError("validate", err) where err is the model-specific validation error (e.g. ErrInvalidPrice). handleDBError() only treats errors as 400 when errors.Is(dbErr.Err, ErrValidation), so these validation failures will incorrectly become 500 responses. Wrap validation failures with the ErrValidation sentinel (while preserving the underlying message if desired) so handlers can reliably map them to 400.
| // Optimistic locking for Versionable entities. | ||
| // We increment the version optimistically before Save, then use a | ||
| // WHERE version=old clause. If no rows are affected, it means another | ||
| // transaction modified this entity, so we roll back the in-memory version | ||
| // and return a version-mismatch error. Note: Save() issues an UPDATE | ||
| // for all columns, which is safe here because the handler loaded the | ||
| // entity first (FindByID), then applied changes on top of it. | ||
| if ver, ok := entity.(Versionable); ok { | ||
| currentVersion := ver.GetVersion() | ||
| ver.SetVersion(currentVersion + 1) | ||
| result := r.db.WithContext(ctx).Where("version = ?", currentVersion).Save(entity) | ||
| if result.Error != nil { | ||
| ver.SetVersion(currentVersion) // Roll back version on error | ||
| return r.handleError("update", result.Error) | ||
| } | ||
| if result.RowsAffected == 0 { | ||
| ver.SetVersion(currentVersion) // Roll back version on mismatch | ||
| return dberrors.NewDatabaseError("update", errors.New("version mismatch")) | ||
| } | ||
| return nil | ||
| } |
Update() uses Save(entity) in the optimistic-locking path without validating that the entity has a non-zero primary key. In GORM, Save() will INSERT when the primary key is zero, which would make Update() accidentally create new rows. Consider validating the ID (for Base models) and returning a validation/not-found error when it’s missing, and/or using an explicit Model(...).Where("id = ? AND version = ?", ...).Updates(...) pattern to ensure the UPDATE is always scoped to a single row.
| if item.Version == 0 { | ||
| item.Version = 1 | ||
| } | ||
|
|
TableRepository.Create does not run model validation (models.Validator.Validate()), so invalid data (e.g. empty name / non-positive price) can be persisted when using the Azure Table backend. The MySQL repository validates on Create/Update, so this is an inconsistency and can lead to different behavior across backends. Consider calling Validate() (and wrapping failures as ErrValidation) before writing to Azure Table Storage.
| // Run model validation (consistent with GenericRepository / MySQL backend) | |
| if validator, ok := entity.(models.Validator); ok { | |
| if err := validator.Validate(); err != nil { | |
| // Wrap the validation error so callers can detect dberrors.ErrValidation | |
| return dberrors.NewDatabaseError("validate", fmt.Errorf("%w: %v", dberrors.ErrValidation, err)) | |
| } | |
| } |
| if item.ID == 0 { | ||
| return dberrors.NewDatabaseError("update", dberrors.ErrValidation) | ||
| } | ||
|
|
TableRepository.Update does not run model validation (models.Validator.Validate()), so invalid updates can be written to Azure Table Storage even though the MySQL repository validates on Update. Consider validating the entity before issuing the update and mapping validation failures to ErrValidation so API behavior is consistent across storage backends.
| // Run model validation if supported, to mirror MySQL repository behavior. | |
| if v, ok := entity.(models.Validator); ok { | |
| if err := v.Validate(); err != nil { | |
| return dberrors.NewDatabaseError("update", dberrors.ErrValidation) | |
| } | |
| } |
Pull request overview
Copilot reviewed 34 out of 34 changed files in this pull request and generated 3 comments.
| if storedVersion, ok := existingData["Version"]; ok { | ||
| var sv uint | ||
| switch v := storedVersion.(type) { | ||
| case float64: | ||
| sv = uint(v) | ||
| case json.Number: | ||
| n, err := v.Int64() | ||
| if err != nil || n < 0 { | ||
| return dberrors.NewDatabaseError("update", fmt.Errorf("invalid stored version value: %v", v)) | ||
| } | ||
| sv = uint(n) | ||
| default: | ||
| return dberrors.NewDatabaseError("update", fmt.Errorf("unexpected version type: %T", storedVersion)) | ||
| } | ||
| if currentVersion != sv { | ||
| return dberrors.NewDatabaseError("update", errors.New("version mismatch")) | ||
| } | ||
| } | ||
|
|
If the stored entity lacks a Version field (e.g., existing Azure Table rows created before versioning was introduced), the version comparison is skipped and any client-supplied version will be accepted. That undermines the intended optimistic-locking semantics for legacy rows. Consider treating a missing/invalid stored version as the default (e.g., 1) and enforcing the mismatch check consistently.
| if storedVersion, ok := existingData["Version"]; ok { | |
| var sv uint | |
| switch v := storedVersion.(type) { | |
| case float64: | |
| sv = uint(v) | |
| case json.Number: | |
| n, err := v.Int64() | |
| if err != nil || n < 0 { | |
| return dberrors.NewDatabaseError("update", fmt.Errorf("invalid stored version value: %v", v)) | |
| } | |
| sv = uint(n) | |
| default: | |
| return dberrors.NewDatabaseError("update", fmt.Errorf("unexpected version type: %T", storedVersion)) | |
| } | |
| if currentVersion != sv { | |
| return dberrors.NewDatabaseError("update", errors.New("version mismatch")) | |
| } | |
| } | |
| // Default stored version if missing/invalid (aligns with model default of 0). | |
| storedVersionUint := uint(0) | |
| if storedVersion, ok := existingData["Version"]; ok { | |
| switch v := storedVersion.(type) { | |
| case float64: | |
| if v >= 0 { | |
| storedVersionUint = uint(v) | |
| } | |
| case json.Number: | |
| if n, err := v.Int64(); err == nil && n >= 0 { | |
| storedVersionUint = uint(n) | |
| } | |
| } | |
| } | |
| if currentVersion != storedVersionUint { | |
| return dberrors.NewDatabaseError("update", errors.New("version mismatch")) | |
| } |
| // MaxBodySize limits the size of the request body to prevent memory exhaustion. | ||
| func MaxBodySize(maxBytes int64) gin.HandlerFunc { | ||
| return func(c *gin.Context) { | ||
| c.Request.Body = http.MaxBytesReader(c.Writer, c.Request.Body, maxBytes) | ||
| c.Next() | ||
| } |
MaxBodySize currently only wraps the body with http.MaxBytesReader and does not translate *http.MaxBytesError into a 413 Request Entity Too Large response. As a result, endpoints that bind JSON will typically return 400 "Invalid request format" for oversized payloads. Consider handling this error centrally (e.g., abort with 413 when the limit is hit) so clients get a correct status code.
| @@ -73,7 +79,7 @@ func (h *HealthChecker) CheckReadiness() HealthStatus { | |||
| } | |||
|
|
|||
| for name, check := range h.dependencies { | |||
| if err := check(); err != nil { | |||
| if err := check(ctx); err != nil { | |||
| status.Status = "DOWN" | |||
CheckReadiness holds the RWMutex read lock while executing dependency checks. Since checks can block on I/O (and now take a context), this can unnecessarily block AddCheck/SetReady and increase contention. Consider copying isReady and the dependencies map under lock, releasing the lock, and then running the checks without holding the mutex.
- Azure Table Update: default stored version to 1 for legacy rows missing the Version field, enforcing optimistic locking consistently - MaxBodySize middleware: detect *http.MaxBytesError after handler execution and return 413 Request Entity Too Large instead of 400 - HealthChecker.CheckReadiness: copy isReady and dependencies under RLock, then release before running I/O-bound dependency checks to reduce mutex contention
Pull request overview
Copilot reviewed 34 out of 34 changed files in this pull request and generated 4 comments.
| if v >= 0 { | ||
| storedVersionUint = uint(v) | ||
| } | ||
| case json.Number: | ||
| if n, err := v.Int64(); err == nil && n >= 0 { |
Azure optimistic-lock version parsing treats a stored Version value of 0 as valid (v >= 0), which can make legacy rows (or rows written with the old 0 default) fail version checks when callers use the new default semantics (1 = initial; 0 = not provided). FindByID already ignores Version <= 0 and defaults to 1, so Update should likely mirror that by only accepting Version > 0 (or explicitly mapping 0 → 1) to keep behavior consistent.
| if v >= 0 { | |
| storedVersionUint = uint(v) | |
| } | |
| case json.Number: | |
| if n, err := v.Int64(); err == nil && n >= 0 { | |
| // Only treat strictly positive values as valid stored versions. | |
| // Non-positive (0 or negative) values are treated as "no version" | |
| // and leave the default of 1 in place, matching FindByID semantics. | |
| if v > 0 { | |
| storedVersionUint = uint(v) | |
| } | |
| case json.Number: | |
| // Same semantics for json.Number: ignore non-positive values. | |
| if n, err := v.Int64(); err == nil && n > 0 { |
| allowed := false | ||
| for _, origin := range strings.Split(allowedOrigins, ",") { | ||
| if strings.TrimSpace(origin) == requestOrigin { | ||
| c.Writer.Header().Set("Access-Control-Allow-Origin", requestOrigin) | ||
| c.Writer.Header().Set("Vary", "Origin") | ||
| allowed = true | ||
| break | ||
| } | ||
| } | ||
| if !allowed { | ||
| // Block requests from non-whitelisted origins as defense-in-depth; | ||
| // browsers enforce CORS client-side, but we also enforce server-side. | ||
| c.AbortWithStatus(http.StatusForbidden) | ||
| return | ||
| } |
CORS middleware currently aborts with 403 when allowedOrigins is a whitelist and the request has no Origin header (or an empty one). Non-browser clients and same-origin requests commonly omit Origin, so this can unintentionally block legitimate traffic. Consider treating missing Origin as a non-CORS request: allow it through (don’t set Access-Control-Allow-Origin), and only enforce the whitelist when Origin is present.
| allowed := false | |
| for _, origin := range strings.Split(allowedOrigins, ",") { | |
| if strings.TrimSpace(origin) == requestOrigin { | |
| c.Writer.Header().Set("Access-Control-Allow-Origin", requestOrigin) | |
| c.Writer.Header().Set("Vary", "Origin") | |
| allowed = true | |
| break | |
| } | |
| } | |
| if !allowed { | |
| // Block requests from non-whitelisted origins as defense-in-depth; | |
| // browsers enforce CORS client-side, but we also enforce server-side. | |
| c.AbortWithStatus(http.StatusForbidden) | |
| return | |
| } | |
| if requestOrigin != "" { | |
| allowed := false | |
| for _, origin := range strings.Split(allowedOrigins, ",") { | |
| if strings.TrimSpace(origin) == requestOrigin { | |
| c.Writer.Header().Set("Access-Control-Allow-Origin", requestOrigin) | |
| c.Writer.Header().Set("Vary", "Origin") | |
| allowed = true | |
| break | |
| } | |
| } | |
| if !allowed { | |
| // Block requests from non-whitelisted origins as defense-in-depth; | |
| // browsers enforce CORS client-side, but we also enforce server-side. | |
| c.AbortWithStatus(http.StatusForbidden) | |
| return | |
| } | |
| } | |
| // If there is no Origin header, treat this as a non-CORS request: | |
| // allow it through without setting Access-Control-Allow-Origin. |
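The whitelist check in that suggestion reduces to a small pure function that can be exercised without gin (names are illustrative, not the PR's exact middleware):

```go
package main

import (
	"fmt"
	"strings"
)

// originAllowed reports whether requestOrigin appears in the comma-separated
// whitelist. An empty Origin header is treated as a non-CORS request
// (non-browser client or same-origin) and allowed through.
func originAllowed(allowedOrigins, requestOrigin string) bool {
	if requestOrigin == "" {
		return true
	}
	for _, origin := range strings.Split(allowedOrigins, ",") {
		if strings.TrimSpace(origin) == requestOrigin {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(originAllowed("https://a.example, https://b.example", "https://b.example"))
	fmt.Println(originAllowed("https://a.example", "https://evil.example"))
	fmt.Println(originAllowed("https://a.example", ""))
}
```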
| c.Request.Body = http.MaxBytesReader(c.Writer, c.Request.Body, maxBytes) | ||
| c.Next() | ||
|
|
||
| // If reading the body hit the limit, MaxBytesReader returns | ||
| // *http.MaxBytesError. Detect this and override the status code | ||
| // so clients receive a proper 413 instead of a generic 400. | ||
| if c.Errors.Last() != nil { | ||
| var maxBytesErr *http.MaxBytesError | ||
| if errors.As(c.Errors.Last().Err, &maxBytesErr) { | ||
| c.AbortWithStatusJSON(http.StatusRequestEntityTooLarge, | ||
| gin.H{"error": "request body too large"}) | ||
| return | ||
| } | ||
| } |
MaxBodySize tries to detect oversized bodies via c.Errors.Last() after c.Next(), but Gin won’t automatically populate c.Errors for most body-read failures (e.g., ShouldBindJSON returns an error to the handler, not via c.Error). This means the middleware often won’t translate oversize payloads into a 413 as intended, and responses may remain as handler-generated 400s. Consider handling the *http.MaxBytesError at the point of binding/reading (e.g., in handlers when ShouldBindJSON fails), or wrap the request body/reader in the middleware to capture the error deterministically before the handler writes a response.
| c.Request.Body = http.MaxBytesReader(c.Writer, c.Request.Body, maxBytes) | |
| c.Next() | |
| // If reading the body hit the limit, MaxBytesReader returns | |
| // *http.MaxBytesError. Detect this and override the status code | |
| // so clients receive a proper 413 instead of a generic 400. | |
| if c.Errors.Last() != nil { | |
| var maxBytesErr *http.MaxBytesError | |
| if errors.As(c.Errors.Last().Err, &maxBytesErr) { | |
| c.AbortWithStatusJSON(http.StatusRequestEntityTooLarge, | |
| gin.H{"error": "request body too large"}) | |
| return | |
| } | |
| } | |
| // Wrap the request body with http.MaxBytesReader to enforce the limit. | |
| limitedReader := http.MaxBytesReader(c.Writer, c.Request.Body, maxBytes) | |
| // maxBytesCapture records when a *http.MaxBytesError occurs during Read. | |
| // Methods cannot be declared inside a function, so the wrapper is assumed | |
| // to be defined at package level: | |
| //   type maxBytesCapture struct { | |
| //       rc      io.ReadCloser | |
| //       onError func(error) | |
| //   } | |
| //   func (m *maxBytesCapture) Read(p []byte) (int, error) { | |
| //       n, err := m.rc.Read(p) | |
| //       if err != nil && m.onError != nil { | |
| //           m.onError(err) | |
| //       } | |
| //       return n, err | |
| //   } | |
| //   func (m *maxBytesCapture) Close() error { return m.rc.Close() } | |
| var exceeded bool | |
| // Replace the request body with our capturing wrapper. | |
| c.Request.Body = &maxBytesCapture{ | |
| rc: limitedReader, | |
| onError: func(err error) { | |
| var maxBytesErr *http.MaxBytesError | |
| if errors.As(err, &maxBytesErr) { | |
| exceeded = true | |
| } | |
| }, | |
| } | |
| c.Next() | |
| // If the body size was exceeded and the handler has not yet written a response, | |
| // return a 413 so clients receive a clear signal that the payload was too large. | |
| if exceeded && !c.Writer.Written() { | |
| c.AbortWithStatusJSON(http.StatusRequestEntityTooLarge, | |
| gin.H{"error": "request body too large"}) | |
| return | |
| } |
| func (r *GenericRepository) Update(ctx context.Context, entity interface{}) error { | ||
| if v, ok := entity.(Validator); ok { | ||
| if err := v.Validate(); err != nil { | ||
| return dberrors.NewDatabaseError("validate", err) | ||
| } |
Same issue as in Create: Update wraps Validate() errors as the raw validation error rather than dberrors.ErrValidation, so HTTP layers using sentinel checks can’t reliably map these to 400 responses. Consider wrapping with dberrors.ErrValidation so callers can distinguish user-input validation from internal failures.
- Azure Table Update: only accept stored version > 0 as valid, mapping 0 to the default of 1 to match FindByID semantics - CORS middleware: allow requests with no Origin header (non-browser / same-origin) through without blocking; only enforce whitelist when Origin is present. Added test for no-Origin pass-through. - MaxBodySize middleware: replace c.Errors inspection with a body wrapper (maxBytesBodyCapture) that captures *http.MaxBytesError during Read, then returns 413 if exceeded and not yet written - Validation wrapping: re-wrap validation errors with ErrValidation sentinel via fmt.Errorf so handleDBError can reliably map to 400
Pull request overview
Copilot reviewed 34 out of 34 changed files in this pull request and generated 2 comments.
```go
rowKey, ok := entityData["RowKey"].(string)
if !ok || rowKey == "" {
	return dberrors.NewDatabaseError("list", fmt.Errorf("entity missing or invalid RowKey"))
}
id, err := strconv.ParseUint(rowKey, 10, 32)
if err != nil {
	return dberrors.NewDatabaseError("list", fmt.Errorf("invalid RowKey %q: %w", rowKey, err))
}
```
In Azure Table Storage List(), RowKey is parsed with bitSize=32, but IDs generated by nextID() use 48 bits. This will cause strconv.ParseUint to fail for almost all created items (IDs > 2^32-1), breaking list operations. Parse with a 64-bit size (and ensure the cast to uint remains safe).
```go
	// Optimistic locking for Versionable entities.
	// We increment the version optimistically before Save, then use a
	// WHERE version=old clause. If no rows are affected, another
	// transaction modified this entity; we roll back the in-memory version
	// and return a version-mismatch error. Note: Save() issues an UPDATE
	// for all columns, which is safe here because the handler loaded the
	// entity first (FindByID), then applied changes on top of it.
	if ver, ok := entity.(Versionable); ok {
		currentVersion := ver.GetVersion()
		ver.SetVersion(currentVersion + 1)
		result := r.db.WithContext(ctx).Where("version = ?", currentVersion).Save(entity)
		if result.Error != nil {
			ver.SetVersion(currentVersion) // Roll back version on error
			return r.handleError("update", result.Error)
		}
		if result.RowsAffected == 0 {
			ver.SetVersion(currentVersion) // Roll back version on mismatch
			return dberrors.NewDatabaseError("update", errors.New("version mismatch"))
		}
		return nil
	}
```
The new optimistic-locking behavior in GenericRepository.Update() (version increment + WHERE version=old + RowsAffected==0 => conflict) isn’t covered by unit tests for the real GORM-backed repository. Adding a test that creates an item, performs an update, then attempts a second update with a stale version (and asserts a version-mismatch error + no data clobber) would help prevent regressions.
- Fix strconv.ParseUint bitSize from 32 to 64 in Azure Table List() to
match 48-bit IDs generated by nextID().
- Add TestItemCRUD/Optimistic_Locking_-_version_mismatch covering the
GenericRepository optimistic-lock path: creates an item, updates it,
then attempts a second update with a stale version and asserts
version-mismatch error, version rollback, and no data clobber.
- Change Save() to Model().Where().Select("*").Updates() in the
optimistic-locking path so the WHERE version clause works consistently
across dialects (Save() may emit INSERT…ON CONFLICT on SQLite).
Summary
Comprehensive backend improvements covering 21 issues across code quality, security, and maintainability.
Changes
- `context.Context` added to all `Repository` interface methods and implementations (MySQL, Azure Table Storage)
- `log.Printf` replaced with `log/slog` in middleware (Logger, Recovery)
- Optimistic locking: `Update()` uses a WHERE clause on version to detect conflicts
- `CORS()` middleware now accepts an `allowedOrigins` parameter; configurable via the `CORS_ALLOWED_ORIGINS` env var
- New `RequestID()` (adds an X-Request-ID header) and `MaxBodySize()` (limits request body size) middleware
- Package-level `init()` replaced with a dependency-injected health checker in `SetupRoutes()`
- `GenerateRandomString` uses `crypto/rand` instead of `math/rand`
- Dead code removed: standalone `GetItems`/`CreateItem`, unused `Database` methods
- DB errors consolidated into `pkg/dberrors` with backward-compatible re-exports
- Internal server errors return a generic message instead of leaking details
- MySQL DSN uses `loc=UTC` instead of `loc=Local`