[feature] pkg, env, and boost: Unit tests + Coverage Workflow #66
Conversation
Walkthrough
This change adds comprehensive unit and integration tests across multiple packages, introducing new test files for authentication, CLI helpers, HTTP handlers and middleware, GORM support, markdown parsing, media handling, password and string utilities, JSON parsing, validation, and logging. Additionally, a GitHub Actions workflow for running Go tests with coverage reporting is introduced. It also includes minor fixes such as correcting method name typos and adding a package-level variable for secret file path configuration.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Developer
    participant GitHub Actions
    participant Test Runner
    Developer->>GitHub Actions: Push or PR to main
    GitHub Actions->>Test Runner: Setup Go, checkout code
    GitHub Actions->>Test Runner: Run tests with coverage
    Test Runner-->>GitHub Actions: Test results & coverage report
    GitHub Actions-->>Developer: Report status
```

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~18 minutes
Summary of Changes
Hello @gocanto, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly enhances the project's reliability and maintainability by introducing a comprehensive suite of unit tests across the core pkg module and its various subpackages. The changes focus on validating existing functionality and ensuring the robustness of key components, laying the groundwork for improved code quality and future development.
Highlights
- Comprehensive Unit Testing: Added a broad suite of unit tests covering various subpackages within `pkg`, including `auth`, `cli`, `client`, `gorm`, `http`, `llogs`, `markdown`, `media`, and general utilities.
- Increased Code Coverage: The new tests aim to significantly improve the overall code coverage for the `pkg` module, enhancing reliability and maintainability.
- Validation of Core Functionality: Specific tests validate critical components such as encryption/decryption, token handling, HTTP request/response parsing, database error handling, file operations, and string utility functions.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments or fill out our survey to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
¹ Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.
Code Review
This pull request introduces a comprehensive suite of unit tests and a coverage workflow which will significantly improve the project's quality and maintainability. The tests cover a wide range of packages and functionalities.
My review focuses on enhancing the robustness of these new tests. I've pointed out several places where unhandled errors in test setup could lead to panics or misleading test outcomes. I've also suggested strengthening assertions in some tests to ensure they are verifying behavior correctly. Addressing these points will make the new test suite even more valuable and reliable.
Actionable comments posted: 4
🧹 Nitpick comments (17)
pkg/cli/helpers_test.go (1)
5-8: Consider enhancing test coverage for the ClearScreen function.
While this basic smoke test ensures the function doesn't panic, consider adding more comprehensive tests to verify the actual behavior, such as mocking the command execution or testing error handling scenarios.
You could enhance the test coverage by:
- Testing the error handling path when the command fails
- Verifying that the correct command is executed
- Using dependency injection to make the function more testable (a rough sketch of this approach follows the snippet below)
```diff
+import (
+	"os/exec"
+	"testing"
+)
+
+func TestClearScreen_ErrorHandling(t *testing.T) {
+	// Test that function handles non-existent command gracefully
+	// This would require refactoring ClearScreen to accept a command parameter
+	// or using a test helper that mocks exec.Command
+}
```
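Building on the dependency-injection idea above, here is a rough sketch of how the command execution could be made injectable and tested without touching the real terminal. It is only illustrative: `clearScreenWith` and the `run` parameter are hypothetical names, not part of the existing `pkg/cli` API, and the sketch assumes `ClearScreen` currently shells out to a `clear`-style command via `os/exec`.

```go
package cli

import (
	"errors"
	"os/exec"
	"testing"
)

// clearScreenWith is a hypothetical seam: production code would pass a runner
// that actually executes the terminal-clear command.
func clearScreenWith(run func(name string, args ...string) error) error {
	return run("clear")
}

// realRunner shows what the production runner could look like.
func realRunner(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

func TestClearScreenWith_PropagatesRunnerError(t *testing.T) {
	wantErr := errors.New("command failed")
	err := clearScreenWith(func(name string, args ...string) error {
		if name != "clear" {
			t.Fatalf("expected the clear command, got %q", name)
		}
		return wantErr
	})
	if !errors.Is(err, wantErr) {
		t.Fatalf("expected the runner error to propagate, got %v", err)
	}
}
```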
pkg/validator_test.go (1)

11-30: Comprehensive validation testing approach.
The test effectively covers multiple scenarios: valid data, invalid data, error recording, and JSON serialization of errors. Consider adding more specific test cases for individual validation rules.
```diff
+func TestValidator_SpecificRules(t *testing.T) {
+	v := GetDefaultValidator()
+
+	// Test email validation specifically
+	invalidEmail := &user{Email: "invalid-email", Name: "John", Code: "123"}
+	if ok, _ := v.Passes(invalidEmail); ok {
+		t.Fatalf("expected email validation to fail")
+	}
+
+	// Test length validation specifically
+	invalidCode := &user{Email: "a@b.com", Name: "John", Code: "12"}
+	if ok, _ := v.Passes(invalidCode); ok {
+		t.Fatalf("expected code length validation to fail")
+	}
+}
```

pkg/parser_test.go (1)
30-35: Good error handling test coverage.
The error test effectively verifies that the function properly handles file not found scenarios. Consider adding tests for other error conditions like invalid JSON format.
```diff
+func TestParseJsonFileInvalidJson(t *testing.T) {
+	dir := t.TempDir()
+	file := dir + "/invalid.json"
+	content := `{"name":"john","age":}` // Invalid JSON
+	if err := os.WriteFile(file, []byte(content), 0644); err != nil {
+		t.Fatalf("write: %v", err)
+	}
+
+	_, err := ParseJsonFile[jsonSample](file)
+	if err == nil {
+		t.Fatalf("expected JSON parsing error")
+	}
+}
```

pkg/llogs/files_logs_test.go (1)
27-34: Enhance test coverage for date formatting.
The test only verifies the prefix but doesn't validate that the date formatting works correctly, which is a key part of the `DefaultPath` functionality.

```diff
 func TestDefaultPath(t *testing.T) {
-	e := &env.Environment{Logs: env.LogsEnvironment{Dir: "foo-%s", DateFormat: "2006"}}
-	fl := FilesLogs{env: e}
-	p := fl.DefaultPath()
-	if !strings.HasPrefix(p, "foo-") {
-		t.Fatalf("path prefix")
-	}
+	tests := []struct {
+		name       string
+		dir        string
+		dateFormat string
+		expectFunc func(string) bool
+	}{
+		{
+			name:       "year format",
+			dir:        "foo-%s",
+			dateFormat: "2006",
+			expectFunc: func(p string) bool { return strings.HasPrefix(p, "foo-") && len(p) > 8 },
+		},
+		{
+			name:       "full date format",
+			dir:        "log-%s.txt",
+			dateFormat: "2006-01-02",
+			expectFunc: func(p string) bool { return strings.Contains(p, "-") && strings.HasSuffix(p, ".txt") },
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			e := &env.Environment{Logs: env.LogsEnvironment{Dir: tt.dir, DateFormat: tt.dateFormat}}
+			fl := FilesLogs{env: e}
+			p := fl.DefaultPath()
+			if !tt.expectFunc(p) {
+				t.Fatalf("path %q doesn't match expected pattern for %s", p, tt.name)
+			}
+		})
+	}
 }
```

.github/workflows/tests.yml (1)
20-20: Consider simplifying package selection.
The explicit package listing works but could be simplified. The current approach gives fine-grained control over which packages are tested.
```diff
-      - run: go test ./pkg ./pkg/auth ./pkg/cli ./pkg/gorm ./pkg/http ./pkg/llogs ./pkg/media -coverprofile=coverage.out
+      - run: go test ./pkg/... -coverprofile=coverage.out
```

This would automatically include all packages under `pkg/` and any new packages added in the future.

pkg/password_test.go (1)
5-19: Good basic test coverage, but consider adding more comprehensive test cases.
The test correctly validates the core password functionality including creation, validation, and hash generation. The error handling and test structure follow Go conventions well.
Consider enhancing the test coverage with additional scenarios:
```diff
+func TestPassword_MakeAndValidate_Additional(t *testing.T) {
+	// Test different passwords produce different hashes
+	pw1, _ := MakePassword("secret1")
+	pw2, _ := MakePassword("secret2")
+	if string(pw1.GetHash()) == string(pw2.GetHash()) {
+		t.Fatalf("different passwords should produce different hashes")
+	}
+
+	// Test empty password
+	pw3, err := MakePassword("")
+	if err != nil {
+		t.Fatalf("empty password should be allowed: %v", err)
+	}
+	if !pw3.Is("") {
+		t.Fatalf("empty password validation failed")
+	}
+}
```

pkg/http/handler_test.go (1)
10-28: Solid HTTP handler test with proper structure and validation.
The test correctly validates the `MakeApiHandler` function using standard Go HTTP testing practices. It properly tests both the HTTP status code and JSON response structure, with appropriate error handling.

Consider adding test cases for additional scenarios:
```diff
+func TestMakeApiHandler_Success(t *testing.T) {
+	h := MakeApiHandler(func(w http.ResponseWriter, r *http.Request) *ApiError {
+		return nil // Success case
+	})
+
+	rec := httptest.NewRecorder()
+	h(rec, httptest.NewRequest("GET", "/", nil))
+
+	if rec.Code != http.StatusOK {
+		t.Fatalf("expected 200, got %d", rec.Code)
+	}
+}
+
+func TestMakeApiHandler_DifferentErrorCodes(t *testing.T) {
+	testCases := []struct {
+		status  int
+		message string
+	}{
+		{http.StatusNotFound, "not found"},
+		{http.StatusInternalServerError, "server error"},
+	}
+
+	for _, tc := range testCases {
+		h := MakeApiHandler(func(w http.ResponseWriter, r *http.Request) *ApiError {
+			return &ApiError{Message: tc.message, Status: tc.status}
+		})
+
+		rec := httptest.NewRecorder()
+		h(rec, httptest.NewRequest("GET", "/", nil))
+
+		if rec.Code != tc.status {
+			t.Fatalf("expected %d, got %d", tc.status, rec.Code)
+		}
+	}
+}
```

pkg/http/middleware/pipeline_test.go (1)
12-39: Excellent middleware chaining test with clear execution order verification.
The test effectively validates the Pipeline.Chain functionality using a clever approach with closures to track execution order. The use of httptest and proper error handling follows Go best practices.
Consider adding test cases for additional middleware scenarios:
```diff
+func TestPipelineChainErrorHandling(t *testing.T) {
+	p := Pipeline{}
+	errorMiddleware := func(next pkgHttp.ApiHandler) pkgHttp.ApiHandler {
+		return func(w http.ResponseWriter, r *http.Request) *pkgHttp.ApiError {
+			return &pkgHttp.ApiError{Message: "middleware error", Status: 500}
+		}
+	}
+	final := func(w http.ResponseWriter, r *http.Request) *pkgHttp.ApiError {
+		t.Fatalf("final handler should not be called when middleware returns error")
+		return nil
+	}
+
+	chained := p.Chain(final, errorMiddleware)
+	result := chained(httptest.NewRecorder(), httptest.NewRequest("GET", "/", nil))
+
+	if result == nil || result.Status != 500 {
+		t.Fatalf("expected error from middleware")
+	}
+}
+
+func TestPipelineChainEmpty(t *testing.T) {
+	p := Pipeline{}
+	called := false
+	final := func(w http.ResponseWriter, r *http.Request) *pkgHttp.ApiError {
+		called = true
+		return nil
+	}
+
+	chained := p.Chain(final) // No middleware
+	chained(httptest.NewRecorder(), httptest.NewRequest("GET", "/", nil))
+
+	if !called {
+		t.Fatalf("final handler should be called when no middleware")
+	}
+}
```

pkg/gorm/support_test.go (1)
9-37: Comprehensive test coverage for GORM error classification functions.
The tests effectively validate all three error classification functions (`IsNotFound`, `IsFoundButHasErrors`, `HasDbIssues`) with appropriate positive and negative test cases. The implementation correctly uses `stdgorm.ErrRecordNotFound` and generic errors to verify the expected behavior.

Consider adding edge case tests for wrapped errors:
```diff
+import (
+	"fmt"
+)
+
+func TestIsNotFoundWrappedError(t *testing.T) {
+	wrappedErr := fmt.Errorf("database error: %w", stdgorm.ErrRecordNotFound)
+	if !IsNotFound(wrappedErr) {
+		t.Fatalf("wrapped ErrRecordNotFound should return true")
+	}
+}
+
+func TestIsFoundButHasErrorsWrappedError(t *testing.T) {
+	wrappedErr := fmt.Errorf("database error: %w", stdgorm.ErrRecordNotFound)
+	if IsFoundButHasErrors(wrappedErr) {
+		t.Fatalf("wrapped ErrRecordNotFound should return false")
+	}
+}
```

pkg/http/response_test.go (1)
9-64: Excellent comprehensive test coverage for HTTP response functionality.
The tests effectively cover all major response scenarios including successful responses with ETag generation, cache validation, callback execution, 304 Not Modified responses, and error helper functions. The use of httptest and proper HTTP status code validation follows Go best practices.
Consider adding tests for error scenarios and edge cases:
```diff
+func TestResponse_RespondOkError(t *testing.T) {
+	req := httptest.NewRequest("GET", "/", nil)
+	rec := httptest.NewRecorder()
+
+	r := MakeResponseFrom("salt", rec, req)
+	// Test with data that might cause JSON encoding errors
+	invalidData := make(chan int) // channels can't be JSON encoded
+
+	if err := r.RespondOk(invalidData); err == nil {
+		t.Fatalf("expected error when encoding invalid JSON data")
+	}
+}
+
+func TestResponse_ETagUniqueness(t *testing.T) {
+	req := httptest.NewRequest("GET", "/", nil)
+
+	rec1 := httptest.NewRecorder()
+	r1 := MakeResponseFrom("salt1", rec1, req)
+	r1.RespondOk(map[string]string{"data": "test"})
+
+	rec2 := httptest.NewRecorder()
+	r2 := MakeResponseFrom("salt2", rec2, req)
+	r2.RespondOk(map[string]string{"data": "test"})
+
+	etag1 := rec1.Header().Get("ETag")
+	etag2 := rec2.Header().Get("ETag")
+
+	if etag1 == etag2 {
+		t.Fatalf("different salts should produce different ETags")
+	}
+}
```

pkg/auth/encryption_test.go (2)
5-22: Improve error messages and test structure.
The test covers the core encrypt/decrypt functionality well, but could be enhanced:
- The error message "key err" on line 8 is not descriptive - it should specify whether it's a generation error or length validation error
- Consider separating key generation testing from the encrypt/decrypt flow for better test isolation
```diff
-	if err != nil || len(key) != EncryptionKeyLength {
-		t.Fatalf("key err")
-	}
+	if err != nil {
+		t.Fatalf("failed to generate AES key: %v", err)
+	}
+	if len(key) != EncryptionKeyLength {
+		t.Fatalf("expected key length %d, got %d", EncryptionKeyLength, len(key))
+	}
```
24-31: Handle setup errors to ensure test validity.
While the test logic is correct, ignoring errors from `GenerateAESKey()` and `Encrypt()` could lead to false positives if the setup fails.

```diff
-	key, _ := GenerateAESKey()
-	other, _ := GenerateAESKey()
-	enc, _ := Encrypt([]byte("hello"), key)
+	key, err := GenerateAESKey()
+	if err != nil {
+		t.Fatalf("setup failed - key generation: %v", err)
+	}
+	other, err := GenerateAESKey()
+	if err != nil {
+		t.Fatalf("setup failed - other key generation: %v", err)
+	}
+	enc, err := Encrypt([]byte("hello"), key)
+	if err != nil {
+		t.Fatalf("setup failed - encryption: %v", err)
+	}
```

pkg/auth/handler_test.go (2)
5-24: Handle setup error but good integration test overall.
The test provides excellent coverage of the token handler lifecycle, correctly verifying round-trip integrity of token encoding/decoding. However, ignoring the error from `GenerateAESKey()` could mask setup failures.

```diff
-	key, _ := GenerateAESKey()
+	key, err := GenerateAESKey()
+	if err != nil {
+		t.Fatalf("setup failed - key generation: %v", err)
+	}
```
33-39: Handle setup errors but good validation test.
The test correctly verifies username validation logic, but ignoring setup errors could lead to false positives if handler creation fails.

```diff
-	key, _ := GenerateAESKey()
-	h, _ := MakeTokensHandler(key)
+	key, err := GenerateAESKey()
+	if err != nil {
+		t.Fatalf("setup failed - key generation: %v", err)
+	}
+	h, err := MakeTokensHandler(key)
+	if err != nil {
+		t.Fatalf("setup failed - handler creation: %v", err)
+	}
```

pkg/stringable_test.go (1)
22-31: Consider testing time components for completeness.
The test correctly validates date parsing, but could be enhanced to verify that time components (hour, minute, second) are set to current time as per the implementation.
Consider adding assertions to verify time components:
```diff
 if dt.Year() != 2024 || dt.Month() != time.June || dt.Day() != 9 {
 	t.Fatalf("unexpected datetime: %v", dt)
 }
+// Verify time components are set (should be current time)
+if dt.Hour() < 0 || dt.Hour() > 23 {
+	t.Fatalf("invalid hour component: %d", dt.Hour())
+}
```

pkg/http/middleware/token_middleware_test.go (2)
12-26: Consider using more descriptive test parameters.
The test correctly validates that all error methods return HTTP 403 status. However, the generic parameters ("a", "b", "c") make it harder to understand what's being tested.
Consider using more descriptive parameter values:
```diff
-	e := tm.getInvalidRequestError("a", "b", "c")
+	e := tm.getInvalidRequestError("token", "user", "reason")
```

```diff
-	e = tm.getUnauthenticatedError("a", "b", "c")
+	e = tm.getUnauthenticatedError("token", "user", "reason")
```
28-38: Expand test coverage for complete middleware validation.
The test correctly validates that requests without valid tokens are rejected with 403 status. However, consider adding tests for successful authentication scenarios and different request types.
Consider adding complementary tests:
```go
func TestTokenMiddlewareHandleValid(t *testing.T) {
	// Test with valid token and proper auth service
	// Should allow request to proceed
}

func TestTokenMiddlewareHandleDifferentMethods(t *testing.T) {
	// Test POST, PUT, DELETE methods
	// Ensure consistent behavior across HTTP methods
}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (22)
- .github/workflows/tests.yml (1 hunks)
- pkg/auth/encryption_test.go (1 hunks)
- pkg/auth/handler_test.go (1 hunks)
- pkg/auth/render_test.go (1 hunks)
- pkg/cli/helpers_test.go (1 hunks)
- pkg/cli/message_test.go (1 hunks)
- pkg/client_test.go (1 hunks)
- pkg/gorm/support_test.go (1 hunks)
- pkg/http/handler_test.go (1 hunks)
- pkg/http/middleware/pipeline_test.go (1 hunks)
- pkg/http/middleware/token_middleware_test.go (1 hunks)
- pkg/http/request_test.go (1 hunks)
- pkg/http/response_test.go (1 hunks)
- pkg/llogs/files_logs_test.go (1 hunks)
- pkg/markdown/handler_test.go (1 hunks)
- pkg/markdown/schema_test.go (1 hunks)
- pkg/media/media_test.go (1 hunks)
- pkg/parser_test.go (1 hunks)
- pkg/password_test.go (1 hunks)
- pkg/stringable_test.go (1 hunks)
- pkg/support_test.go (1 hunks)
- pkg/validator_test.go (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (13)
pkg/cli/helpers_test.go (1)
pkg/cli/helpers.go (1)
ClearScreen(9-19)
pkg/auth/render_test.go (1)
pkg/auth/render.go (1)
SafeDisplay(5-20)
pkg/password_test.go (1)
pkg/password.go (1)
MakePassword(10-21)
pkg/cli/message_test.go (1)
pkg/cli/message.go (14)
Error(5-7)Success(13-15)Warning(21-23)Errorln(9-11)Successln(17-19)Warningln(25-27)Magenta(29-31)Magentaln(33-35)Blue(37-39)Blueln(41-43)Cyan(45-47)Cyanln(49-51)Gray(53-55)Grayln(57-59)
pkg/auth/encryption_test.go (2)
pkg/auth/encryption.go (5)
GenerateAESKey(15-23)Encrypt(25-49)Decrypt(51-78)CreateSignatureFrom(80-85)ValidateTokenFormat(87-99)pkg/auth/schema.go (1)
EncryptionKeyLength(8-8)
pkg/support_test.go (1)
pkg/support.go (1)
CloseWithLog(8-12)
pkg/gorm/support_test.go (1)
pkg/gorm/support.go (3)
IsNotFound(8-10)IsFoundButHasErrors(12-14)HasDbIssues(16-18)
pkg/markdown/schema_test.go (1)
pkg/markdown/handler.go (1)
Parse(45-89)
pkg/client_test.go (1)
pkg/client.go (4)
GetDefaultTransport(19-25)MakeDefaultClient(27-44)Client(11-17)Get(46-82)
pkg/parser_test.go (1)
pkg/parser.go (1)
ParseJsonFile(9-29)
pkg/http/request_test.go (1)
pkg/http/request.go (1)
ParseRequestBody(12-31)
pkg/stringable_test.go (1)
pkg/stringable.go (5)
MakeStringable(16-20)ToSnakeCase(28-43)Dd(45-47)ToLower(22-26)ToDatetime(49-70)
pkg/llogs/files_logs_test.go (4)
env/env.go (1)
Environment(8-14)env/logs.go (1)
LogsEnvironment(3-7)pkg/llogs/files_logs.go (4)
MakeFilesLogs(18-36)FilesLogs(11-16)DefaultPath(38-45)Close(47-55)pkg/llogs/driver.go (1)
interface(3-5)
🔇 Additional comments (29)
pkg/support_test.go (2)
8-16: Well-designed test helper struct.
The `fakeCloser` struct is a clean implementation of the `io.Closer` interface for testing purposes. It properly tracks state and can simulate both success and error scenarios.

18-30: Comprehensive test coverage for CloseWithLog.
The test effectively covers both success and error scenarios, properly verifying that the `Close()` method is called in both cases. The test design correctly accounts for the fact that `CloseWithLog` doesn't return errors but only logs them.

pkg/validator_test.go (3)

5-9: Good test struct definition for validation testing.
The `user` struct with validation tags provides a good foundation for testing different validation scenarios including required fields, email format, and length constraints.

32-39: Good complementary test for Rejects method.
This test nicely complements the main test by specifically focusing on the `Rejects` method, ensuring comprehensive coverage of the validator interface.

26-26: Verified: `GetErrorsAsJason` is defined and used consistently
The method `GetErrorsAsJason` is implemented in pkg/validator.go and called with the same spelling in pkg/validator_test.go, cli/panel/menu.go, boost/factory.go, etc. There is no mismatch between definition and usages:

• pkg/validator.go: `func (v *Validator) GetErrorsAsJason() string`
• pkg/validator_test.go: `v.GetErrorsAsJason()`
• cli/panel/menu.go: `validate.GetErrorsAsJason()`
• boost/factory.go: `validate.GetErrorsAsJason()`

If you'd like to rename this for clarity (e.g. to `GetErrorsAsJSON`), note it's a breaking change and you must update all call sites accordingly. Otherwise, no further action is needed.

pkg/parser_test.go (2)

8-11: Well-defined test struct for JSON parsing.
The `jsonSample` struct with proper JSON tags provides a clean foundation for testing the generic `ParseJsonFile` function.
13-28: Excellent test implementation for successful parsing.
The test follows Go testing best practices:

- Uses `t.TempDir()` for clean temporary file management
- Creates realistic test data
- Verifies both successful execution and correct data parsing
- Clean setup and validation logic

pkg/media/media_test.go (4)

10-18: LGTM! Well-structured test helper.
The `setupTempDir` function follows Go testing best practices by using `t.Helper()`, `t.TempDir()`, and `t.Cleanup()` for proper test isolation and resource management.

20-45: LGTM! Comprehensive test coverage.
This test effectively covers the complete media workflow from creation to cleanup, including property validation, file upload, and file system verification.

47-59: LGTM! Good error case coverage.
The test properly validates error conditions for empty files, oversized files, and invalid extensions, ensuring robust input validation.

61-83: LGTM! Appropriate utility function tests.
These tests provide good coverage for the utility functions, with appropriate assertions for path generation and directory retrieval functionality.
.github/workflows/tests.yml (3)
3-7: LGTM! Appropriate workflow triggers.
The workflow correctly triggers on pushes to main and on pull requests, ensuring comprehensive test coverage without unnecessary CI resource usage.

21-22: LGTM! Standard Go coverage reporting.
The coverage report step correctly uses `go tool cover` with the generated profile to provide function-level coverage information.

13-13: Go version check – no changes required
The `go-version: [1.24.x]` matrix entry is valid: Go 1.24.5 (released July 8, 2025) is the latest stable patch, and using `1.24.x` will correctly pick it up.

pkg/markdown/handler_test.go (1)

9-20: LGTM! Well-structured HTTP client test.
The test properly uses `httptest.NewServer` for isolation, includes proper cleanup with `defer`, and validates both error conditions and response content.

pkg/auth/encryption_test.go (2)

33-39: LGTM! Good deterministic signature test.
The test correctly verifies that `CreateSignatureFrom` produces consistent signatures for identical inputs, which is essential for HMAC-based authentication.

41-48: LGTM! Covers key token validation scenarios.
The test appropriately validates both valid and invalid token formats, ensuring the function correctly identifies tokens with proper prefixes and rejects those that are too short or malformed.
pkg/auth/handler_test.go (1)
26-31: LGTM! Good error validation test.
The test correctly verifies that `MakeTokensHandler` rejects keys that don't meet the minimum length requirements, ensuring proper validation of cryptographic key parameters.

pkg/markdown/schema_test.go (2)

7-27: LGTM! Comprehensive test of markdown parsing with header image.
The test correctly validates all aspects of the `Parse` function:
- YAML front matter extraction (slug, published_at)
- Header image detection and parsing (alt text and URL)
- Content extraction after header image
- Date parsing functionality
The test data and assertions align well with the expected parsing behavior.
29-42: LGTM! Good coverage of no-header-image parsing scenario.
The test correctly validates the Parse function's behavior when no header image is present, ensuring:
- Empty image fields when no header image pattern is found
- Content includes the entire body without image extraction
- Front matter parsing still works correctly
This complements the header image test for complete coverage.
pkg/http/request_test.go (4)
9-11: LGTM! Appropriate test struct for parsing validation.
The `sampleReq` struct is well-designed for testing the generic `ParseRequestBody` function with a simple field and proper JSON tag.

13-19: LGTM! Good validation of successful JSON parsing.
The test correctly validates the main use case of `ParseRequestBody`, ensuring proper JSON deserialization into the target struct with appropriate error handling.

21-27: LGTM! Proper validation of empty body handling.
The test correctly verifies that `ParseRequestBody` returns a zero-value struct when the request body is empty, which aligns with the function's documented behavior.

29-35: LGTM! Good error handling validation for invalid JSON.
The test properly validates that `ParseRequestBody` returns an error when given malformed JSON, ensuring robust error handling for invalid input.

pkg/client_test.go (2)

10-23: LGTM! Comprehensive integration test for HTTP client.
The test properly validates the complete client workflow:
- Default transport and client creation
- HTTP request execution via test server
- Response validation and error handling
- Proper test server cleanup
25-31: LGTM! Important nil client error handling test.
The test correctly validates that the `Get` method properly handles nil client receivers, ensuring graceful error handling for uninitialized clients as implemented in the `Client.Get` method.

pkg/stringable_test.go (3)

8-13: LGTM!
The test correctly validates the `ToLower` method behavior, including whitespace trimming and case conversion. The test input and expected output align with the implementation.

15-20: LGTM!
The test correctly validates the `ToSnakeCase` method with a proper camelCase input and expected snake_case output.

33-43: LGTM!
Both tests are well-implemented:

- `TestStringable_ToDatetimeError` properly tests the error case with invalid date format
- `TestStringable_Dd` appropriately tests the debug utility to ensure it doesn't panic
Actionable comments posted: 2
🧹 Nitpick comments (10)
database/repository/queries/posts_filters_test.go (2)
5-5: Fix typo in test function name.
Consider using American spelling for consistency with Go conventions:

```diff
-func TestPostFiltersSanitise(t *testing.T) {
+func TestPostFiltersSanitize(t *testing.T) {
```
14-28: Improve error messages for better debugging.
The current error messages only show the actual value but not the expected value. Consider making them more descriptive:

```diff
-	if f.GetText() != "hello" {
-		t.Fatalf("got %s", f.GetText())
-	}
+	if f.GetText() != "hello" {
+		t.Fatalf("expected 'hello', got '%s'", f.GetText())
+	}
```

Apply similar changes to other assertions for better test failure diagnostics.
.github/workflows/tests.yml (1)
30-48: Consider improving coverage reporting strategy.
The current approach overwrites the coverage file in each step, losing previous coverage data. Consider:
- Using unique coverage filenames per step
- Combining coverage reports for an overall project coverage
- Storing coverage artifacts for later analysis
Example improvement:
```diff
 - name: Run pkg tests
   run: |
-    go test ./pkg/... -coverprofile=coverage.out
-    go tool cover -func=coverage.out | tail -n 1
+    go test ./pkg/... -coverprofile=coverage-pkg.out
+    go tool cover -func=coverage-pkg.out | tail -n 1
```

database/connection_test.go (1)
44-65: Well-structured database connection and migration setup.
The environment configuration correctly matches the container settings. The SSL mode "disable" is appropriate for testing environments.
Consider adding a comment explaining the SSL mode choice for clarity:
```go
SSLMode: "disable", // Appropriate for containerized testing
```

handler/file_handlers_more_test.go (2)
31-33: Consider using a more type-safe interface definition.
The factory function uses `interface{}` which loses type safety. Consider defining a common interface for the handlers being tested.

```diff
+type FileHandler interface {
+	Handle(http.ResponseWriter, *http.Request) *pkghttp.ApiError
+}
+
-func runFileHandlerTest(t *testing.T, makeFn func(string) interface {
-	Handle(http.ResponseWriter, *http.Request) *pkghttp.ApiError
-}, path string) {
+func runFileHandlerTest(t *testing.T, makeFn func(string) FileHandler, path string) {
```
69-104: Consider reducing code duplication in test cases.
The test cases follow an identical pattern with only the handler constructor and path changing. This could be refactored into a table-driven test.
```diff
+func TestAdditionalFileHandlers(t *testing.T) {
+	testCases := []struct {
+		name   string
+		makeFn func(string) FileHandler
+		path   string
+	}{
+		{"Education", func(p string) FileHandler { return MakeEducationHandler(p) }, "/education"},
+		{"Experience", func(p string) FileHandler { return MakeExperienceHandler(p) }, "/experience"},
+		{"Projects", func(p string) FileHandler { return MakeProjectsHandler(p) }, "/projects"},
+		{"Recommendations", func(p string) FileHandler { return MakeRecommendationsHandler(p) }, "/recommendations"},
+		{"Talks", func(p string) FileHandler { return MakeTalksHandler(p) }, "/talks"},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			runFileHandlerTest(t, tc.makeFn, tc.path)
+		})
+	}
+}
```

database/repository/repository_test.go (1)
67-93: Good test coverage for user repository.
The test correctly verifies user creation and retrieval. However, consider testing edge cases like non-existent users.
Add a test case for user not found scenario:
```go
notFound := repo.FindBy("nonexistent")
if notFound != nil {
	t.Fatalf("expected nil for non-existent user")
}
```

database/seeder/seeds/seeder_test.go (3)
43-52: Consider using constants for test database configuration.
The database configuration is hardcoded throughout the function. Consider extracting these values as constants at the package level for better maintainability and consistency.
```diff
+const (
+	testDBUser     = "test"
+	testDBPassword = "secret"
+	testDBName     = "testdb"
+	testDBPort     = "5432/tcp"
+)
```

Then use these constants in the configuration:

```diff
-	postgres.WithDatabase("testdb"),
-	postgres.WithUsername("test"),
-	postgres.WithPassword("secret"),
+	postgres.WithDatabase(testDBName),
+	postgres.WithUsername(testDBUser),
+	postgres.WithPassword(testDBPassword),
```
104-120: Add more specific assertions for better test coverage.
The current assertions only check basic counts. Consider adding more specific validations to ensure data integrity and relationships are correctly established.
```diff
 seeder.dbConn.Sql().Model(&database.Category{}).Count(&count)
-if count == 0 {
-	t.Fatalf("categories not seeded")
+if count == 0 {
+	t.Fatalf("expected categories to be seeded, got 0")
+}
+
+// Add more specific relationship validations
+seeder.dbConn.Sql().Model(&database.PostCategory{}).Count(&count)
+if count == 0 {
+	t.Fatalf("expected post-category relationships to be seeded, got 0")
+}
+
+seeder.dbConn.Sql().Model(&database.PostTag{}).Count(&count)
+if count == 0 {
+	t.Fatalf("expected post-tag relationships to be seeded, got 0")
 }
```
84-145: Consider adding table-driven tests for better maintainability.
While the current tests are comprehensive, consider refactoring into table-driven tests to make it easier to add new test cases and improve readability.
```go
func TestSeederCounts(t *testing.T) {
	seeder := setupSeeder(t)

	// Setup seeding
	if err := seeder.TruncateDB(); err != nil {
		t.Fatalf("truncate err: %v", err)
	}

	userA, userB := seeder.SeedUsers()
	posts := seeder.SeedPosts(userA, userB)
	categories := seeder.SeedCategories()
	tags := seeder.SeedTags()
	seeder.SeedComments(posts...)
	seeder.SeedLikes(posts...)
	seeder.SeedPostsCategories(categories, posts)
	seeder.SeedPostTags(tags, posts)
	seeder.SeedPostViews(posts, userA, userB)

	tests := []struct {
		name     string
		model    interface{}
		expected int64
		minCount bool // true if we expect at least the count, false for exact match
	}{
		{"users", &database.User{}, 2, false},
		{"posts", &database.Post{}, 2, false},
		{"categories", &database.Category{}, 1, true},
		{"tags", &database.Tag{}, 1, true},
		{"comments", &database.Comment{}, 1, true},
		{"likes", &database.Like{}, 1, true},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			var count int64
			seeder.dbConn.Sql().Model(tt.model).Count(&count)
			if tt.minCount {
				if count < tt.expected {
					t.Fatalf("expected at least %d %s, got %d", tt.expected, tt.name, count)
				}
			} else {
				if count != tt.expected {
					t.Fatalf("expected %d %s, got %d", tt.expected, tt.name, count)
				}
			}
		})
	}
}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (13)
- .github/workflows/tests.yml (1 hunks)
- database/connection_test.go (1 hunks)
- database/model.go (1 hunks)
- database/model_internal_test.go (1 hunks)
- database/repository/pagination/pagination_func_test.go (1 hunks)
- database/repository/queries/posts_filters_test.go (1 hunks)
- database/repository/repository_test.go (1 hunks)
- database/seeder/seeds/seeder_test.go (1 hunks)
- go.mod (2 hunks)
- handler/categories.go (1 hunks)
- handler/categories_posts_test.go (1 hunks)
- handler/file_handlers_more_test.go (1 hunks)
- handler/posts.go (2 hunks)
✅ Files skipped from review due to trivial changes (2)
- go.mod
- database/model.go
🧰 Additional context used
🧬 Code Graph Analysis (5)
handler/categories.go (4)
database/repository/pagination/paginate.go (1)
Paginate(3-7)database/repository/pagination/pagination.go (1)
Pagination(14-22)database/repository/categories.go (2)
Categories(12-14)GetAll(16-52)boost/router.go (1)
Categories(45-52)
handler/posts.go (4)
database/repository/pagination/paginate.go (1)
Paginate(3-7)database/repository/pagination/pagination.go (1)
Pagination(14-22)database/repository/posts.go (2)
Posts(12-16)GetAll(18-52)database/repository/queries/posts.go (1)
ApplyPostsFilters(8-67)
handler/file_handlers_more_test.go (7)
pkg/http/schema.go (1)
ApiError(10-13)handler/education.go (2)
MakeEducationHandler(15-19)Handle(21-45)handler/experience.go (2)
MakeExperienceHandler(15-19)Handle(21-45)handler/projects.go (2)
MakeProjectsHandler(15-19)Handle(21-45)handler/recommendations.go (1)
MakeRecommendationsHandler(15-19)handler/talks.go (2)
MakeTalksHandler(15-19)Handle(21-45)handler/profile.go (1)
Handle(21-45)
database/connection_test.go (6)
env/env.go (1)
Environment(9-15)env/db.go (1)
DBEnvironment(5-14)database/model.go (2)
DriverName(9-9)APIKey(26-35)database/connection.go (1)
MakeConnection(19-32)database/repository/api_keys.go (2)
ApiKeys(11-13)Create(15-33)database/attrs.go (1)
APIKeyAttr(7-11)
database/repository/pagination/pagination_func_test.go (2)
database/repository/pagination/paginate.go (1)
Paginate(3-7)database/repository/pagination/pagination.go (3)
MakePagination(24-60)Pagination(14-22)HydratePagination(74-91)
🔇 Additional comments (23)
database/repository/pagination/pagination_func_test.go (2)
5-20: Excellent pagination logic testing.
The test correctly validates pagination metadata calculation:
- Proper handling of page boundaries (page 2 of 3)
- Correct next/previous page pointers
- Appropriate total pages calculation
The test setup and assertions are well-structured and comprehensive.
22-39: Well-designed type transformation test.
The test effectively validates the `HydratePagination` function by:
- Testing data transformation (string length mapping)
- Ensuring metadata preservation across type conversion
- Using appropriate assertions for both data and metadata validation
database/connection_test.go (2)
16-33: Excellent integration test setup with proper resource management.
The test demonstrates best practices:
- Docker availability check with graceful skip
- Proper container lifecycle management with cleanup
- Appropriate PostgreSQL version selection (16-alpine)
67-81: Comprehensive repository functionality testing.
The test effectively validates both create and retrieve operations, ensuring the integration works end-to-end. The assertion logic correctly verifies that the created and found records match.
handler/categories.go (2)
14-16: LGTM! Clean interface abstraction.
The `categoriesRepo` interface properly abstracts the repository dependency, enabling easier testing and dependency injection. The single-method interface follows the Interface Segregation Principle.

22-24: LGTM! Constructor properly updated for interface.
The constructor correctly accepts the interface type rather than the concrete implementation, maintaining consistency with the interface abstraction.

database/repository/repository_test.go (3)

17-65: Excellent integration test setup with testcontainers.
The `setupDB` function properly handles Docker container lifecycle, connection setup, and cleanup. The Docker availability check prevents test failures in environments without Docker.

95-117: LGTM! Proper test of idempotent behavior.
The test correctly verifies that `FindOrCreate` returns the same tag on subsequent calls, which is the expected idempotent behavior.

138-192: Comprehensive test for posts with relationships.
The test properly creates all related entities (user, category, tag) and verifies post creation with associations. This validates the complex repository behavior well.

handler/categories_posts_test.go (5)
17-25: Excellent stub implementation for categories repository.
The stub properly implements the `categoriesRepo` interface and allows for controlled testing of both success and error scenarios.

27-40: Well-designed stub for posts repository.
The stub implements both methods of the `postsRepo` interface and supports testing different scenarios through configurable return values.

42-73: Comprehensive test coverage for categories handler.
The test validates both successful response parsing and error handling. The JSON response validation ensures the handler produces the expected output format.

75-103: Thorough testing of posts index handler.
The test covers successful requests, repository errors, and JSON parsing errors. Using POST with a request body is correctly tested.

105-133: Complete test coverage for posts show handler.
The test validates successful retrieval, missing slug validation, and not-found scenarios. Good use of `SetPathValue` to simulate URL parameters.

handler/posts.go (2)

17-20: Well-designed posts repository interface.
The `postsRepo` interface appropriately abstracts the two main operations needed by the handler. The method signatures match the concrete implementation from `repository.Posts`.

26-28: Constructor correctly updated for interface dependency.
The constructor now properly accepts the interface type, enabling dependency injection and easier testing as demonstrated in the accompanying test file.
database/seeder/seeds/seeder_test.go (7)
3-13: LGTM! Well-structured imports with proper organization.
The imports are logically organized with standard library packages first, followed by third-party dependencies, and then local packages. The use of `testcontainers-go` for integration testing is a solid choice for database testing.

15-18: Good Docker availability check for test skipping.
The Docker availability check using `exec.LookPath("docker")` is a practical approach to skip tests when Docker is not available, ensuring tests don't fail in environments where Docker isn't installed.

22-32: Excellent container setup with proper cleanup.
The PostgreSQL container configuration uses appropriate settings for testing:

- PostgreSQL 16-alpine is a good choice for testing (lightweight and modern)
- Test credentials are properly configured
- `t.Cleanup()` ensures proper resource cleanup even if tests fail

60-73: Comprehensive AutoMigrate covers all necessary models.
The migration includes all the database models that are being tested in the seeder workflow. The error handling is appropriate and will fail fast if migrations fail.

78-82: Clean and reusable test setup helper.
The `setupSeeder` function properly encapsulates the test setup logic and returns a configured seeder instance. Good separation of concerns.
84-103: Comprehensive workflow test with good error handling.
The test covers the complete seeding workflow including:
- Database truncation
- User seeding
- Post seeding with relationships
- Category and tag seeding
- Comment and like seeding
- Relationship seeding (posts-categories, post-tags)
- Post views and newsletters
The error handling for each critical step is appropriate.
122-145: Excellent edge case testing for empty inputs.
This test ensures that the seeder methods handle nil/empty inputs gracefully without creating unwanted records. The assertions properly validate that no records are created when empty inputs are provided.
```yaml
runs-on: ubuntu-latest
strategy:
  matrix:
    go-version: [1.24.x]
```
💡 Verification agent
🧩 Analysis chain
Verify Go version 1.24.x availability.
Go 1.24.x may not be available yet. Consider using a stable version like 1.23.x or 1.22.x.
🌐 Web query:
What is the latest stable version of Go as of 2024?
💡 Result:
As of August 1, 2025, the latest stable version of Go is 1.24.5, released on July 8, 2025. This version includes security fixes to the go command, as well as bug fixes to the compiler, the linker, and the runtime. (go.dev)
Go 1.24, released on February 11, 2025, introduced several notable features:
- Generic Type Aliases: Go 1.24 fully supports generic type aliases, allowing type aliases to be parameterized like defined types. (tip.golang.org) A short illustration follows the citations below.
- Performance Improvements: The runtime has been optimized, resulting in a 2–3% average decrease in CPU overhead across various benchmarks. (go.dev)
- Tool Enhancements: The `go` command now offers a mechanism for tracking tool dependencies within a module, and the `go vet` subcommand includes a new analyzer for reporting common mistakes in test declarations. (go.dev)
For a comprehensive list of changes and improvements, you can refer to the Go 1.24 release notes. (tip.golang.org)
Citations:
- 1: https://go.dev/doc/devel/release?utm_source=openai
- 2: https://tip.golang.org/doc/go1.24?utm_source=openai
- 3: https://go.dev/blog/go1.24?utm_source=openai
- 4: https://go.dev/blog/go1.24?utm_source=openai
- 5: https://tip.golang.org/doc/go1.24?utm_source=openai
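As a small, self-contained illustration of the generic type alias feature mentioned above (this snippet is an editorial example, not taken from the cited pages), Go 1.24 allows an alias to carry its own type parameters while remaining fully interchangeable with the aliased type:

```go
package main

import "fmt"

// Set is a generic type alias (valid as of Go 1.24): it is not a new defined
// type, just another name for the underlying map type.
type Set[T comparable] = map[T]struct{}

func main() {
	s := Set[string]{}
	s["go"] = struct{}{}

	// Because Set[string] is an alias, it can be used anywhere a
	// map[string]struct{} is expected, and vice versa.
	var m map[string]struct{} = s
	_, ok := m["go"]
	fmt.Println(ok) // true
}
```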
Go 1.24.x is GA—no downgrade needed; improve coverage handling
- The workflow's `go-version: [1.24.x]` is valid (it will resolve to Go 1.24.5 as of July 2025). For stricter reproducibility, pin to the patch release in .github/workflows/tests.yml — Line 13:

```diff
-  go-version: [1.24.x]
+  go-version: ['1.24.5']
```
- Prevent coverage files from clobbering each other and produce a single report:
  • Emit distinct coverage files per test step (e.g. coverage-app.out, coverage-api.out).
  • After all tests, merge them and upload the result (note that `go tool cover` has no `-merge` flag, so the profiles are combined by stripping their mode headers and concatenating):

```yaml
- name: Merge coverage reports
  run: |
    head -n 1 "$(ls coverage-*.out | head -n 1)" > coverage.out
    tail -q -n +2 coverage-*.out >> coverage.out
- name: Upload coverage artifact
  uses: actions/upload-artifact@v3
  with:
    name: coverage-report
    path: coverage.out
```
```go
func TestIsValidTable(t *testing.T) {
	if !isValidTable("users") {
		t.Fatalf("expected users table to be valid")
	}
	if isValidTable("unknown") {
		t.Fatalf("unexpected valid table")
	}
}
```
💡 Verification agent
🧩 Analysis chain
Verify the isValidTable function exists and consider expanding test coverage.
The test logic is sound with good positive and negative case coverage. However, ensure that:
- The `isValidTable` function is actually accessible from this test package
- Consider adding more test cases to cover edge cases (empty string, special characters, etc.)
Let me verify that the isValidTable function exists and is accessible:
🏁 Script executed:
```bash
#!/bin/bash
# Search for the isValidTable function definition
ast-grep --pattern 'func isValidTable($$$) $$$'
```

Length of output: 196
Confirm isValidTable Accessibility & Expand Test Coverage
The isValidTable function is defined in database/model.go (lines 22–24) and is accessible from database/model_internal_test.go since they share the same package. The existing positive (“users”) and negative (“unknown”) cases are valid, but consider improving coverage by adding tests for:
- Empty string (`""`)
- Strings with special characters (e.g., `"user!@#"` or `"user-name"`)
- Case variations (`"Users"`, `"USERS"`)
- Numeric or alphanumeric table names (e.g., `"table123"`)
- Extremely long input or whitespace-only strings
These additions will help ensure isValidTable behaves correctly across edge cases.
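A minimal sketch of such a table-driven test is shown below. It assumes `isValidTable` keeps its current signature (`func isValidTable(name string) bool`, per database/model.go) and that matching is exact and case-sensitive, so every edge case listed above is expected to be rejected; adjust the `want` values if the real allow-list behaves differently.

```go
package database

import (
	"strings"
	"testing"
)

func TestIsValidTableEdgeCases(t *testing.T) {
	cases := []struct {
		name  string
		input string
		want  bool
	}{
		{"empty string", "", false},
		{"special characters", "user!@#", false},
		{"hyphenated name", "user-name", false},
		{"uppercase variant", "USERS", false}, // assumes case-sensitive matching
		{"numeric suffix", "table123", false}, // assumes it is not in the allow-list
		{"whitespace only", "   ", false},
		{"very long input", strings.Repeat("a", 10_000), false},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			if got := isValidTable(tc.input); got != tc.want {
				t.Fatalf("isValidTable(%q) = %v, want %v", tc.input, got, tc.want)
			}
		})
	}
}
```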
Summary
- `pkg` and subpackages

Testing

- `go test ./pkg/... -coverprofile=coverage.out`
- `go tool cover -func=coverage.out`

https://chatgpt.com/codex/tasks/task_e_688adb40da788333a89120b59c10a215