Implement new separate api for cast, validate and transform #27

Open
wants to merge 2 commits into master

Conversation

bluzky
Owner

@bluzky bluzky commented Feb 26, 2024

Summary by CodeRabbit

  • New Features

    • Added a macro __using__ in the Tarams.Schema module for importing Tarams.DefSchema.
    • Introduced a new cast_atom/1 function in the Tarams.Type module for atom casting.
    • Updated dependency version in mix.exs for improved functionality.
    • Added a new file test/validate_test.exs with tests for various validation scenarios.
  • Refactor

    • Refactored the Tarams module to use the new Result module for data processing.
  • Tests

    • Expanded testing suite in test/tarams_test.exs to cover new functionalities and scenarios.
    • Added tests for data type validation, required fields, and custom function validations.
  • Chores

    • Updated .gitignore to ignore mix.lock.
    • Updated mix.exs dependency version.


coderabbitai bot commented Feb 26, 2024

Walkthrough

The recent update introduces a Result module to the Tarams library, enhancing the handling of casting, validation, and transformation of data against schemas. It streamlines the process flow, improving code readability and maintainability. Additionally, a new cast_atom function and updates to custom type casting enhance type handling. The mix.exs file sees a dependency version bump for valdi, and new testing modules and changes reflect the updated struct usage and validation capabilities.
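
For illustration, a minimal sketch of how the cast flow described in this walkthrough might be called; the schema fields and params here are hypothetical, and only Tarams.cast/2 and the Result-backed pipeline are taken from this PR:

schema = %{
  name: [type: :string, required: true],
  age: [type: :integer]
}

case Tarams.cast(%{"name" => "Bluz", "age" => "20"}, schema) do
  # On success the cast and validated params come back as a plain map.
  {:ok, data} -> data
  # On failure the errors map pairs each offending field with its messages.
  {:error, errors} -> errors
end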

Changes

File(s) | Change Summary
lib/result.ex | Introduced Result module with functions for handling operation results, errors, and valid data.
lib/schema.ex, lib/tarams.ex | Added __using__ macro to import DefSchema in lib/schema.ex and refactored lib/tarams.ex to use the Result module for data processing.
lib/type.ex | Added cast_atom function and updated maybe_cast_custom_type for enhanced type handling.
mix.exs | Updated valdi dependency version from ~> 0.4 to ~> 0.5.
test/tarams_test.exs, test/validate_test.exs | Updated and added tests to reflect struct usage changes and new validation capabilities.

🐇✨
In the land of code and schema,
A rabbit hopped, spreading dogma.
With a leap and a bound, it refined,
Turning chaos into design.
"To the future," it said, with a twirl,
"We cast, validate, and unfurl!"
🌟📜


Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>.
    • Generate unit-tests for this file.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit tests for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai generate interesting stats about this repository from git and render them as a table.
    • @coderabbitai show all the console.log statements in this repository.
    • @coderabbitai read src/utils.ts and generate unit tests.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (invoked as PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger a review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai help to get help.

Additionally, you can add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • The JSON schema for the configuration file is available here.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/coderabbit-overrides.v2.json

CodeRabbit Discord Community

Join our Discord Community to get help, request features, and share feedback.

@coveralls

coveralls commented Feb 26, 2024

Pull Request Test Coverage Report for Build a92bf5fe2d3639e81b56f4650fefa6b9d4c5b2ba-PR-27

Details

  • 62 of 72 (86.11%) changed or added relevant lines in 3 files are covered.
  • 1 unchanged line in 1 file lost coverage.
  • Overall coverage decreased (-0.9%) to 88.043%

Changes Missing Coverage | Covered Lines | Changed/Added Lines | %
lib/result.ex | 6 | 7 | 85.71%
lib/type.ex | 6 | 7 | 85.71%
lib/tarams.ex | 50 | 58 | 86.21%

Files with Coverage Reduction | New Missed Lines | %
lib/type.ex | 1 | 84.62%

Totals Coverage Status
Change from base Build 71b27c2a3bcf62551e5adb310cb3536c8f19e964: -0.9%
Covered Lines: 243
Relevant Lines: 276

💛 - Coveralls


@coderabbitai coderabbitai bot left a comment


Review Status

Actionable comments generated: 7

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR and between 71b27c2 and edd5348.
Files selected for processing (7)
  • lib/result.ex (1 hunks)
  • lib/schema.ex (1 hunks)
  • lib/tarams.ex (7 hunks)
  • lib/type.ex (3 hunks)
  • mix.exs (1 hunks)
  • test/tarams_test.exs (12 hunks)
  • test/validate_test.exs (1 hunks)
Additional comments: 23
mix.exs (1)
  • 55-55: The update of the valdi dependency from ~> 0.4 to ~> 0.5 is noted. Ensure to verify the compatibility of the new version with the project, especially if valdi introduced any breaking changes or new features that might affect your usage.
lib/result.ex (3)
  • 25-31: The put_error function correctly updates the errors map and sets valid? to false. However, when adding a new error to an existing field, it prepends the new error to a list; a minimal sketch of these helpers appears after this list. This is efficient, but ensure that the rest of your application expects errors to be in a list format for each field.
  • 39-39: The put_data function updates the valid_data map as expected. This straightforward approach is efficient for adding valid data to the result struct.
  • 47-47: The get_error function provides a simple way to retrieve errors for a specific field. This utility function enhances the module's usability by allowing easy access to error messages.
lib/type.ex (1)
  • 249-255: The modification to maybe_cast_custom_type/2 to check for the existence of a cast function before invoking it enhances the function's robustness. This change prevents runtime errors when attempting to cast custom types without a defined cast function.
lib/tarams.ex (3)
  • 69-76: The do_cast private function effectively iterates over the schema fields, applying casting logic, and updating the Result struct accordingly. This modular approach enhances readability and maintainability.
  • 92-102: The do_validate function's implementation, which iterates over schema fields to validate data and updates the Result struct, is consistent with the module's design principles. The early return when an error is already present for a field is a good performance optimization.
  • 120-130: The do_transform function follows a similar pattern to do_cast and do_validate, iterating over fields and applying transformation logic. This consistency in handling different stages of data processing is commendable.
test/validate_test.exs (4)
  • 58-77: The tests for required field validation ("validate require", "validate require with missing value", and "validate required with nil value") are clear and concise. They effectively test the scenarios where a required field is present, missing, or nil. This is a good practice as it ensures the library correctly handles required field validations. No changes are needed here.
  • 79-235: The tests covering inclusion/exclusion lists, format checks, and number and length validations are comprehensive and follow a consistent pattern, which is good for maintainability. However, ensure that the library's validation logic correctly interprets and applies the conditions specified in these tests, especially for complex scenarios like nested map validations and custom function validations.
  • 237-352: The nested map validation tests ("validate nested map with success", "validate nested map with bad value should error", "validate nested map with bad nested value should error", "validate nested map skip nested check if value nil", and "validate array of nested map with valid value should ok") are crucial for ensuring the library can handle complex data structures. These tests are well-written and cover important scenarios. It's important to verify that the library's implementation can accurately navigate and validate nested structures as expected by these tests.
  • 354-376: The custom function validation tests ("validate with custom function ok with good value" and "validate with custom function error with bad value") effectively demonstrate the library's ability to utilize custom validation functions. The validate_email function is a good example of a practical use case. Ensure that the library's validation engine correctly invokes custom functions and handles their return values as demonstrated in these tests.
test/tarams_test.exs (11)
  • 2-3: The addition of a values field to the ParamTest.StringList struct is a good enhancement for storing list values. Ensure that all usages of this struct across the codebase are updated to accommodate this new field.
  • 10-10: The modification to the cast function in ParamTest.StringList to return a struct with values is correctly implemented. This change aligns with the objective of enhancing data handling within the library.
  • 17-17: The addition of a name field to the ParamTest.User struct is straightforward and follows Elixir's conventions for defining struct fields. This change should be reflected wherever ParamTest.User is utilized.
  • 20-20: The implementation of the new function in ParamTest.User for creating a new user struct is correct and follows Elixir's idiomatic way of struct instantiation.
  • 24-24: The modification to the cast function in ParamTest.User to return a user struct with the provided name is correctly implemented. This enhances the module's functionality by allowing for more flexible data handling.
  • 35-35: The addition of ParamTest.User to the alias list is necessary for the new tests that involve the User struct. This change is correctly implemented.
  • 70-81: The updates to tests to reflect changes in struct usage and function returns are correctly implemented. These tests cover various scenarios, ensuring that the new functionalities work as expected.
  • 130-135: The addition of tests for the atom type and the User struct is a good practice, ensuring that the new functionalities are thoroughly tested. These tests cover both successful and error scenarios, which is crucial for validating the robustness of the library.
  • 329-329: The test for casting embedded validation invalid should error is correctly implemented. It ensures that the library correctly handles validation errors for embedded schemas, which is crucial for maintaining data integrity.
  • 340-340: The test for casting missing required value should error is correctly implemented. This test ensures that the library enforces the presence of required fields, which is essential for data validation.
  • 401-401: The test for casting array embed with invalid value should error is correctly implemented. This test ensures that the library correctly handles validation errors for array embeddings, which is important for complex data structures.
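
To make the Result helpers referenced in the lib/result.ex comments above concrete, here is a minimal sketch consistent with the behaviour described (errors prepended per field, valid? flipped to false); the module name and the exact implementation in lib/result.ex may differ:

defmodule Tarams.Result do
  @enforce_keys [:schema]
  defstruct schema: %{}, valid_data: %{}, params: %{}, errors: %{}, valid?: true

  # Build a new result for a single cast/validate/transform run.
  def new(fields), do: struct!(__MODULE__, fields)

  # Prepend the error for the field and mark the whole result as invalid.
  def put_error(%__MODULE__{} = result, field, error) do
    errors = Map.update(result.errors, field, [error], &[error | &1])
    %{result | errors: errors, valid?: false}
  end

  # Record a successfully cast value for the field.
  def put_data(%__MODULE__{} = result, field, value) do
    %{result | valid_data: Map.put(result.valid_data, field, value)}
  end

  # Fetch the errors recorded for a single field, if any.
  def get_error(%__MODULE__{} = result, field), do: Map.get(result.errors, field)
end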

Comment on lines +1 to +5
defmodule ValidateTest.User do
defstruct name: nil, email: nil

def dumb(_), do: nil
end


The User module defined at the beginning of the file is minimal and serves its purpose for the tests. However, it's good practice to include a brief comment explaining its role within the test suite, especially for future maintainers or contributors who might be reviewing the code.
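
For example, the suggested explanatory comment could look roughly like this (wording is illustrative only):

defmodule ValidateTest.User do
  # Minimal struct used as a custom-type target in these validation tests;
  # dumb/1 exists only so the :function type check has a function capture to use.
  defstruct name: nil, email: nil

  def dumb(_), do: nil
end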

Comment on lines +12 to +39
@type_checks [
[:string, "Bluz", :ok],
[:string, 10, :error],
[:integer, 10, :ok],
[:integer, 10.0, :error],
[:float, 10.1, :ok],
[:float, 10, :error],
[:number, 10.1, :ok],
[:number, 10, :ok],
[:number, "123", :error],
[:tuple, {1, 2}, :ok],
[:tupple, [1, 2], :error],
[:map, %{name: "Bluz"}, :ok],
[:map, %{"name" => "Bluz"}, :ok],
[:map, [], :error],
[:array, [1, 2, 3], :ok],
[:array, 10, :error],
[:atom, :hihi, :ok],
[:atom, "string", :error],
[:function, &User.dumb/1, :ok],
[:function, "not func", :error],
[:keyword, [limit: 12], :ok],
[:keyword, [1, 2], :error],
[User, %User{email: ""}, :ok],
[User, %{}, :error],
[{:array, User}, [%User{email: ""}], :ok],
[{:array, User}, [], :ok],
[{:array, User}, %{}, :error]


The @type_checks list is comprehensive and covers a wide range of type checks. This is a good practice as it ensures thorough testing of the type validation functionality. However, there's a typo in line 23: :tupple should be :tuple. This typo could potentially lead to a false test case or an unintended test scenario.

-    [:tupple, [1, 2], :error],
+    [:tuple, [1, 2], :error],

Committable suggestion

Suggested change (the corrected @type_checks block, with :tupple fixed to :tuple):
@type_checks [
[:string, "Bluz", :ok],
[:string, 10, :error],
[:integer, 10, :ok],
[:integer, 10.0, :error],
[:float, 10.1, :ok],
[:float, 10, :error],
[:number, 10.1, :ok],
[:number, 10, :ok],
[:number, "123", :error],
[:tuple, {1, 2}, :ok],
[:tuple, [1, 2], :error],
[:map, %{name: "Bluz"}, :ok],
[:map, %{"name" => "Bluz"}, :ok],
[:map, [], :error],
[:array, [1, 2, 3], :ok],
[:array, 10, :error],
[:atom, :hihi, :ok],
[:atom, "string", :error],
[:function, &User.dumb/1, :ok],
[:function, "not func", :error],
[:keyword, [limit: 12], :ok],
[:keyword, [1, 2], :error],
[User, %User{email: ""}, :ok],
[User, %{}, :error],
[{:array, User}, [%User{email: ""}], :ok],
[{:array, User}, [], :ok],
[{:array, User}, %{}, :error]

Comment on lines +42 to +56
test "validate type" do
@type_checks
|> Enum.each(fn [type, value, expect] ->
rs =
Tarams.validate(%{"key" => value}, %{
"key" => type
})

if expect == :ok do
assert :ok = rs
else
assert {:error, _} = rs
end
end)
end


The test "validate type" is well-structured and iterates over the @type_checks list to validate each scenario. This approach keeps the test DRY and easy to extend. However, it's important to ensure that the Tarams.validate function is correctly implemented to handle these cases, especially since the test assumes specific return values (:ok and {:error, _}) without directly inspecting the error messages. For more robust testing, consider verifying the specific error messages where applicable.

Comment on lines +5 to +10
@enforce_keys [:schema]
defstruct schema: %{},
valid_data: %{},
params: %{},
errors: %{},
valid?: true


The Result struct definition includes keys for schema, valid_data, params, errors, and valid?. This structure is well-designed for encapsulating the outcomes of various operations. However, consider documenting the expected types for schema, valid_data, params, and errors for better maintainability and clarity.

@enforce_keys [:schema]
@type t :: %__MODULE__{
        schema: map(),
        valid_data: map(),
        params: map(),
        errors: map(),
        valid?: boolean()
      }
defstruct schema: %{}, valid_data: %{}, params: %{}, errors: %{}, valid?: true

Comment on lines +207 to +210
defmacro __using__(_) do
quote do
import Tarams.DefSchema
end


The addition of the __using__ macro to import Tarams.DefSchema is a good practice for simplifying schema definitions in consuming modules. Consider adding documentation on how to use this macro effectively in module documentation for better developer experience.

@doc """
When used, imports `Tarams.DefSchema` for easier schema definitions.
"""

Comment on lines +77 to +83
defp cast_atom(term) when is_atom(term), do: {:ok, term}

defp cast_atom(term) when is_binary(term) do
{:ok, String.to_existing_atom(term)}
rescue
ArgumentError -> :error
end


The cast_atom/1 function correctly uses String.to_existing_atom/1 for converting binary strings to atoms, which is a safe practice to avoid exhausting the atom table. Consider documenting the error handling behavior for cases where the binary does not correspond to an existing atom.

@doc """
Converts a binary string to an existing atom. Returns `:error` if the atom does not exist.
"""

Comment on lines +35 to +46
  @spec cast(data :: map(), schema :: map()) ::
          {:ok, map()} | {:error, errors :: map()}
  def cast(data, schema) do
    schema = schema |> Tarams.Schema.expand()

-   with {:cast, {:ok, data, _}} <- {:cast, cast_data(data, schema)},
-        data <- Map.new(data),
-        {:ok, _, _} <- validate_data(data, schema),
-        {:ok, data, _} <- transform_data(data, schema) do
-     {:ok, Map.new(data)}
-   else
-     {:cast, {_, data, errors}} ->
-       # validate casted valid data for full error report
-       {_, _, v_errors} = validate_data(Map.new(data), schema)
-       {:error, Map.new(v_errors ++ errors)}
-
-     {:error, _, errors} ->
-       {:error, Map.new(errors)}
+   Result.new(schema: schema, params: data)
+   |> do_cast()
+   |> do_validate()
+   |> do_transform()
+   |> case do
+     %Result{valid?: true, valid_data: valid_data} -> {:ok, valid_data}
+     %{errors: errors} -> {:error, errors}


The refactoring of the cast function to use the Result module for handling casting, validation, and transformation outcomes is well-implemented. The logical flow is clear, and the use of pattern matching to determine the result is effective. Consider adding more detailed documentation to explain how this function interacts with the Result module and handles different outcomes.

@doc """
Casts, validates, and transforms data according to the given schema using the `Result` module. Returns `{:ok, valid_data}` if successful, or `{:error, errors}` if there are validation or casting errors.
"""


@coderabbitai coderabbitai bot left a comment


Review Status

Actionable comments generated: 1

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR and between edd5348 and a92bf5f.
Files selected for processing (1)
  • .gitignore (1 hunks)

.DS_Store
mix.lock


Ignoring mix.lock in Elixir projects is generally not recommended. The mix.lock file ensures that the exact versions of dependencies are used across different environments, which is crucial for consistency and reliability. Unless there's a specific reason for ignoring this file, it's advisable to track it in version control.
