Conversation

@devin-ai-integration
Contributor

Add comprehensive test suite for common library chart

This PR adds a comprehensive test suite for the common library chart, implementing tests for:

  • Layout tests (single/multiple/dynamic components)
  • Resource tests (all Kubernetes resource kinds)
  • Definition blocks (deepMerge, transformMapToList)
  • Templating tests (variable references, preprocessing directives)

Changes

  • Add test suite structure under _unittests/
  • Implement all test categories with proper assertions
  • Bump chart version to 0.0.1-canary.2

Link to Devin run: https://app.devin.ai/sessions/d53fbe318b1a431f8e8979e4311cf2f7
Requested by: carlos@graphops.xyz

devin-ai-integration bot and others added 2 commits February 19, 2025 01:58
- Add layout tests for single, multiple, and dynamic components
- Add resource tests for all Kubernetes resource kinds
- Add definition block tests for deepMerge and transformMapToList
- Add templating tests for variable references and preprocessing directives

Co-Authored-By: carlos@graphops.xyz <carlos@graphops.xyz>
@devin-ai-integration
Contributor Author

🤖 Devin AI Engineer

Original prompt from carlos@graphops.xyz:

Hey @Devin, we will be working in the `launchpad-charts` repo, leveraging the helm unittest plugin, which is already installed, to build a robust set of tests for the `common` library chart. An example test already exists in that chart, under the `_unittests` directory.

Your goal is to implement a robust set of test cases that have been validated to work with helm unittest.
For each testing chart created under `_unittests` in the common chart, remember to run `helm dep update` at least once to bring in the `common` library chart dependency. If any change is made to the `common` chart itself, `helm dep update` will need to be run again in each testing chart's folder.

Here's an in depth overview with context, guidelines, goals and pseudo-examples:

# SUMMARY FOR AN AGENTIC AI WORKING ON THE `common` LIBRARY CHART TESTS

## 1. RELEVANT CONTEXT AND INFO ABOUT THE `common` LIBRARY CHART

- **Purpose**:  
  The `common` chart is a Helm **library** chart providing a flexible templating mechanism and layered value inheritance structure. It allows end users to declare multiple “components” (either statically or dynamically) in `_common.config.yaml` and merges default or inherited values accordingly.

- **Key Features**:
  1. **Single / Multiple / Dynamic Components**  
     - `_common.config.yaml` can declare a single component or multiple named components.  
     - Or it can be set to `dynamicComponents: true`, specifying a `componentsKey` in `.Values` from which it automatically loads component subkeys.

  2. **Layered Values**  
     - Each component references a stack of “layer” keys to apply in sequence (lowest precedence first, highest last).  
     - This layering is declared under `componentLayering` in `_common.config.yaml`.

  3. **Resource Kinds**  
     - The chart outputs various Kubernetes resource types: `configMap`, `podDisruptionBudget`, `role`, `roleBinding`, `secret`, `service`, `serviceAccount`, `serviceMonitor`, and `workload`.  
     - `workload` is a special top-level resource that can become either a `Deployment` or `StatefulSet`.

  4. **Templating in Values**  
     - The chart processes `.Values` as potential templates. This means you can embed references to `.Self`, `.Root`, and `.componentName` within strings.  
     - It also supports a “templating preprocessing” step featuring directives like `@needs` (for injecting dependencies between sub-values) and `@type` (for type coercion).

  5. **Helper Functions**  
     - The chart ships with internal define blocks (like `deepMerge`, `transformMapToList`, etc.). Each performs a specific transformation or merging routine and stores its result in `.__common.fcallResult`.
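
To make the dynamic-components mode above concrete, here is a hypothetical `_common.config.yaml` fragment; the `componentsKey` value and layer name are illustrative assumptions, not taken from the chart source:

```yaml
# Hypothetical sketch only — key names are illustrative assumptions.
dynamicComponents: true
# Components are discovered as the subkeys of .Values.myComponents
componentsKey: myComponents
```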

- **Important Notes for Tests**:
  1. `_common.config.yaml` is critical for controlling the chart’s behavior. The tested chart must supply one.  
  2. The library chart is not installed on its own; each test scenario is a separate “consumer” sub-chart referencing `common` locally via `file://...`.  
  3. Tests should be **goal-oriented** and explicit about what is being tested (e.g., single vs. multiple components, merging behavior, function outputs, etc.).  
  4. The recommended naming convention: each test sub-chart has a `templates/render.yaml` that just calls `{{ include "common.render" . }}` for simplicity.  
  5. Before running each test chart, do `helm dep update` to fetch the local library chart. Then run `helm unittest`.
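
The per-chart workflow in note 5 can be sketched as a small shell loop. The chart paths match the proposed `_unittests/` layout; `RUN=echo` keeps this a dry run by default, since helm and the helm-unittest plugin may not be on PATH:

```shell
#!/bin/sh
# Dry-run sketch of the per-chart test workflow described above.
# Set RUN="" to actually execute helm (assumes helm plus the
# helm-unittest plugin are installed).
set -eu
RUN="${RUN:-echo}"
for chart in \
  _unittests/layout-tests/single-component \
  _unittests/layout-tests/multiple-components \
  _unittests/layout-tests/dynamic-components \
  _unittests/resource-tests/all-resource-kinds \
  _unittests/definition-blocks \
  _unittests/templating-tests
do
  # Re-run `helm dep update` whenever the common chart itself changes.
  $RUN helm dep update "$chart"
  $RUN helm unittest "$chart"
done
```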

## 2. DETAILED DESCRIPTION OF THE PROPOSED STRUCTURE

All tests reside in a folder named `_unittests/` at the root of the `common` chart. Within `_unittests/`, the helm-unittest testing-charts are organized into distinct subdirectories like so:

```
_unittests/
├─ layout-tests/
│  ├─ single-component/
│  ├─ multiple-components/
│  └─ dynamic-components/
├─ resource-tests/
│  └─ all-resource-kinds/
├─ definition-blocks/
└─ templating-tests/
```
- **layout-tests**  
  These testing-charts validate `_common.config.yaml` usage for single, multiple, and dynamic component setups, including verifying that layered values are merged properly.

- **resource-tests**  
  A testing-chart (`all-resource-kinds`) that enables each possible resource kind (`configMap`, `podDisruptionBudget`, etc.) for a single component, to confirm each kind renders properly.

- **definition-blocks**  
  Tests each internal define block (e.g., `deepMerge`, `transformMapToList`) in a minimal way—generally by rendering a small `ConfigMap` with a “true/false” test result.

- **templating-tests**  
  Focused on verifying that `.Self`, `.Root`, and `.componentName` are recognized, as well as `@needs` and `@type` usage. The sub-chart checks whether string expansions or typed values match expectations in the rendered output.
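
To make the `deepMerge` behavior targeted by the definition-block tests concrete, here is a minimal Python sketch of the semantics implied by this summary (an assumption drawn from the pseudo-examples: nested maps merge recursively, and a null value in the overriding map deletes the key):

```python
# Minimal sketch of the deepMerge semantics described above — not the
# chart's actual implementation, which is a Helm define block.
def deep_merge(base: dict, override: dict) -> dict:
    result = dict(base)
    for key, value in override.items():
        if value is None:
            result.pop(key, None)  # null removes the key
        elif isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = deep_merge(result[key], value)  # recurse into maps
        else:
            result[key] = value  # overriding value wins
    return result

map1 = {"a": 1, "b": {"x": 1, "y": 2}}
map2 = {"b": {"x": None, "z": 3}, "c": "new"}
print(deep_merge(map1, map2))  # {'a': 1, 'b': {'y': 2, 'z': 3}, 'c': 'new'}
```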

Each testing-chart’s structure is essentially:

```
<testing-chart>/
├─ Chart.yaml
├─ _common.config.yaml
├─ templates/
│  └─ render.yaml
├─ values.yaml
└─ tests/
   └─ <some_test>.yaml
```

Where:
- **Chart.yaml** declares dependency on `common` via `file://../../` (or equivalent relative path).  
- **_common.config.yaml** drives how components are declared.  
- **templates/render.yaml** calls `{{ include "common.render" . }}`.  
- **values.yaml** provides the scenario’s input data.  
- **tests/*.yaml** files hold `helm unittest` test blocks asserting the correct rendering.

## 3. MINIMAL EXAMPLES ILLUSTRATING ESSENTIALS

Below are three short examples, each illustrating an important aspect:

### Example A: Single Component Layout Test

**`_unittests/layout-tests/single-component/Chart.yaml`**:
```yaml
apiVersion: v2
name: single-component-test
version: 0.1.0

dependencies:
  - name: common
    repository: "file://../../"
    version: ">=0.0.0-0"
```

**`_unittests/layout-tests/single-component/_common.config.yaml`**:
```yaml
dynamicComponents: false

components:
  - myComponent

componentLayering:
  myComponent:
    - myComponentDefaults
```

**`_unittests/layout-tests/single-component/templates/render.yaml`**:
```yaml
{{ include "common.render" . }}
```

**`_unittests/layout-tests/single-component/values.yaml`**:
```yaml
myComponent:
  __enabled: true
  workload:
    __enabled: true
    spec:
      replicas: 2

myComponentDefaults: {}
```

**`_unittests/layout-tests/single-component/tests/single_component_test.yaml`**:
```yaml
suite: "Single Component Tests"

templates:
  - "templates/render.yaml"

tests:
  - it: "Should render a resource with correct replicas"
    asserts:
      - equal:
          path: "spec.replicas"
          value: 2
```

### Example B: Testing an Internal Define Block (deepMerge)

**`_unittests/definition-blocks/templates/test-deepmerge.yaml`**:
```yaml
{{- $ := . -}}

# We'll define map1/map2 in YAML, parse them, then compare the merged output
# to an expected map. The configMap's data.result => "true" if they match.

{{- $map1YAML := toYaml (dict "a" 1 "b" (dict "x" 1 "y" 2)) }}
{{- $map2YAML := toYaml (dict "b" (dict "x" nil "z" 3) "c" "new") }}
{{- $expectedYAML := toYaml (dict "a" 1 "b" (dict "y" 2 "z" 3) "c" "new") }}

{{- $_ := list $ (fromYaml $map1YAML) (fromYaml $map2YAML) | include "common.utils.deepMerge" }}
{{- $merged := $.__common.fcallResult }}
{{- $expected := fromYaml $expectedYAML }}

apiVersion: v1
kind: ConfigMap
metadata:
  name: deepmerge-test
data:
  result: "{{ eq $merged $expected }}"
```

**`_unittests/definition-blocks/tests/deepmerge_test.yaml`**:
```yaml
suite: "Deep Merge Function Tests"

templates:
  - "templates/test-deepmerge.yaml"

tests:
  - it: "Should output 'true' if the result matches expected"
    asserts:
      - equal:
          path: "data.result"
          value: "true"
```

### Example C: Testing Templating with @needs and .Self, .Root, .componentName

**`_unittests/templating-tests/_common.config.yaml`**:
```yaml
dynamicComponents: false

components:
  - testComponent

componentLayering:
  testComponent: []
```

**`_unittests/templating-tests/templates/render.yaml`**:
```yaml
{{ include "common.render" . }}
```

**`_unittests/templating-tests/values.yaml`**:
```yaml
testComponent:
  __enabled: true

  configMap:
    __enabled: true
    data:
      greeting: "Hello from {{ .Root.Release.Name }} for {{ .componentName }}"

  workload:
    __enabled: true
    replicas: |
      @needs(.Self.configMap.data.greeting as myGreeting)
      @type(int)
      {{ if eq $myGreeting "Hello from release-name for testComponent" -}}
      3
      {{ else -}}
      0
      {{ end -}}
```

**`_unittests/templating-tests/tests/templating_test.yaml`**:
```yaml
suite: "Templating Tests"

templates:
  - "templates/render.yaml"

tests:
  - it: "Should expand greeting with correct .Root and .componentName"
    asserts:
      - matchRegex:
          path: "data.greeting"
          pattern: "Hello from release-name for testComponent"

  - it: "Should set workload replicas=3 based on @needs logic"
    asserts:
      - equal:
          path: "spec.replicas"
          value: 3
```

## 4. COMPREHENSIVE LIST OF THINGS TO TEST

Below is a detailed enumeration of test goals, referencing the sub-chart where they should be placed. Implementation specifics are not included, just the target coverage:

1. **Single-Component Layout** (`_unittests/layout-tests/single-component/`)
   - Confirm a single declared component is recognized.
   - Validate that the specified layering merges default values properly.
   - Ensure the final resource (like `workload`) is rendered with correct settings.

2. **Multiple-Components Layout** (`_unittests/layout-tests/multiple-components/`)
   - Confirm two or more declared components each produce separate resources.
   - Validate default layering is shared among them but can be overridden per component.
   - Ensure no cross-component interference in values.

3. **Dynamic-Components Layout** (`_unittests/layout-tests/dynamic-components/`)
   - Confirm that `dynamicComponents: true` loads subkeys from `.Values[componentsKey]`.
   - Validate the templated `componentLayering` merges each discovered component's defaults.
   - Test that disabling certain components in `.Values` leads to no rendered resources for them.

4. **Resource Kinds** (`_unittests/resource-tests/all-resource-kinds/`)
   - For a single component, enable each resource kind (`configMap`, `podDisruptionBudget`, etc.) and check it is generated.
   - Confirm each resource's fields appear as specified in `.Values`.
   - Validate that disabling a resource kind produces no corresponding resource.

5. **Definition Blocks** (`_unittests/definition-blocks/`)
   - `deepMerge`: Verify merging behavior with multiple maps, removing null keys.
   - `transformMapToList`: Confirm map -> list transformation with `indexKey` or defaults.
   - (any other blocks): Test that the function's outcome in `.__common.fcallResult` matches expected logic (e.g., removing hidden fields, pruning, etc.).

6. **Templating** (`_unittests/templating-tests/`)
   - `.Self`, `.Root`, `.componentName` usage: Check that string expansions referencing these produce correct outputs in the resource.
   - `@needs`: Confirm dependent fields are recognized, inlined properly, and the chart does not break if a needed path is absent.
   - `@type`: Validate that typed blocks (`@type(int)`, `@type(yamlArray)`, etc.) get coerced to the correct final form.
   - Templating with complex conditions: Confirm that multiline or conditional expansions in `.Values` are successfully rendered.

7. **Value Inheritance Edge Cases** (could be part of layout-tests or integrated across sub-charts)
   - Merging from empty layers.
   - Overriding deeply nested keys.
   - Using multiple default layers for a single component.
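
Since `transformMapToList` has no worked example in this summary, here is a hypothetical Python sketch of the map-to-list transformation it targets; the `indexKey` handling (each map key becoming the `indexKey` field of its list entry) is an assumption, not taken from the chart source:

```python
# Hypothetical sketch of transformMapToList semantics — the actual
# behavior lives in a Helm define block and may differ in detail.
def transform_map_to_list(mapping: dict, index_key: str = "name") -> list:
    """Turn {key: {fields...}} into [{index_key: key, fields...}, ...]."""
    return [{index_key: key, **value} for key, value in mapping.items()]

services = {"api": {"port": 8080}, "worker": {"port": 9090}}
print(transform_map_to_list(services))
# [{'name': 'api', 'port': 8080}, {'name': 'worker', 'port': 9090}]
```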

By covering each of these test goals in the corresponding testing-charts under _unittests/, you ensure that the common library chart’s functionality is robustly validated.

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add "(aside)" to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

@coderabbitai

coderabbitai bot commented Feb 19, 2025

Important

Review skipped

Bot user detected.

To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.



@cjorge-graphops cjorge-graphops merged commit 67cc307 into main Feb 19, 2025
2 checks passed
@cjorge-graphops cjorge-graphops deleted the devin/1739930301-add-common-chart-tests branch February 19, 2025 10:52