
[INS 330] Create tests for contributors leaderboard #141

Merged
borfast merged 1 commit into main from improvement/ins-330-create-tests-for-contributors-leaderboard
Mar 27, 2025

Conversation


@borfast borfast commented Mar 26, 2025

This fixes the tests for the Contributors Leaderboard data source.

It only covers the most basic test case. The goal is simply to have a basic safety harness in place so we can refactor things without fear of breaking them.

Summary by CodeRabbit

  • Tests
    • Added automated tests to verify that leaderboard data is retrieved and formatted accurately.
  • Chores
    • Integrated simulated response data to support reliable testing of leaderboard functionality.

@borfast borfast added the feature New feature or request label Mar 26, 2025
@borfast borfast self-assigned this Mar 26, 2025

coderabbitai bot commented Mar 26, 2025

Warning

Rate limit exceeded

@borfast has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 2 minutes and 17 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 89e46f37433d53572a8c39081998465ccd73e618 and 3e789b4.

📒 Files selected for processing (2)
  • frontend/server/data/tinybird/contributors-leaderboard-data-source.test.ts (1 hunks)
  • frontend/server/mocks/tinybird-contributors-leaderboard-response.mock.ts (1 hunks)

Walkthrough

This PR introduces new testing for the Contributors Leaderboard Data Source using the Vitest framework. A dedicated test file verifies that the fetchContributorsLeaderboard function calls the mocked fetchFromTinybird with the proper URL and parameters, and returns data in the expected structure. Additionally, a new mock file supplies a predefined Tinybird leaderboard response structure to support testing.

Changes

  • frontend/server/data/.../contributors-leaderboard-data-source.test.ts: Added a Vitest test suite for fetchContributorsLeaderboard, setting up a mock for fetchFromTinybird and verifying correct parameter usage and response structure.
  • frontend/server/mocks/.../tinybird-contributors-leaderboard-response.mock.ts: Introduced a mock data file exporting mockTimeseries, representing a simulated Tinybird contributors leaderboard response with metadata, data entries, and statistics.

Sequence Diagram(s)

sequenceDiagram
    participant Test as Test Suite
    participant FCL as fetchContributorsLeaderboard
    participant MFT as mockFetchFromTinybird

    Test->>FCL: Call fetchContributorsLeaderboard(filter)
    FCL->>MFT: Call fetchFromTinybird(url, filter)
    MFT-->>FCL: Return mock response
    FCL-->>Test: Return formatted data response


Suggested reviewers

  • emlimlf
  • gaspergrom

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai plan to trigger planning for file edits and PR creation.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🔭 Outside diff range comments (1)
.github/workflows/tests.yml (1)

1-24: 💡 Verification agent

🧩 Analysis chain

Good addition of GitHub Actions workflow for automated testing

The workflow is well-structured and follows best practices by specifying versions for actions and setting up the necessary environment.

However, consider using GitHub secrets for the Tinybird token instead of hardcoding it:

-          TINYBIRD_TOKEN: arandomtoken
+          TINYBIRD_TOKEN: ${{ secrets.TINYBIRD_TEST_TOKEN }}

This is more secure even with test credentials.


🏁 Script executed:

#!/bin/bash
# Check if any other workflows are using secrets for similar tokens
grep -r "TINYBIRD_TOKEN" --include="*.yml" .github/workflows/ | grep -v "arandomtoken" || echo "No other workflows using TINYBIRD_TOKEN with secrets found"

Length of output: 211


Action Required: Replace Hardcoded Tinybird Token with GitHub Secret

The GitHub Actions workflow is well-structured; however, the Tinybird token is hardcoded, which isn’t secure—even with test credentials. Our verification confirms that no other workflows use secrets for this token. Please update the token assignment as follows:

  • File: .github/workflows/tests.yml
  • Lines: 16-18

Diff snippet:

-          TINYBIRD_TOKEN: arandomtoken
+          TINYBIRD_TOKEN: ${{ secrets.TINYBIRD_TEST_TOKEN }}

This change ensures that the credentials remain secure by leveraging GitHub secrets.
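For context, the suggested change would make the workflow's env block look roughly like this. The job and step names and the surrounding keys here are illustrative, not copied from the actual tests.yml:

```yaml
# Illustrative fragment only; the real tests.yml may differ.
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Run tests
        env:
          # Resolved from the repository's encrypted secrets at runtime,
          # so the token value never appears in the workflow file or logs.
          TINYBIRD_TOKEN: ${{ secrets.TINYBIRD_TEST_TOKEN }}
        run: npm test
```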

🧹 Nitpick comments (3)
frontend/server/mocks/tinybird-contributors-leaderboard-response.mock.ts (1)

1-2: Remove sensitive URL from comment

The comment contains what appears to be a real Tinybird API endpoint URL. Consider removing or sanitizing this information to avoid potential exposure of internal endpoints.

-/*
-https://api.us-west-2.aws.tinybird.co/v0/pipes/contributors_leaderboard.json?startDate=2024-03-20 00:00:00&endDate=2025-03-20 00:00:00&project=the-linux-kernel-organization */
+/*
+ Mock data for contributors leaderboard response */
frontend/server/data/tinybird/active-contributors-data-source.test.ts (1)

7-77: Consider adding more context to the commented-out code.

The commented-out test code contains valuable test logic that would be worth preserving for the future refactoring effort. Consider adding a more detailed comment at the beginning of this section indicating when and how this code will be restored, such as linking to a ticket or indicating the planned refactoring approach.

This will help future developers understand not just that this code is temporarily disabled, but also the plan for its restoration.

frontend/server/data/tinybird/contributors-leaderboard-data-source.test.ts (1)

63-63: Consider prioritizing security-related edge cases.

The TODO for adding edge case tests is important, especially for SQL injection tests which are security-critical. Consider prioritizing security-related tests and creating a specific ticket to track this work.

Additionally, you might want to add tests for:

  1. Rate limiting or large response handling
  2. Network failures or timeout scenarios
  3. Empty or partial response data

This would provide a more comprehensive test coverage for the data source.
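One of the suggested edge cases, empty response data, can be sketched like this. As before, these are hand-rolled stand-ins rather than the repository's real Vitest setup, and the field names are assumptions:

```typescript
// Sketch of an empty-response edge case; names are illustrative.
type Row = { displayName: string; contributionCount: number };

function mapLeaderboard(raw: { data: Row[] }) {
  return raw.data.map((row) => ({
    name: row.displayName,
    contributions: row.contributionCount,
  }));
}

// An empty Tinybird payload should yield an empty leaderboard,
// not an exception or undefined.
export const emptyResult = mapLeaderboard({ data: [] });
```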

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e46dad7 and 34de059809c78e0fbc0189caa6d3cd5a1bd772db.

📒 Files selected for processing (6)
  • .github/workflows/tests.yml (1 hunks)
  • frontend/server/data/tinybird/active-contributors-data-source.test.ts (1 hunks)
  • frontend/server/data/tinybird/contributors-leaderboard-data-source.test.ts (1 hunks)
  • frontend/server/data/tinybird/tinybird.test.ts (1 hunks)
  • frontend/server/data/tinybird/tinybird.ts (1 hunks)
  • frontend/server/mocks/tinybird-contributors-leaderboard-response.mock.ts (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
frontend/server/data/tinybird/active-contributors-data-source.test.ts (1)
Learnt from: borfast
PR: LF-Engineering/insights#76
File: frontend/server/data/tinybird/active-contributors-data-source.test.ts:10-10
Timestamp: 2025-03-26T14:54:01.624Z
Learning: The tests in the frontend/server/data/tinybird directory became broken due to changes introduced in another branch and will need to be revisited later as part of the data sources refactoring.
frontend/server/data/tinybird/contributors-leaderboard-data-source.test.ts (1)
Learnt from: borfast
PR: LF-Engineering/insights#76
File: frontend/server/data/tinybird/active-contributors-data-source.test.ts:10-10
Timestamp: 2025-03-26T14:54:01.624Z
Learning: The tests in the frontend/server/data/tinybird directory became broken due to changes introduced in another branch and will need to be revisited later as part of the data sources refactoring.
🧬 Code Definitions (1)
frontend/server/data/tinybird/contributors-leaderboard-data-source.test.ts (1)
frontend/server/mocks/tinybird-contributors-leaderboard-response.mock.ts (1)
  • mockTimeseries (3-105)
🔇 Additional comments (7)
frontend/server/data/tinybird/tinybird.test.ts (1)

65-66: Moving mockNuxtImport call into setMockRuntimeConfig is a good improvement

This change ensures that the mock for useRuntimeConfig is consistently applied each time the mock configuration is updated, rather than just once at the beginning of the test file. This better supports dynamic configuration changes during test execution.

frontend/server/data/tinybird/tinybird.ts (1)

30-30: Simplified configuration retrieval improves code clarity

Removing the environment variable fallback logic and directly destructuring values from the config object simplifies the code and ensures consistent configuration access throughout the application.

frontend/server/mocks/tinybird-contributors-leaderboard-response.mock.ts (1)

98-105: Ensure mock data structure matches real API responses

The mock data structure includes fields for statistics and pagination that appear to match the interface defined in tinybird.ts. This is good practice for ensuring tests are realistic.

frontend/server/data/tinybird/active-contributors-data-source.test.ts (1)

3-5: Appropriate temporary solution, consider tracking as technical debt.

The placeholder test allows the pipeline to run while preserving the original test logic in comments for future reference. This aligns with your approach of creating foundational tests as mentioned in the PR description.

Consider adding a JIRA ticket or GitHub issue to track the restoration of these tests in the "subsequent PR" you mentioned, to ensure this technical debt is addressed.

frontend/server/data/tinybird/contributors-leaderboard-data-source.test.ts (3)

8-21: Well-structured test setup with good mocking approach.

The test setup with the mock function and beforeEach hook follows good testing practices by isolating the system under test from external dependencies. The detailed comment about vi.doMock behavior is particularly valuable for future developers who might not be familiar with its non-hoisting behavior.

This implementation provides a solid foundation for testing the Contributors Leaderboard data source.


23-61: Good implementation of happy path test case.

This test thoroughly validates the basic functionality of fetching contributors leaderboard data, including verification of both the request parameters and the response transformation. The use of mock data that resembles real-world responses adds to the test's reliability.

The mapping of the external API response to the internal data structure is well-tested, ensuring that the transformation is correct.


51-57: Verify the hardcoded contributionValue in the expected result.

In the transformation of mock data to the expected result, contributionValue is hardcoded to 0:

contributionValue: 0,

Is this intentional? If this value should be derived from the mock data or calculated based on other fields, consider updating the test to reflect that logic instead of using a hardcoded value.

Comment on lines 3 to 105
The reason will be displayed to describe this comment to others. Learn more.

💡 Verification agent

🧩 Analysis chain

Use anonymized mock data for testing

The mock data includes what appears to be real GitHub usernames, avatar URLs, and LinkedIn profile URLs with expiration dates (lines 71, 92). Consider:

  1. Using fictional or anonymized data for testing
  2. Replacing LinkedIn URLs that have expiration dates with static placeholder URLs
  3. Using generic avatar URLs that won't expire

This ensures tests remain reliable over time and prevents any potential privacy concerns.


🌐 Web query:

Do GitHub avatar URLs like "https://avatars.githubusercontent.com/u/6732289?v=4" expire or remain stable over time?

💡 Result:

GitHub avatar URLs that include the user’s numeric ID (e.g., https://avatars.githubusercontent.com/u/6732289?v=4) remain stable over time and do not expire, even if the username is changed. This stability is due to the use of GitHub’s internal user ID, which is permanent and unaffected by username updates[1][5].

Key Details:

  • User ID-Based URLs: GitHub assigns a unique numeric ID to each user account, which persists regardless of username changes. For example, the URL structure https://avatars.githubusercontent.com/u/{user-id} uses this ID, ensuring the avatar link remains valid[1][5].
  • Manual Updates Required: While the URL itself is stable, updating the avatar image requires manually uploading a new picture. The URL will then reflect the latest version when accessed[10].
  • Caching Considerations: GitHub avatar URLs may experience temporary caching issues (e.g., outdated images displayed), but these are resolved automatically as caches refresh[7].

For long-term reliability, use the ID-based URL format. To find your GitHub user ID:

  1. Visit your profile settings on GitHub.
  2. Check the "GitHub ID" field under the "Profile" section[1].

This ensures links to your avatar remain consistent even if your username changes.



Action Required: Anonymize Mock Data in frontend/server/mocks/tinybird-contributors-leaderboard-response.mock.ts

  • Replace real personal data (e.g., actual GitHub usernames and display names) with fictional or anonymized values to avoid using live user information in tests.
  • Although GitHub avatar URLs (e.g., "https://avatars.githubusercontent.com/u/6732289?v=4") are stable over time, consider using generic placeholder URLs to prevent any direct association with real users.
  • Replace URLs that include expiration parameters (such as the LinkedIn/profile image URLs seen on lines 71 and 92) with static placeholders to ensure long-term reliability of tests.
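An anonymized version of a mock entry might look like the following. The field names are guesses based on the review comments above; the real mock file's shape may differ:

```typescript
// Fictional, stable mock data: no real usernames, no URLs with expiry parameters.
export const mockContributor = {
  displayName: 'Test Contributor',
  username: 'test-contributor',
  // Static placeholder avatar; no query-string expiration like signed CDN URLs have.
  avatarUrl: 'https://example.com/avatars/test-contributor.png',
  contributionCount: 123,
};
```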

@borfast borfast requested review from emlimlf and gaspergrom March 26, 2025 15:21
@borfast borfast changed the base branch from main to improvement/add-github-action-for-tests March 26, 2025 15:23
@borfast borfast force-pushed the improvement/add-github-action-for-tests branch from 8325e68 to 6c99d03 Compare March 27, 2025 11:29
Base automatically changed from improvement/add-github-action-for-tests to main March 27, 2025 11:30
@borfast borfast force-pushed the improvement/ins-330-create-tests-for-contributors-leaderboard branch from 34de059 to 89e46f3 Compare March 27, 2025 16:15
Signed-off-by: Raúl Santos <4837+borfast@users.noreply.github.com>
@borfast borfast force-pushed the improvement/ins-330-create-tests-for-contributors-leaderboard branch from 89e46f3 to 3e789b4 Compare March 27, 2025 16:17
@borfast borfast merged commit 7251fd1 into main Mar 27, 2025
4 checks passed
@borfast borfast deleted the improvement/ins-330-create-tests-for-contributors-leaderboard branch March 27, 2025 16:26
@borfast borfast assigned borfast and unassigned borfast May 2, 2025
