
Conversation

@svenaric svenaric (Collaborator) commented Nov 7, 2025

Overview: This PR introduces comprehensive unit tests for the onchain_agent.py module, ensuring the reliability and correctness of its data fetching functions.

Changes

  • Developed unit tests for fetch_onchain_metrics and fetch_tokenomics within onchain_agent.py.
  • Implemented mock responses for external Etherscan and Dune APIs to isolate testing.
  • Validated that returned JSON structures conform to expected schemas.
  • Added robust handling for edge cases, including missing fields, network errors, and invalid token IDs.
  • Utilized pytest and pytest-asyncio for asynchronous testing (a minimal sketch of this pattern follows the list).
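
For context, a minimal sketch of that pattern under the module path shown elsewhere in this PR; the URL, payload values, and test name here are illustrative rather than taken from the actual test file:

# Minimal sketch of the async testing pattern described above; the URL,
# payload, and test name are illustrative, not copied from test_onchain_agent.py.
import pytest
from unittest.mock import AsyncMock, MagicMock, patch

from backend.app.services.agents.onchain_agent import fetch_onchain_metrics


@pytest.mark.asyncio
@patch("httpx.AsyncClient")
async def test_fetch_onchain_metrics_returns_payload(mock_async_client):
    # The mocked client is what the async context manager yields.
    mock_client_instance = AsyncMock()
    mock_async_client.return_value.__aenter__.return_value = mock_client_instance

    mock_response = MagicMock()
    mock_response.status_code = 200
    mock_response.json.return_value = {"total_transactions": 1000}
    mock_response.raise_for_status.return_value = None
    mock_client_instance.get.return_value = mock_response

    result = await fetch_onchain_metrics(url="http://test.com/onchain")
    assert result == {"total_transactions": 1000}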

Summary by CodeRabbit

  • Tests
    • Expanded test coverage for on-chain data operations, including successful retrieval and schema validation.
    • Added tests for partial/missing data handling and invalid identifier scenarios with correct HTTP status propagation.
    • Comprehensive retry-path tests for timeouts, network and HTTP errors, including max-retry cases.
    • New tests for unexpected exception paths and mocked HTTP client responses to simulate varied conditions.


coderabbitai bot commented Nov 7, 2025

Walkthrough

Adds comprehensive async tests for the onchain agent covering successful responses and schema checks, partial/missing-field handling, HTTP errors with status propagation, retry flows for timeouts/network errors, and various exception paths using a mocked httpx.AsyncClient.

Changes

Cohort / File(s): On-chain Agent Tests (backend/app/services/agents/tests/test_onchain_agent.py)
Summary: Expands and reorganizes async tests for onchain metrics and tokenomics: success and schema validation, partial/missing-field scenarios, invalid token ID handling (raising OnchainAgentHTTPError with status codes), retry-path tests for timeouts/network/HTTP errors including max-retries cases, and exception propagation tests. Introduces mocked httpx.AsyncClient usage and a create_mock_response helper to simulate responses and errors (a hedged reconstruction of this helper follows).

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Inspect mock httpx.AsyncClient setup and create_mock_response helper for accurate behavior simulation.
  • Verify retry-related tests correctly configure and assert retry counts, wait/stop behavior, and max-retry outcomes (a rough sketch of one such case follows this list).
  • Ensure assertions validate data types, presence/absence of fields, and that HTTP status codes propagate via OnchainAgentHTTPError.
  • Check coverage of exception classes: OnchainAgentException, OnchainAgentTimeout, OnchainAgentNetworkError.
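
For the retry-path items above, a rough sketch of how one timeout case might look, using the tenacity .retry patching pattern quoted later in this review; the attempt count and the assumption that exhausted retries surface as OnchainAgentTimeout are illustrative, not verified against onchain_agent.py:

# Rough sketch of a retry-path timeout test using the tenacity .retry patching
# pattern discussed in this review; the attempt count and the mapping of
# exhausted retries to OnchainAgentTimeout are assumptions.
import httpx
import pytest
from unittest.mock import AsyncMock, patch
from tenacity import stop_after_attempt, wait_fixed

from backend.app.services.agents.onchain_agent import (
    OnchainAgentTimeout,
    fetch_onchain_metrics,
)


@pytest.mark.asyncio
@patch("httpx.AsyncClient")
async def test_fetch_onchain_metrics_timeout_exhausts_retries(mock_async_client):
    mock_client_instance = AsyncMock()
    mock_async_client.return_value.__aenter__.return_value = mock_client_instance
    # Every attempt times out, so the call should exhaust its retries.
    mock_client_instance.get.side_effect = httpx.TimeoutException("timed out")

    with patch.object(fetch_onchain_metrics.retry, "wait", wait_fixed(0.01)), \
         patch.object(fetch_onchain_metrics.retry, "stop", stop_after_attempt(3)):
        with pytest.raises(OnchainAgentTimeout):
            await fetch_onchain_metrics(url="http://test.com/onchain")

    assert mock_client_instance.get.call_count == 3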

Suggested reviewers

  • felixjordandev

Poem

🐇 I hopped through mocks and responses bright,
Retries in pockets, errors in sight,
I fetch the chain with careful care,
Assert each field and status fair,
A tiny rabbit tests the night.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title check: ✅ Passed. The title 'Feat: Add Unit Tests for onchain_agent.py' clearly summarizes the main change, adding comprehensive unit tests for the onchain_agent.py module, and aligns with the raw summary and PR objectives.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feat/onchain-agent-unit-tests

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
backend/app/services/agents/tests/test_onchain_agent.py (1)

148-208: Inconsistent retry expectations in error handling tests.

These tests expect call_count == 3 (indicating 3 retry attempts) but don't patch the retry mechanism like tests in lines 28-147. This creates an inconsistency:

  • If retry exists by default: these tests will retry 3 times (potentially slow and unpredictable)
  • If retry doesn't exist: these tests will fail because call_count will be 1, not 3

For consistency and speed, either patch the retry behavior here too (with wait_fixed(0.01) and stop_after_attempt(3)), or adjust the expected call_count based on whether retry is actually configured by default.

Apply this pattern to make retry behavior explicit in these tests:

 @pytest.mark.asyncio
 @patch('httpx.AsyncClient')
 async def test_fetch_onchain_metrics_http_error_raises_onchainagenthttperror(mock_async_client):
     mock_client_instance = AsyncMock()
     mock_async_client.return_value.__aenter__.return_value = mock_client_instance
     mock_client_instance.get.side_effect = [
         create_mock_response(404),
         create_mock_response(404),
         create_mock_response(404) # All attempts fail
     ]
 
+    with patch.object(fetch_onchain_metrics.retry, 'wait', new=wait_fixed(0.01)), \
+         patch.object(fetch_onchain_metrics.retry, 'stop', new=stop_after_attempt(3)):
+        
-    with pytest.raises(OnchainAgentHTTPError) as excinfo:
-        await fetch_onchain_metrics(url="http://test.com/onchain")
+        with pytest.raises(OnchainAgentHTTPError) as excinfo:
+            await fetch_onchain_metrics(url="http://test.com/onchain")
     assert excinfo.value.status_code == 404
     assert mock_client_instance.get.call_count == 3 # Retries should still happen
🧹 Nitpick comments (2)
backend/app/services/agents/tests/test_onchain_agent.py (2)

210-264: Consider using a schema validation library for more robust validation.

The manual field and type checks work correctly but could be more maintainable with a schema validation library like pydantic or jsonschema. This would make the validation more declarative and easier to extend.

Example with pydantic (if adopted):

from pydantic import BaseModel

class OnchainMetricsSchema(BaseModel):
    total_transactions: int
    active_users: int
    average_transaction_value: float
    timestamp: str

@pytest.mark.asyncio
@patch('httpx.AsyncClient')
async def test_fetch_onchain_metrics_success_and_schema(mock_async_client):
    # ... setup code ...
    
    result = await fetch_onchain_metrics(url="http://test.com/onchain")
    # Validate schema with pydantic (will raise ValidationError if invalid)
    validated = OnchainMetricsSchema(**result)
    assert result == expected_metrics

1-337: Optional: Consider adding tests for rate limiting and header configuration.

The current test coverage is good, but you could enhance it by testing additional behavior from the implementation:

  1. Rate limiting verification: Test that asyncio.sleep(settings.REQUEST_DELAY_SECONDS) is called after successful requests
  2. User-Agent header verification: Test that the correct User-Agent header is set in requests

Example test for rate limiting:

@pytest.mark.asyncio
@patch('httpx.AsyncClient')
@patch('asyncio.sleep', new_callable=AsyncMock)
async def test_fetch_onchain_metrics_rate_limiting(mock_sleep, mock_async_client):
    mock_client_instance = AsyncMock()
    mock_async_client.return_value.__aenter__.return_value = mock_client_instance
    mock_client_instance.get.return_value = create_mock_response(200, {"data": "test"})
    
    await fetch_onchain_metrics(url="http://test.com/onchain")
    
    # Verify rate limiting sleep was called
    mock_sleep.assert_called_once()

Example test for User-Agent:

@pytest.mark.asyncio
@patch('httpx.AsyncClient')
async def test_fetch_onchain_metrics_user_agent_header(mock_async_client):
    from backend.app.core.config import settings

    # Configure the mocked client so the awaited request completes.
    mock_client_instance = AsyncMock()
    mock_async_client.return_value.__aenter__.return_value = mock_client_instance
    mock_client_instance.get.return_value = create_mock_response(200, {"data": "test"})

    await fetch_onchain_metrics(url="http://test.com/onchain")

    # Verify AsyncClient was initialized with the correct User-Agent
    call_kwargs = mock_async_client.call_args.kwargs
    assert call_kwargs['headers']['User-Agent'] == settings.USER_AGENT
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 23a39b3 and 8320ea0.

⛔ Files ignored due to path filters (8)
  • backend/app/core/__pycache__/config.cpython-313.pyc is excluded by !**/*.pyc
  • backend/app/core/__pycache__/orchestrator.cpython-313.pyc is excluded by !**/*.pyc
  • backend/app/services/__pycache__/report_processor.cpython-313.pyc is excluded by !**/*.pyc
  • backend/app/services/agents/tests/__pycache__/test_code_audit_agent.cpython-313-pytest-8.4.2.pyc is excluded by !**/*.pyc
  • backend/app/services/agents/tests/__pycache__/test_onchain_agent.cpython-313-pytest-8.4.2.pyc is excluded by !**/*.pyc
  • backend/logs/app.log is excluded by !**/*.log
  • backend/tests/__pycache__/test_orchestrator.cpython-313-pytest-8.4.2.pyc is excluded by !**/*.pyc
  • backend/tests/__pycache__/test_orchestrator_config.cpython-313-pytest-8.4.2.pyc is excluded by !**/*.pyc
📒 Files selected for processing (1)
  • backend/app/services/agents/tests/test_onchain_agent.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
backend/app/services/agents/tests/test_onchain_agent.py (1)
backend/app/services/agents/onchain_agent.py (3)
  • fetch_onchain_metrics (38-76)
  • fetch_tokenomics (84-122)
  • OnchainAgentHTTPError (26-30)
🔇 Additional comments (5)
backend/app/services/agents/tests/test_onchain_agent.py (5)

1-12: LGTM!

The imports are appropriate for testing async functions with mocked HTTP clients and retry mechanisms.
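
For orientation, an inferred version of that import block, assembled from the names referenced throughout this review rather than copied from the file:

# Inferred sketch of the imports referenced in this review (lines 1-12 of the
# test file); the actual file may order or group them differently.
import httpx
import pytest
from unittest.mock import AsyncMock, MagicMock, patch
from tenacity import stop_after_attempt, wait_fixed

from backend.app.services.agents.onchain_agent import (
    OnchainAgentException,
    OnchainAgentHTTPError,
    OnchainAgentNetworkError,
    OnchainAgentTimeout,
    fetch_onchain_metrics,
    fetch_tokenomics,
)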


14-26: LGTM!

The mock response helper correctly simulates httpx.Response behavior for both successful and error responses.


265-308: LGTM!

These tests correctly verify that the fetch functions return API responses as-is, allowing schema validation to be handled downstream. The design is appropriate for a data fetching layer.


309-337: LGTM!

These tests correctly verify that HTTP error responses (400, 404) for invalid token IDs raise OnchainAgentHTTPError with the appropriate status code.


28-147: The review comment is incorrect.

The tenacity library exposes a .retry attribute on decorated functions that is writable and intended for testing. The original concern assumed this attribute doesn't exist, but verification confirms:

  1. Both fetch_onchain_metrics and fetch_tokenomics are properly decorated with @retry
  2. The tests correctly patch the .retry attribute using the documented pattern: patch.object(func.retry, 'wait', wait_fixed(...))
  3. All required imports (wait_fixed, stop_after_attempt) and exception types are correctly imported and defined
  4. Exception assertions in the tests match the actual exception classes defined in onchain_agent.py

The test implementation follows the standard, documented approach for mocking tenacity retry behavior and will function as intended.
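
To illustrate the mechanism being confirmed, a sketch of how a tenacity-decorated coroutine exposes the patchable .retry attribute; the retry policy and function body below are assumptions, not the actual onchain_agent.py source:

# Sketch of a tenacity-decorated coroutine exposing a patchable .retry attribute;
# the policy and body are assumptions, not the real onchain_agent.py implementation.
import httpx
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential


@retry(
    retry=retry_if_exception_type(httpx.TimeoutException),
    stop=stop_after_attempt(3),
    wait=wait_exponential(multiplier=1, max=10),
)
async def fetch_onchain_metrics(url, params=None):
    async with httpx.AsyncClient() as client:
        response = await client.get(url, params=params)
        response.raise_for_status()
        return response.json()

# Because @retry attaches its Retrying controller to the wrapped function as
# fetch_onchain_metrics.retry, tests can override its members, e.g.:
#     patch.object(fetch_onchain_metrics.retry, "wait", wait_fixed(0.01))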

Likely an incorrect or invalid review comment.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (4)
backend/app/services/agents/tests/test_onchain_agent.py (4)

220-243: Mock asyncio.sleep to speed up tests.

The test doesn't mock asyncio.sleep, which the actual implementation calls for rate limiting. This could slow down test execution if settings.REQUEST_DELAY_SECONDS is significant.

Apply this diff to add the necessary mocks:

 @pytest.mark.asyncio
 @patch('httpx.AsyncClient')
-async def test_fetch_onchain_metrics_success_and_schema(mock_async_client):
+@patch('asyncio.sleep', new_callable=AsyncMock)
+async def test_fetch_onchain_metrics_success_and_schema(mock_sleep, mock_async_client):
     mock_client_instance = AsyncMock()
     mock_async_client.return_value.__aenter__.return_value = mock_client_instance
 
     expected_metrics = {
         "total_transactions": 1000,
         "active_users": 500,
         "average_transaction_value": 150.75,
         "timestamp": "2023-10-27T10:00:00Z"
     }
     mock_client_instance.get.return_value = create_mock_response(200, expected_metrics)
 
     result = await fetch_onchain_metrics(url="http://test.com/onchain")
     assert result == expected_metrics
+    mock_sleep.assert_called_once()

Optionally, consider patching retry behavior for consistency with other tests:

with patch.object(fetch_onchain_metrics.retry, 'wait', new=wait_fixed(0.01)), \
     patch.object(fetch_onchain_metrics.retry, 'stop', new=stop_after_attempt(3)):
    result = await fetch_onchain_metrics(url="http://test.com/onchain")
    # ... assertions

245-271: Mock asyncio.sleep to speed up tests.

The test doesn't mock asyncio.sleep, which could slow down test execution if settings.REQUEST_DELAY_SECONDS is significant.

Apply this diff to add the necessary mocks:

 @pytest.mark.asyncio
 @patch('httpx.AsyncClient')
-async def test_fetch_tokenomics_success_and_schema(mock_async_client):
+@patch('asyncio.sleep', new_callable=AsyncMock)
+async def test_fetch_tokenomics_success_and_schema(mock_sleep, mock_async_client):
     mock_client_instance = AsyncMock()
     mock_async_client.return_value.__aenter__.return_value = mock_client_instance
 
     expected_tokenomics = {
         "total_supply": "1000000000",
         "circulating_supply": "800000000",
         "market_cap_usd": "1500000000.50",
         "token_price_usd": "1.50",
         "last_updated": "2023-10-27T10:00:00Z"
     }
     mock_client_instance.get.return_value = create_mock_response(200, expected_tokenomics)
 
     result = await fetch_tokenomics(url="http://test.com/tokenomics")
     assert result == expected_tokenomics
+    mock_sleep.assert_called_once()

Optionally, consider patching retry behavior for consistency with other tests.


275-315: Mock asyncio.sleep to speed up tests.

Both missing fields tests should mock asyncio.sleep to avoid unnecessary delays during test execution.

Add the mock as a decorator and assertion for both tests:

@pytest.mark.asyncio
@patch('httpx.AsyncClient')
@patch('asyncio.sleep', new_callable=AsyncMock)
async def test_fetch_onchain_metrics_missing_fields(mock_sleep, mock_async_client):
    # ... test body
    result = await fetch_onchain_metrics(url="http://test.com/onchain")
    # ... assertions
    mock_sleep.assert_called_once()

Apply the same pattern to test_fetch_tokenomics_missing_fields.


319-345: Consider patching retry behavior for consistency.

While 4xx errors typically don't warrant retries, patching retry behavior ensures test consistency with other error tests and guarantees fast, deterministic execution regardless of retry configuration.

Add retry patching to both tests:

@pytest.mark.asyncio
@patch('httpx.AsyncClient')
async def test_fetch_onchain_metrics_invalid_token_id(mock_async_client):
    mock_client_instance = AsyncMock()
    mock_async_client.return_value.__aenter__.return_value = mock_client_instance

    error_response_data = {"error": "Invalid token ID provided"}
    mock_client_instance.get.return_value = create_mock_response(400, error_response_data)

    with patch.object(fetch_onchain_metrics.retry, 'wait', new=wait_fixed(0.01)), \
         patch.object(fetch_onchain_metrics.retry, 'stop', new=stop_after_attempt(3)):
        with pytest.raises(OnchainAgentHTTPError) as excinfo:
            await fetch_onchain_metrics(url="http://test.com/onchain", params={"token_id": "invalid"})
        assert excinfo.value.status_code == 400

Apply the same pattern to test_fetch_tokenomics_invalid_token_id.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8320ea0 and acbded8.

⛔ Files ignored due to path filters (1)
  • backend/app/services/agents/tests/__pycache__/test_onchain_agent.cpython-313-pytest-8.4.2.pyc is excluded by !**/*.pyc
📒 Files selected for processing (1)
  • backend/app/services/agents/tests/test_onchain_agent.py (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
backend/app/services/agents/tests/test_onchain_agent.py (1)
backend/app/services/agents/onchain_agent.py (4)
  • fetch_onchain_metrics (38-76)
  • OnchainAgentHTTPError (26-30)
  • OnchainAgentException (14-16)
  • fetch_tokenomics (84-122)
🔇 Additional comments (3)
backend/app/services/agents/tests/test_onchain_agent.py (3)

14-26: LGTM! Well-designed test helper.

The helper correctly simulates both successful and error HTTP responses, including proper setup of raise_for_status() behavior for different status codes.


28-146: LGTM! Comprehensive retry logic testing.

The retry tests thoroughly cover timeout, network error, HTTP error, and max retries scenarios for both functions. Patching retry behavior ensures fast, deterministic test execution.
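
As a complement to the exhaustion cases, the recovery side of a network-error retry could look roughly like this; the error type, attempt count, and retry patching follow the patterns quoted in this review and are assumptions rather than the file's exact tests:

# Sketch of a retry path that recovers after transient network errors; the
# error type, attempt count, and retry patching are assumptions based on the
# patterns quoted in this review, not the file's exact test code.
import httpx
import pytest
from unittest.mock import AsyncMock, MagicMock, patch
from tenacity import stop_after_attempt, wait_fixed

from backend.app.services.agents.onchain_agent import fetch_tokenomics


@pytest.mark.asyncio
@patch("httpx.AsyncClient")
async def test_fetch_tokenomics_recovers_after_network_errors(mock_async_client):
    mock_client_instance = AsyncMock()
    mock_async_client.return_value.__aenter__.return_value = mock_client_instance

    ok_response = MagicMock(spec=httpx.Response)
    ok_response.status_code = 200
    ok_response.json.return_value = {"total_supply": "1000000000"}
    ok_response.raise_for_status.return_value = None

    # Two transient failures, then a successful response on the third attempt.
    mock_client_instance.get.side_effect = [
        httpx.ConnectError("connection reset"),
        httpx.ConnectError("connection reset"),
        ok_response,
    ]

    with patch.object(fetch_tokenomics.retry, "wait", wait_fixed(0.01)), \
         patch.object(fetch_tokenomics.retry, "stop", stop_after_attempt(3)):
        result = await fetch_tokenomics(url="http://test.com/tokenomics")

    assert result == {"total_supply": "1000000000"}
    assert mock_client_instance.get.call_count == 3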


159-216: Good addition of retry control for consistency.

The modifications add retry patching to exception tests, ensuring they run fast and verify that retries occur even when exceptions are ultimately raised. This improves test consistency with other retry-based tests.

@felixjordandev felixjordandev merged commit af5b7b0 into main Nov 7, 2025
1 check passed
@felixjordandev felixjordandev deleted the feat/onchain-agent-unit-tests branch November 7, 2025 16:18