Feat: Add Unit Tests for onchain_agent.py #32
Conversation
Walkthrough: Adds comprehensive async tests for the onchain agent covering successful responses and schema checks, partial/missing-field handling, HTTP errors with status propagation, retry flows for timeouts/network errors, and various exception paths, using a mocked HTTP client.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks: ❌ 1 failed check (warning), ✅ 2 passed checks
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/app/services/agents/tests/test_onchain_agent.py (1)
148-208: Inconsistent retry expectations in error handling tests.

These tests expect `call_count == 3` (indicating 3 retry attempts) but don't patch the retry mechanism like the tests in lines 28-147. This creates an inconsistency:

- If retry exists by default: these tests will retry 3 times (potentially slow and unpredictable)
- If retry doesn't exist: these tests will fail because `call_count` will be 1, not 3

For consistency and speed, either patch the retry behavior here too (with `wait_fixed(0.01)` and `stop_after_attempt(3)`), or adjust the expected `call_count` based on whether retry is actually configured by default.

Apply this pattern to make retry behavior explicit in these tests:
```diff
 @pytest.mark.asyncio
 @patch('httpx.AsyncClient')
 async def test_fetch_onchain_metrics_http_error_raises_onchainagenthttperror(mock_async_client):
     mock_client_instance = AsyncMock()
     mock_async_client.return_value.__aenter__.return_value = mock_client_instance
     mock_client_instance.get.side_effect = [
         create_mock_response(404),
         create_mock_response(404),
         create_mock_response(404)  # All attempts fail
     ]

+    with patch.object(fetch_onchain_metrics.retry, 'wait', new=wait_fixed(0.01)), \
+         patch.object(fetch_onchain_metrics.retry, 'stop', new=stop_after_attempt(3)):
-    with pytest.raises(OnchainAgentHTTPError) as excinfo:
-        await fetch_onchain_metrics(url="http://test.com/onchain")
+        with pytest.raises(OnchainAgentHTTPError) as excinfo:
+            await fetch_onchain_metrics(url="http://test.com/onchain")

     assert excinfo.value.status_code == 404
     assert mock_client_instance.get.call_count == 3  # Retries should still happen
```
🧹 Nitpick comments (2)
backend/app/services/agents/tests/test_onchain_agent.py (2)
210-264: Consider using a schema validation library for more robust validation.

The manual field and type checks work correctly but could be more maintainable with a schema validation library like `pydantic` or `jsonschema`. This would make the validation more declarative and easier to extend.

Example with pydantic (if adopted):
```python
from pydantic import BaseModel

class OnchainMetricsSchema(BaseModel):
    total_transactions: int
    active_users: int
    average_transaction_value: float
    timestamp: str

@pytest.mark.asyncio
@patch('httpx.AsyncClient')
async def test_fetch_onchain_metrics_success_and_schema(mock_async_client):
    # ... setup code ...
    result = await fetch_onchain_metrics(url="http://test.com/onchain")
    # Validate schema with pydantic (will raise ValidationError if invalid)
    validated = OnchainMetricsSchema(**result)
    assert result == expected_metrics
```
1-337: Optional: Consider adding tests for rate limiting and header configuration.

The current test coverage is good, but you could enhance it by testing additional behavior from the implementation:

- Rate limiting verification: test that `asyncio.sleep(settings.REQUEST_DELAY_SECONDS)` is called after successful requests
- User-Agent header verification: test that the correct User-Agent header is set in requests

Example test for rate limiting:
```python
@pytest.mark.asyncio
@patch('httpx.AsyncClient')
@patch('asyncio.sleep', new_callable=AsyncMock)
async def test_fetch_onchain_metrics_rate_limiting(mock_sleep, mock_async_client):
    mock_client_instance = AsyncMock()
    mock_async_client.return_value.__aenter__.return_value = mock_client_instance
    mock_client_instance.get.return_value = create_mock_response(200, {"data": "test"})

    await fetch_onchain_metrics(url="http://test.com/onchain")

    # Verify rate-limiting sleep was called
    mock_sleep.assert_called_once()
```

Example test for User-Agent:

```python
@pytest.mark.asyncio
@patch('httpx.AsyncClient')
async def test_fetch_onchain_metrics_user_agent_header(mock_async_client):
    from backend.app.core.config import settings

    mock_async_client_instance = mock_async_client.return_value
    await fetch_onchain_metrics(url="http://test.com/onchain")

    # Verify AsyncClient was initialized with the correct User-Agent
    call_kwargs = mock_async_client.call_args.kwargs
    assert call_kwargs['headers']['User-Agent'] == settings.USER_AGENT
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (8)
- `backend/app/core/__pycache__/config.cpython-313.pyc` is excluded by `!**/*.pyc`
- `backend/app/core/__pycache__/orchestrator.cpython-313.pyc` is excluded by `!**/*.pyc`
- `backend/app/services/__pycache__/report_processor.cpython-313.pyc` is excluded by `!**/*.pyc`
- `backend/app/services/agents/tests/__pycache__/test_code_audit_agent.cpython-313-pytest-8.4.2.pyc` is excluded by `!**/*.pyc`
- `backend/app/services/agents/tests/__pycache__/test_onchain_agent.cpython-313-pytest-8.4.2.pyc` is excluded by `!**/*.pyc`
- `backend/logs/app.log` is excluded by `!**/*.log`
- `backend/tests/__pycache__/test_orchestrator.cpython-313-pytest-8.4.2.pyc` is excluded by `!**/*.pyc`
- `backend/tests/__pycache__/test_orchestrator_config.cpython-313-pytest-8.4.2.pyc` is excluded by `!**/*.pyc`
📒 Files selected for processing (1)
- `backend/app/services/agents/tests/test_onchain_agent.py` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
backend/app/services/agents/tests/test_onchain_agent.py (1)
backend/app/services/agents/onchain_agent.py (3)
- `fetch_onchain_metrics` (38-76)
- `fetch_tokenomics` (84-122)
- `OnchainAgentHTTPError` (26-30)
🔇 Additional comments (5)
backend/app/services/agents/tests/test_onchain_agent.py (5)
1-12: LGTM! The imports are appropriate for testing async functions with mocked HTTP clients and retry mechanisms.
14-26: LGTM! The mock response helper correctly simulates `httpx.Response` behavior for both successful and error responses.
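For readers without the file at hand, a helper of this shape can be sketched with only the standard library's `unittest.mock`. This is a hypothetical reconstruction, not the file's actual code: the real helper simulates `httpx.Response` and presumably raises `httpx.HTTPStatusError`, while a local `MockHTTPStatusError` stands in here so the sketch stays dependency-free.

```python
from unittest.mock import MagicMock

class MockHTTPStatusError(Exception):
    """Stand-in for httpx.HTTPStatusError in this sketch."""
    def __init__(self, status_code):
        self.status_code = status_code
        super().__init__(f"HTTP error {status_code}")

def create_mock_response(status_code, json_data=None):
    # Build a mock that behaves like an HTTP response object for the tests
    mock_response = MagicMock()
    mock_response.status_code = status_code
    mock_response.json.return_value = json_data if json_data is not None else {}
    if status_code >= 400:
        # Error responses raise only when raise_for_status() is called
        mock_response.raise_for_status.side_effect = MockHTTPStatusError(status_code)
    else:
        mock_response.raise_for_status.return_value = None
    return mock_response
```

The key design point mirrored from the tests: errors are deferred until `raise_for_status()` is invoked, so a fetcher that forgets to call it would silently pass a 404 body through.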
265-308: LGTM! These tests correctly verify that the fetch functions return API responses as-is, allowing schema validation to be handled downstream. The design is appropriate for a data-fetching layer.
309-337: LGTM! These tests correctly verify that HTTP error responses (400, 404) for invalid token IDs raise `OnchainAgentHTTPError` with the appropriate status code.
28-147: The review comment is incorrect.

The tenacity library exposes a `.retry` attribute on decorated functions that is writable and intended for testing. The original concern assumed this attribute doesn't exist, but verification confirms:

- Both `fetch_onchain_metrics` and `fetch_tokenomics` are properly decorated with `@retry`
- The tests correctly patch the `.retry` attribute using the documented pattern: `patch.object(func.retry, 'wait', wait_fixed(...))`
- All required imports (`wait_fixed`, `stop_after_attempt`) and exception types are correctly imported and defined
- Exception assertions in the tests match the actual exception classes defined in `onchain_agent.py`

The test implementation follows the standard, documented approach for mocking tenacity retry behavior and will function as intended.

Likely an incorrect or invalid review comment.
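The mechanism being relied on can be illustrated with a dependency-free stand-in: a decorator that, like tenacity, exposes its policy on a `.retry` attribute of the wrapped function, so a test can temporarily swap the wait for a near-zero value. This toy decorator only sketches the pattern; it is not tenacity's actual implementation.

```python
import time
from unittest.mock import patch

class RetryPolicy:
    def __init__(self, wait, max_attempts):
        self.wait = wait                # seconds to sleep between attempts
        self.max_attempts = max_attempts

def retry(wait, max_attempts):
    def decorator(func):
        policy = RetryPolicy(wait, max_attempts)
        def wrapper(*args, **kwargs):
            for attempt in range(policy.max_attempts):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == policy.max_attempts - 1:
                        raise          # exhausted: re-raise the last error
                    time.sleep(policy.wait)
        wrapper.retry = policy         # expose the policy, as tenacity does
        return wrapper
    return decorator

calls = []

@retry(wait=2.0, max_attempts=3)
def flaky():
    calls.append(1)
    raise ConnectionError("boom")

# In a test, patch the exposed policy so retries are instant:
with patch.object(flaky.retry, "wait", 0.0):
    try:
        flaky()
    except ConnectionError:
        pass

print(len(calls))  # 3 attempts, with no real waiting
```

Because `patch.object` restores the attribute on exit, production code outside the test still sees the original 2-second wait.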
Actionable comments posted: 0
🧹 Nitpick comments (4)
backend/app/services/agents/tests/test_onchain_agent.py (4)
220-243: Mock `asyncio.sleep` to speed up tests.

The test doesn't mock `asyncio.sleep`, which the actual implementation calls for rate limiting. This could slow down test execution if `settings.REQUEST_DELAY_SECONDS` is significant.

Apply this diff to add the necessary mocks:
```diff
 @pytest.mark.asyncio
 @patch('httpx.AsyncClient')
-async def test_fetch_onchain_metrics_success_and_schema(mock_async_client):
+@patch('asyncio.sleep', new_callable=AsyncMock)
+async def test_fetch_onchain_metrics_success_and_schema(mock_sleep, mock_async_client):
     mock_client_instance = AsyncMock()
     mock_async_client.return_value.__aenter__.return_value = mock_client_instance
     expected_metrics = {
         "total_transactions": 1000,
         "active_users": 500,
         "average_transaction_value": 150.75,
         "timestamp": "2023-10-27T10:00:00Z"
     }
     mock_client_instance.get.return_value = create_mock_response(200, expected_metrics)

     result = await fetch_onchain_metrics(url="http://test.com/onchain")

     assert result == expected_metrics
+    mock_sleep.assert_called_once()
```

(Since `@patch('asyncio.sleep')` is the innermost decorator, `mock_sleep` must be the first parameter.)

Optionally, consider patching retry behavior for consistency with other tests:
```python
with patch.object(fetch_onchain_metrics.retry, 'wait', new=wait_fixed(0.01)), \
     patch.object(fetch_onchain_metrics.retry, 'stop', new=stop_after_attempt(3)):
    result = await fetch_onchain_metrics(url="http://test.com/onchain")
    # ... assertions
```
245-271: Mock `asyncio.sleep` to speed up tests.

The test doesn't mock `asyncio.sleep`, which could slow down test execution if `settings.REQUEST_DELAY_SECONDS` is significant.

Apply this diff to add the necessary mocks:
```diff
 @pytest.mark.asyncio
 @patch('httpx.AsyncClient')
-async def test_fetch_tokenomics_success_and_schema(mock_async_client):
+@patch('asyncio.sleep', new_callable=AsyncMock)
+async def test_fetch_tokenomics_success_and_schema(mock_sleep, mock_async_client):
     mock_client_instance = AsyncMock()
     mock_async_client.return_value.__aenter__.return_value = mock_client_instance
     expected_tokenomics = {
         "total_supply": "1000000000",
         "circulating_supply": "800000000",
         "market_cap_usd": "1500000000.50",
         "token_price_usd": "1.50",
         "last_updated": "2023-10-27T10:00:00Z"
     }
     mock_client_instance.get.return_value = create_mock_response(200, expected_tokenomics)

     result = await fetch_tokenomics(url="http://test.com/tokenomics")

     assert result == expected_tokenomics
+    mock_sleep.assert_called_once()
```

Optionally, consider patching retry behavior for consistency with other tests.
275-315: Mock `asyncio.sleep` to speed up tests.

Both missing-fields tests should mock `asyncio.sleep` to avoid unnecessary delays during test execution.

Add the mock as a decorator and an assertion to both tests:
```python
@pytest.mark.asyncio
@patch('httpx.AsyncClient')
@patch('asyncio.sleep', new_callable=AsyncMock)
async def test_fetch_onchain_metrics_missing_fields(mock_sleep, mock_async_client):
    # ... test body
    result = await fetch_onchain_metrics(url="http://test.com/onchain")
    # ... assertions
    mock_sleep.assert_called_once()
```

Apply the same pattern to `test_fetch_tokenomics_missing_fields`.
319-345: Consider patching retry behavior for consistency.

While 4xx errors typically don't warrant retries, patching retry behavior ensures consistency with the other error tests and guarantees fast, deterministic execution regardless of the retry configuration.

Add retry patching to both tests:
```python
@pytest.mark.asyncio
@patch('httpx.AsyncClient')
async def test_fetch_onchain_metrics_invalid_token_id(mock_async_client):
    mock_client_instance = AsyncMock()
    mock_async_client.return_value.__aenter__.return_value = mock_client_instance
    error_response_data = {"error": "Invalid token ID provided"}
    mock_client_instance.get.return_value = create_mock_response(400, error_response_data)

    with patch.object(fetch_onchain_metrics.retry, 'wait', new=wait_fixed(0.01)), \
         patch.object(fetch_onchain_metrics.retry, 'stop', new=stop_after_attempt(3)):
        with pytest.raises(OnchainAgentHTTPError) as excinfo:
            await fetch_onchain_metrics(url="http://test.com/onchain", params={"token_id": "invalid"})

    assert excinfo.value.status_code == 400
```

Apply the same pattern to `test_fetch_tokenomics_invalid_token_id`.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `backend/app/services/agents/tests/__pycache__/test_onchain_agent.cpython-313-pytest-8.4.2.pyc` is excluded by `!**/*.pyc`
📒 Files selected for processing (1)
- `backend/app/services/agents/tests/test_onchain_agent.py` (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
backend/app/services/agents/tests/test_onchain_agent.py (1)
backend/app/services/agents/onchain_agent.py (4)
- `fetch_onchain_metrics` (38-76)
- `OnchainAgentHTTPError` (26-30)
- `OnchainAgentException` (14-16)
- `fetch_tokenomics` (84-122)
🔇 Additional comments (3)
backend/app/services/agents/tests/test_onchain_agent.py (3)
14-26: LGTM! Well-designed test helper. The helper correctly simulates both successful and error HTTP responses, including proper setup of `raise_for_status()` behavior for different status codes.
28-146: LGTM! Comprehensive retry logic testing. The retry tests thoroughly cover timeout, network error, HTTP error, and max-retries scenarios for both functions. Patching retry behavior ensures fast, deterministic test execution.
159-216: Good addition of retry control for consistency. The modifications add retry patching to exception tests, ensuring they run fast and verify that retries occur even when exceptions are ultimately raised. This improves consistency with the other retry-based tests.
Overview: This PR introduces comprehensive unit tests for the `onchain_agent.py` module, ensuring the reliability and correctness of its data fetching functions.

Changes:

- New unit tests covering `fetch_onchain_metrics` and `fetch_tokenomics` within `onchain_agent.py`.
- Tests use `pytest` and `pytest-asyncio` for effective asynchronous testing.
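The central mocking technique these tests rely on, replacing an async-context-manager HTTP client, can be shown with a minimal standard-library-only sketch. The `fetch` coroutine and `client_factory` parameter below are hypothetical stand-ins for the agent's fetchers and `httpx.AsyncClient`, and `asyncio.run` is used instead of `pytest-asyncio` so the sketch runs standalone:

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

async def fetch(url, client_factory):
    # Stand-in for fetch_onchain_metrics: open the client, GET, return JSON
    async with client_factory() as client:
        response = await client.get(url)
        return response.json()

def test_fetch_with_mocked_client():
    mock_factory = MagicMock()
    mock_client = AsyncMock()
    # `async with client_factory() as client` binds __aenter__'s return value
    mock_factory.return_value.__aenter__.return_value = mock_client

    mock_response = MagicMock()
    mock_response.json.return_value = {"active_users": 500}
    mock_client.get.return_value = mock_response

    result = asyncio.run(fetch("http://test.com/onchain", mock_factory))
    assert result == {"active_users": 500}
    mock_client.get.assert_awaited_once_with("http://test.com/onchain")

test_fetch_with_mocked_client()
```

The `mock_async_client.return_value.__aenter__.return_value` chain seen throughout the test file is exactly this pattern: the factory call yields a context manager, and entering it yields the client whose `get` is stubbed.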