## Background
`PredictRequest` (defined in `src/climatevision/api/schemas.py` and used by the handler in `api/main.py`) accepts a `start_date` and `end_date`, but currently does not check that the range is well-ordered. A request with `start_date="2026-06-01"` and `end_date="2026-01-01"` is silently accepted and produces an empty image stack downstream.
## Acceptance criteria
- Add a Pydantic v2 `model_validator(mode="after")` (or `field_validator`) on `PredictRequest` that raises `ValueError` if `start_date > end_date`.
- Update the OpenAPI description for both fields to make the ordering explicit.
- Add a pytest in `tests/` that:
  - posts a valid range and asserts the request reaches the inference layer (mock the inference call),
  - posts a reversed range and asserts the API returns 422 Unprocessable Entity with the validator's error message in the response body.
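A minimal sketch of the validator, assuming Pydantic v2 and a trimmed-down field set (the real `PredictRequest` also carries `bbox` and other fields not shown here):

```python
from datetime import date

from pydantic import BaseModel, Field, model_validator


class PredictRequest(BaseModel):
    # Hypothetical minimal field set for illustration only.
    start_date: date = Field(
        description="First day of the range (inclusive); must not be after end_date."
    )
    end_date: date = Field(
        description="Last day of the range (inclusive); must not be before start_date."
    )

    @model_validator(mode="after")
    def check_date_order(self) -> "PredictRequest":
        # Reject reversed ranges before they reach the inference layer.
        if self.start_date > self.end_date:
            raise ValueError("start_date must be on or before end_date")
        return self
```

The `mode="after"` form runs once all fields are parsed, so both values are already `date` objects when compared, and FastAPI translates the raised `ValueError` into a 422 response.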
## Pointers
- The schema lives in `src/climatevision/api/schemas.py`. The route handler is `@app.post("/api/predict")` in `api/main.py`.
- For the test, use `fastapi.testclient.TestClient` and patch `run_inference_from_gee` so the test does not hit GEE.
- Existing validators (e.g. on `bbox`) are good reference patterns.
## Why this is a good first issue
Self-contained: one validator + one test. No need to understand the inference pipeline, model loading, or GEE. Done in well under 50 lines.