
[Feature Request] Add Detailed Error Messages for API Responses #17

@1sarthakbhardwaj

Description


When API calls fail (especially 400 Bad Request), the SDK/API returns generic error objects without detailed information about what went wrong. This makes debugging extremely difficult, especially when building integrations in languages other than Python.

Current Behavior

When making API calls that fail, the response looks like:

{
  "status": 400,
  "error": {}
}

Or in some cases:

Error: [object Object]

This happens specifically with:

  • /datasets/create endpoint
  • Potentially other POST/PUT endpoints
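For context, the [object Object] output on the JavaScript side appears whenever an error object is coerced into a string (for example by template-literal interpolation or new Error(someObject)). Below is a minimal node-fetch sketch of the inspection we currently do just to see the raw body; the base URL is a placeholder, and this is our wrapper code, not part of the SDK:

// node-fetch v2 style require; the base URL passed in is a placeholder.
const fetch = require('node-fetch');

async function inspectError(url, options) {
  const res = await fetch(url, options);
  if (!res.ok) {
    const raw = await res.text();                    // read the body once, as text
    let parsed;
    try { parsed = JSON.parse(raw); } catch { parsed = raw; }
    // Stringify explicitly; interpolating the object is what produces "[object Object]".
    console.error('HTTP ' + res.status + ':', JSON.stringify(parsed, null, 2));
  }
  return res;
}

Even with this, the body that comes back today is just {"status": 400, "error": {}}, so there is nothing actionable to log.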

Expected Behavior

Error responses should include detailed information:

{
  "status": 400,
  "error": {
    "code": "INVALID_PAYLOAD",
    "message": "Missing or invalid required fields",
    "details": {
      "missing_fields": ["connection_id"],
      "invalid_fields": [
        {
          "field": "data_type",
          "error": "Must be one of: image, video, audio, document, text",
          "received": "images"
        }
      ],
      "validation_errors": []
    }
  }
}

Why This Matters

Use Case: Building MCP Server Integration

I'm building a Model Context Protocol (MCP) server that wraps the Labellerr SDK in JavaScript to allow AI assistants (like Claude, Cursor) to interact with Labellerr via natural language.

Current Status:

  • ✅ 21/22 tools working perfectly (list projects, exports, monitoring, etc.)
  • ❌ 1/22 tool failing: Project creation fails at dataset creation step
  • Blocker: Can't debug because error message is generic

The Problem in Detail

When creating a project, the workflow involves:

1. Upload files to GCS ✅
2. Create dataset ❌ (Returns 400 - can't see why)
3. Create annotation template
4. Create project

Our Request:

POST /datasets/create?client_id=14286&uuid=xxx
Headers: {
  api_key: "...",
  api_secret: "...",
  client_id: "14286",
  source: "sdk",
  origin: "https://pro.labellerr.com"
}
Body: {
  "dataset_name": "Test Dataset",
  "dataset_description": "Test",
  "data_type": "image",
  "connection_id": "temp_connection_id_xyz",
  "path": "local",
  "client_id": "14286"
}

Response:

400 Bad Request
Error: [object Object]

What we need to know:

  • Is the connection_id invalid?
  • Is a required field missing?
  • Is the field format wrong?
  • Is there a permission issue?

Without detailed errors, we can't fix the issue.
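For reference, a minimal sketch of how the MCP server issues this call with node-fetch. The base URL and environment-variable names are placeholders; the headers and body mirror the request shown above:

const fetch = require('node-fetch');

const BASE_URL = 'https://api.example.com';          // placeholder host
const CLIENT_ID = '14286';

async function createDataset() {
  const res = await fetch(BASE_URL + '/datasets/create?client_id=' + CLIENT_ID + '&uuid=xxx', {
    method: 'POST',
    headers: {
      api_key: process.env.LABELLERR_API_KEY,        // placeholder env var names
      api_secret: process.env.LABELLERR_API_SECRET,
      client_id: CLIENT_ID,
      source: 'sdk',
      origin: 'https://pro.labellerr.com',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      dataset_name: 'Test Dataset',
      dataset_description: 'Test',
      data_type: 'image',
      connection_id: 'temp_connection_id_xyz',
      path: 'local',
      client_id: CLIENT_ID,
    }),
  });
  const body = await res.json().catch(() => ({}));
  // Today this logs: 400 {"status":400,"error":{}} -- no hint of the cause.
  console.log(res.status, JSON.stringify(body, null, 2));
}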

Impact

Without detailed errors:

  • Developers waste hours guessing what's wrong
  • Can't build reliable integrations in other languages
  • Poor developer experience
  • Blocks adoption of Labellerr in AI/automation workflows

With detailed errors:

  • Developers can fix issues immediately
  • Better SDK adoption across different languages
  • Easier to build integrations and automations
  • Improved developer satisfaction

Proposed Solution

Option 1: Enhanced Error Objects (Recommended)

Return structured error objects with:

  • Error code (e.g., INVALID_PAYLOAD, MISSING_FIELD, UNAUTHORIZED)
  • Human-readable message
  • Field-level validation errors
  • Suggested fixes (optional but helpful)
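To illustrate Option 1 from the consumer side, here is a hedged sketch of how a JavaScript integration could turn such an error into an actionable message. The field names follow the proposed shape in "Expected Behavior" above; none of this is an existing SDK API:

// Format the proposed structured error into a readable report.
function formatApiError(status, error) {
  const lines = [status + ' ' + error.code + ': ' + error.message];
  const details = error.details || {};
  for (const field of details.missing_fields || []) {
    lines.push('  missing field: ' + field);
  }
  for (const f of details.invalid_fields || []) {
    lines.push('  invalid field "' + f.field + '": ' + f.error + ' (received "' + f.received + '")');
  }
  return lines.join('\n');
}

// With the "Expected Behavior" payload above, this would print:
// 400 INVALID_PAYLOAD: Missing or invalid required fields
//   missing field: connection_id
//   invalid field "data_type": Must be one of: image, video, audio, document, text (received "images")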

Option 2: Validation Mode

Add a ?validate=true query parameter that validates the payload without executing the operation and returns detailed validation results.
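A sketch of how an integration might use such a mode; the validate parameter is the proposal above, not an existing flag. BASE_URL and CLIENT_ID are the placeholders from the earlier sketch, and headers/payload are assumed to hold the same header and body values shown there:

// Inside an async function: dry-run the payload, then create only if it validates.
const validateRes = await fetch(BASE_URL + '/datasets/create?client_id=' + CLIENT_ID + '&validate=true', {
  method: 'POST',
  headers,
  body: JSON.stringify(payload),
});
if (!validateRes.ok) {
  const report = await validateRes.json();           // detailed validation results
  throw new Error(JSON.stringify(report, null, 2));
}
// Validation passed: repeat the request without validate=true to actually create.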

Option 3: Debug Mode

Add a ?debug=true parameter or X-Debug: true header that returns verbose error information including:

  • Received payload
  • Expected payload structure
  • Field-by-field validation results
  • Stack trace (for development environments)
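And a brief sketch of opting into such a debug mode per request; the X-Debug header is the proposal above, not an existing option, and headers/payload are the same assumed placeholders as before:

// Inside an async function: same request as before, asking for verbose error output.
const debugRes = await fetch(BASE_URL + '/datasets/create?client_id=' + CLIENT_ID, {
  method: 'POST',
  headers: { ...headers, 'X-Debug': 'true' },        // proposed debug switch
  body: JSON.stringify(payload),
});
console.log(await debugRes.json());                  // received vs. expected payload, per-field results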

Examples from Other APIs

Stripe API:

{
  "error": {
    "type": "invalid_request_error",
    "message": "Missing required param: source.",
    "param": "source",
    "code": "parameter_missing"
  }
}

GitHub API:

{
  "message": "Validation Failed",
  "errors": [
    {
      "resource": "Issue",
      "field": "title",
      "code": "missing_field"
    }
  ]
}

Additional Context

Environment:

  • Building JavaScript wrapper around Labellerr API
  • Using node-fetch for HTTP requests
  • Testing with valid credentials (client_id: 14286)
  • Other endpoints work perfectly (GET requests, exports, etc.)

Repository:

What Works:

  • ✅ GET /project_drafts/projects/detailed_list - Returns all projects
  • ✅ GET /datasets/{id} - Returns dataset details
  • ✅ POST /sdk/export/files - Creates exports successfully
  • ✅ All read operations and exports

What Doesn't Work:

  • ❌ POST /datasets/create - Returns 400 with generic error

Temporary Workaround

Currently advising users to:

  1. Create projects via web interface
  2. Use MCP server for monitoring, exports, and management
  3. Wait for this fix to enable full automation

This covers roughly 95% of the functionality, but it blocks the most important workflow: automated project creation.

Questions for Maintainers

  1. Can you enable detailed error messages in the next release?
  2. Is there existing debug logging we can enable?
  3. Can you share a working Node.js/JavaScript example for dataset creation?
  4. Are there undocumented field requirements causing the 400 error?

Suggested Priority

High Priority - This affects:

  • Developer experience
  • SDK adoption
  • Integration development
  • AI/automation workflows

Checklist

  • Issue occurs with latest SDK version
  • Other API endpoints work correctly with same credentials
  • Request payload matches Python SDK structure
  • Issue prevents building reliable integrations
  • Detailed error messages would resolve the issue

Thank you for considering this feature request! Better error messages will significantly improve the developer experience and enable more robust integrations with Labellerr.

Happy to provide:

  • More debugging information
  • Test cases
  • Pull request for documentation improvements
  • Beta testing for the fix
