Description
When API calls fail (especially 400 Bad Request), the SDK/API returns generic error objects without detailed information about what went wrong. This makes debugging extremely difficult, especially when building integrations in languages other than Python.
Current Behavior
When making API calls that fail, the response looks like:
```json
{
  "status": 400,
  "error": {}
}
```

Or in some cases:

```
Error: [object Object]
```

This happens specifically with:
- `/datasets/create` endpoint
- Potentially other POST/PUT endpoints
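
For context, `[object Object]` is simply what JavaScript produces when a plain object is coerced into a string, so whatever the API did return is lost before it reaches the log. A minimal illustration (this is an assumption about how the message gets built in a JS wrapper, not a claim about the SDK internals):

```js
// When an error payload object is interpolated into a string, JavaScript coerces it
// to "[object Object]", which is how any detail gets lost on the way to the log.
const apiError = { status: 400, error: {} };

console.log(`Error: ${apiError.error}`);                 // Error: [object Object]
console.log(`Error: ${JSON.stringify(apiError.error)}`); // Error: {} – still empty, because the API sent nothing
```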
Expected Behavior
Error responses should include detailed information:
```json
{
  "status": 400,
  "error": {
    "code": "INVALID_PAYLOAD",
    "message": "Missing or invalid required fields",
    "details": {
      "missing_fields": ["connection_id"],
      "invalid_fields": [
        {
          "field": "data_type",
          "error": "Must be one of: image, video, audio, document, text",
          "received": "images"
        }
      ],
      "validation_errors": []
    }
  }
}
```

Why This Matters
Use Case: Building MCP Server Integration
I'm building a Model Context Protocol (MCP) server that wraps the Labellerr SDK in JavaScript to allow AI assistants (like Claude, Cursor) to interact with Labellerr via natural language.
Current Status:
- ✅ 21/22 tools working perfectly (list projects, exports, monitoring, etc.)
- ❌ 1/22 tool failing: Project creation fails at dataset creation step
- Blocker: Can't debug because error message is generic
The Problem in Detail
When creating a project, the workflow involves:
1. Upload files to GCS ✅
2. Create dataset ❌ (Returns 400 - can't see why)
3. Create annotation template
4. Create project
Our Request:
```
POST /datasets/create?client_id=14286&uuid=xxx
Headers: {
  api_key: "...",
  api_secret: "...",
  client_id: "14286",
  source: "sdk",
  origin: "https://pro.labellerr.com"
}
Body: {
  "dataset_name": "Test Dataset",
  "dataset_description": "Test",
  "data_type": "image",
  "connection_id": "temp_connection_id_xyz",
  "path": "local",
  "client_id": "14286"
}
```

Response:

```
400 Bad Request
Error: [object Object]
```
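
For reference, the equivalent node-fetch call from the JavaScript wrapper looks roughly like this (the host is a placeholder, the `uuid` value is elided as above, and the `Content-Type` header is an assumption not shown in the header list):

```js
import fetch from 'node-fetch';

// Placeholder for the Labellerr API host; not the documented base URL.
const BASE_URL = 'https://<labellerr-api-host>';

const res = await fetch(`${BASE_URL}/datasets/create?client_id=14286&uuid=xxx`, {
  method: 'POST',
  headers: {
    api_key: '...',
    api_secret: '...',
    client_id: '14286',
    source: 'sdk',
    origin: 'https://pro.labellerr.com',
    'Content-Type': 'application/json', // assumed; not listed in the headers above
  },
  body: JSON.stringify({
    dataset_name: 'Test Dataset',
    dataset_description: 'Test',
    data_type: 'image',
    connection_id: 'temp_connection_id_xyz',
    path: 'local',
    client_id: '14286',
  }),
});

console.log(res.status);        // 400
console.log(await res.text());  // generic body, e.g. {"status":400,"error":{}} – nothing actionable
```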
What we need to know:
- Is the `connection_id` invalid?
- Is a required field missing?
- Is the field format wrong?
- Is there a permission issue?
Without detailed errors, we can't fix the issue.
Impact
Without detailed errors:
- Developers waste hours guessing what's wrong
- Can't build reliable integrations in other languages
- Poor developer experience
- Blocks adoption of Labellerr in AI/automation workflows
With detailed errors:
- Developers can fix issues immediately
- Better SDK adoption across different languages
- Easier to build integrations and automations
- Improved developer satisfaction
Proposed Solution
Option 1: Enhanced Error Objects (Recommended)
Return structured error objects with:
- Error code (e.g., `INVALID_PAYLOAD`, `MISSING_FIELD`, `UNAUTHORIZED`)
- Human-readable message
- Field-level validation errors
- Suggested fixes (optional but helpful)
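
To illustrate the benefit, here is a rough sketch of how a JavaScript integration could surface such a structured error to the caller. The response shape is the hypothetical one from the Expected Behavior example above, and the host is a placeholder; this is not a current API contract.

```js
import fetch from 'node-fetch';

// Placeholder host, not the documented base URL.
const BASE_URL = 'https://<labellerr-api-host>';

async function createDataset(payload, headers) {
  const res = await fetch(`${BASE_URL}/datasets/create?client_id=${payload.client_id}`, {
    method: 'POST',
    headers: { ...headers, 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
  if (res.ok) return res.json();

  // With structured errors, the caller can report exactly what to fix.
  const { error } = await res.json();
  const missing = error.details?.missing_fields ?? [];
  const invalid = (error.details?.invalid_fields ?? []).map(
    (f) => `${f.field}: ${f.error} (received "${f.received}")`
  );
  throw new Error(
    [
      `${error.code}: ${error.message}`,
      missing.length ? `Missing fields: ${missing.join(', ')}` : null,
      ...invalid,
    ]
      .filter(Boolean)
      .join('\n')
  );
}
```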
Option 2: Validation Mode
Add a ?validate=true query parameter that validates the payload without executing the operation and returns detailed validation results.
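
For illustration only, a dry-run call might look like the sketch below; the `validate` flag, host, and credentials are all hypothetical.

```js
import fetch from 'node-fetch';

// Hypothetical dry-run request if a validate flag existed (proposed here, not a current
// API feature). Host, credentials, and payload are placeholders mirroring the failing call.
const res = await fetch(
  'https://<labellerr-api-host>/datasets/create?client_id=14286&validate=true',
  {
    method: 'POST',
    headers: { api_key: '...', api_secret: '...', client_id: '14286', source: 'sdk' },
    body: JSON.stringify({
      dataset_name: 'Test Dataset',
      data_type: 'image',
      connection_id: 'temp_connection_id_xyz',
      path: 'local',
      client_id: '14286',
    }),
  }
);

// Hoped-for behavior: a 2xx response carrying field-level validation results,
// with no dataset actually created.
console.log(res.status, await res.json());
```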
Option 3: Debug Mode
Add a ?debug=true parameter or X-Debug: true header that returns verbose error information including:
- Received payload
- Expected payload structure
- Field-by-field validation results
- Stack trace (for development environments)
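
Again purely as an illustration, a debug-mode request might look like this; the header name, host, and credentials are hypothetical.

```js
import fetch from 'node-fetch';

// Hypothetical debug-mode request (proposed here, not a current API feature).
// Header name and payload are placeholders mirroring the failing call above.
const res = await fetch('https://<labellerr-api-host>/datasets/create?client_id=14286', {
  method: 'POST',
  headers: {
    api_key: '...',
    api_secret: '...',
    client_id: '14286',
    source: 'sdk',
    'X-Debug': 'true',
  },
  body: JSON.stringify({
    dataset_name: 'Test Dataset',
    data_type: 'image',
    connection_id: 'temp_connection_id_xyz',
    path: 'local',
    client_id: '14286',
  }),
});

// Hoped-for behavior: a verbose body echoing the received payload, the expected schema,
// per-field validation results, and (in development environments) a stack trace.
console.log(await res.json());
```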
Examples from Other APIs
Stripe API:
```json
{
  "error": {
    "type": "invalid_request_error",
    "message": "Missing required param: source.",
    "param": "source",
    "code": "parameter_missing"
  }
}
```

GitHub API:
```json
{
  "message": "Validation Failed",
  "errors": [
    {
      "resource": "Issue",
      "field": "title",
      "code": "missing_field"
    }
  ]
}
```

Additional Context
Environment:
- Building JavaScript wrapper around Labellerr API
- Using `node-fetch` for HTTP requests
- Testing with valid credentials (client_id: 14286)
- Other endpoints work perfectly (GET requests, exports, etc.)
Repository:
- MCP Server: https://github.com/1sarthakbhardwaj/labellerr-mcp-server
- Related Issue: 1sarthakbhardwaj/labellerr-mcp-server#1 ("Project Creation via MCP Server - Need Help Debugging 400 Error")
What Works:
- ✅ GET `/project_drafts/projects/detailed_list` - Returns all projects
- ✅ GET `/datasets/{id}` - Returns dataset details
- ✅ POST `/sdk/export/files` - Creates exports successfully
- ✅ All read operations and exports
What Doesn't Work:
- ❌ POST `/datasets/create` - Returns 400 with generic error
Temporary Workaround
Currently advising users to:
- Create projects via web interface
- Use MCP server for monitoring, exports, and management
- Wait for this fix to enable full automation
This gives 95% functionality, but blocks the most important workflow: automated project creation.
Questions for Maintainers
- Can you enable detailed error messages in the next release?
- Is there existing debug logging we can enable?
- Can you share a working Node.js/JavaScript example for dataset creation?
- Are there undocumented field requirements causing the 400 error?
Suggested Priority
High Priority - This affects:
- Developer experience
- SDK adoption
- Integration development
- AI/automation workflows
Checklist
- Issue occurs with latest SDK version
- Other API endpoints work correctly with same credentials
- Request payload matches Python SDK structure
- Issue prevents building reliable integrations
- Detailed error messages would resolve the issue
Thank you for considering this feature request! Better error messages will significantly improve the developer experience and enable more robust integrations with Labellerr.
Happy to provide:
- More debugging information
- Test cases
- Pull request for documentation improvements
- Beta testing for the fix