
Stash large response payload to S3 for /descendants/<id> #756

@yuanzhou

Description


The https://entity.api.hubmapconsortium.org/descendants/HBM925.HSGR.556 request, which fetches all descendants of Donor HBM925.HSGR.556, returned 500 with

{
    "message": "Internal server error"
}

Further logging indicates the response payload is 11.7 MB:

Mon Oct 21 20:56:21 2024 - uwsgi_response_write_body_do(): Connection reset by peer [core/writer.c line 341] during GET /descendants/HBM925.HSGR.556 (172.31.23.74)
OSError: write error
[pid: 259|app: 0|req: 2284600/3109683] 172.31.23.74 () {52 vars in 993 bytes} [Mon Oct 21 20:56:16 2024] GET /descendants/HBM925.HSGR.556 => generated 11743268 bytes in 5791 msecs (HTTP/1.1 200) 2 headers in 77 bytes (4 switches on core 10)

Following the existing implementation, use hubmap_commons.S3_worker to stash the results in the AWS S3 bucket (when the response payload is >= 10 MB) and return a URL with a 303 status code.

Note: /descendants/HBM925.HSGR.556?property=uuid returned 200 with 24798 uuids, and that response payload is only 848 KB.
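A minimal sketch of the requested behavior. The actual S3Worker API in hubmap_commons may differ; the function name `stash_if_large`, the bucket/key names, the injected client, and the URL format here are all assumptions for illustration, not the library's interface.

```python
import json

# Threshold from this issue: stash response payloads of 10 MB or larger.
LARGE_RESPONSE_THRESHOLD = 10 * 1024 * 1024


def stash_if_large(response_body, s3_client, bucket, key,
                   threshold=LARGE_RESPONSE_THRESHOLD):
    """Serialize the response body; if it meets the size threshold,
    upload it to S3 and return (object_url, 303), otherwise return
    (response_body, 200) so the caller can respond normally."""
    payload = json.dumps(response_body).encode("utf-8")
    if len(payload) < threshold:
        return response_body, 200
    # put_object matches the boto3 S3 client signature (Bucket, Key, Body).
    s3_client.put_object(Bucket=bucket, Key=key, Body=payload)
    # Assumed public-style object URL; a presigned URL may be preferable.
    url = f"https://{bucket}.s3.amazonaws.com/{key}"
    return url, 303
```

In the Flask view for /descendants/&lt;id&gt;, the caller would check the returned status: on 200 it jsonifies the body as before, and on 303 it returns the S3 URL as the response text with the 303 code, matching the pattern described above.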

Status: Done