### Describe the bug

When `list_commits()` returns 30 results, the follow-up LLM call can use more than 64k tokens for repos like zalando/skipper. This easily exceeds the context size of LLMs, and users will also run into rate limits from LLM providers:

```
openai.RateLimitError: Error code: 429 - {'error': {'message': 'Request too large for gpt-4o in organization org-xxx on tokens per min (TPM): Limit 30000, Requested 68490. The input or output tokens must be reduced in order to run successfully. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens', 'param': None, 'code': 'rate_limit_exceeded'}}
```

I'm a tier-1 API user for OpenAI.
### Affected version

server version v0.1.0 (`b89336793c5bc9b9abdd5100d876babbc1031f5d`), 2025-04-04T15:38:21
### Steps to reproduce the behavior

- Type:

  > Who is the most frequent committer in github/github-mcp-server? Use list_commits for the output.

- Function call executed (due to "`list_commits` returns 30 commits despite `perPage` set to 1" #136, the actual result set has 30 items):
  ```
  list_commits({
    "owner": "github",
    "repo": "github-mcp-server",
    "perPage": 100
  })
  ```
- The 30 fetched commits exceed the model's context size and/or hit API rate limits, making this function unusable.
### Expected vs actual behavior

`list_commits()` should apply field filtering to the GitHub API response. Currently, all sorts of data for `author`, `committer`, and `commit.verification` (incl. the signature) are returned that could be optional.
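A minimal sketch of the kind of field filtering meant here (Python for illustration only; the actual server is written in Go, and the allow-lists below are hypothetical examples, not existing options):

```python
# Sketch: strip verbose fields from each commit object returned by the
# GitHub "list commits" REST endpoint before handing the result to an LLM.
# Both allow-lists are hypothetical; the real server exposes no such option.
KEEP_TOP_FIELDS = {"sha", "html_url"}          # top-level keys to keep
KEEP_COMMIT_FIELDS = {"message", "author"}     # keys inside the nested "commit" object

def filter_commit(raw: dict) -> dict:
    """Keep only the fields an LLM needs, dropping the bulky `author`,
    `committer`, and `commit.verification` (signature) payloads."""
    slim = {k: raw[k] for k in KEEP_TOP_FIELDS if k in raw}
    commit = raw.get("commit", {})
    slim["commit"] = {k: commit[k] for k in KEEP_COMMIT_FIELDS if k in commit}
    return slim

def filter_commits(raw_commits: list[dict]) -> list[dict]:
    """Apply the filter to a whole page of results."""
    return [filter_commit(c) for c in raw_commits]
```

With filtering like this, a 30-item page would carry only the commit message and author per entry, rather than the full user, committer, and verification objects that dominate the token count.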
### Logs
N/A