The Adapters package provides a unified interface for interacting with different language model APIs, making it simple and flexible to integrate models from multiple providers.
The package can be installed and used via pip:

```bash
pip install martian-adapters
```
For development you will need:

- Python version: 3.11.10
- Poetry

Install dependencies and the pre-commit hooks:

```bash
poetry install
poetry run pre-commit install
```

To run pre-commit manually:

```bash
poetry run pre-commit run --all-files
```

For versioning we follow Semantic Versioning.
The package requires certain environment variables to be set by users:

- Copy `.env-example` to `.env` and populate it with appropriate values.
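As a rough sketch of what that looks like (the exact variable names come from `.env-example`; the ones below are only illustrative placeholders), the populated `.env` typically holds one API key per provider you intend to call:

```
# Illustrative .env contents -- consult .env-example for the real variable names
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```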
Once the environment is configured, run the test suite with:

```bash
poetry run pytest
```

A minimal usage example:

```python
from adapters import AdapterFactory
from adapters.types import Conversation, ConversationRole, Turn

adapter = AdapterFactory.get_adapter_by_path("openai/openai/gpt-4o-mini")

conversation = Conversation(
    [Turn(role=ConversationRole.user, content="Hi")]
)

adapter.execute_sync(conversation)
```

Adapter paths follow the format `provider/vendor/model_name`. Use `AdapterFactory.get_supported_models()` to retrieve all supported models. For a given model, `model.get_path()` returns the adapter path.
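For instance, to discover the available adapter paths programmatically, here is a small sketch built only from the two calls mentioned above:

```python
from adapters import AdapterFactory

# Print the adapter path for every supported model,
# e.g. "openai/openai/gpt-4o-mini".
for model in AdapterFactory.get_supported_models():
    print(model.get_path())
```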
To add support for a new model or provider:

- Existing Providers: Add new models to the `MODELS` array if the provider is already supported.
- New Providers:
  - If the provider follows the OpenAI format, model integration is straightforward. See the "Fireworks" provider class as an example.
  - For providers with different schemas, see the "Anthropic" provider class for guidance.
- Add the Provider and Model: Update `provider_adapters/__init__.py` and test files accordingly.
- Write Tests: Add tests in the relevant directories. Use `@pytest.mark.vcr` for tests making network requests (see the sketch after this list).
- Run Tests: `poetry run pytest`
- Check-in Cassette Files: Include any new cassette YAML files in your commit.
- Send a Pull Request: Ensure all tests pass before requesting a review.
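As an illustration of the testing convention, here is a sketch of a VCR-recorded test; the test name, location, and assertion are assumptions rather than the project's actual tests:

```python
import pytest

from adapters import AdapterFactory
from adapters.types import Conversation, ConversationRole, Turn


@pytest.mark.vcr  # records the HTTP exchange to a cassette YAML file on first run
def test_gpt_4o_mini_sync():
    adapter = AdapterFactory.get_adapter_by_path("openai/openai/gpt-4o-mini")
    conversation = Conversation([Turn(role=ConversationRole.user, content="Hi")])

    response = adapter.execute_sync(conversation)

    # The exact response type is project-specific; this only checks a reply came back.
    assert response is not None
```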
To update existing cassette files, run pytest with the rewrite record mode: `poetry run pytest --record-mode=rewrite`.
Some models may only be accessible from specific locations (e.g., the U.S.). In such cases, running tests might require access to a U.S.-based server.
This documentation provides a streamlined approach to using and contributing to the Adapters package, emphasizing practical steps and clear examples.
To optimize throughput and performance, we provide options to configure HTTP networking parameters:
```
ADAPTERS_MAX_KEEPALIVE_CONNECTIONS_PER_PROCESS = 100
ADAPTERS_MAX_CONNECTIONS_PER_PROCESS = 1000
ADAPTERS_HTTP_CONNECT_TIMEOUT = 5
ADAPTERS_HTTP_TIMEOUT = 600
```

For stress testing or other purposes, you can override all base URLs by setting the following in your `.env` file:

```
_ADAPTERS_OVERRIDE_ALL_BASE_URLS_ = "https://new-base-url.com/api"
```

This setting ensures that all LLM API calls will route to the specified new base URL.