CONTRIBUTING.md: 31 changes (28 additions & 3 deletions)
@@ -6,8 +6,15 @@ Thank you for your interest in contributing to OpenEvolve! This document provide

1. Fork the repository
2. Clone your fork: `git clone https://github.com/YOUR-USERNAME/openevolve.git`
~~3. Install the package in development mode: `pip install -e .`~~
~~4. Run the tests to ensure everything is working: `python -m unittest discover tests`~~
3. Install the package in development mode: `pip install -e ".[dev]"`
4. Set up environment for testing:
```bash
# Unit tests don't require a real API key, but the environment variable must be set
export OPENAI_API_KEY=test-key-for-unit-tests
```
5. Run the tests to ensure everything is working: `python -m unittest discover tests`

**Note**: The unit tests do not make actual API calls to OpenAI or any LLM provider. However, the `OPENAI_API_KEY` environment variable must be set to any non-empty value for the tests to run. You can use a placeholder value like `test-key-for-unit-tests`.

## Development Environment

@@ -17,14 +24,32 @@ We recommend using a virtual environment for development:

```bash
python -m venv env
source env/bin/activate # On Windows: env\Scripts\activate
pip install -e ".[dev]"

# For running tests (no actual API calls are made)
export OPENAI_API_KEY=test-key-for-unit-tests

# For testing with real LLMs during development
# export OPENAI_API_KEY=your-actual-api-key
```

### LLM Configuration for Development

When developing features that interact with LLMs:

1. **Local Development**: Use a mock API key for unit tests
2. **Integration Testing**: Use your actual API key and configure `api_base` if using alternative providers
3. **Cost Management**: Consider using cheaper models or [optillm](https://github.com/codelion/optillm) for rate limiting during development
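For point 2, the effect of `api_base` can be sketched as follows (a hypothetical helper operating on a dict parsed from config.yaml; OpenEvolve's real config loading may differ):

```python
# Hypothetical sketch: how an `llm.api_base` override could be resolved from
# a parsed config.yaml; the default below is OpenAI's public endpoint.
DEFAULT_API_BASE = "https://api.openai.com/v1"


def resolve_api_base(config: dict) -> str:
    """Return the configured endpoint, falling back to the OpenAI default."""
    return config.get("llm", {}).get("api_base") or DEFAULT_API_BASE
```

With no `llm.api_base` key the default endpoint is used; pointing it at a local proxy such as optillm redirects every request without code changes.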

## Pull Request Process

1. Create a new branch for your feature or bugfix: `git checkout -b feat-your-feature-name`
2. Make your changes
3. Add tests for your changes
~~4. Run the tests to make sure everything passes: `python -m unittest discover tests`~~
4. Run the tests to make sure everything passes:
```bash
export OPENAI_API_KEY=test-key-for-unit-tests
python -m unittest discover tests
```
5. Commit your changes: `git commit -m "Add your descriptive commit message"`
6. Push to your fork: `git push origin feat-your-feature-name`
7. Submit a pull request to the main repository
README.md: 34 changes (31 additions & 3 deletions)
@@ -42,13 +42,41 @@ pip install -e .

### Quick Start

~~We use the OpenAI SDK, so you can use any LLM or provider that supports an OpenAI compatible API. Just set the `OPENAI_API_KEY` environment variable and update the `api_base` in config.yaml if you are using a provider other than OpenAI. For local models, you can use an inference server like [optillm](https://github.com/codelion/optillm).~~
#### Setting up LLM Access

OpenEvolve uses the OpenAI SDK, which means it works with any LLM provider that supports an OpenAI-compatible API:

1. **Set the API Key**: Export the `OPENAI_API_KEY` environment variable:
```bash
export OPENAI_API_KEY=your-api-key-here
```

2. **Using Alternative LLM Providers**:
- For providers other than OpenAI (e.g., Anthropic, Cohere, local models), update the `api_base` in your config.yaml:
```yaml
llm:
  api_base: "https://your-provider-endpoint.com/v1"
```

3. **Maximum Flexibility with optillm**:
- For advanced routing, rate limiting, or using multiple providers, we recommend [optillm](https://github.com/codelion/optillm)
- optillm acts as a proxy that can route requests to different LLMs based on your rules
- Simply point `api_base` to your optillm instance:
```yaml
llm:
  api_base: "http://localhost:8000/v1"
```

This setup ensures OpenEvolve can work with any LLM provider: OpenAI, Anthropic, Google, Cohere, local models via Ollama/vLLM, or any other OpenAI-compatible endpoint.
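To make "OpenAI-compatible" concrete, this stdlib-only sketch shows the request shape such endpoints accept (the helper name and structure are illustrative, not OpenEvolve internals; the SDK assembles the equivalent under the hood):

```python
import json


def build_chat_request(api_base: str, api_key: str, model: str, prompt: str):
    """Assemble (url, headers, body) for a chat completion call against
    any OpenAI-compatible endpoint."""
    url = f"{api_base.rstrip('/')}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps(
        {"model": model, "messages": [{"role": "user", "content": prompt}]}
    )
    return url, headers, body
```

Because only the base URL differs between providers, swapping `api_base` in config.yaml is all the routing change needed.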

```python
import os
from openevolve import OpenEvolve

# Ensure API key is set
if not os.environ.get("OPENAI_API_KEY"):
    raise ValueError("Please set OPENAI_API_KEY environment variable")

# Initialize the system
evolve = OpenEvolve(
    initial_program_path="path/to/initial_program.py",
    # ...
)
```