A comprehensive Python module for interacting with the Perplexity AI API. This module provides an easy-to-use interface for chat completions, streaming responses, and search functionality using Perplexity's various AI models.
- 🤖 Multiple Models: Support for all Perplexity AI models including Sonar, Llama, Mixtral, and CodeLlama
- 🔄 Streaming Support: Real-time streaming responses for interactive applications
- 🔍 Search Functionality: Built-in search capabilities using Perplexity's online models
- 📝 Type Safety: Full type hints and structured data classes for a better development experience
- 🛡️ Error Handling: Comprehensive error handling with custom exception classes
- 🔧 Flexible Configuration: Environment-based configuration with sensible defaults
- 📚 Context Management: Built-in support for Python context managers
 
```bash
pip install perplexity-api-client
```

To install the latest development version directly from GitHub:

```bash
pip install git+https://github.com/yourusername/perplexity-api-client.git
```

Clone the repository and install in development mode:

```bash
git clone https://github.com/yourusername/perplexity-api-client.git
cd perplexity-api-client
pip install -e .[dev]
```

Get your API key from Perplexity AI and set it as an environment variable:
```bash
export PERPLEXITY_API_KEY="your-api-key-here"
```

```python
from perplexity_api import PerplexityClient

# Initialize the client
client = PerplexityClient()

# Ask a simple question
response = client.ask("What is the capital of France?")
print(response)

# Close the client
client.close()
```

Using a context manager (recommended):

```python
from perplexity_api import PerplexityClient

with PerplexityClient() as client:
    response = client.ask("Explain quantum computing in simple terms")
    print(response)
```

Streaming responses:

```python
with PerplexityClient() as client:
    for chunk in client.ask_stream("Tell me about artificial intelligence"):
        print(chunk, end="", flush=True)
```

Choosing a model:

```python
from perplexity_api import PerplexityClient, PerplexityModel

with PerplexityClient() as client:
    # Use a specific model
    response = client.ask(
        "What's the latest news?",
        model=PerplexityModel.SONAR_MEDIUM_ONLINE
    )
    print(response)
```

Searching for current information:

```python
with PerplexityClient() as client:
    # Search for current information
    result = client.search("Latest developments in AI 2024")
    print(result)
```

Using a system message:

```python
with PerplexityClient() as client:
    response = client.ask(
        "How do computers work?",
        system_message="You are a helpful teacher who explains things simply.",
        temperature=0.7
    )
    print(response)
```

Structured chat completions:

```python
from perplexity_api import Message, ChatCompletionRequest

with PerplexityClient() as client:
    messages = [
        Message(role="system", content="You are a coding assistant."),
        Message(role="user", content="How do I create a Python function?"),
    ]

    request = ChatCompletionRequest(
        model="sonar-pro",
        messages=messages,
        temperature=0.5
    )

    response = client.chat_completion(request)
    print(response.choices[0].message.content)
```

Convenience function:

```python
from perplexity_api import ask_perplexity

# Quick one-liner for simple questions
answer = ask_perplexity("What is Python?")
print(answer)
```

The module supports all Perplexity AI models:
- Sonar Models: `sonar-small-chat`, `sonar-small-online`, `sonar-medium-chat`, `sonar-medium-online`, `sonar-pro`
- Llama Models: `llama-3.1-8b-instruct`, `llama-3.1-70b-instruct`
- Other Models: `mixtral-8x7b-instruct`, `codellama-34b-instruct`

Recommended models:

- For general questions: `sonar-pro` (default)
- For current information/search: `sonar-medium-online` or `sonar-small-online`
- For coding tasks: `codellama-34b-instruct`
- For cost-effective usage: `sonar-small-chat`
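The recommendations above can be captured in a small helper. This is only a sketch: `recommend_model` is a hypothetical function name, not part of the module.

```python
# Hypothetical helper (not part of the module): map a task type to the
# model name recommended above.
def recommend_model(task: str) -> str:
    recommendations = {
        "general": "sonar-pro",             # default, general questions
        "search": "sonar-medium-online",    # current information / search
        "coding": "codellama-34b-instruct", # coding tasks
        "budget": "sonar-small-chat",       # cost-effective usage
    }
    # Fall back to the module's default model for unknown task types
    return recommendations.get(task, "sonar-pro")

print(recommend_model("coding"))  # codellama-34b-instruct
```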
You can configure the client using environment variables:

```bash
export PERPLEXITY_API_KEY="your-api-key"
export PERPLEXITY_BASE_URL="https://api.perplexity.ai"  # Optional
export PERPLEXITY_DEFAULT_MODEL="sonar-pro"             # Optional
export PERPLEXITY_MAX_RETRIES="3"                       # Optional
export PERPLEXITY_TIMEOUT="30"                          # Optional
```

You can also configure the client programmatically:

```python
from config import PerplexityConfig

config = PerplexityConfig(
    api_key="your-api-key",
    default_model="sonar-medium-online",
    timeout=60
)

client = PerplexityClient(api_key=config.api_key)
```
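For reference, environment-based configuration with defaults typically reduces to something like the following sketch (assumed behavior, mirroring the variables listed above; the actual lookup lives in the module's `config.py`):

```python
# Sketch: reading the documented environment variables with their
# documented defaults. Names match the variables listed above.
import os

timeout = int(os.environ.get("PERPLEXITY_TIMEOUT", "30"))
max_retries = int(os.environ.get("PERPLEXITY_MAX_RETRIES", "3"))
base_url = os.environ.get("PERPLEXITY_BASE_URL", "https://api.perplexity.ai")
default_model = os.environ.get("PERPLEXITY_DEFAULT_MODEL", "sonar-pro")
```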
```python
from perplexity_api import PerplexityClient, PerplexityAPIError

try:
    with PerplexityClient() as client:
        response = client.ask("Your question here")
        print(response)
except PerplexityAPIError as e:
    print(f"API Error: {e.message}")
    if e.status_code:
        print(f"Status Code: {e.status_code}")
except Exception as e:
    print(f"Unexpected error: {e}")
```

`PerplexityClient` is the main client class for interacting with the Perplexity API.
Methods:

- `ask(question, model, system_message, **kwargs)`: Ask a simple question
- `ask_stream(question, model, system_message, **kwargs)`: Get a streaming response
- `search(query, model)`: Search for information using online models
- `chat_completion(request)`: Full chat completion with structured request/response
- `get_available_models()`: List all available models
- `close()`: Close the HTTP session

Parameters:

- `question`/`query` (str): The question or search query
- `model` (str | PerplexityModel): Model to use for the request
- `system_message` (str, optional): System message to set context
- `max_tokens` (int, optional): Maximum tokens in the response
- `temperature` (float, optional): Sampling temperature (0.0 to 1.0)
- `top_p` (float, optional): Top-p sampling parameter
- `top_k` (int, optional): Top-k sampling parameter
- `presence_penalty` (float, optional): Presence penalty (-2.0 to 2.0)
- `frequency_penalty` (float, optional): Frequency penalty (-2.0 to 2.0)
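The optional parameters can be passed through `**kwargs`. A minimal sketch, using the parameter names and ranges listed above (the actual API call is commented out because it requires a live API key):

```python
# Sampling parameters collected for a call to client.ask(...).
# Names and ranges follow the parameter list above.
params = {
    "max_tokens": 256,        # cap the response length
    "temperature": 0.3,       # 0.0 to 1.0; lower = more deterministic
    "top_p": 0.9,             # nucleus sampling
    "presence_penalty": 0.5,  # -2.0 to 2.0
}

# with PerplexityClient() as client:
#     print(client.ask("Summarize the Zen of Python", **params))
```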
Run the example script to see the module in action:

```bash
python example.py
```

The example script demonstrates:

- Basic usage
- Streaming responses
- Different models
- Search functionality
- System messages
- Multi-turn conversations
- Convenience functions
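Multi-turn conversations amount to resending the growing message history on each request. Here is a minimal sketch of maintaining that history as role/content pairs (the `add_turn` helper is hypothetical, not part of the module):

```python
# Hypothetical multi-turn sketch: keep the conversation as role/content
# dicts, appending each user question and assistant answer in order.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def add_turn(history, role, content):
    history.append({"role": role, "content": content})
    return history

add_turn(history, "user", "What is Python?")
add_turn(history, "assistant", "Python is a high-level programming language.")
add_turn(history, "user", "Who created it?")

# Each entry could then become a Message(role=..., content=...) and be sent
# via ChatCompletionRequest, as in the structured chat completion example.
print([m["role"] for m in history])  # ['system', 'user', 'assistant', 'user']
```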
 
```
perplexity_api/
├── perplexity_client.py  # Main client module
├── config.py             # Configuration management
├── example.py            # Usage examples
├── requirements.txt      # Dependencies
└── README.md             # This file
```
We welcome contributions! Here's how you can help:

1. Fork the repository on GitHub
2. Clone your fork locally:

   ```bash
   git clone https://github.com/yourusername/perplexity-api-client.git
   cd perplexity-api-client
   ```

3. Create a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

4. Install development dependencies:

   ```bash
   pip install -e .[dev]
   ```

5. Create a feature branch:

   ```bash
   git checkout -b feature/your-feature-name
   ```

6. Make your changes and ensure they pass all tests:

   ```bash
   # Run tests
   pytest

   # Check code formatting
   black --check .

   # Run linting
   flake8 .

   # Type checking
   mypy perplexity_api/
   ```

7. Commit your changes:

   ```bash
   git commit -m "Add your feature description"
   ```

8. Push to your fork:

   ```bash
   git push origin feature/your-feature-name
   ```

9. Create a Pull Request on GitHub
Please read CONTRIBUTING.md for detailed guidelines.
```bash
# Install test dependencies
pip install -e .[test]

# Run all tests
pytest

# Run with coverage
pytest --cov=perplexity_api --cov-report=html
```

This project uses several tools to maintain code quality:
- Black: Code formatting
- Flake8: Linting
- MyPy: Type checking
- Bandit: Security linting
Run all quality checks:
```bash
black .
flake8 .
mypy perplexity_api/
bandit -r perplexity_api/
```

This project is open source and available under the MIT License.
For issues related to the Perplexity API itself, please refer to the official Perplexity documentation.
For issues with this client module, please create an issue in the repository.