Perplexity API Client


A comprehensive Python module for interacting with the Perplexity AI API. This module provides an easy-to-use interface for chat completions, streaming responses, and search functionality using Perplexity's various AI models.

Features

  • 🤖 Multiple Models: Support for all Perplexity AI models including Sonar, Llama, Mixtral, and CodeLlama
  • 🔄 Streaming Support: Real-time streaming responses for interactive applications
  • 🔍 Search Functionality: Built-in search capabilities using Perplexity's online models
  • 📝 Type Safety: Full type hints and structured data classes for better development experience
  • 🛡️ Error Handling: Comprehensive error handling with custom exception classes
  • 🔧 Flexible Configuration: Environment-based configuration with sensible defaults
  • 📚 Context Management: Built-in support for Python context managers

Installation

From PyPI (Recommended)

pip install perplexity-api-client

From GitHub

To install the latest development version directly from GitHub:

pip install git+https://github.com/alainrossi/perplexity-api-client.git

For Development

Clone the repository and install in development mode:

git clone https://github.com/alainrossi/perplexity-api-client.git
cd perplexity-api-client
pip install -e .[dev]

Quick Start

1. Set up your API key

Get your API key from Perplexity AI and set it as an environment variable:

export PERPLEXITY_API_KEY="your-api-key-here"

2. Basic Usage

from perplexity_api import PerplexityClient

# Initialize the client
client = PerplexityClient()

# Ask a simple question
response = client.ask("What is the capital of France?")
print(response)

# Close the client
client.close()

3. Using Context Manager (Recommended)

from perplexity_api import PerplexityClient

with PerplexityClient() as client:
    response = client.ask("Explain quantum computing in simple terms")
    print(response)

Advanced Usage

Streaming Responses

with PerplexityClient() as client:
    for chunk in client.ask_stream("Tell me about artificial intelligence"):
        print(chunk, end="", flush=True)
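Under the hood, streaming endpoints of this kind typically deliver tokens as server-sent events (SSE). The sketch below shows how text deltas might be extracted from raw SSE lines of an OpenAI-style stream; it is illustrative only, not the module's actual implementation:

```python
import json

def parse_sse_chunks(lines):
    """Yield content deltas from raw SSE lines of an
    OpenAI-style streaming response (illustrative sketch)."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip comments and blank keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":  # sentinel marking end of stream
            break
        event = json.loads(payload)
        delta = event["choices"][0]["delta"].get("content")
        if delta:
            yield delta

# Example with canned SSE lines:
raw = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
print("".join(parse_sse_chunks(raw)))  # Hello
```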

Using Different Models

from perplexity_api import PerplexityClient, PerplexityModel

with PerplexityClient() as client:
    # Use a specific model
    response = client.ask(
        "What's the latest news?", 
        model=PerplexityModel.SONAR_MEDIUM_ONLINE
    )
    print(response)

Search Functionality

with PerplexityClient() as client:
    # Search for current information
    result = client.search("Latest developments in AI 2024")
    print(result)

System Messages and Context

with PerplexityClient() as client:
    response = client.ask(
        "How do computers work?",
        system_message="You are a helpful teacher who explains things simply.",
        temperature=0.7
    )
    print(response)

Multi-turn Conversations

from perplexity_api import Message, ChatCompletionRequest

with PerplexityClient() as client:
    messages = [
        Message(role="system", content="You are a coding assistant."),
        Message(role="user", content="How do I create a Python function?"),
    ]
    
    request = ChatCompletionRequest(
        model="sonar-pro",
        messages=messages,
        temperature=0.5
    )
    
    response = client.chat_completion(request)
    print(response.choices[0].message.content)

Convenience Function

from perplexity_api import ask_perplexity

# Quick one-liner for simple questions
answer = ask_perplexity("What is Python?")
print(answer)

Available Models

The module supports all Perplexity AI models:

  • Sonar Models: sonar-small-chat, sonar-small-online, sonar-medium-chat, sonar-medium-online, sonar-pro
  • Llama Models: llama-3.1-8b-instruct, llama-3.1-70b-instruct
  • Other Models: mixtral-8x7b-instruct, codellama-34b-instruct

Model Recommendations

  • For general questions: sonar-pro (default)
  • For current information/search: sonar-medium-online or sonar-small-online
  • For coding tasks: codellama-34b-instruct
  • For cost-effective usage: sonar-small-chat
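If you want to select a model per task programmatically, the recommendations above can be captured in a small helper. This is an illustrative convenience, not part of the module:

```python
# Hypothetical helper mapping task categories to the
# recommended model names listed above.
RECOMMENDED_MODELS = {
    "general": "sonar-pro",
    "search": "sonar-medium-online",
    "coding": "codellama-34b-instruct",
    "budget": "sonar-small-chat",
}

def recommended_model(task: str) -> str:
    """Return the recommended model for a task, defaulting to sonar-pro."""
    return RECOMMENDED_MODELS.get(task, "sonar-pro")

print(recommended_model("coding"))  # codellama-34b-instruct
```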

Configuration

Environment Variables

You can configure the client using environment variables:

export PERPLEXITY_API_KEY="your-api-key"
export PERPLEXITY_BASE_URL="https://api.perplexity.ai"  # Optional
export PERPLEXITY_DEFAULT_MODEL="sonar-pro"             # Optional
export PERPLEXITY_MAX_RETRIES="3"                       # Optional
export PERPLEXITY_TIMEOUT="30"                          # Optional
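The module's `config.py` resolves these variables internally; a minimal sketch of how such environment-based configuration with sensible defaults might look (names mirror the variables above, the function itself is hypothetical):

```python
import os

def load_settings() -> dict:
    """Resolve configuration from the environment, falling back to
    the documented defaults (illustrative sketch)."""
    return {
        "api_key": os.getenv("PERPLEXITY_API_KEY"),  # required, no default
        "base_url": os.getenv("PERPLEXITY_BASE_URL", "https://api.perplexity.ai"),
        "default_model": os.getenv("PERPLEXITY_DEFAULT_MODEL", "sonar-pro"),
        "max_retries": int(os.getenv("PERPLEXITY_MAX_RETRIES", "3")),
        "timeout": float(os.getenv("PERPLEXITY_TIMEOUT", "30")),
    }

settings = load_settings()
print(settings["default_model"])
```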

Programmatic Configuration

from config import PerplexityConfig

config = PerplexityConfig(
    api_key="your-api-key",
    default_model="sonar-medium-online",
    timeout=60
)

client = PerplexityClient(api_key=config.api_key)

Error Handling

The module provides comprehensive error handling:

from perplexity_api import PerplexityClient, PerplexityAPIError

try:
    with PerplexityClient() as client:
        response = client.ask("Your question here")
        print(response)
except PerplexityAPIError as e:
    print(f"API Error: {e.message}")
    if e.status_code:
        print(f"Status Code: {e.status_code}")
except Exception as e:
    print(f"Unexpected error: {e}")
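Failed requests are retried up to `PERPLEXITY_MAX_RETRIES` times. A generic retry loop with exponential backoff looks like this (illustrative sketch, not the module's exact logic):

```python
import time

def with_retries(fn, max_retries: int = 3, base_delay: float = 0.0):
    """Call fn(), retrying up to max_retries additional times on exception,
    sleeping base_delay * 2**attempt between attempts."""
    last_exc = None
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            if attempt < max_retries:
                time.sleep(base_delay * (2 ** attempt))
    raise last_exc

# Demo with a function that fails twice before succeeding:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(with_retries(flaky, max_retries=3))  # ok
```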

API Reference

PerplexityClient

Main client class for interacting with the Perplexity API.

Methods

  • ask(question, model, system_message, **kwargs): Ask a simple question
  • ask_stream(question, model, system_message, **kwargs): Get streaming response
  • search(query, model): Search for information using online models
  • chat_completion(request): Full chat completion with structured request/response
  • get_available_models(): List all available models
  • close(): Close the HTTP session

Parameters

  • question/query (str): The question or search query
  • model (str | PerplexityModel): Model to use for the request
  • system_message (str, optional): System message to set context
  • max_tokens (int, optional): Maximum tokens in response
  • temperature (float, optional): Sampling temperature (0.0 to 1.0)
  • top_p (float, optional): Top-p sampling parameter
  • top_k (int, optional): Top-k sampling parameter
  • presence_penalty (float, optional): Presence penalty (-2.0 to 2.0)
  • frequency_penalty (float, optional): Frequency penalty (-2.0 to 2.0)
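The documented ranges can be checked client-side before a request is sent. This hypothetical validator simply mirrors the bounds listed above:

```python
def validate_sampling(temperature=None, presence_penalty=None,
                      frequency_penalty=None):
    """Raise ValueError if a sampling parameter is outside its
    documented range (hypothetical helper)."""
    if temperature is not None and not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature must be in [0.0, 1.0]")
    for name, value in (("presence_penalty", presence_penalty),
                        ("frequency_penalty", frequency_penalty)):
        if value is not None and not -2.0 <= value <= 2.0:
            raise ValueError(f"{name} must be in [-2.0, 2.0]")

validate_sampling(temperature=0.7, presence_penalty=0.5)  # ok, no exception
```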

Examples

Run the example script to see the module in action:

python example.py

The example script demonstrates:

  • Basic usage
  • Streaming responses
  • Different models
  • Search functionality
  • System messages
  • Multi-turn conversations
  • Convenience functions

Files Structure

perplexity_api/
├── perplexity_client.py  # Main client module
├── config.py            # Configuration management
├── example.py           # Usage examples
├── requirements.txt     # Dependencies
└── README.md           # This file

Contributing

We welcome contributions! Here's how you can help:

  1. Fork the repository on GitHub
  2. Clone your fork locally:
    git clone https://github.com/yourusername/perplexity-api-client.git
    cd perplexity-api-client
  3. Create a virtual environment:
    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  4. Install development dependencies:
    pip install -e .[dev]
  5. Create a feature branch:
    git checkout -b feature/your-feature-name
  6. Make your changes and ensure they pass all tests:
    # Run tests
    pytest
    
    # Check code formatting
    black --check .
    
    # Run linting
    flake8 .
    
    # Type checking
    mypy perplexity_api/
  7. Commit your changes:
    git commit -m "Add your feature description"
  8. Push to your fork:
    git push origin feature/your-feature-name
  9. Create a Pull Request on GitHub

Please read CONTRIBUTING.md for detailed guidelines.

Development

Running Tests

# Install test dependencies
pip install -e .[test]

# Run all tests
pytest

# Run with coverage
pytest --cov=perplexity_api --cov-report=html

Code Quality

This project uses several tools to maintain code quality:

  • Black: Code formatting
  • Flake8: Linting
  • MyPy: Type checking
  • Bandit: Security linting

Run all quality checks:

black .
flake8 .
mypy perplexity_api/
bandit -r perplexity_api/

License

This project is open source and available under the MIT License.

Support

For issues related to the Perplexity API itself, please refer to the official Perplexity documentation.

For issues with this client module, please create an issue in the repository.
