
texttunnel: Efficient text processing with GPT-3.5 and GPT-4

This package offers a straightforward interface for integrating the GPT-3.5 and GPT-4 models into your natural language processing pipelines. It is designed for the following use case:

Suppose you have a corpus of text data that you want to analyze with GPT-3.5 or GPT-4. The goal is to perform extractive NLP tasks such as classification, named entity recognition, translation, summarization, question answering, or sentiment analysis. For this scenario, the package prioritizes efficiency and tidiness, giving you streamlined, structured results.

Features:

  • 📄 Output Schema: Uses JSON Schema together with OpenAI's function calling schema to define the output data structure (see the example after this list).
  • ✔️ Input Validation: Ensures well-structured and error-free API requests by validating input data.
  • ✅ Output Validation: Checks the response data from OpenAI's API against the expected schema to maintain data integrity.
  • 🚦 Asynchronous Requests: Facilitates speedy data processing by sending simultaneous requests to OpenAI's API, while staying within API rate limits.
  • 🚀 Efficient Batching: Supports bulk processing by packing multiple input texts into a single request to the OpenAI API.
  • 💰 Cost Estimation: Provides cost estimates before any API requests are sent, keeping API usage costs transparent.
  • 💾 Caching: Uses aiohttp_client_cache to avoid redundant requests and reduce cost by caching previous requests. Supports SQLite, MongoDB, DynamoDB and Redis cache backends.
  • 📝 Request Logging: Implements Python's native logging framework for tracking and logging all API requests.

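To illustrate the Output Schema feature, here is a sketch of a function-calling schema for a simple sentiment analysis task. The structure follows OpenAI's function calling format; the function and property names are made up for illustration, and the exact object texttunnel expects is described in the docs.

function = {
    "name": "classify_sentiment",  # hypothetical function name
    "parameters": {
        "type": "object",
        "properties": {
            "sentiment": {
                "type": "string",
                "enum": ["positive", "neutral", "negative"],
            },
        },
        "required": ["sentiment"],
    },
}
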
Note that this package only works with function calling and only with the OpenAI API. If you're looking for a more flexible solution, consider instructor or litellm. You might also consider the OpenAI Batch API, since it offers cost savings compared to synchronous API calls.

⚠️ Maintenance mode: At this time no new features or enhancements are being developed. Only critical bugfixes will be made.

Installation

The package is available on PyPI. To install it, run:

pip install texttunnel

or via poetry:

poetry add texttunnel

Note: If you want to use caching, you need to install aiohttp_client_cache together with the extras for your chosen cache backend. Please refer to the aiohttp_client_cache documentation for details.
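
For example, to use the SQLite backend (the extras name follows the aiohttp_client_cache documentation; double-check it there):

pip install "aiohttp-client-cache[sqlite]"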

Usage

Check the docs: https://qagentur.github.io/texttunnel/

Create an account on OpenAI and get an API key. Set it as an environment variable called OPENAI_API_KEY.
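
For example, in a Unix-style shell:

export OPENAI_API_KEY="your-api-key"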

Check the examples directory for examples of how to use this package.
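
As a rough, minimal sketch of the end-to-end flow (the function and parameter names below are assumptions, not a verbatim copy of the API; the examples directory and the docs have the authoritative versions):

from texttunnel import chat, models, processor

# Output schema in OpenAI function-calling format (names are illustrative)
function = {
    "name": "classify_sentiment",
    "parameters": {
        "type": "object",
        "properties": {
            "sentiment": {"type": "string", "enum": ["positive", "neutral", "negative"]},
        },
        "required": ["sentiment"],
    },
}

texts = ["I love this product!", "The delivery was late."]

# Pack the texts into batched requests (function and argument names are assumptions)
requests = chat.build_requests(
    texts=texts,
    function=function,
    model=models.GPT_3_5_TURBO,
    system_message="Classify the sentiment of each text.",
)

# Estimate the cost before sending anything
print(sum(r.estimate_cost_usd() for r in requests))

# Send the requests asynchronously while respecting rate limits
responses = processor.process_api_requests(requests=requests)

# Extract the validated function arguments from each response
results = [processor.parse_arguments(response=r) for r in responses]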

If your account has been granted higher rate limits than the ones configured in the models module, you can override the default attributes of the Model class instances. See the documentation of the models module.
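
A minimal sketch of such an override, assuming the rate limit attributes are named tokens_per_minute and requests_per_minute (check the models module documentation for the actual field names):

from texttunnel import models

# Attribute names below are assumptions; consult the models module docs
models.GPT_3_5_TURBO.tokens_per_minute = 2_000_000
models.GPT_3_5_TURBO.requests_per_minute = 10_000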

Development

To get started with development, follow these steps:

  • Clone the repository
  • Install poetry if you don't have it yet
  • Navigate to the project folder
  • Run poetry install to install the dependencies
  • Run the tests with poetry run pytest -v

This project uses Google-style docstrings and black formatting. The docs are built automatically from the docstrings.