Taskfile.yml (4 changes: 2 additions & 2 deletions)

@@ -8,8 +8,8 @@ vars:
   GOLANGCI_LINT_VERSION: v2.4.0
   GOIMPORTS_VERSION: v0.29.0
   DPRINT_VERSION: 0.48.0
-  EXAMPLE_VERSION: "0.5.1"
-  RUNNER_VERSION: "0.5.0"
+  EXAMPLE_VERSION: "0.6.0"
+  RUNNER_VERSION: "0.6.0"
   VERSION: # if version is not passed we hack the semver by encoding the commit as pre-release
     sh: echo "${VERSION:-0.0.0-$(git rev-parse --short HEAD)}"

This file was deleted.

@@ -67,7 +67,7 @@ Stop real-time audio classification.

 Terminates audio capture and releases any associated resources.

-#### `classify_from_file(audio_path: str, confidence: int)`
+#### `classify_from_file(audio_path: str, confidence: float)`

 Classify audio content from a WAV file.

@@ -80,9 +80,8 @@ Supported sample widths:

 ##### Parameters

 - **audio_path** (*str*): Path to the `.wav` audio file to classify.
-- **confidence** (*int*) (optional): Confidence threshold (0–1). If None,
-  the default confidence level specified during initialization
-  will be applied.
+- **confidence** (*float*) (optional): Minimum confidence threshold (0.0–1.0) required
+  for a detection to be considered valid. Defaults to 0.8 (80%).

 ##### Returns
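A minimal usage sketch of the updated signature. The class and module names below are assumptions for illustration, since the diff does not show them:

```python
# Hypothetical class and module names: the diff does not show them.
from audio_classification import AudioClassifier

classifier = AudioClassifier()

# confidence is now a float in [0.0, 1.0]; omit it to use the 0.8 default.
results = classifier.classify_from_file("speech_sample.wav", confidence=0.6)
print(results)
```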
@@ -0,0 +1,120 @@
# cloud_llm API Reference

## Index

- Class `CloudLLM`
- Class `CloudModel`

---

## `CloudLLM` class

```python
class CloudLLM(api_key: str, model: Union[str, CloudModel], system_prompt: str, temperature: Optional[float], timeout: int)
```

A Brick for interacting with cloud-based Large Language Models (LLMs).

This class wraps LangChain functionality to provide a simplified, unified interface
for chatting with models like Claude, GPT, and Gemini. It supports both synchronous
'one-shot' responses and streaming output, with optional conversational memory.

### Parameters

- **api_key** (*str*): The API access key for the target LLM service. Defaults to the
'API_KEY' environment variable.
- **model** (*Union[str, CloudModel]*): The model identifier. Accepts a `CloudModel`
enum member (e.g., `CloudModel.OPENAI_GPT`) or its corresponding raw string
value (e.g., `'gpt-4o-mini'`). Defaults to `CloudModel.ANTHROPIC_CLAUDE`.
- **system_prompt** (*str*): A system-level instruction that defines the AI's persona
and constraints (e.g., "You are a helpful assistant"). Defaults to empty.
- **temperature** (*Optional[float]*): The sampling temperature between 0.0 and 1.0.
Higher values make output more random/creative; lower values make it more
deterministic. Defaults to 0.7.
- **timeout** (*int*): The maximum duration in seconds to wait for a response before
timing out. Defaults to 30.

### Raises

- **ValueError**: If `api_key` is not provided (empty string).
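
A minimal construction sketch based on the parameters above; the importable module name `cloud_llm` is an assumption taken from the file's title:

```python
import os

# Assumed import path: `cloud_llm` matches the file's title, but the real
# package layout is not shown in the diff.
from cloud_llm import CloudLLM, CloudModel

llm = CloudLLM(
    api_key=os.environ["API_KEY"],   # the documented default also reads API_KEY
    model=CloudModel.OPENAI_GPT,     # or the raw string 'gpt-4o-mini'
    system_prompt="You are a helpful assistant",
    temperature=0.2,                 # low values give more deterministic output
    timeout=30,
)
```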

### Methods

#### `with_memory(max_messages: int)`

Enables conversational memory for this instance.

Configures the Brick to retain a window of previous messages, allowing the
AI to maintain context across multiple interactions.

##### Parameters

- **max_messages** (*int*): The maximum number of messages (user + AI) to keep
in history. Older messages are discarded. Set to 0 to disable memory.
Defaults to 10.

##### Returns

- (*CloudLLM*): The current instance, allowing for method chaining.
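
Because `with_memory` returns the instance, it chains naturally onto construction; a sketch under the same import assumption as above:

```python
import os
from cloud_llm import CloudLLM  # assumed import path, as above

# Retain a rolling window of the last 10 messages (user + AI); chaining works
# because with_memory() returns the same CloudLLM instance.
llm = CloudLLM(api_key=os.environ["API_KEY"]).with_memory(max_messages=10)
```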

#### `chat(message: str)`

Sends a message to the AI and blocks until the complete response is received.

This method automatically manages conversation history if memory is enabled.

##### Parameters

- **message** (*str*): The input text prompt from the user.

##### Returns

- (*str*): The complete text response generated by the AI.

##### Raises

- **RuntimeError**: If the internal chain is not initialized or if the API request fails.
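
A short sketch of a blocking, memory-backed exchange (same import assumption as above):

```python
import os
from cloud_llm import CloudLLM  # assumed import path, as above

llm = CloudLLM(api_key=os.environ["API_KEY"]).with_memory(max_messages=10)

# Blocking call: returns only once the full response has been generated.
print(llm.chat("Summarize the benefits of streaming APIs in one sentence."))

# With memory enabled, a follow-up can refer back to the previous turn.
print(llm.chat("Now rephrase that for a non-technical audience."))
```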

#### `chat_stream(message: str)`

Sends a message to the AI and yields response tokens as they are generated.

This allows for processing or displaying the response in real-time (streaming).
The generation can be interrupted by calling `stop_stream()`.

##### Parameters

- **message** (*str*): The input text prompt from the user.

##### Returns

- (*Iterator[str]*): Chunks of text (tokens) from the AI response, yielded as they are generated.

##### Raises

- **RuntimeError**: If the internal chain is not initialized or if the API request fails.
- **AlreadyGenerating**: If a streaming session is already active.

#### `stop_stream()`

Signals the active streaming generation to stop.

This sets an internal flag that causes the `chat_stream` iterator to break
early. It has no effect if no stream is currently running.
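
A streaming sketch that consumes tokens as they arrive and requests an early stop (same import assumption as above):

```python
import os
from cloud_llm import CloudLLM  # assumed import path, as above

llm = CloudLLM(api_key=os.environ["API_KEY"])

# Print tokens as they arrive; ask for an early stop once enough text is in.
received = ""
for chunk in llm.chat_stream("Write a short poem about concurrency."):
    print(chunk, end="", flush=True)
    received += chunk
    if len(received) > 200:
        llm.stop_stream()  # flags the iterator to break early on its next step
```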

#### `clear_memory()`

Clears the conversational memory history.

Resets the stored context. This is useful for starting a new conversation
topic without previous context interfering. Only applies if memory is enabled.


---

## `CloudModel` class

```python
class CloudModel()
```

An enumeration of the supported cloud model identifiers, such as `CloudModel.ANTHROPIC_CLAUDE` and `CloudModel.OPENAI_GPT`, each corresponding to a raw model string (e.g., `'gpt-4o-mini'`).

@@ -19,6 +19,14 @@ This module processes an input image and returns:
 - Corresponding class labels
 - Confidence scores for each detection

+### Parameters
+
+- **confidence** (*float*): Minimum confidence threshold for detections. Default is 0.3 (30%).
+
+### Raises
+
+- **ValueError**: If model information cannot be retrieved.
+
 ### Methods

 #### `detect_from_file(image_path: str, confidence: float)`
@@ -60,7 +68,7 @@ Draw bounding boxes on an image enclosing detected objects using PIL.
 ##### Returns

 -: Image with bounding boxes and key points drawn.
-  None if no detection or invalid image.
+  None if input image or detections are invalid.

 #### `process(item)`
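A usage sketch of the detection flow documented above. The class and module names are assumptions for illustration, since the diff does not show them:

```python
# Hypothetical class and module names: the diff does not show them.
from object_detection import ObjectDetector

detector = ObjectDetector(confidence=0.3)  # 0.3 (30%) is the documented default

# A per-call threshold overrides the instance default.
detections = detector.detect_from_file("street.jpg", confidence=0.5)
print(detections)  # bounding boxes, class labels, and confidence scores
```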