
Python wrapper


Client Initialization

Valkey GLIDE provides support for both Cluster and Standalone configurations. Please refer to the relevant section based on your specific setup.

Cluster

Valkey GLIDE supports Cluster deployments, where the database is partitioned across multiple primary shards, with each shard being represented by a primary node and zero or more replica nodes.

To initialize a GlideClusterClient, you need to provide a GlideClusterClientConfiguration that includes the addresses of initial seed nodes. Valkey GLIDE automatically discovers the entire cluster topology, eliminating the necessity of explicitly listing all cluster nodes.

Connecting to a Cluster

The NodeAddress class represents the host and port of a cluster node. The host can be either an IP address, a hostname, or a fully qualified domain name (FQDN).

Example - Connecting to a cluster

from glide import (
    GlideClusterClient,
    GlideClusterClientConfiguration,
    NodeAddress
)

addresses = [NodeAddress(host="address.example.com", port=6379)]
client_config = GlideClusterClientConfiguration(addresses)

client = await GlideClusterClient.create(client_config)

Request Routing

In the cluster, data is divided into slots, and each primary node within the cluster is responsible for specific slots. Valkey GLIDE adheres to Valkey OSS guidelines when determining the node(s) to which a command should be sent in clustering mode.

For more details on the routing of specific commands, please refer to the documentation within the code.
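
For illustration, here is a minimal sketch, assuming an already connected GlideClusterClient (client), the AllNodes route class, and a custom_command method that accepts a route argument; check the routing documentation in the code for the exact API.

from glide import AllNodes

async def routing_example(client):
    # Key-based commands are routed automatically to the node that owns the key's slot
    await client.set("user:1", "alice")
    await client.get("user:1")

    # Keyless commands can be routed explicitly, e.g. broadcast a PING to every node
    await client.custom_command(["PING"], route=AllNodes())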

Response Aggregation

When requests are dispatched to multiple shards in a cluster (as discussed in the Request routing section), the client needs to aggregate the responses for a given command. Valkey GLIDE follows Valkey OSS guidelines for determining how to aggregate the responses from multiple shards within a cluster.

To learn more about response aggregation for specific commands, please refer to the documentation within the code.
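
For example, a command that fans out to several nodes may come back as a per-node mapping rather than a single value. The sketch below assumes a connected GlideClusterClient, the RandomNode and AllPrimaries route classes, an info(route=...) parameter, and that a multi-node INFO returns a dict keyed by node address; verify the exact response shapes in the command documentation.

from glide import AllPrimaries, RandomNode

async def aggregation_example(client):
    # A single-node route returns a single value
    single_node_info = await client.info(route=RandomNode())

    # A multi-node route returns one response per node (assumed: a dict keyed by node address)
    per_node_info = await client.info(route=AllPrimaries())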

Topology Updates

The cluster's topology can change over time. New nodes can be added or removed, and the primary node owning a specific slot may change. Valkey GLIDE is designed to automatically rediscover the topology whenever the server indicates a change in slot ownership. This ensures that the Valkey GLIDE client stays in sync with the cluster's topology.

Standalone

Valkey GLIDE also supports Standalone deployments, where the database is hosted on a single primary node, optionally with replica nodes. To initialize a GlideClient for a standalone setup, create a GlideClientConfiguration that includes the addresses of the primary and all replica nodes.

Example - Connecting to a standalone server

from glide import (
    GlideClient,
    GlideClientConfiguration,
    NodeAddress
)

addresses = [
    NodeAddress(host="primary.example.com", port=6379),
    NodeAddress(host="replica1.example.com", port=6379),
    NodeAddress(host="replica2.example.com", port=6379)
  ]
client_config = GlideClientConfiguration(addresses)

client = await GlideClient.create(client_config)

Valkey commands

For information on the supported commands and their corresponding parameters, we recommend referring to the documentation in the code. This documentation provides in-depth insights into the usage and options available for each command.
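
As a quick orientation, a minimal sketch of a few common commands against an already created client (standalone or cluster); note that string responses may be returned as bytes.

async def basic_commands(client):
    await client.set("greeting", "hello")
    value = await client.get("greeting")  # may be returned as bytes, e.g. b"hello"
    await client.delete(["greeting"])     # takes a list of keys and returns the number of keys deleted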

Batch: Transaction and Pipelining (Glide 2.0)

In Valkey Glide 2.0, the concept of Batch and ClusterBatch replaces the previous Transaction and ClusterTransaction APIs. This change provides greater flexibility by supporting both atomic batches (Transactions) and non-atomic batches (Pipelining), while ensuring easy configuration and clear, detailed examples for each scenario.

Overview

Glide 2.0 introduces a robust Batch API with two primary modes:

  • Atomic Batch: Guarantees that all commands in a batch execute as a single, atomic unit. No other commands can interleave (similar to MULTI/EXEC).
  • Non-Atomic Batch (Pipeline): Sends multiple commands in one request without atomic guarantees. Commands can span multiple slots/nodes in a cluster and do not block other operations from being processed between them.

Both modes use the same classes, Batch for standalone mode and ClusterBatch for cluster mode, distinguished by an is_atomic flag. Extra configuration is provided via BatchOptions or ClusterBatchOptions, allowing control over timeouts, routing, and retry strategies.

Key Concepts

Atomic Batch (Transaction)

  • Definition: A set of commands executed together as a single, indivisible operation.
  • Guarantees: Sequential execution without interruption. Other clients cannot interleave commands between the batched operations.
  • Slot Constraint (Cluster Mode): When running against a cluster, all keys in an atomic batch must map to the same hash slot. Mixing keys from different slots will cause the transaction to fail.
  • Underlying Valkey: Equivalent to MULTI/EXEC Valkey commands.
  • Use Case: When you need consistency and isolation.
  • See: Valkey Transactions.

Non-Atomic Batch (Pipeline)

  • Definition: A group of commands sent in a single request, but executed without atomicity or isolation.
  • Behavior: Commands may be processed on different slots/nodes (in cluster mode), and other operations from different clients may interleave during execution.
  • Underlying Valkey: Similar to pipelining, minimizing round-trip latencies by sending all commands at once.
  • Use Case: Bulk reads or writes where each command is independent.
  • See: Valkey Pipelines.

Classes and API

Batch

For standalone (non-cluster, cluster mode disabled) clients.

from glide import Batch

# Create an atomic batch (transaction)
batch = Batch(True)
# Create a non-atomic batch (pipeline)
batch = Batch(False)

Note: Standalone batches are executed on the primary node.

ClusterBatch

For cluster (cluster mode enabled) clients. Mirrors Batch, but routes commands based on slot ownership, splitting the batch into sub-pipelines if needed. Read more in Multi-Node Support.

from glide import ClusterBatch

# Create an atomic cluster batch (must use keys mapping to same slot)
batch = ClusterBatch(True)
# Create a non-atomic cluster batch (pipeline may span multiple slots)
batch = ClusterBatch(False)

Note: When is_atomic=True, all keys in the ClusterBatch must map to the same slot; attempting to include keys from different slots results in an error. Read more in Multi-Node Support. If the client is configured to read from replicas (PREFER_REPLICA, AZ_AFFINITY, or AZ_AFFINITY_REPLICAS_AND_PRIMARY), read commands may be routed to replicas in a round-robin manner. If this behavior impacts your application, consider creating a dedicated client with the desired ReadFrom configuration (see the sketch below).
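
If replica routing matters for your batches, one option is a dedicated client that always reads from the primary. A minimal sketch, reusing the configuration classes shown elsewhere on this page:

from glide import (
    GlideClusterClient,
    GlideClusterClientConfiguration,
    NodeAddress,
    ReadFrom
)

addresses = [NodeAddress(host="address.example.com", port=6379)]

# Dedicated client whose reads always go to the primary
batch_client_config = GlideClusterClientConfiguration(addresses, read_from=ReadFrom.PRIMARY)
batch_client = await GlideClusterClient.create(batch_client_config)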

Error handling - Raise on Error

The raise_on_error flag determines how errors are surfaced when calling exec(...). It is passed directly to exec:

# Standalone Mode
async def exec(
    self,
    batch: Batch,
    raise_on_error: bool,
    options: Optional[BatchOptions] = None,
)

# Cluster Mode
async def exec(
    self,
    batch: ClusterBatch,
    raise_on_error: bool,
    options: Optional[ClusterBatchOptions] = None,
)

Behavior:

  • raise_on_error=True: The first error encountered within the batch (after all configured retries and redirections have been executed) is raised as a RequestError.

  • raise_on_error=False:

    • Errors are returned as part of the response list rather than raised.
    • Each failed command's error details appear as a RequestError instance in the corresponding position of the returned list.
    • Allows processing of both successful and failed commands together.

Example:

# Cluster pipeline with raise_on_error=False
batch = ClusterBatch(False)

batch.set(key, "hello")
batch.lpop(key)
batch.delete([key])
batch.rename(key, key2)

result = await client.exec(batch, raise_on_error=False)
print("Result is:", result)
# Output: Result is: ['OK', RequestError('WRONGTYPE: Operation against a key holding the wrong kind of value'), 1, RequestError('An error was signalled by the server: - ResponseError: no such key')]

# Transaction with raise_on_error=True
batch = ClusterBatch(True)

batch.set(key, "hello")
batch.lpop(key)
batch.delete([key])
batch.rename(key, key2)

try:
    await client.exec(batch, raise_on_error=True)
except RequestError as e:
    print("Batch execution aborted:", e)
# Output: Batch execution aborted: WRONGTYPE: Operation against a key holding the wrong kind of value

BatchOptions

Configuration for standalone batches.

Option | Type | Default | Description
timeout | int | Client-level request timeout (e.g., 5000 ms) | Maximum time in milliseconds to wait for the batch response. If exceeded, a timeout error is returned for the batch.

from glide import BatchOptions

batch_options = BatchOptions(timeout=2000) # 2 seconds

ClusterBatchOptions

Configuration for cluster batches.

Option | Type | Default | Description
timeout | int | Client's request_timeout | Maximum time in milliseconds to wait for the entire cluster batch response.
retry_strategy | ClusterBatchRetryStrategy | None (defaults to no retries) | Configures retry settings for server and connection errors. Not supported when is_atomic=True; retry strategies only apply to non-atomic (pipeline) batches.
route | SingleNodeRoute | None | Configures single-node routing for the batch request.

ClusterBatchRetryStrategy

Defines retry behavior (only for non-atomic cluster batches).

Option | Type | Default | Description
retry_server_error | bool | False | Retry commands that fail with retriable server errors (e.g., TRYAGAIN). May cause out-of-order results.
retry_connection_error | bool | False | Retry the entire batch on connection failures. May cause duplicate executions, since the server might have processed the request before the failure.

from glide import BatchRetryStrategy

retry_strategy = BatchRetryStrategy(retry_server_error=True, retry_connection_error=False)

Note: The ClusterBatchRetryStrategy configuration applies only to non-atomic cluster batches. If it is provided for an atomic cluster batch (a cluster transaction), an error will be thrown.

Full usage

from glide import ClusterBatchOptions, BatchRetryStrategy

retry_strategy = BatchRetryStrategy(retry_server_error=True, retry_connection_error=False)
options = ClusterBatchOptions(retry_strategy=retry_strategy)

Configuration Details

Timeout

  • Specifies the maximum time (in milliseconds) to wait for the batch (atomic or non-atomic) request to complete.
  • If the timeout is reached before receiving all responses, the batch fails with a timeout error.
  • Defaults to the client's request_timeout if not explicitly set (see the sketch below).
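
For example, a long-running batch can be given a larger budget than the client default; a minimal sketch, assuming an existing client and batch:

from glide import BatchOptions

# No options: the client's request_timeout (250 ms unless configured) applies
results = await client.exec(batch, raise_on_error=False)

# Per-batch override: allow up to 10 seconds for this batch only
results = await client.exec(batch, raise_on_error=False, options=BatchOptions(timeout=10000))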

Retry Strategies (Cluster Only, Non-Atomic Batches)

  • Retry on Server Errors

    • Applies when a command fails with a retriable server error (e.g., TRYAGAIN).
    • Glide will automatically retry the failed command on the same node or on the new primary, depending on the topology update.
    • ⚠️ Caveat: Retried commands may arrive later than subsequent commands, leading to out-of-order execution if commands target the same slot.
  • Retry on Connection Errors

    • If a connection error occurs, the entire batch (or sub-pipeline; see Multi-Node Support) is retried from the start.
    • ⚠️ Caveat: If the server received and processed some or all commands before the connection failure, retrying the batch may lead to duplicate executions.

Route (Cluster Only)

Configures single-node routing for the batch request. The client will send the batch to the specified node defined by route (see the sketch after this list). If a redirection error occurs:

  • For Atomic Batches (Transactions): The entire transaction will be redirected.
  • For Non-Atomic Batches (Pipelines): only the commands that encountered redirection errors will be redirected.
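
A minimal sketch of single-node routing for a non-atomic batch; it assumes an existing GlideClusterClient (client), and the SlotKeyRoute and SlotType routing classes plus the route parameter of ClusterBatchOptions, whose exact names and signatures should be checked against the routes documentation in the code:

from glide import ClusterBatch, ClusterBatchOptions, SlotKeyRoute, SlotType

# Route the whole batch to the primary that owns the slot of "user:100"
route = SlotKeyRoute(SlotType.PRIMARY, "user:100")
options = ClusterBatchOptions(route=route)

batch = ClusterBatch(False)
batch.get("user:100:visits")
batch.get("user:100:name")

results = await client.exec(batch, raise_on_error=False, options=options)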

Usage Examples

Standalone (Atomic Batch)

from glide import (
    Batch,
    BatchOptions,
    GlideClient,
    GlideClientConfiguration,
    NodeAddress,
    RequestError
)

# Create client configuration
addresses = [
    NodeAddress("server_primary.example.com", 6379),
    NodeAddress("server_replica.example.com", 6379)
]
config = GlideClientConfiguration(addresses)

# Initialize client
client = await GlideClient.create(config)

# Configure batch options
options = BatchOptions(timeout=2000)

# Create atomic batch (true indicates atomic/transaction mode)
atomic_batch = Batch(True)
atomic_batch.set("account:source", "100")
atomic_batch.set("account:dest", "0")
atomic_batch.incrby("account:dest", 50)
atomic_batch.decrby("account:source", 50)
atomic_batch.get("account:source")

try: 
    # Execute with raise_on_error=True
    results = await client.exec(atomic_batch, raise_on_error=True, options=options)
    print("Atomic Batch Results:", results)
    # Expected output: Atomic Batch Results: ['OK', 'OK', 50, 50, '50']
except RequestError as e:
    print("Batch failed:", e)

Standalone (Non-Atomic Batch)

from glide import (
    GlideClientConfiguration,
    NodeAddress,
    GlideClient,
    Batch,
    BatchOptions
)

# Create client configuration
addresses = [
    NodeAddress("localhost", 6379)
]
config = GlideClientConfiguration(addresses)

# Initialize client
client = await GlideClient.create(config)

# Configure batch options
options = BatchOptions(timeout=2000)  # 2-second timeout

# Create non-atomic batch (False indicates pipeline mode)
pipeline = Batch(False)
pipeline.set("temp:key1", "value1")
pipeline.set("temp:key2", "value2")
pipeline.get("temp:key1")
pipeline.get("temp:key2")

# Execute with raise_on_error = False
results = await client.exec(pipeline, raise_on_error=False, options=options)
print("Pipeline Results:", results)
# Expected output: Pipeline Results: ['OK', 'OK', 'value1', 'value2']

Cluster (Atomic Batch)

from glide import (
    GlideClusterClientConfiguration,
    NodeAddress,
    GlideClusterClient,
    ClusterBatch,
    ClusterBatchOptions
)

# Initialize cluster client configuration
addresses = [
    NodeAddress("127.0.0.1", 6379)
]
config = GlideClusterClientConfiguration(addresses)

# Initialize client
glideClusterClient = await GlideClusterClient.create(config)

# Configure atomic batch options
options = ClusterBatchOptions(timeout=3000)  # 3-second timeout

# Create atomic cluster batch (all keys map to same slot)
atomicClusterBatch = ClusterBatch(True)
atomicClusterBatch.set("user:100:visits", "1")
atomicClusterBatch.incrby("user:100:visits", 5)
atomicClusterBatch.get("user:100:visits")

try:
    # Execute with raise_on_error = True
    clusterResults = await glideClusterClient.exec(atomicClusterBatch, raise_on_error=True, options=options)
    print("Atomic Cluster Batch:", clusterResults)
    # Expected output: Atomic Cluster Batch: ['OK', 6, '6']
except RequestError as e:
    print("Atomic cluster batch failed:", e)

Important: If you attempt to include keys from different slots in an atomic cluster batch, it will fail with an error informing you that keys must map to the same slot when is_atomic=True.

Cluster (Non-Atomic Batch / Pipeline)

from glide import (
    GlideClusterClientConfiguration,
    NodeAddress,
    GlideClusterClient,
    ClusterBatch,
    ClusterBatchOptions,
    ClusterBatchRetryStrategy
)

# Initialize cluster client configuration
addresses = [
    NodeAddress("localhost", 6379)
]
config = GlideClusterClientConfiguration(addresses)

# Initialize client
glideClusterClient = await GlideClusterClient.create(config)

# Configure retry strategy and pipeline options
retry_strategy = ClusterBatchRetryStrategy(
    retry_server_error=False,
    retry_connection_error=True
)

pipeline_options = ClusterBatchOptions(
    timeout=5000,                # 5-second timeout
    retry_strategy=retry_strategy
)

# Create pipeline spanning multiple slots
pipeline_cluster = ClusterBatch(False)  # False indicates non-atomic (pipeline)
pipeline_cluster.set("page:home:views", "100")
pipeline_cluster.incrby("page:home:views", 25)
pipeline_cluster.get("page:home:views")
pipeline_cluster.lpush("recent:logins", ["user1"])
pipeline_cluster.lpush("recent:logins", ["user2"])
pipeline_cluster.lrange("recent:logins", 0, 1)

# Execute with raise_on_error = False
pipeline_results = await glideClusterClient.exec(pipeline_cluster, raise_on_error=False, options=pipeline_options)
print("Pipeline Cluster Results:", pipeline_results)
# Expected output: Pipeline Cluster Results: ['OK', 125, '125', 1, 2, ['user2', 'user1']]

Multi-Node Support

While atomic batches (transactions) are restricted to a single Valkey node (all commands must map to the same hash slot in cluster mode), non-atomic batches (pipelines) can span multiple nodes. This enables operations that involve keys located in different slots, or even multi-node commands.

When Glide processes a pipeline:

  1. Slot Calculation and Routing: For each key-based command (e.g., GET, SET), Glide computes the hash slot and determines which node owns that slot. If a command does not reference a key (e.g., INFO), it follows the command’s default request policy.
  2. Grouping into Sub-Pipelines: Commands targeting the same node are grouped together into a sub-pipeline. Each sub-pipeline contains all commands destined for a specific node.
  3. Dispatching Sub-Pipelines: Glide sends each sub-pipeline independently to its target node as a pipelined request.
  4. Aggregating Responses: Once all sub-pipelines return their results, Glide reassembles the responses into a single array, preserving the original command order. Multi-node commands are automatically split and dispatched appropriately.

Retry Strategy in Pipelines

When errors occur during pipeline execution, Glide handles them efficiently and granularly — each command in the pipeline receives its own response, whether successful or not. This means pipeline execution is not all-or-nothing: some commands may succeed while others may return errors (See the ClusterBatchRetryStrategy configuration and error handling details in the classes and API section for how to handle these errors programmatically).

Glide distinguishes between different types of errors and handles them as follows:

  • Redirection Errors (e.g., MOVED or ASK): These are always handled automatically. Glide will update the topology map if needed and redirect the command to the appropriate node, regardless of the retry configuration.
  • Retriable Server Errors (e.g., TRYAGAIN): If the retry_server_error option is enabled in the batch's retry strategy, Glide will retry commands that fail with retriable server errors.
    ⚠️ Retrying may cause out-of-order execution for commands targeting the same slot.
  • Connection Errors: If the retry_connection_error option is enabled, Glide will retry the batch if a connection failure occurs.
    ⚠️ Retrying after a connection error may result in duplicate executions, since the server might have already received and processed the request before the error occurred.

Retry strategies are currently supported only for non-atomic (pipeline) cluster batches. You can configure these using the ClusterBatchRetryStrategy options:

  • retry_server_error: Retry on server errors.
  • retry_connection_error: Retry on connection failures.

Example Scenario:

Suppose you issue the following commands:

MGET key {key}:1
SET key "value"

When the keys do not exist, the expected result is:

[null, null]
OK

However, suppose the slot of key is migrating. In this case, both commands will return an ASK error and be redirected. Upon ASK redirection, a multi-key command (like MGET) may return a TRYAGAIN error (triggering a retry), while the SET command succeeds immediately. This can result in an unintended reordering of commands if the first command is retried after the slot stabilizes:

["value", null]
OK

Deprecation Notice

  • Deprecated Classes: Transaction and ClusterTransaction are deprecated in Glide 2.0.

  • Replacement: Use Batch or ClusterBatch with is_atomic=True to achieve transaction-like (atomic) behavior.

  • Migration Tips:

    • Replace Transaction() with Batch(True).
    • Replace ClusterTransaction() with ClusterBatch(True).
    • Replace client.exec(transaction) with client.exec(batch, raise_on_error) or client.exec(batch, raise_on_error, options), as sketched below.
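
A before/after sketch of the migration in Python (the 1.x Transaction API is shown commented out for comparison):

# Glide 1.x (deprecated)
# transaction = Transaction()
# transaction.set("key", "value")
# transaction.get("key")
# results = await client.exec(transaction)

# Glide 2.0: an atomic batch behaves like the old transaction
batch = Batch(True)
batch.set("key", "value")
batch.get("key")
results = await client.exec(batch, raise_on_error=True)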

OpenTelemetry (GLIDE 2.0)

Observability is consistently one of the top feature requests by customers. Valkey GLIDE 2.0 introduces support for OpenTelemetry (OTel), enabling developers to gain deep insights into client-side performance and behavior in distributed systems. OTel is an open source, vendor-neutral framework that provides APIs, SDKs, and tools for generating, collecting, and exporting telemetry data—such as traces, metrics, and logs. It supports multiple programming languages and integrates with various observability backends like Prometheus, Jaeger, and AWS CloudWatch.

How It Works

GLIDE's OpenTelemetry integration is designed to be both powerful and easy to adopt. Once an OTel collector endpoint is configured, GLIDE begins emitting default metrics and traces automatically—no additional code changes are required. This simplifies the path to observability best practices and minimizes disruption to existing workflows.

Metrics Overview

GLIDE emits several built-in metrics out of the box. These metrics can be used to build dashboards, configure alerts, and monitor performance trends:

  • Timeouts: Number of requests that exceeded their timeout duration.
  • Retries: Count of operations retried due to transient errors or topology changes.
  • Moved Errors: Number of MOVED responses received, indicating key reallocation in the cluster.

These metrics are emitted to your configured OpenTelemetry collector and can be viewed in any supported backend (Prometheus, CloudWatch, etc.).

Tracing Integration

GLIDE creates a trace span for each Valkey command, giving detailed visibility into client-side performance. Each trace captures:

  • The entire command lifecycle: from creation to completion or failure.
  • A nested send_command span, measuring communication time with the Valkey server.
  • A status tag indicating success or error for each span, helping you identify failure patterns.

This distinction helps developers separate client-side queuing latency from server communication delays, making it easier to troubleshoot performance issues.

⚠ Note: Some advanced commands are not yet included in tracing instrumentation:

  • The SCAN family of commands (SCAN, SSCAN, HSCAN, ZSCAN)
  • Lua scripting commands (EVAL, EVALSHA)

Support for these commands will be added in a future version as we continue to expand tracing coverage.

Even with these exceptions, GLIDE 2.0 provides comprehensive insights across the vast majority of standard operations, making it easy to adopt observability best practices with minimal effort.

Getting Started

To begin collecting telemetry data with GLIDE 2.0:

  1. Set up an OpenTelemetry Collector to receive trace and metric data.
  2. Configure the GLIDE client with the endpoint to your collector.

GLIDE does not export data directly to third-party services—instead, it sends data to your collector, which routes it to your backend (e.g., CloudWatch, Prometheus, Jaeger).

Supported Collector Protocols

You can configure the OTel collector endpoint using one of the following protocols:

  • http:// or https:// - Send data via HTTP(S)
  • grpc:// - Use gRPC for efficient telemetry transmission
  • file:// - Write telemetry data to a local file (ideal for local dev/debugging)

Optional Parameters

When initializing OpenTelemetry, you can customize behavior using the OpenTelemetryConfig object. Note: Both traces and metrics are optional, but at least one must be provided in the OpenTelemetryConfig. If neither is set, OpenTelemetry will not emit any data.

Tracing

OpenTelemetryConfig.traces
  • endpoint (required): The trace collector endpoint.
  • sample_percentage (optional): Percentage (0–100) of commands to sample for tracing. Default: 1.
    • For production, a low sampling rate (1–5%) is recommended to balance performance and insight.

Metrics

OpenTelemetryConfig.metrics
  • endpoint (required): The metrics collector endpoint.

Flush Interval

OpenTelemetryConfig.flush_interval_ms
  • (optional): Time in milliseconds between flushes to the collector. Default: 5000.

File Exporter Details

If using file:// as the endpoint (a configuration sketch follows this list):

  • The path must begin with file://.
  • If a directory is provided (or no file extension), data is written to signals.json in that directory.
  • If a filename is included, it will be used as-is.
  • The parent directory must already exist.
  • Data is appended, not overwritten.
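
For local debugging, a configuration sketch using the file exporter (the path is illustrative and its parent directory must already exist):

from glide import OpenTelemetry, OpenTelemetryConfig, OpenTelemetryTracesConfig

# Trace data is appended to /tmp/glide-otel/signals.json
OpenTelemetry.init(OpenTelemetryConfig(
    traces=OpenTelemetryTracesConfig(
        endpoint="file:///tmp/glide-otel",
        sample_percentage=100  # sample everything while debugging locally
    )
))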

Validation Rules

  • flush_interval_ms must be a positive integer.
  • sample_percentage must be between 0 and 100.
  • File exporter paths must start with file:// and have an existing parent directory.
  • Invalid configuration will throw an error synchronously when calling OpenTelemetry.init().

⚠️ Important: OpenTelemetry.init() can only be called once per process. Subsequent calls will be ignored. To change configuration, restart the process.

Full Example (Python)

from glide import OpenTelemetry, OpenTelemetryConfig, OpenTelemetryTracesConfig, OpenTelemetryMetricsConfig

OpenTelemetry.init(OpenTelemetryConfig(
    traces=OpenTelemetryTracesConfig(
        endpoint="http://localhost:4318/v1/traces",
        sample_percentage=10  # Optional, defaults to 1. Can also be changed at runtime via set_sample_percentage().
    ),
    metrics=OpenTelemetryMetricsConfig(
        endpoint="http://localhost:4318/v1/metrics"
    ),
    flush_interval_ms=1000  # Optional, defaults to 5000
))

Advanced Configuration Settings

Authentication

By default, when connecting to Valkey, Valkey GLIDE operates in an unauthenticated mode.

Valkey GLIDE also offers support for an authenticated connection mode.

In authenticated mode, you have the following options:

  • Use both a username and password, which is recommended and configured through ACLs on the server.
  • Use a password only, which is applicable if the server is configured with the requirepass setting.

To provide the necessary authentication credentials to the client, you can use the ServerCredentials class.

Example - Connecting with Username and Password to a Cluster

from glide import (
    GlideClusterClient,
    GlideClusterClientConfiguration,
    ServerCredentials,
    NodeAddress
)

addresses = [NodeAddress(host="address.example.com", port=6379)]
credentials = ServerCredentials("passwordA", "user1")
client_config = GlideClusterClientConfiguration(addresses, credentials=credentials)

client = await GlideClusterClient.create(client_config)

Example - Connecting with Username and Password to a Standalone server

from glide import (
    GlideClient,
    GlideClientConfiguration,
    ServerCredentials,
    NodeAddress
)

addresses = [
    NodeAddress(host="primary.example.com", port=6379),
    NodeAddress(host="replica1.example.com", port=6379),
    NodeAddress(host="replica2.example.com", port=6379)
  ]
credentials = ServerCredentials("passwordA", "user1")
client_config = GlideClientConfiguration(addresses, credentials=credentials)

client = await GlideClient.create(client_config)
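
For a server configured with requirepass only (no ACL user), credentials can be created without a username. A minimal sketch, reusing the addresses and classes from the example above and assuming the username argument of ServerCredentials is optional:

from glide import ServerCredentials

# Password-only authentication (requirepass)
credentials = ServerCredentials("passwordA")
client_config = GlideClientConfiguration(addresses, credentials=credentials)

client = await GlideClient.create(client_config)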

Example - Using IAM Authentication with GLIDE for ElastiCache and MemoryDB

See the General Concepts section for a detailed explanation about using IAM authentication with GLIDE.

The example below utilizes the AWS SDK for the IAM token generation. Please refer to the AWS SDK docs for a detailed explanation regarding generating the IAM token.

Token generation -

from typing import Tuple, Union
from urllib.parse import ParseResult, urlencode, urlunparse
import botocore.session
from botocore.model import ServiceId
from botocore.signers import RequestSigner
from cachetools import TTLCache, cached
import valkey


class ElastiCacheIAMProvider(valkey.CredentialProvider):
    def __init__(self, user, cluster_name, region="us-east-1"):
        self.user = user
        self.cluster_name = cluster_name
        self.region = region

        session = botocore.session.get_session()
        self.request_signer = RequestSigner(
            ServiceId("elasticache"),
            self.region,
            "elasticache",
            "v4",
            session.get_credentials(),
            session.get_component("event_emitter"),
        )

    # Generated IAM tokens are valid for 15 minutes
    @cached(cache=TTLCache(maxsize=128, ttl=900))
    def get_credentials(self) -> Tuple[str, str]:
        query_params = {"Action": "connect", "User": self.user}
        url = urlunparse(
            ParseResult(
                scheme="https",
                netloc=self.cluster_name,
                path="/",
                query=urlencode(query_params),
                params="",
                fragment="",
            )
        )
        signed_url = self.request_signer.generate_presigned_url(
            {"method": "GET", "url": url, "body": {}, "headers": {}, "context": {}},
            operation_name="connect",
            expires_in=900,
            region_name=self.region,
        )
        # Elasticache expects to receive the URL without the protocol prefix
        return (self.user, signed_url.removeprefix("https://"))

Usage example -

from typing import Tuple, Union
import asyncio
from glide import (
    GlideClusterClient,
    GlideClusterClientConfiguration,
    ServerCredentials,
    NodeAddress,
)

async def main():
    username = "your-username"
    cluster_name = "your-cluster-name"
    
    auth = ElastiCacheIAMProvider(user=username, cluster_name=cluster_name, region='us-east-1')
    _, iam_token = auth.get_credentials()
    valkey_credentials = ServerCredentials(
        username=username,
        password=iam_token,
    )
    
    addresses = [NodeAddress("example-cluster-endpoint.use1.cache.amazonaws.com", 6379)]
    config = GlideClusterClientConfiguration(addresses=addresses, use_tls=True, credentials=valkey_credentials)
    client = await GlideClusterClient.create(config)
    
    # Update the connection password dynamically
    _, new_iam_token = auth.get_credentials()
    await client.update_connection_password(new_iam_token)

    # To perform immediate re-authentication, set the second parameter to True
    await client.update_connection_password(new_iam_token, True)

asyncio.run(main())

TLS

Valkey GLIDE supports secure TLS connections to a data store.

It's important to note that TLS support in Valkey GLIDE relies on rustls. Currently, Valkey GLIDE employs the default rustls settings with no option for customization.

Example - Connecting with TLS Mode Enabled to a Cluster

from glide import (
    GlideClusterClient,
    GlideClusterClientConfiguration,
    NodeAddress
)

addresses = [NodeAddress(host="address.example.com", port=6379)]
client_config = GlideClusterClientConfiguration(addresses, use_tls=True)

client = await GlideClusterClient.create(client_config)

Example - Connecting with TLS Mode Enabled to a Standalone server

from glide import (
    GlideClient,
    GlideClientConfiguration,
    NodeAddress
)

addresses = [
    NodeAddress(host="primary.example.com", port=6379),
    NodeAddress(host="replica1.example.com", port=6379),
    NodeAddress(host="replica2.example.com", port=6379)
  ]
client_config = GlideClientConfiguration(addresses, use_tls=True)

client = await GlideClient.create(client_config)

Read Strategy

By default, Valkey GLIDE directs read commands to the primary node that owns a specific slot. For applications that prioritize read throughput and can tolerate possibly stale data, Valkey GLIDE provides the flexibility to route reads to replica nodes.

Valkey GLIDE supports the following read strategies, allowing you to choose the one that best fits your specific use case.

Strategy | Description
PRIMARY | Always read from the primary, in order to get the freshest data.
PREFER_REPLICA | Spread requests across all replicas in a round-robin manner. If no replica is available, route the request to the primary.
AZ_AFFINITY | Spread read requests across replicas in the client's availability zone in a round-robin manner, falling back to other replicas or the primary if needed.
AZ_AFFINITY_REPLICAS_AND_PRIMARY | Spread read requests across nodes within the client's availability zone in a round-robin manner, prioritizing local replicas, then the local primary, and falling back to other replicas or the primary if needed.

Example - Use PREFER_REPLICA Read Strategy

from glide import (
    GlideClusterClient,
    GlideClusterClientConfiguration,
    NodeAddress,
    ReadFrom
)

addresses = [NodeAddress(host="address.example.com", port=6379)]
client_config = GlideClusterClientConfiguration(addresses, read_from=ReadFrom.PREFER_REPLICA)

client = await GlideClusterClient.create(client_config)
await client.set("key1", "val1")
# get will read from one of the replicas
await client.get("key1")

Example - Use AZ_AFFINITY Read Strategy

If the ReadFrom strategy is AZ_AFFINITY, the client_az setting is required to ensure that read-only commands are directed to replicas within the specified AZ, if any exist.

from glide import (
    GlideClusterClient,
    GlideClusterClientConfiguration,
    NodeAddress,
    ReadFrom
)

addresses = [NodeAddress(host="address.example.com", port=6379)]
client_config = GlideClusterClientConfiguration(addresses, read_from=ReadFrom.AZ_AFFINITY, client_az="us-east-1a")

client = await GlideClusterClient.create(client_config)
await client.set("key1", "val1")
# get will read from one of the replicas in the same availability zone as the client, if one exists
await client.get("key1")

Example - Use AZ_AFFINITY_REPLICAS_AND_PRIMARY Read Strategy

If the ReadFrom strategy is AZ_AFFINITY_REPLICAS_AND_PRIMARY, the client_az setting is required to ensure that read-only commands are directed to replicas or the primary within the specified AZ, if any exist.

from glide import (
    GlideClusterClient,
    GlideClusterClientConfiguration,
    NodeAddress,
    ReadFrom
)

addresses = [NodeAddress(host="address.example.com", port=6379)]
client_config = GlideClusterClientConfiguration(addresses, read_from=ReadFrom.AZ_AFFINITY_REPLICAS_AND_PRIMARY, client_az="us-east-1a")

client = await GlideClusterClient.create(client_config)
await client.set("key1", "val1")
# get will read from one of the replicas or the primary in the same availability zone as the client, if any exist
await client.get("key1")

Timeouts and Reconnect Strategy

Valkey GLIDE allows you to configure timeout settings and reconnect strategies. These configurations can be applied through the GlideClusterClientConfiguration and GlideClientConfiguration parameters.

Configuration setting | Description | Default value
request_timeout | The duration in milliseconds that the client will wait for a request to complete. This time frame includes sending the request, awaiting a response from the node(s), and any necessary reconnection or retry attempts. If a pending request exceeds this duration, a timeout error is triggered. If no timeout value is explicitly set, the default value is used. | 250 milliseconds
reconnect_strategy | The reconnection strategy defines how and when reconnection attempts are made in the event of connection failures. | Exponential backoff

Example - Setting Increased Request Timeout for Long-Running Commands

from glide import (
    NodeAddress,
    GlideClusterClientConfiguration,
    GlideClusterClient
)

addresses = [NodeAddress(host="address.example.com", port=6379)]
client_config = GlideClusterClientConfiguration(addresses, request_timeout=500)

client = await GlideClusterClient.create(client_config)
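
The reconnect strategy can be tuned as well. The sketch below assumes a BackoffStrategy helper taking num_of_retries, factor, and exponent_base; treat those names as assumptions and check the configuration documentation in the code:

from glide import (
    BackoffStrategy,
    GlideClusterClient,
    GlideClusterClientConfiguration,
    NodeAddress
)

addresses = [NodeAddress(host="address.example.com", port=6379)]

# Assumed exponential-backoff parameters: retry count, base delay factor, and exponent base
reconnect_strategy = BackoffStrategy(num_of_retries=10, factor=500, exponent_base=2)
client_config = GlideClusterClientConfiguration(
    addresses,
    request_timeout=500,
    reconnect_strategy=reconnect_strategy
)

client = await GlideClusterClient.create(client_config)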

Tracking resources

GLIDE 1.2 introduces a new non-Valkey API, get_statistics, which returns a dict with (currently) two properties (available for both GlideClient and GlideClusterClient):

  • total_connections contains the number of active connections across all clients
  • total_clients contains the number of active clients (regardless of their type)

from glide import (
    NodeAddress,
    GlideClusterClientConfiguration,
    GlideClusterClient
)

addresses = [NodeAddress(host="address.example.com", port=6379)]
client_config = GlideClusterClientConfiguration(addresses, request_timeout=500)

client = await GlideClusterClient.create(client_config)

# Retrieve statistics
stats = await client.get_statistics()

# Example: Accessing and printing statistics
print(f"Total Connections: {stats['total_connections']}")
print(f"Total Clients: {stats['total_clients']}")