diff --git a/README.md b/README.md index b972c26635..2c96aef58c 100644 --- a/README.md +++ b/README.md @@ -11,7 +11,7 @@ * [1.4. Basic Configuration](#14-basic-configuration) * [1.4.1. Configuring Hazelcast IMDG](#141-configuring-hazelcast-imdg) * [1.4.2. Configuring Hazelcast Python Client](#142-configuring-hazelcast-python-client) - * [1.4.2.1. Group Settings](#1421-group-settings) + * [1.4.2.1. Cluster Name Setting](#1421-cluster-name-setting) * [1.4.2.2. Network Settings](#1422-network-settings) * [1.4.3. Client System Properties](#143-client-system-properties) * [1.5. Basic Usage](#15-basic-usage) @@ -19,7 +19,6 @@ * [2. Features](#2-features) * [3. Configuration Overview](#3-configuration-overview) * [3.1. Configuration Options](#31-configuration-options) - * [3.1.1. Programmatic Configuration](#311-programmatic-configuration) * [4. Serialization](#4-serialization) * [4.1. IdentifiedDataSerializable Serialization](#41-identifieddataserializable-serialization) * [4.2. Portable Serialization](#42-portable-serialization) @@ -32,15 +31,10 @@ * [5.2. Setting Smart Routing](#52-setting-smart-routing) * [5.3. Enabling Redo Operation](#53-enabling-redo-operation) * [5.4. Setting Connection Timeout](#54-setting-connection-timeout) - * [5.5. Setting Connection Attempt Limit](#55-setting-connection-attempt-limit) - * [5.6. Setting Connection Attempt Period](#56-setting-connection-attempt-period) - * [5.7. Enabling Client TLS/SSL](#57-enabling-client-tlsssl) - * [5.8. Enabling Hazelcast Cloud Discovery](#58-enabling-hazelcast-cloud-discovery) -* [6. Securing Client Connection](#6-securing-client-connection) - * [6.1. TLS/SSL](#61-tlsssl) - * [6.1.1. TLS/SSL for Hazelcast Members](#611-tlsssl-for-hazelcast-members) - * [6.1.2. TLS/SSL for Hazelcast Python Clients](#612-tlsssl-for-hazelcast-python-clients) - * [6.1.3. Mutual Authentication](#613-mutual-authentication) + * [5.5. Enabling Client TLS/SSL](#55-enabling-client-tlsssl) + * [5.6. Enabling Hazelcast Cloud Discovery](#56-enabling-hazelcast-cloud-discovery) +* [6. Client Connection Strategy](#6-client-connection-strategy) + * [6.1. Configuring Client Connection Retry](#61-configuring-client-connection-retry) * [7. Using Python Client with Hazelcast IMDG](#7-using-python-client-with-hazelcast-imdg) * [7.1. Python Client API Overview](#71-python-client-api-overview) * [7.2. Python Client Operation Modes](#72-python-client-operation-modes) @@ -58,12 +52,10 @@ * [7.4.6. Using List](#746-using-list) * [7.4.7. Using Ringbuffer](#747-using-ringbuffer) * [7.4.8. Using Topic](#748-using-topic) - * [7.4.9. Using Lock](#749-using-lock) - * [7.4.10. Using Atomic Long](#7410-using-atomic-long) - * [7.4.11. Using Semaphore](#7411-using-semaphore) - * [7.4.12. Using Transactions](#7412-using-transactions) - * [7.4.13. Using PN Counter](#7413-using-pn-counter) - * [7.4.14. Using Flake ID Generator](#7414-using-flake-id-generator) + * [7.4.9. Using Transactions](#749-using-transactions) + * [7.4.10. Using PN Counter](#7410-using-pn-counter) + * [7.4.11. Using Flake ID Generator](#7411-using-flake-id-generator) + * [7.4.11.1. Configuring Flake ID Generator](#74111-configuring-flake-id-generator) * [7.5. Distributed Events](#75-distributed-events) * [7.5.1. Cluster Events](#751-cluster-events) * [7.5.1.1. Listening for Member Events](#7511-listening-for-member-events) @@ -89,17 +81,24 @@ * [7.9. Monitoring and Logging](#79-monitoring-and-logging) * [7.9.1. Enabling Client Statistics](#791-enabling-client-statistics) * [7.9.2. 
Logging Configuration](#792-logging-configuration) -* [8. Development and Testing](#8-development-and-testing) - * [8.1. Building and Using Client From Sources](#81-building-and-using-client-from-sources) - * [8.2. Testing](#82-testing) -* [9. Getting Help](#9-getting-help) -* [10. Contributing](#10-contributing) -* [11. License](#11-license) -* [12. Copyright](#12-copyright) +* [8. Securing Client Connection](#8-securing-client-connection) + * [8.1. TLS/SSL](#81-tlsssl) + * [8.1.1. TLS/SSL for Hazelcast Members](#811-tlsssl-for-hazelcast-members) + * [8.1.2. TLS/SSL for Hazelcast Python Clients](#812-tlsssl-for-hazelcast-python-clients) + * [8.1.3. Mutual Authentication](#813-mutual-authentication) +* [9. Development and Testing](#9-development-and-testing) + * [9.1. Building and Using Client From Sources](#91-building-and-using-client-from-sources) + * [9.2. Testing](#92-testing) +* [10. Getting Help](#10-getting-help) +* [11. Contributing](#11-contributing) +* [12. License](#12-license) +* [13. Copyright](#13-copyright) # Introduction -This document provides information about the Python client for [Hazelcast](https://hazelcast.org/). This client uses Hazelcast's [Open Client Protocol](https://hazelcast.org/documentation/#open-binary) and works with Hazelcast IMDG 3.6 and higher versions. +This document provides information about the Python client for [Hazelcast](https://hazelcast.org/). +This client uses Hazelcast's [Open Client Protocol](https://github.com/hazelcast/hazelcast-client-protocol) and works +with Hazelcast IMDG 4.0 and higher versions. ### Resources @@ -115,14 +114,16 @@ See the [Releases](https://github.com/hazelcast/hazelcast-python-client/releases # 1. Getting Started -This chapter provides information on how to get started with your Hazelcast Python client. It outlines the requirements, installation and configuration of the client, setting up a cluster, and provides a simple application that uses a distributed map in Python client. +This chapter provides information on how to get started with your Hazelcast Python client. It outlines the requirements, +installation and configuration of the client, setting up a cluster, and provides a simple application that uses a +distributed map in Python client. ## 1.1. Requirements - Windows, Linux/UNIX or Mac OS X - Python 2.7 or Python 3.4 or newer -- Java 6 or newer -- Hazelcast IMDG 3.6 or newer +- Java 8 or newer +- Hazelcast IMDG 4.0 or newer - Latest Hazelcast Python client ## 1.2. Working with Hazelcast IMDG Clusters @@ -144,7 +145,8 @@ In order to use Hazelcast Python client, we first need to setup a Hazelcast IMDG There are following options to start a Hazelcast IMDG cluster easily: * You can run standalone members by downloading and running JAR files from the website. -* You can embed members to your Java projects. +* You can embed members to your Java projects. +* You can use our [Docker images](https://hub.docker.com/r/hazelcast/hazelcast/). We are going to download JARs from the website and run a standalone member for this guide. @@ -153,29 +155,38 @@ We are going to download JARs from the website and run a standalone member for t Follow the instructions below to create a Hazelcast IMDG cluster: 1. Go to Hazelcast's download [page](https://hazelcast.org/download/) and download either the `.zip` or `.tar` distribution of Hazelcast IMDG. -2. Decompress the contents into any directory that you -want to run members from. +2. Decompress the contents into any directory that you want to run members from. 3. 
Change into the directory that you decompressed the Hazelcast content and then into the `bin` directory. 4. Use either `start.sh` or `start.bat` depending on your operating system. Once you run the start script, you should see the Hazelcast IMDG logs in the terminal. You should see a log similar to the following, which means that your 1-member cluster is ready to be used: ``` -INFO: [192.168.0.3]:5701 [dev] [3.10.4] +Sep 03, 2020 2:21:57 PM com.hazelcast.core.LifecycleService +INFO: [192.168.1.10]:5701 [dev] [4.1-SNAPSHOT] [192.168.1.10]:5701 is STARTING +Sep 03, 2020 2:21:58 PM com.hazelcast.internal.cluster.ClusterService +INFO: [192.168.1.10]:5701 [dev] [4.1-SNAPSHOT] Members {size:1, ver:1} [ - Member [192.168.0.3]:5701 - 65dac4d1-2559-44bb-ba2e-ca41c56eedd6 this + Member [192.168.1.10]:5701 - 7362c66f-ef9f-4a6a-a003-f8b33dfd292a this ] -Sep 06, 2018 10:50:23 AM com.hazelcast.core.LifecycleService -INFO: [192.168.0.3]:5701 [dev] [3.10.4] [192.168.0.3]:5701 is STARTED +Sep 03, 2020 2:21:58 PM com.hazelcast.core.LifecycleService +INFO: [192.168.1.10]:5701 [dev] [4.1-SNAPSHOT] [192.168.1.10]:5701 is STARTED ``` #### 1.2.1.2. Adding User Library to CLASSPATH -When you want to use features such as querying and language interoperability, you might need to add your own Java classes to the Hazelcast member in order to use them from your Python client. This can be done by adding your own compiled code to the `CLASSPATH`. To do this, compile your code with the `CLASSPATH` and add the compiled files to the `user-lib` directory in the extracted `hazelcast-.zip` (or `tar`). Then, you can start your Hazelcast member by using the start scripts in the `bin` directory. The start scripts will automatically add your compiled classes to the `CLASSPATH`. +When you want to use features such as querying and language interoperability, you might need to add your own Java +classes to the Hazelcast member in order to use them from your Python client. This can be done by adding your own +compiled code to the `CLASSPATH`. To do this, compile your code with the `CLASSPATH` and add the compiled files to +the `user-lib` directory in the extracted `hazelcast-.zip` (or `tar`). Then, you can start your Hazelcast +member by using the start scripts in the `bin` directory. The start scripts will automatically add your compiled +classes to the `CLASSPATH`. -Note that if you are adding an `IdentifiedDataSerializable` or a `Portable` class, you need to add its factory too. Then, you should configure the factory in the `hazelcast.xml` configuration file. This file resides in the `bin` directory where you extracted the `hazelcast-.zip` (or `tar`). +Note that if you are adding an `IdentifiedDataSerializable` or a `Portable` class, you need to add its factory too. +Then, you should configure the factory in the `hazelcast.xml` configuration file. This file resides in the `bin` +directory where you extracted the `hazelcast-.zip` (or `tar`). The following is an example configuration when you are adding an `IdentifiedDataSerializable` class: @@ -217,18 +228,23 @@ If you are using Hazelcast IMDG and Python client on the same computer, generall trying out the client. However, if you run the client on a different computer than any of the cluster members, you may need to do some simple configurations such as specifying the member addresses. -The Hazelcast IMDG members and clients have their own configuration options. You may need to reflect some of the member side configurations on the client side to properly connect to the cluster. 
+The Hazelcast IMDG members and clients have their own configuration options. You may need to reflect some of the member
+side configurations on the client side to properly connect to the cluster.

This section describes the most common configuration elements to get you started in no time. It discusses some member side
configuration options to ease the understanding of Hazelcast's ecosystem. Then, the client side configuration options
-regarding the cluster connection are discussed. The configurations for the Hazelcast IMDG data structures that can be used in the Python client are discussed in the following sections.
+regarding the cluster connection are discussed. The configurations for the Hazelcast IMDG data structures
+that can be used in the Python client are discussed in the following sections.

-See the [Hazelcast IMDG Reference Manual](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html) and [Configuration Overview section](#3-configuration-overview) for more information.
+See the [Hazelcast IMDG Reference Manual](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html)
+and [Configuration Overview section](#3-configuration-overview) for more information.

### 1.4.1. Configuring Hazelcast IMDG

Hazelcast IMDG aims to run out-of-the-box for most common scenarios. However, if you have limitations on your network such as multicast being disabled,
-you may have to configure your Hazelcast IMDG members so that they can find each other on the network. Also, since most of the distributed data structures are configurable, you may want to configure them according to your needs. We will show you the basics about network configuration here.
+you may have to configure your Hazelcast IMDG members so that they can find each other on the network.
+Also, since most of the distributed data structures are configurable, you may want to configure them according to your needs.
+We will show you the basics about network configuration here.

You can use the following options to configure Hazelcast IMDG:

@@ -237,14 +253,12 @@ You can use the following options to configure Hazelcast IMDG:

Since we use standalone servers, we will use the `hazelcast.xml` file to configure our cluster members.

-When you download and unzip `hazelcast-<version>.zip` (or `tar`), you see the `hazelcast.xml` in the `bin` directory. When a Hazelcast member starts, it looks for the `hazelcast.xml` file to load the configuration from. A sample `hazelcast.xml` is shown below.
+When you download and unzip `hazelcast-<version>.zip` (or `tar`), you see the `hazelcast.xml` in the `bin` directory.
+When a Hazelcast member starts, it looks for the `hazelcast.xml` file to load the configuration from. A sample `hazelcast.xml` is shown below.

```xml
-    <group>
-        <name>dev</name>
-        <password>dev-pass</password>
-    </group>
+    <cluster-name>dev</cluster-name>

    <network>
        <port auto-increment="true" port-count="100">5701</port>
@@ -270,11 +284,9 @@ When you download and unzip `hazelcast-<version>.zip` (or `tar`), you see the `h

We will go over some important configuration elements in the rest of this section.

-- `<group>`: Specifies which cluster this member belongs to. A member connects only to the other members that are in the same group as
-itself. As shown in the above configuration sample, there are `<name>` and `<password>` tags under the `<group>` element with some pre-configured values. You may give your clusters different names so that they can
-live in the same network without disturbing each other. Note that the cluster name should be the same across all members and clients that belong
- to the same cluster. The `<password>` tag is not in use since Hazelcast 3.9. It is there for backward compatibility
-purposes. You can remove or leave it as it is if you use Hazelcast 3.9 or later.
+- `<cluster-name>`: Specifies which cluster this member belongs to. A member connects only to the other members that are in the same cluster as
+itself. You may give your clusters different names so that they can live in the same network without disturbing each other.
+Note that the cluster name should be the same across all members and clients that belong to the same cluster.
- `<network>`
    - `<port>`: Specifies the port number to be used by the member when it starts. Its default value is 5701. You can specify another port number, and if
you set `auto-increment` to `true`, then Hazelcast will try the subsequent ports until it finds an available port or the `port-count` is reached.
@@ -316,18 +328,15 @@ If you run the Hazelcast IMDG members in a different server than the client, you
names as explained in the previous section. If you did, then you need to make certain changes to the network settings of your client.

-#### 1.4.2.1. Group Settings
+#### 1.4.2.1. Cluster Name Setting

-You need to provide the group name of the cluster, if it is defined on the server side, to which you want the client to connect.
+You need to provide the name of the cluster, if it is defined on the server side, to which you want the client to connect.

```python
config = hazelcast.ClientConfig()
-config.group_config.name = "group-name-of-your-cluster"
-config.group_config.password = "group password"
+config.cluster_name = "name of your cluster"
```

-> **NOTE: If you have a Hazelcast IMDG release older than 3.11, you need to provide also a group password along with the group name.**
-
#### 1.4.2.2. Network Settings

You need to provide the IP address and port of at least one member in your cluster so the client can find it.

```python
import hazelcast

config = hazelcast.ClientConfig()
-config.network_config.addresses.append("IP-address:port")
+config.network.addresses.append("IP-address:port")
```

### 1.4.3. Client System Properties

-While configuring your Python client, you can use various system properties provided by Hazelcast to tune its clients. These properties can be set programmatically through `config.set_property` method or by using an environment variable.
+While configuring your Python client, you can use various system properties provided by Hazelcast to tune its clients.
+These properties can be set programmatically through the `config.set_property` method or by using an environment variable.

The value of any property will be:

@@ -378,16 +388,14 @@ environ[ClientProperties.INVOCATION_TIMEOUT_SECONDS.name] = "2"

If you set a property both programmatically and via an environment variable, the programmatically set value will be used.

-See the [complete list](http://hazelcast.github.io/hazelcast-python-client/3.10/hazelcast.config.html#hazelcast.config.ClientProperties) of client system properties, along with their descriptions, which can be used to configure your Hazelcast Python client.
+See the [complete list](http://hazelcast.github.io/hazelcast-python-client/4.0/hazelcast.config.html#hazelcast.config.ClientProperties) of client system properties, along with their descriptions, which can be used to configure your Hazelcast Python client.

## 1.5. Basic Usage

Now that we have a working cluster and we know how to configure both our cluster and client, we can run a simple program to use a distributed map in the Python client.
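If you have not installed the client yet, note that it is distributed on [PyPI](https://pypi.org/project/hazelcast-python-client/) as `hazelcast-python-client`, so getting it for the samples below is usually a single command (see the Downloading and Installing section of this document for other options):

```
pip install hazelcast-python-client
```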
-The following example first configures the logger for the Python client. You can find more information about the logging options in the [Logging Configuration section](#792-logging-configuration). - -Then, it creates a configuration object and starts a client. +The following example first creates a configuration object and starts a client. ```python import hazelcast @@ -403,38 +411,37 @@ client = hazelcast.HazelcastClient(config) This should print logs about the cluster members such as address, port and UUID to the `stderr`. ``` -Feb 15, 2019 12:51:59 PM HazelcastClient -INFO: [3.10] [dev] [hz.client_0] A non-empty group password is configured for the Hazelcast client. Starting with Hazelcast IMDG version 3.11, clients with the same group name, but with different group passwords (that do not use authentication) will be accepted to a cluster. The group password configuration will be removed completely in a future release. -Feb 15, 2019 12:51:59 PM HazelcastClient.LifecycleService -INFO: [3.10] [dev] [hz.client_0] (20181119 - 9080a46) HazelcastClient is STARTING -Feb 15, 2019 12:51:59 PM HazelcastClient.ClusterService -INFO: [3.10] [dev] [hz.client_0] Connecting to Address(host=127.0.0.1, port=5701) -Feb 15, 2019 12:51:59 PM HazelcastClient.ConnectionManager -INFO: [3.10] [dev] [hz.client_0] Authenticated with Connection(address=('127.0.0.1', 5701), id=0) -Feb 15, 2019 12:51:59 PM HazelcastClient.ClusterService -INFO: [3.10] [dev] [hz.client_0] New member list: +Sep 03, 2020 02:33:31 PM HazelcastClient.LifecycleService +INFO: [4.0.0] [dev] [hz.client_0] (20190802 - 85a237d) HazelcastClient is STARTING +Sep 03, 2020 02:33:31 PM HazelcastClient.LifecycleService +INFO: [4.0.0] [dev] [hz.client_0] (20190802 - 85a237d) HazelcastClient is STARTED +Sep 03, 2020 02:33:31 PM HazelcastClient.ConnectionManager +INFO: [4.0.0] [dev] [hz.client_0] Trying to connect to Address(host=127.0.0.1, port=5701) +Sep 03, 2020 02:33:31 PM HazelcastClient.LifecycleService +INFO: [4.0.0] [dev] [hz.client_0] (20190802 - 85a237d) HazelcastClient is CONNECTED +Sep 03, 2020 02:33:31 PM HazelcastClient.ConnectionManager +INFO: [4.0.0] [dev] [hz.client_0] Authenticated with server Address(host=192.168.1.10, port=5701):7362c66f-ef9f-4a6a-a003-f8b33dfd292a, server version: 4.1-SNAPSHOT, local address: Address(host=127.0.0.1, port=33376) +Sep 03, 2020 02:33:31 PM HazelcastClient.ClusterService +INFO: [4.0.0] [dev] [hz.client_0] Members [1] { - Member [10.216.1.49]:5701 - 1f4bb35d-b68f-46eb-bd65-61e3f4bc9922 + Member [192.168.1.10]:5701 - 7362c66f-ef9f-4a6a-a003-f8b33dfd292a } -Feb 15, 2019 12:51:59 PM HazelcastClient.LifecycleService -INFO: [3.10] [dev] [hz.client_0] (20181119 - 9080a46) HazelcastClient is CONNECTED -Feb 15, 2019 12:51:59 PM HazelcastClient -INFO: [3.10] [dev] [hz.client_0] Client started. +Sep 03, 2020 02:33:31 PM HazelcastClient +INFO: [4.0.0] [dev] [hz.client_0] Client started. ``` Congratulations. You just started a Hazelcast Python client. **Using a Map** -Let's manipulate a distributed map on a cluster using the client. +Let's manipulate a distributed map(similar to Python's builtin `dict`) on a cluster using the client. 
```python import hazelcast -config = hazelcast.ClientConfig() -client = hazelcast.HazelcastClient(config) +client = hazelcast.HazelcastClient() personnel_map = client.get_map("personnel-map") personnel_map.put("Alice", "IT") @@ -452,37 +459,10 @@ client.shutdown() **Output** ``` -Feb 15, 2019 12:53:15 PM HazelcastClient -INFO: [3.10] [dev] [hz.client_0] A non-empty group password is configured for the Hazelcast client. Starting with Hazelcast IMDG version 3.11, clients with the same group name, but with different group passwords (that do not use authentication) will be accepted to a cluster. The group password configuration will be removed completely in a future release. -Feb 15, 2019 12:53:15 PM HazelcastClient.LifecycleService -INFO: [3.10] [dev] [hz.client_0] (20181119 - 9080a46) HazelcastClient is STARTING -Feb 15, 2019 12:53:15 PM HazelcastClient.ClusterService -INFO: [3.10] [dev] [hz.client_0] Connecting to Address(host=127.0.0.1, port=5701) -Feb 15, 2019 12:53:15 PM HazelcastClient.ConnectionManager -INFO: [3.10] [dev] [hz.client_0] Authenticated with Connection(address=('127.0.0.1', 5701), id=0) -Feb 15, 2019 12:53:15 PM HazelcastClient.ClusterService -INFO: [3.10] [dev] [hz.client_0] New member list: - -Members [1] { - Member [10.216.1.49]:5701 - 1f4bb35d-b68f-46eb-bd65-61e3f4bc9922 -} - -Feb 15, 2019 12:53:15 PM HazelcastClient.LifecycleService -INFO: [3.10] [dev] [hz.client_0] (20181119 - 9080a46) HazelcastClient is CONNECTED -Feb 15, 2019 12:53:15 PM HazelcastClient -INFO: [3.10] [dev] [hz.client_0] Client started. Added IT personnel. Printing all known personnel Alice is in IT department Clark is in IT department Bob is in IT department -Feb 15, 2019 12:53:15 PM HazelcastClient.LifecycleService -INFO: [3.10] [dev] [hz.client_0] (20181119 - 9080a46) HazelcastClient is SHUTTING_DOWN -Feb 15, 2019 12:53:15 PM HazelcastClient.AsyncoreReactor -WARNING: [3.10] [dev] [hz.client_0] Connection closed by server -Feb 15, 2019 12:53:15 PM HazelcastClient.LifecycleService -INFO: [3.10] [dev] [hz.client_0] (20181119 - 9080a46) HazelcastClient is SHUTDOWN -Feb 15, 2019 12:53:15 PM HazelcastClient -INFO: [3.10] [dev] [hz.client_0] Client shutdown. ``` You see this example puts all the IT personnel into a cluster-wide `personnel-map` and then prints all the known personnel. @@ -490,6 +470,10 @@ You see this example puts all the IT personnel into a cluster-wide `personnel-ma Now, run the following code. ```python +import hazelcast + +client = hazelcast.HazelcastClient() + personnel_map = client.get_map("personnel-map") personnel_map.put("Denise", "Sales") personnel_map.put("Erwing", "Sales") @@ -504,25 +488,6 @@ for person, department in personnel_map.entry_set().result(): **Output** ``` -Feb 15, 2019 12:54:05 PM HazelcastClient -INFO: [3.10] [dev] [hz.client_0] A non-empty group password is configured for the Hazelcast client. Starting with Hazelcast IMDG version 3.11, clients with the same group name, but with different group passwords (that do not use authentication) will be accepted to a cluster. The group password configuration will be removed completely in a future release. 
-Feb 15, 2019 12:54:05 PM HazelcastClient.LifecycleService -INFO: [3.10] [dev] [hz.client_0] (20181119 - 9080a46) HazelcastClient is STARTING -Feb 15, 2019 12:54:05 PM HazelcastClient.ClusterService -INFO: [3.10] [dev] [hz.client_0] Connecting to Address(host=127.0.0.1, port=5701) -Feb 15, 2019 12:54:05 PM HazelcastClient.ConnectionManager -INFO: [3.10] [dev] [hz.client_0] Authenticated with Connection(address=('127.0.0.1', 5701), id=0) -Feb 15, 2019 12:54:05 PM HazelcastClient.ClusterService -INFO: [3.10] [dev] [hz.client_0] New member list: - -Members [1] { - Member [10.216.1.49]:5701 - 1f4bb35d-b68f-46eb-bd65-61e3f4bc9922 -} - -Feb 15, 2019 12:54:05 PM HazelcastClient.LifecycleService -INFO: [3.10] [dev] [hz.client_0] (20181119 - 9080a46) HazelcastClient is CONNECTED -Feb 15, 2019 12:54:05 PM HazelcastClient -INFO: [3.10] [dev] [hz.client_0] Client started. Added Sales personnel. Printing all known personnel Denise is in Sales department Erwing is in Sales department @@ -530,14 +495,6 @@ Faith is in Sales department Alice is in IT department Clark is in IT department Bob is in IT department -Feb 15, 2019 12:54:05 PM HazelcastClient.LifecycleService -INFO: [3.10] [dev] [hz.client_0] (20181119 - 9080a46) HazelcastClient is SHUTTING_DOWN -Feb 15, 2019 12:54:05 PM HazelcastClient.AsyncoreReactor -WARNING: [3.10] [dev] [hz.client_0] Connection closed by server -Feb 15, 2019 12:54:05 PM HazelcastClient.LifecycleService -INFO: [3.10] [dev] [hz.client_0] (20181119 - 9080a46) HazelcastClient is SHUTDOWN -Feb 15, 2019 12:54:05 PM HazelcastClient -INFO: [3.10] [dev] [hz.client_0] Client shutdown. ``` You will see this time we add only the sales employees but we get the list of all known employees including the ones in IT. @@ -567,7 +524,7 @@ def entry_set_cb(future): print("{} is in {} department".format(person, department)) personnel_map.entry_set().add_done_callback(entry_set_cb) -time.sleep(0.1) # wait for Future to complete +time.sleep(1) # wait for Future to complete ``` Asynchronous operations are far more efficient in single threaded Python interpreter but you may want all of your method calls @@ -593,7 +550,7 @@ for person, department in personnel_map.entry_set(): See the Hazelcast Python [examples](https://github.com/hazelcast/hazelcast-python-client/tree/master/examples) for more code samples. -You can also see the [latest Hazelcast Python API Documentation](http://hazelcast.github.io/hazelcast-python-client/3.10/index.html) or [global API Documentation page](http://hazelcast.github.io/hazelcast-python-client/). +You can also see the [latest Hazelcast Python API Documentation](http://hazelcast.github.io/hazelcast-python-client/4.0/index.html) or [global API Documentation page](http://hazelcast.github.io/hazelcast-python-client/). # 2. 
Features @@ -607,13 +564,8 @@ Hazelcast Python client supports the following data structures and features: * Replicated Map * Ringbuffer * Topic -* Lock -* Semaphore -* AtomicLong * CRDT PN Counter -* AtomicReference -* IdGenerator -* CountDownLatch +* Flake Id Generator * Distributed Executor Service * Event Listeners * Sub-Listener Interfaces for Map Listener @@ -632,6 +584,11 @@ Hazelcast Python client supports the following data structures and features: * SSL Support (requires Enterprise server) * Mutual Authentication (requires Enterprise server) * Authorization +* Management Center Integration / Awareness +* Client Near Cache Stats +* Client Runtime Stats +* Client Operating Systems Stats +* Hazelcast Cloud Discovery * Smart Client * Unisocket Client * Lifecycle Service @@ -641,6 +598,8 @@ Hazelcast Python client supports the following data structures and features: * Custom Serialization * JSON Serialization * Global Serialization +* Connection Strategy +* Connection Retry # 3. Configuration Overview @@ -648,35 +607,39 @@ This chapter describes the options to configure your Python client. ## 3.1. Configuration Options -You can configure Hazelcast Python client programmatically (API). - -### 3.1.1. Programmatic Configuration +You can configure Hazelcast Python client programmatically. For programmatic configuration of the Hazelcast Python client, just instantiate a `ClientConfig` object and configure the desired aspects. An example is shown below. ```python config = hazelcast.ClientConfig() -config.network_config.addresses.append("127.0.0.1:5701") +config.network.addresses.append("127.0.0.1:5701") client = hazelcast.HazelcastClient(config) ``` -See the `ClientConfig` class documentation at [Hazelcast Python Client API Docs](http://hazelcast.github.io/hazelcast-python-client/3.10/hazelcast.config.html) for details. +See the `ClientConfig` class documentation at [Hazelcast Python Client API Docs](http://hazelcast.github.io/hazelcast-python-client/4.0/hazelcast.config.html) for details. # 4. Serialization -Serialization is the process of converting an object into a stream of bytes to store the object in the memory, a file or database, or transmit it through the network. Its main purpose is to save the state of an object in order to be able to recreate it when needed. The reverse process is called deserialization. Hazelcast offers you its own native serialization methods. You will see these methods throughout this chapter. - -Hazelcast serializes all your objects before sending them to the server. The `bool`, `int`, `long` (for Python 2), `float`, `str`, and `unicode` (for Python 2) types are serialized natively and you cannot override this behavior. The following table is the conversion of types for the Java server side. - -| Python | Java | -|---------|----------------------------------------| -| bool | Boolean | -| int | Byte, Short, Integer, Long, BigInteger | -| long | Byte, Short, Integer, Long, BigInteger | -| float | Float, Double | -| str | String | -| unicode | String | +Serialization is the process of converting an object into a stream of bytes to store the object in the memory, a file or database, +or transmit it through the network. Its main purpose is to save the state of an object in order to be able to recreate it when needed. +The reverse process is called deserialization. Hazelcast offers you its own native serialization methods. +You will see these methods throughout this chapter. + +Hazelcast serializes all your objects before sending them to the server. 
The `bool`, `int`, `long` (for Python 2), `float`, `str`, `unicode` (for Python 2), `bytearray` and `bytes` types are serialized natively and you cannot override this behavior. +The following table is the conversion of types for the Java server side. + +| Python | Java | +|-----------|----------------------------------------| +| bool | Boolean | +| int | Byte, Short, Integer, Long, BigInteger | +| long | Byte, Short, Integer, Long, BigInteger | +| float | Float, Double | +| str | String | +| unicode | String | +| bytearray | byte[] | +| bytes | byte[] | > Note: A `int` or `long` type is serialized as `Integer` by default. You can configure this behavior using the `SerializationConfig.default_integer_type`. @@ -700,7 +663,8 @@ When Hazelcast Python client serializes an object: 7. If the above check fails, then the Python client uses `cPickle` (for Python 2) or `pickle` (for Python 3) by default. -However, `cPickle/pickle Serialization` is not the best way of serialization in terms of performance and interoperability between the clients in different languages. If you want the serialization to work faster or you use the clients in different languages, Hazelcast offers its own native serialization types, such as [`IdentifiedDataSerializable` Serialization](#41-identifieddataserializable-serialization) and [`Portable` Serialization](#42-portable-serialization). +However, `cPickle/pickle Serialization` is not the best way of serialization in terms of performance and interoperability between the clients in different languages. +If you want the serialization to work faster or you use the clients in different languages, Hazelcast offers its own native serialization types, such as [`IdentifiedDataSerializable` Serialization](#41-identifieddataserializable-serialization) and [`Portable` Serialization](#42-portable-serialization). On top of all, if you want to use your own serialization type, you can use a [Custom Serialization](#43-custom-serialization). @@ -726,21 +690,27 @@ class Address(IdentifiedDataSerializable): def get_factory_id(self): return 1 - def write_data(self, object_data_output): - object_data_output.write_utf(self.street) - object_data_output.write_int(self.zip_code) - object_data_output.write_utf(self.city) - object_data_output.write_utf(self.state) + def write_data(self, output): + output.write_utf(self.street) + output.write_int(self.zip_code) + output.write_utf(self.city) + output.write_utf(self.state) - def read_data(self, object_data_input): - self.street = object_data_input.read_utf() - self.zip_code = object_data_input.read_int() - self.city = object_data_input.read_utf() - self.state = object_data_input.read_utf() + def read_data(self, input): + self.street = input.read_utf() + self.zip_code = input.read_int() + self.city = input.read_utf() + self.state = input.read_utf() ``` -> Note: For IdentifiedDataSerializable to work in Python client, the class that inherits it should have default valued parameters in its `__init__` method so that an instance of that class can be created without passing any arguments to it. -The IdentifiedDataSerializable uses `get_class_id()` and `get_factory_id()` methods to reconstitute the object. To complete the implementation, an `IdentifiedDataSerializable factory` should also be created and registered into the `SerializationConfig` which can be accessed from `Config.serialization_config`. A factory is a dictionary that stores class ID and the `IdentifiedDataSerializable` class type pairs as the key and value. 
The factory's responsibility is to store the right `IdentifiedDataSerializable` class type for the given class ID. +> **NOTE: Refer to `ObjectDataInput`/`ObjectDataOutput` classes in the `hazelcast.serialization.api` package to understand methods available on the `input`/`output` objects.** + +> **NOTE: For IdentifiedDataSerializable to work in Python client, the class that inherits it should have default valued parameters in its `__init__` method so that an instance of that class can be created without passing any arguments to it.** + +The IdentifiedDataSerializable uses `get_class_id()` and `get_factory_id()` methods to reconstitute the object. +To complete the implementation, an `IdentifiedDataSerializable factory` should also be created and registered into the `SerializationConfig` which can be accessed from `config.serialization`. +A factory is a dictionary that stores class ID and the `IdentifiedDataSerializable` class type pairs as the key and value. +The factory's responsibility is to store the right `IdentifiedDataSerializable` class type for the given class ID. A sample `IdentifiedDataSerializable factory` could be created as follows: @@ -753,22 +723,25 @@ Note that the keys of the dictionary should be the same as the class IDs of thei The last step is to register the `IdentifiedDataSerializable factory` to the `SerializationConfig`. ```python -config.serialization_config.data_serializable_factories[1] = factory +config.serialization.data_serializable_factories[1] = factory ``` Note that the ID that is passed to the `SerializationConfig` is same as the factory ID that the `Address` class returns. ## 4.2. Portable Serialization -As an alternative to the existing serialization methods, Hazelcast offers portable serialization. To use it, you need to extend the `Portable` class. Portable serialization has the following advantages: +As an alternative to the existing serialization methods, Hazelcast offers portable serialization. +To use it, you need to extend the `Portable` class. Portable serialization has the following advantages: - Supporting multiversion of the same object type. - Fetching individual fields without having to rely on the reflection. - Querying and indexing support without deserialization and/or reflection. -In order to support these features, a serialized `Portable` object contains meta information like the version and concrete location of the each field in the binary data. This way Hazelcast is able to navigate in the binary data and deserialize only the required field without actually deserializing the whole object which improves the query performance. +In order to support these features, a serialized `Portable` object contains meta information like the version and concrete location of the each field in the binary data. +This way Hazelcast is able to navigate in the binary data and deserialize only the required field without actually deserializing the whole object which improves the query performance. -With multiversion support, you can have two members each having different versions of the same object; Hazelcast stores both meta information and uses the correct one to serialize and deserialize portable objects depending on the member. This is very helpful when you are doing a rolling upgrade without shutting down the cluster. +With multiversion support, you can have two members each having different versions of the same object; Hazelcast stores both meta information and uses the correct one to serialize and deserialize portable objects depending on the member. 
+This is very helpful when you are doing a rolling upgrade without shutting down the cluster.

Also note that portable serialization is totally language independent and is used as the binary protocol between Hazelcast server and clients.

A sample portable implementation of a `Foo` class looks like the following:

```python
from hazelcast.serialization.api import Portable

-class Foo(Portable):
-
-    CLASS_ID = 1
-    FACTORY_ID = 1
-
+class Foo(Portable):
    def __init__(self, foo=None):
        self.foo = foo

    def get_class_id(self):
-        return CLASS_ID
+        return 1

    def get_factory_id(self):
-        return FACTORY_ID
+        return 1

    def write_portable(self, writer):
        writer.write_utf("foo", self.foo)

    def read_portable(self, reader):
        self.foo = reader.read_utf("foo")
```

+> **NOTE: Refer to `PortableReader`/`PortableWriter` classes in the `hazelcast.serialization.api` package to understand methods available on the `reader`/`writer` objects.**
+
> **NOTE: For Portable to work in Python client, the class that inherits it should have default valued parameters in its `__init__` method so that an instance of that class can be created without passing any arguments to it.**

-Similar to `IdentifiedDataSerializable`, a `Portable` class must provide the `get_class_id()` and `get_factory_id()` methods. The factory dictionary will be used to create the `Portable` object given the class ID.
+Similar to `IdentifiedDataSerializable`, a `Portable` class must provide the `get_class_id()` and `get_factory_id()` methods.
+The factory dictionary will be used to create the `Portable` object given the class ID.

A sample `Portable factory` could be created as follows:

@@ -813,31 +785,26 @@ Note that the keys of the dictionary should be the same as the class IDs of thei

The last step is to register the `Portable factory` to the `SerializationConfig`.

```python
-config.serialization_config.data_serializable_factories[1] = factory
+config.serialization.portable_factories[1] = factory
```

Note that the ID that is passed to the `SerializationConfig` is the same as the factory ID that the `Foo` class returns.

### 4.2.1. Versioning for Portable Serialization

-More than one version of the same class may need to be serialized and deserialized. For example, a client may have an older version of a class and the member to which it is connected may have a newer version of the same class.
+More than one version of the same class may need to be serialized and deserialized.
+For example, a client may have an older version of a class and the member to which it is connected may have a newer version of the same class.

Portable serialization supports versioning. It is a global versioning, meaning that all portable classes that are serialized through a member get the globally configured portable version.

-You can declare the version in the `hazelcast.xml` configuration file using the `portable-version` element, as shown below.
+You can declare the version using the `config.serialization.portable_version` option, as shown below.

-```xml
-<hazelcast>
-    ...
-    <serialization>
-        <portable-version>1</portable-version>
-    </serialization>
-    ...
-</hazelcast>
+```python
+config.serialization.portable_version = 0
```

-If you update the class by changing the type of one of the fields or by adding a new field, it is a good idea to upgrade the version of the class, rather than sticking to the global version specified in the `hazelcast.xml` file.
-In the Python client, you can achieve this by simply adding the `get_class_version()` method to your class’s implementation of `Portable`, and setting the `CLASS_VERSION` to be different than the default global version. +If you update the class by changing the type of one of the fields or by adding a new field, it is a good idea to upgrade the version of the class, rather than sticking to the global version specified in the configuration. +In the Python client, you can achieve this by simply adding the `get_class_version()` method to your class’s implementation of `Portable`, and returning class version different than the default global version. > **NOTE: If you do not use the `get_class_version()` method in your `Portable` implementation, it will have the global version, by default.** @@ -847,23 +814,18 @@ Here is an example implementation of creating a version 2 for the above Foo clas from hazelcast.serialization.api import Portable class Foo(Portable): - - CLASS_ID = 1 - FACTORY_ID = 1 - CLASS_VERSION = 2 - def __init__(self, foo=None, foo2=None): self.foo = foo self.foo2 = foo2 def get_class_id(self): - return CLASS_ID + return 1 def get_factory_id(self): - return FACTORY_ID + return 1 def get_class_version(self): - return CLASS_VERSION + return 2 def write_portable(self, writer): writer.write_utf("foo", self.foo) @@ -885,11 +847,13 @@ You should consider the following when you perform versioning: Assume that a new client joins to the cluster with a class that has been modified and class's version has been upgraded due to this modification. -If you modified the class by adding a new field, the new client’s put operations include that new field. If this new client tries to get an object that was put from the older clients, it gets null for the newly added field. +If you modified the class by adding a new field, the new client’s put operations include that new field. +If this new client tries to get an object that was put from the older clients, it gets null for the newly added field. If you modified the class by removing a field, the old clients get null for the objects that are put by the new client. -If you modified the class by changing the type of a field to an incompatible type (such as from `int` to `String`), a `TypeError` (wrapped as `HazelcastSerializationError`) is generated as the client tries accessing an object with the older version of the class. The same applies if a client with the old version tries to access a new version object. +If you modified the class by changing the type of a field to an incompatible type (such as from `int` to `String`), a `TypeError` (wrapped as `HazelcastSerializationError`) is generated as the client tries accessing an object with the older version of the class. +The same applies if a client with the old version tries to access a new version object. If you did not modify a class at all, it works as usual. @@ -917,40 +881,39 @@ class MusicianSerializer(StreamSerializer): def destroy(self): pass - def write(self, out, obj): - out.write_int(len(obj.name)) - for s in obj.name: - out.write_char(s) + def write(self, output, obj): + output.write_utf(obj.name) - def read(self, inp): - length = inp.read_int() - name = "" - for i in range(length): - name += chr(inp.read_int()) + def read(self, input): + name = input.read_utf() return Musician(name) ``` -Note that the serializer `id` must be unique as Hazelcast will use it to lookup the `MusicianSerializer` while it deserializes the object. 
Now the last required step is to register the `MusicianSerializer` to the configuration. +Note that the serializer `id` must be unique as Hazelcast will use it to lookup the `MusicianSerializer` while it deserializes the object. +Now the last required step is to register the `MusicianSerializer` to the configuration. ```python -config.serialization_config.set_custom_serializer(Musician, MusicianSerializer) +config.serialization.set_custom_serializer(Musician, MusicianSerializer) ``` From now on, Hazelcast will use `MusicianSerializer` to serialize `Musician` objects. ## 4.4. JSON Serialization -You can use the JSON formatted strings as objects in Hazelcast cluster. Starting with Hazelcast IMDG 3.12, the JSON serialization is one of the formerly supported serialization methods. Creating JSON objects in the cluster does not require any server side coding and hence you can just send a JSON formatted string object to the cluster and query these objects by fields. +You can use the JSON formatted strings as objects in Hazelcast cluster. +Creating JSON objects in the cluster does not require any server side coding and hence you can just send a JSON formatted string object to the cluster and query these objects by fields. -In order to use JSON serialization, you should use the `HazelcastJsonValue` object for the key or value. Here is an example IMap usage: +In order to use JSON serialization, you should use the `HazelcastJsonValue` object for the key or value. `HazelcastJsonValue` is a simple wrapper and identifier for the JSON formatted strings. You can get the JSON string from the `HazelcastJsonValue` object using the `to_string()` method. You can construct `HazelcastJsonValue` from strings or JSON serializable Python objects. If a Python object is provided to the constructor, `HazelcastJsonValue` tries to convert it to a JSON string. If an error occurs during the conversion, it is raised directly. If a string argument is provided to the constructor, it is used as it is. -No JSON parsing is performed but it is your responsibility to provide correctly formatted JSON strings. The client will not validate the string, and it will send it to the cluster as it is. If you submit incorrectly formatted JSON strings and, later, if you query those objects, it is highly possible that you will get formatting errors since the server will fail to deserialize or find the query fields. +No JSON parsing is performed but it is your responsibility to provide correctly formatted JSON strings. +The client will not validate the string, and it will send it to the cluster as it is. +If you submit incorrectly formatted JSON strings and, later, if you query those objects, it is highly possible that you will get formatting errors since the server will fail to deserialize or find the query fields. Here is an example of how you can construct a `HazelcastJsonValue` and put to the map: @@ -973,9 +936,11 @@ print("Entry is {}".format(result[0].to_string())) ## 4.5. Global Serialization -The global serializer is identical to custom serializers from the implementation perspective. The global serializer is registered as a fallback serializer to handle all other objects if a serializer cannot be located for them. +The global serializer is identical to custom serializers from the implementation perspective. +The global serializer is registered as a fallback serializer to handle all other objects if a serializer cannot be located for them. 
-By default, `cPickle/pickle` serialization is used if the class is not `IdentifiedDataSerializable` or `Portable` or there is no custom serializer for it. When you configure a global serializer, it is used instead of `cPickle/pickle` serialization.
+By default, `cPickle/pickle` serialization is used if the class is not `IdentifiedDataSerializable` or `Portable` or there is no custom serializer for it.
+When you configure a global serializer, it is used instead of `cPickle/pickle` serialization.

**Use Cases:**

@@ -996,33 +961,31 @@ class GlobalSerializer(StreamSerializer):

    def destroy(self):
        pass

-    def write(self, out, obj):
-        out.write_utf(some_third_party_serializer.serialize(obj))
+    def write(self, output, obj):
+        output.write_utf(some_third_party_serializer.serialize(obj))

-    def read(self, inp):
-        return some_third_party_serializer.deserialize(inp.read_utf())
+    def read(self, input):
+        return some_third_party_serializer.deserialize(input.read_utf())
```

You should register the global serializer in the configuration.

```python
-config.serialization_config.global_serializer = GlobalSerializer
+config.serialization.global_serializer = GlobalSerializer
```

# 5. Setting Up Client Network

-All network related configuration of Hazelcast Python client is performed via the `ClientNetworkConfig` class when using programmatic configuration. Let's first give the examples for this approach. Then we will look at its sub-elements and attributes.
+Main parts of the network-related configuration for the Hazelcast Python client may be tuned via the `ClientNetworkConfig`.

-Here is an example of configuring the network for Python Client programmatically.
+Here is an example of configuring the network for the Python client.

```python
-config.network_config.addresses.extend(["10.1.1.21", "10.1.1.22:5703"])
-config.network_config.smart_routing = True
-config.network_config.redo_operation = True
-config.network_config.connection_timeout = 6.0
-config.network_config.connection_attempt_period = 5.0
-config.network_config.connection_attempt_limit = 5
+config.network.addresses = ["10.1.1.21", "10.1.1.22:5703"]
+config.network.smart_routing = True
+config.network.redo_operation = True
+config.network.connection_timeout = 6.0
```

## 5.1. Providing Member Addresses

Address list is the initial list of cluster addresses to which the client will connect. The client uses this
list to find an alive member. Although it may be enough to give only one address of a member in the cluster
(since all members communicate with each other), it is recommended that you give the addresses for all the members.

```python
-config.network_config.addresses.append("10.1.1.21") # single value
-config.network_config.addresses.extend(["10.1.1.23", "10.1.1.22:5703"]) # multiple values
+config.network.addresses = ["10.1.1.23", "10.1.1.22:5703"]
```

-If the port part is omitted, then 5701, 5702 and 5703 will be tried in a random order.
+If the port part is omitted, then `5701`, `5702` and `5703` will be tried in a random order.

-You can specify multiple addresses with or without the port information as seen above. The provided list is shuffled and tried in a random order. Its default value is `localhost`.
+You can specify multiple addresses with or without the port information as seen above.
+The provided list is shuffled and tried in a random order. Its default value is `localhost`.

## 5.2. Setting Smart Routing

Smart routing defines whether the client mode is smart or unisocket. See the [Python Client Operation Modes section](#72-python-client-operation-modes)
for the description of smart and unisocket modes.
-
-The following is an example configuration.
```python -config.network_config.smart_routing = True +config.network.smart_routing = True ``` Its default value is `True` (smart client mode). ## 5.3. Enabling Redo Operation -It enables/disables redo-able operations. While sending the requests to the related members, the operations can fail due to various reasons. Read-only operations are retried by default. If you want to enable retry for the other operations, you can set the `redo_operation` to `True`. +It enables/disables redo-able operations. While sending the requests to the related members, the operations can fail due to various reasons. +Read-only operations are retried by default. If you want to enable retry for the other operations, you can set the `redo_operation` to `True`. ```python -config.network_config.redo_operation = True +config.network.redo_operation = True ``` Its default value is `False` (disabled). @@ -1066,249 +1028,115 @@ Its default value is `False` (disabled). ## 5.4. Setting Connection Timeout Connection timeout is the timeout value in seconds for the members to accept the client connection requests. -If the member does not respond within the timeout, the client will retry to connect as many as `ClientNetworkConfig.connection_attempt_limit` times. -The following is an example configuration. - ```python -config.network_config.connection_timeout = 6.0 +config.network.connection_timeout = 6.0 ``` Its default value is `5.0` seconds. -## 5.5. Setting Connection Attempt Limit - -While the client is trying to connect initially to one of the members in the `ClientNetworkConfig.addresses`, that member might not be available at that moment. Instead of giving up, throwing an error and stopping the client, the client will retry as many as `ClientNetworkConfig.connection_attempt_limit` times. This is also the case when the previously established connection between the client and that member goes down. - -The following is an example configuration. - -```python -config.network_config.connection_attempt_limit = 5 -``` - -Its default value is `2`. - -## 5.6. Setting Connection Attempt Period - -Connection attempt period is the duration in seconds between the connection attempts defined by `ClientNetworkConfig.connection_attempt_limit`. - -The following is an example configuration. - -```python -config.network_config.connection_attempt_period = 5.0 -``` - -Its default value is `3.0` seconds. - -## 5.7. Enabling Client TLS/SSL +## 5.5. Enabling Client TLS/SSL You can use TLS/SSL to secure the connection between the clients and members. If you want to enable TLS/SSL -for the client-cluster connection, you should set the SSL configuration. Please see the [TLS/SSL section](#61-tlsssl). +for the client-cluster connection, you should set the SSL configuration. Please see the [TLS/SSL section](#81-tlsssl). -As explained in the [TLS/SSL section](#61-tlsssl), Hazelcast members have key stores used to identify themselves (to other members) and Hazelcast Python clients have certificate authorities used to define which members they can trust. Hazelcast has the mutual authentication feature which allows the Python clients also to have their private keys and public certificates, and members to have their certificate authorities so that the members can know which clients they can trust. See the [Mutual Authentication section](#613-mutual-authentication). 
+As explained in the [TLS/SSL section](#81-tlsssl), Hazelcast members have key stores used to identify themselves (to other members) and Hazelcast Python clients have certificate authorities used to define which members they can trust. +Hazelcast has the mutual authentication feature which allows the Python clients also to have their private keys and public certificates, and members to have their certificate authorities so that the members can know which clients they can trust. +See the [Mutual Authentication section](#813-mutual-authentication). -## 5.8. Enabling Hazelcast Cloud Discovery +## 5.6. Enabling Hazelcast Cloud Discovery -The purpose of Hazelcast Cloud Discovery is to provide the clients the means to use IP addresses provided by `hazelcast orchestrator`. To enable Hazelcast Cloud Discovery, specify a token for the `discovery_token` field and set the `enabled` field to `True`. - +Hazelcast Python client can discover and connect to Hazelcast clusters running on [Hazelcast Cloud](https://cloud.hazelcast.com/). +For this, provide authentication information as `cluster_name`, enable `cloud_config` and set your `discovery_token` as shown below. The following is the example configuration. ```python -config.group_config.name = "hazel" -config.group_config.password = "cast" - -config.network_config.ssl_config.enabled = True - -config.network_config.cloud_config.enabled = True -config.network_config.cloud_config.discovery_token = "dc9220bc5d9" -``` - -To be able to connect to the provided IP addresses, you should use secure TLS/SSL connection between the client and members. Therefore, you should enable the SSL configuration as described in the [TLS/SSL for Hazelcast Python Client section](#612-tlsssl-for-hazelcast-python-clients). - -# 6. Securing Client Connection - -This chapter describes the security features of Hazelcast Python client. These include using TLS/SSL for connections between members and between clients and members, and mutual authentication. These security features require **Hazelcast IMDG Enterprise** edition. - -### 6.1. TLS/SSL - -One of the offers of Hazelcast is the TLS/SSL protocol which you can use to establish an encrypted communication across your cluster with key stores and trust stores. - -* A Java `keyStore` is a file that includes a private key and a public certificate. The equivalent of a key store is the combination of `keyfile` and `certfile` at the Python client side. - -* A Java `trustStore` is a file that includes a list of certificates trusted by your application which is named certificate authority. The equivalent of a trust store is a `cafile` at the Python client side. - -You should set `keyStore` and `trustStore` before starting the members. See the next section on how to set `keyStore` and `trustStore` on the server side. - -#### 6.1.1. TLS/SSL for Hazelcast Members - -Hazelcast allows you to encrypt socket level communication between Hazelcast members and between Hazelcast clients and members, for end to end encryption. To use it, see the [TLS/SSL for Hazelcast Members section](http://docs.hazelcast.org/docs/latest/manual/html-single/index.html#tls-ssl-for-hazelcast-members). - -#### 6.1.2. TLS/SSL for Hazelcast Python Clients - -TLS/SSL for the Hazelcast Python client can be configured using the `SSLConfig` class. 
Let's first give an example of a sample configuration and then go over the configuration options one by one:
-
-```python
-import hazelcast
-from hazelcast.config import PROTOCOL
+config.cluster_name = "hz-cluster"

-config = hazelcast.ClientConfig()
-config.network_config.ssl_config.enabled = True
-config.network_config.ssl_config.cafile = "/home/hazelcast/cafile.pem"
-config.network_config.ssl_config.certfile = "/home/hazelcast/certfile.pem"
-config.network_config.ssl_config.keyfile = "/home/hazelcast/keyfile.pem"
-config.network_config.ssl_config.password = "hazelcast"
-config.network_config.ssl_config.protocol = PROTOCOL.TLSv1_3
-config.network_config.ssl_config.ciphers = "DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA"
+config.network.cloud.enabled = True
+config.network.cloud.discovery_token = "EXAMPLE_TOKEN"
```

-##### Enabling TLS/SSL
+If you have enabled encryption for your cluster, you should also enable the TLS/SSL configuration for the client to secure communication between your
+client and cluster members, as described in the [TLS/SSL for Hazelcast Python Client section](#812-tlsssl-for-hazelcast-python-clients).

-TLS/SSL for the Hazelcast Python client can be enabled/disabled using the `enabled` option. When this option is set to `True`, TLS/SSL will be configured with respect to the other `SSLConfig` options.
-Setting this option to `False` will result in discarding the other `SSLConfig` options.
+# 6. Client Connection Strategy

-The following is an example configuration:
+Hazelcast Python client can be configured to connect to a cluster asynchronously, both during the initial client start and when
+reconnecting after a cluster disconnect. Both of these options are configured via `ConnectionStrategyConfig`.

-```python
-config.network_config.ssl_config.enabled = True
-```
+You can configure the client's starting mode as async or sync using the configuration element `async_start`.
+When it is set to `True` (async), the behavior of the `hazelcast.HazelcastClient()` call changes.
+It resolves a client instance without waiting to establish a cluster connection.
+In this case, the client immediately rejects any network-dependent operation with `ClientOfflineError` until it connects to the cluster.
+If it is `False`, the call is not resolved and the client is not created until a connection with the cluster is established.
+Its default value is `False` (sync).

-Default value is `False` (disabled).
+You can also configure how the client reconnects to the cluster after a disconnection. This is configured using the
+configuration element `reconnect_mode`; it has three options:

-##### Setting CA File
+* `OFF`: The client does not try to reconnect to the cluster and triggers the shutdown process.
+* `ON`: The client opens a connection to the cluster in a blocking manner, without resolving any of the waiting invocations.
+* `ASYNC`: The client opens a connection to the cluster in a non-blocking manner, resolving all the waiting invocations with `ClientOfflineError`.

-Certificates of the Hazelcast members can be validated against `cafile`. This option should point to the absolute path of the concatenated CA certificates in PEM format.
-When SSL is enabled and `cafile` is not set, a set of default CA certificates from default locations will be used.
+Its default value is `ON`.

-The following is an example configuration:
+The example configuration below shows how to configure the Python client's starting and reconnecting modes.
```python
-config.network_config.ssl_config.cafile = "/home/hazelcast/cafile.pem"
+config.connection_strategy.async_start = False
+config.connection_strategy.reconnect_mode = RECONNECT_MODE.ON
```

-##### Setting Client Certificate
-
-When mutual authentication is enabled on the member side, clients or other members should also provide a certificate file that identifies themselves.
-Then, Hazelcast members can use these certificates to validate the identity of their peers.
-
-Client certificate can be set using the `certfile`. This option should point to the absolute path of the client certificate in PEM format.
+## 6.1. Configuring Client Connection Retry

-The following is an example configuration:
+When the client is disconnected from the cluster, it searches for new connections to reconnect.
+You can configure the frequency of the reconnection attempts and the client shutdown behavior using the `ConnectionRetryConfig`.

```python
-config.network_config.ssl_config.certfile = "/home/hazelcast/certfile.pem"
+retry_config = config.connection_strategy.connection_retry
+retry_config.initial_backoff = 1
+retry_config.max_backoff = 60
+retry_config.multiplier = 2
+retry_config.cluster_connect_timeout = 50
+retry_config.jitter = 0.2
```

-##### Setting Private Key
+The following are the descriptions of the configuration elements:

-Private key of the `certfile` can be set using the `keyfile`. This option should point to the absolute path of the private key file for the client certificate in the PEM format.
+* `initial_backoff`: Specifies how long to wait (backoff), in seconds, after the first failure before retrying. Its default value is `1` s. It must be non-negative.
+* `max_backoff`: Specifies the upper limit for the backoff in seconds. Its default value is `30` s. It must be non-negative.
+* `multiplier`: Factor by which to multiply the backoff after a failed retry. Its default value is `1`. It must be greater than or equal to `1`.
+* `cluster_connect_timeout`: Timeout value in seconds after which the client gives up connecting to the current cluster. Its default value is `20` s.
+* `jitter`: Specifies by how much to randomize the backoffs. Its default value is `0`. It must be in the range `0` to `1`.

-If this option is not set, private key will be taken from `certfile`. In this case, `certfile` should be in the following format.
+The retry logic can be summarized with the following pseudo-code:

+```text
+begin_time = get_current_time()
+current_backoff = INITIAL_BACKOFF
+while (try_connect(connection_timeout) != SUCCESS) {
+    if (get_current_time() - begin_time >= CLUSTER_CONNECT_TIMEOUT) {
+        // Give up connecting to the current cluster and switch to another one, if it exists.
+    }
+    sleep(current_backoff + uniform_random(-JITTER * current_backoff, JITTER * current_backoff))
+    current_backoff = min(current_backoff * MULTIPLIER, MAX_BACKOFF)
+}
+```
-```
------BEGIN RSA PRIVATE KEY-----
-... (private key in base64 encoding) ...
------END RSA PRIVATE KEY-----
------BEGIN CERTIFICATE-----
-... (certificate in base64 PEM encoding) ...
------END CERTIFICATE-----
-```
-
-The following is an example configuration:
-
-```python
-config.network_config.ssl_config.keyfile = "/home/hazelcast/keyfile.pem"
-```
-
-##### Setting Password of the Private Key
-
-If the private key is encrypted using a password, `password` will be used to decrypt it. The `password` may be a function to call to get the password. In that case, it will be called with no arguments, and it should return a string, bytes or bytearray.
If the return value is a string it will be encoded as UTF-8 before using it to decrypt the key. - -Alternatively a string, bytes or bytearray value may be supplied directly as the password. - -The following is an example configuration: - -```python -config.network_config.ssl_config.password = "hazelcast" -``` - -##### Setting the Protocol - -`protocol` can be used to select the protocol that will be used in the TLS/SSL communication. Hazelcast Python client offers the following protocols: - -* **SSLv2** : SSL 2.0 Protocol. *RFC 6176 prohibits the usage of SSL 2.0.* -* **SSLv3** : SSL 3.0 Protocol. *RFC 7568 prohibits the usage of SSL 3.0.* -* **SSL** : Alias for SSL 3.0 -* **TLSv1** : TLS 1.0 Protocol described in RFC 2246 -* **TLSv1_1** : TLS 1.1 Protocol described in RFC 4346 -* **TLSv1_2** : TLS 1.2 Protocol described in RFC 5246 -* **TLSv1_3** : TLS 1.3 Protocol described in RFC 8446 -* **TLS** : Alias for TLS 1.2 - -> Note that TLSv1+ requires at least Python 2.7.9 or Python 3.4 built with OpenSSL 1.0.1+, and TLSv1_3 requires at least Python 2.7.15 or Python 3.7 built with OpenSSL 1.1.1+. - -These protocol versions can be selected using the `hazelcast.config.PROTOCOL` as follows: - -```python -from hazelcast.config import PROTOCOL - -config.network_config.ssl_config.protocol = PROTOCOL.TLSv1_3 -``` - -> Note that the Hazelcast Python client and the Hazelcast members should have the same protocol version in order for TLS/SSL to work. In case of the protocol mismatch, connection attempts will be refused. - -Default value is `PROTOCOL.TLS` which is an alias for `PROTOCOL.TLSv1_2`. - -##### Setting Cipher Suites - -Cipher suites that will be used in the TLS/SSL communication can be set using the `ciphers` option. Cipher suites should be in the -OpenSSL cipher list format. More than one cipher suite can be set by separating them with a colon. - -TLS/SSL implementation will honor the cipher suite order. So, Hazelcast Python client will offer the ciphers to the Hazelcast members with the given order. - -Note that, when this option is not set, all the available ciphers will be offered to the Hazelcast members with their default order. - -The following is an example configuration: - -```python -config.network_config.ssl_config.ciphers = "DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA" -``` - -#### 6.1.3. Mutual Authentication - -As explained above, Hazelcast members have key stores used to identify themselves (to other members) and Hazelcast clients have trust stores used to define which members they can trust. - -Using mutual authentication, the clients also have their key stores and members have their trust stores so that the members can know which clients they can trust. - -To enable mutual authentication, firstly, you need to set the following property on the server side in the `hazelcast.xml` file: - -```xml - - - - REQUIRED - - - -``` - -You can see the details of setting mutual authentication on the server side in the [Mutual Authentication section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#mutual-authentication) of the Hazelcast IMDG Reference Manual. - -On the client side, you have to provide `SSLConfig.cafile`, `SSLConfig.certfile` and `SSLConfig.keyfile` on top of the other TLS/SSL configurations. See the [TLS/SSL for Hazelcast Python Clients](#612-tlsssl-for-hazelcast-python-clients) for the details of these options. 
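To make the backoff arithmetic above concrete, here is a rough Python rendering of the same retry loop. It is illustrative only: `try_connect` is a placeholder for the client's internal connection attempt, not a public API, and the default values mirror the `ConnectionRetryConfig` elements described in the [Configuring Client Connection Retry section](#61-configuring-client-connection-retry).

```python
import random
import time


def connect_with_retry(try_connect, connection_timeout, initial_backoff=1.0,
                       max_backoff=30.0, multiplier=1.0,
                       cluster_connect_timeout=20.0, jitter=0.0):
    # Illustrative sketch of the pseudo-code above; try_connect is a
    # placeholder callable that attempts a single connection and returns
    # True on success.
    begin_time = time.time()
    current_backoff = initial_backoff
    while not try_connect(connection_timeout):
        if time.time() - begin_time >= cluster_connect_timeout:
            # Give up connecting to the current cluster.
            return False
        # Randomize the sleep by +/- jitter, then grow the backoff.
        time.sleep(current_backoff + random.uniform(-jitter * current_backoff,
                                                    jitter * current_backoff))
        current_backoff = min(current_backoff * multiplier, max_backoff)
    return True
```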
+Note that, `try_connect` above tries to connect to any member that the client knows, and for each connection we have a connection timeout; see the [Setting Connection Timeout](#54-setting-connection-timeout) section. # 7. Using Python Client with Hazelcast IMDG This chapter provides information on how you can use Hazelcast IMDG's data structures in the Python client, after giving some basic information including an overview to the client API, operation modes of the client and how it handles the failures. - ## 7.1. Python Client API Overview Hazelcast Python client is designed to be fully asynchronous. See the [Basic Usage section](#15-basic-usage) to learn more about asynchronous nature of the Python Client. If you are ready to go, let's start to use Hazelcast Python client. -The first step is configuration. You can configure the Python client programmatically. +The first step is configuration. See the [Configuration Options section](#31-configuration-options) for details. The following is an example on how to create a `ClientConfig` object and configure it programmatically: @@ -1316,8 +1144,8 @@ The following is an example on how to create a `ClientConfig` object and configu import hazelcast config = hazelcast.ClientConfig() -config.group_config.name = "dev" -config.network_config.addresses.append("10.90.0.1") +config.cluster_name = "dev" +config.network.addresses = ["10.90.0.1"] ``` The second step is initializing the `HazelcastClient` to be connected to the cluster: @@ -1326,7 +1154,7 @@ The second step is initializing the `HazelcastClient` to be connected to the clu client = hazelcast.HazelcastClient(config) ``` -**This client object is your gateway to access all the Hazelcast distributed objects.** +This client object is your gateway to access all the Hazelcast distributed objects. Let's create a map and populate it with some data, as shown below. @@ -1337,7 +1165,8 @@ customer_map.put("2", "Richard Miles") customer_map.put("3", "Judy Doe") ``` -As the final step, if you are done with your client, you can shut it down as shown below. This will release all the used resources and close connections to the cluster. +As the final step, if you are done with your client, you can shut it down as shown below. +This will release all the used resources and close connections to the cluster. ```python client.shutdown() @@ -1346,16 +1175,20 @@ client.shutdown() ## 7.2. Python Client Operation Modes The client has two operation modes because of the distributed nature of the data and cluster: smart and unisocket. +Refer to the [Setting Smart Routing](#52-setting-smart-routing) section to see how to configure the client for different operation modes. ### 7.2.1. Smart Client -In the smart mode, the clients connect to each cluster member. Since each data partition uses the well known and consistent hashing algorithm, each client can send an operation to the relevant cluster member, which increases the overall throughput and efficiency. Smart mode is the default mode. +In the smart mode, the clients connect to each cluster member. Since each data partition uses the well known and consistent hashing algorithm, each client can send an operation to the relevant cluster member, which increases the overall throughput and efficiency. +Smart mode is the default mode. ### 7.2.2. Unisocket Client -For some cases, the clients can be required to connect to a single member instead of each member in the cluster. Firewalls, security or some custom networking issues can be the reason for these cases. 
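If you do need the unisocket mode described below, you can switch to it by disabling smart routing with the option shown in the [Setting Smart Routing section](#52-setting-smart-routing), for example:

```python
# Connect through a single member only (unisocket mode)
config.network.smart_routing = False
```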
+For some cases, the clients can be required to connect to a single member instead of each member in the cluster. +Firewalls, security or some custom networking issues can be the reason for these cases. -In the unisocket client mode, the client will only connect to one of the configured addresses. This single member will behave as a gateway to the other members. For any operation requested from the client, it will redirect the request to the relevant member and return the response back to the client returned from this member. +In the unisocket client mode, the client will only connect to one of the configured addresses. +This single member will behave as a gateway to the other members. For any operation requested from the client, it will redirect the request to the relevant member and return the response back to the client returned from this member. ## 7.3. Handling Failures @@ -1363,25 +1196,32 @@ There are two main failure cases you should be aware of. Below sections explain ### 7.3.1. Handling Client Connection Failure -While the client is trying to connect initially to one of the members in the `ClientNetworkConfig.addresses`, all the members might not be available. Instead of giving up, throwing an error and stopping the client, the client will retry as many as `connection_attempt_limit` times. - -You can configure `connection_attempt_limit` for the number of times you want the client to retry connecting. See the [Setting Connection Attempt Limit section](#55-setting-connection-attempt-limit). +While the client is trying to connect initially to one of the members in the `network.addresses`, all the members might not be available. +Instead of giving up, throwing an error and stopping the client, the client retries to connect as configured. This behavior is described in the [Configuring Client Connection Retry](#61-configuring-client-connection-retry) section. The client executes each operation through the already established connection to the cluster. If this connection(s) disconnects or drops, the client will try to reconnect as configured. ### 7.3.2. Handling Retry-able Operation Failure -While sending the requests to the related members, the operations can fail due to various reasons. Read-only operations are retried by default. If you want to enable retrying for the other operations, you can set the `redo_operation` to `True`. See the [Enabling Redo Operation section](#53-enabling-redo-operation). +While sending the requests to the related members, the operations can fail due to various reasons. +Read-only operations are retried by default. If you want to enable retrying for the other operations, you can set the `redo_operation` to `True`. +See the [Enabling Redo Operation section](#53-enabling-redo-operation). -You can set a timeout for retrying the operations sent to a member. This can be provided by using the property `hazelcast.client.invocation.timeout.seconds` via `ClientConfig.set_property` method. The client will retry an operation within this given period, of course, if it is a read-only operation or you enabled the `redo_operation` as stated in the above paragraph. This timeout value is important when there is a failure resulted by either of the following causes: +You can set a timeout for retrying the operations sent to a member. This can be provided by using the property `hazelcast.client.invocation.timeout.seconds` via `config.set_property` method. 
+The client will retry an operation within this given period, of course, if it is a read-only operation or if you enabled `redo_operation` as stated above.
+This timeout value is important when a failure results from one of the following causes:

* Member throws an exception.
-
* Connection between the client and member is closed.
-
* Client’s heartbeat requests are timed out.

-When a connection problem occurs, an operation is retried if it is certain that it has not run on the member yet or if it is idempotent such as a read-only operation, i.e., retrying does not have a side effect. If it is not certain whether the operation has run on the member, then the non-idempotent operations are not retried. However, as explained in the first paragraph of this section, you can force all the client operations to be retried (`redo_operation`) when there is a connection failure between the client and member. But in this case, you should know that some operations may run multiple times causing conflicts. For example, assume that your client sent a `queue.offer` operation to the member and then the connection is lost. Since there will be no response for this operation, you will not know whether it has run on the member or not. If you enabled `redo_operation`, it means this operation may run again, which may cause two instances of the same object in the queue.
+When a connection problem occurs, an operation is retried if it is certain that it has not run on the member yet or if it is idempotent such as a read-only operation, i.e., retrying does not have a side effect.
+If it is not certain whether the operation has run on the member, then the non-idempotent operations are not retried.
+However, as explained in the first paragraph of this section, you can force all the client operations to be retried (`redo_operation`) when there is a connection failure between the client and member.
+But in this case, you should know that some operations may run multiple times, causing conflicts.
+For example, assume that your client sent a `queue.offer` operation to the member and then the connection is lost.
+Since there will be no response for this operation, you will not know whether it has run on the member or not.
+If you enabled `redo_operation`, it means this operation may run again, which may cause two instances of the same object in the queue.

When invocation is being retried, the client may wait some time before it retries again. This duration can be configured using the following property:

@@ -1397,31 +1237,33 @@ Most of the distributed data structures are supported by the Python client. In t

### 7.4.1. Using Map

-Hazelcast Map is a distributed dictionary. Through the Python client, you can perform operations like reading and writing from/to a Hazelcast Map with the well known get and put methods. For details, see the [Map section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#map) in the Hazelcast IMDG Reference Manual.
+Hazelcast Map is a distributed dictionary. Through the Python client, you can perform operations like reading and writing from/to a Hazelcast Map with the well known get and put methods.
+For details, see the [Map section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#map) in the Hazelcast IMDG Reference Manual.

A Map usage example is shown below.

```python
-# Get the Distributed Map from Cluster.
+# Get a Map called 'my-distributed-map' my_map = client.get_map("my-distributed-map").blocking() -# Standard Put and Get +# Run Put and Get operations my_map.put("key", "value") my_map.get("key") -# Concurrent Map methods, optimistic updating +# Run concurrent Map operations (optimistic updates) my_map.put_if_absent("somekey", "somevalue") my_map.replace_if_same("key", "value", "newvalue") ``` ### 7.4.2. Using MultiMap -Hazelcast MultiMap is a distributed and specialized map where you can store multiple values under a single key. For details, see the [MultiMap section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#multimap) in the Hazelcast IMDG Reference Manual. +Hazelcast MultiMap is a distributed and specialized map where you can store multiple values under a single key. +For details, see the [MultiMap section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#multimap) in the Hazelcast IMDG Reference Manual. A MultiMap usage example is shown below. ```python -# Get the Distributed MultiMap from Cluster. +# Get a MultiMap called 'my-distributed-multimap' multi_map = client.get_multi_map("my-distributed-multimap").blocking() # Put values in the map against the same key @@ -1429,7 +1271,7 @@ multi_map.put("my-key", "value1") multi_map.put("my-key", "value2") multi_map.put("my-key", "value3") -# Print out all the values for associated with key called "my-key" +# Read and print out all the values for associated with key called 'my-key' # Outputs '['value2', 'value1', 'value3']' values = multi_map.get("my-key") print(values) @@ -1440,16 +1282,18 @@ multi_map.remove("my-key", "value2") ### 7.4.3. Using Replicated Map -Hazelcast Replicated Map is a distributed key-value data structure where the data is replicated to all members in the cluster. It provides full replication of entries to all members for high speed access. For details, see the [Replicated Map section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#replicated-map) in the Hazelcast IMDG Reference Manual. +Hazelcast Replicated Map is a distributed key-value data structure where the data is replicated to all members in the cluster. +It provides full replication of entries to all members for high speed access. +For details, see the [Replicated Map section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#replicated-map) in the Hazelcast IMDG Reference Manual. A Replicated Map usage example is shown below. ```python -# Get a Replicated Map called "my-replicated-map" +# Get a ReplicatedMap called 'my-replicated-map' replicated_map = client.get_replicated_map("my-replicated-map").blocking() -# Put and Get a value from the Replicated Map -# key/value replicated to all members +# Put and get a value from the Replicated Map +# (key/value is replicated to all members) replaced_value = replicated_map.put("key", "value") # Will be None as its first update @@ -1463,23 +1307,24 @@ print("value for key = {}".format(value)) # Outputs 'value for key = value' ### 7.4.4. Using Queue -Hazelcast Queue is a distributed queue which enables all cluster members to interact with it. For details, see the [Queue section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#queue) in the Hazelcast IMDG Reference Manual. +Hazelcast Queue is a distributed queue which enables all cluster members to interact with it. 
+For details, see the [Queue section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#queue) in the Hazelcast IMDG Reference Manual. A Queue usage example is shown below. ```python -# Get a Blocking Queue called "my-distributed-queue" +# Get a Queue called 'my-distributed-queue' queue = client.get_queue("my-distributed-queue").blocking() -# Offer a String into the Distributed Queue +# Offer a string into the Queue queue.offer("item") -# Poll the Distributed Queue and return the String +# Poll the Queue and return the string item = queue.poll() -# Timed blocking Operations -queue.offer("another-item", 1) -another_item = queue.poll(5) +# Timed-restricted operations +queue.offer("another-item", 0.5) # waits up to 0.5 seconds +another_item = queue.poll(5) # waits up to 5 seconds # Indefinitely blocking Operations queue.put("yet-another-item") @@ -1489,15 +1334,16 @@ print(queue.take()) # Outputs 'yet-another-item' ### 7.4.5. Using Set -Hazelcast Set is a distributed set which does not allow duplicate elements. For details, see the [Set section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#set) in the Hazelcast IMDG Reference Manual. +Hazelcast Set is a distributed set which does not allow duplicate elements. +For details, see the [Set section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#set) in the Hazelcast IMDG Reference Manual. A Set usage example is shown below. ```python -# Get the Distributed Set from Cluster. +# Get a Set called 'my-distributed-set' my_set = client.get_set("my-distributed-set").blocking() -# Add items to the set with duplicates +# Add items to the Set with duplicates my_set.add("item1") my_set.add("item1") my_set.add("item2") @@ -1512,12 +1358,13 @@ for item in my_set.get_all(): ### 7.4.6. Using List -Hazelcast List is a distributed list which allows duplicate elements and preserves the order of elements. For details, see the [List section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#list) in the Hazelcast IMDG Reference Manual. +Hazelcast List is a distributed list which allows duplicate elements and preserves the order of elements. +For details, see the [List section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#list) in the Hazelcast IMDG Reference Manual. A List usage example is shown below. ```python -# Get the Distributed List from Cluster. +# Get a List called 'my-distributed-list' my_list = client.get_list("my-distributed-list").blocking() # Add element to the list @@ -1536,7 +1383,11 @@ my_list.clear() ### 7.4.7. Using Ringbuffer -Hazelcast Ringbuffer is a replicated but not partitioned data structure that stores its data in a ring-like structure. You can think of it as a circular array with a given capacity. Each Ringbuffer has a tail and a head. The tail is where the items are added and the head is where the items are overwritten or expired. You can reach each element in a Ringbuffer using a sequence ID, which is mapped to the elements between the head and tail (inclusive) of the Ringbuffer. For details, see the [Ringbuffer section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#ringbuffer) in the Hazelcast IMDG Reference Manual. +Hazelcast Ringbuffer is a replicated but not partitioned data structure that stores its data in a ring-like structure. +You can think of it as a circular array with a given capacity. Each Ringbuffer has a tail and a head. 
+The tail is where the items are added and the head is where the items are overwritten or expired. +You can reach each element in a Ringbuffer using a sequence ID, which is mapped to the elements between the head and tail (inclusive) of the Ringbuffer. +For details, see the [Ringbuffer section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#ringbuffer) in the Hazelcast IMDG Reference Manual. A Ringbuffer usage example is shown below. @@ -1559,7 +1410,8 @@ print(ringbuffer.read_one(sequence)) # Outputs '200' ### 7.4.8. Using Topic -Hazelcast Topic is a distribution mechanism for publishing messages that are delivered to multiple subscribers. For details, see the [Topic section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#topic) in the Hazelcast IMDG Reference Manual. +Hazelcast Topic is a distribution mechanism for publishing messages that are delivered to multiple subscribers. +For details, see the [Topic section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#topic) in the Hazelcast IMDG Reference Manual. A Topic usage example is shown below. @@ -1578,67 +1430,13 @@ topic.add_listener(print_on_message) topic.publish("Hello to distributed world") # Outputs 'Got message: Hello to distributed world' ``` -### 7.4.9. Using Lock - -Hazelcast Lock is a distributed lock implementation. You can synchronize Hazelcast members and clients using a `Lock`. For details, see the [Lock section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#lock) in the Hazelcast IMDG Reference Manual. - -A Lock usage example is shown below. - -```python -# Get a distributed lock called "my-distributed-lock" -lock = client.get_lock("my-distributed-lock").blocking() - -# Now create a lock and execute some guarded code -lock.lock() -try: - # Do something here - pass -finally: - lock.unlock() -``` - -### 7.4.10. Using Atomic Long - -Hazelcast Atomic Long is the distributed long which offers most of the operations such as `get`, `set`, `get_and_set`, `compare_and_set` and `increment_and_get`. For details, see the [Atomic Long section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#iatomiclong) in the Hazelcast IMDG Reference Manual. - -An Atomic Long usage example is shown below. - -```python -# Get an Atomic Counter, we'll call it "counter" -counter = client.get_atomic_long("counter").blocking() - -# Add and Get the "counter" -counter.add_and_get(3) # value is 3 - -# Display the "counter" value -print("counter: {}".format(counter.get())) # Outputs 'counter: 3' -``` - -### 7.4.11. Using Semaphore - -Hazelcast Semaphore is a distributed semaphore implementation. For details, see the [Semaphore section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#isemaphore) in the Hazelcast IMDG Reference Manual. - -A Semaphore usage example is shown below. - -```python -# Get a Semaphore called "my-distributed-semaphore" -semaphore = client.get_semaphore("my-distributed-semaphore").blocking() - -# Initialize the Semaphore with 10 permits -semaphore.init(10) - -# Acquire 5 permits -semaphore.acquire(5) - -# Print the number of the available permits -print(semaphore.available_permits()) # Outputs '5' -``` - -### 7.4.12. Using Transactions +### 7.4.9. 
Using Transactions Hazelcast Python client provides transactional operations like beginning transactions, committing transactions and retrieving transactional data structures like the `TransactionalMap`, `TransactionalSet`, `TransactionalList`, `TransactionalQueue` and `TransactionalMultiMap`. -You can create a `Transaction` object using the Python client to begin, commit and rollback a transaction. You can obtain transaction-aware instances of queues, maps, sets, lists and multimaps via the `Transaction` object, work with them and commit or rollback in one shot. For details, see the [Transactions section](https://docs.hazelcast.org//docs/latest/manual/html-single/index.html#transactions) in the Hazelcast IMDG Reference Manual. +You can create a `Transaction` object using the Python client to begin, commit and rollback a transaction. +You can obtain transaction-aware instances of queues, maps, sets, lists and multimaps via the `Transaction` object, work with them and commit or rollback in one shot. +For details, see the [Transactions section](https://docs.hazelcast.org//docs/latest/manual/html-single/index.html#transactions) in the Hazelcast IMDG Reference Manual. ```python # Create a Transaction object and begin the transaction @@ -1666,15 +1464,21 @@ except Exception as ex: transaction.rollback() print("Transaction failed! {}".format(ex.args)) ``` -In a transaction, operations will not be executed immediately. Their changes will be local to the `Transaction` object until committed. However, they will ensure the changes via locks. +In a transaction, operations will not be executed immediately. Their changes will be local to the `Transaction` object until committed. +However, they will ensure the changes via locks. -For the above example, when `txn_map.put()` is executed, no data will be put in the map but the key will be locked against changes. While committing, operations will be executed, the value will be put to the map and the key will be unlocked. +For the above example, when `txn_map.put()` is executed, no data will be put in the map but the key will be locked against changes. +While committing, operations will be executed, the value will be put to the map and the key will be unlocked. -The isolation level in Hazelcast Transactions is `READ_COMMITTED` on the level of a single partition. If you are in a transaction, you can read the data in your transaction and the data that is already committed. If you are not in a transaction, you can only read the committed data. +The isolation level in Hazelcast Transactions is `READ_COMMITTED` on the level of a single partition. +If you are in a transaction, you can read the data in your transaction and the data that is already committed. +If you are not in a transaction, you can only read the committed data. -### 7.4.13. Using PN Counter +### 7.4.10. Using PN Counter -Hazelcast `PNCounter` (Positive-Negative Counter) is a CRDT positive-negative counter implementation. It is an eventually consistent counter given there is no member failure. For details, see the [PN Counter section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#pn-counter) in the Hazelcast IMDG Reference Manual. +Hazelcast `PNCounter` (Positive-Negative Counter) is a CRDT positive-negative counter implementation. +It is an eventually consistent counter given there is no member failure. +For details, see the [PN Counter section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#pn-counter) in the Hazelcast IMDG Reference Manual. 
A PN Counter usage example is shown below.

@@ -1696,12 +1500,11 @@ print(pn_counter.get_and_increment()) # 4
print(pn_counter.get()) # 5
```

-### 7.4.14. Using Flake ID Generator
+### 7.4.11. Using Flake ID Generator

Hazelcast `FlakeIdGenerator` is used to generate cluster-wide unique identifiers. Generated identifiers are long primitive values and are k-ordered (roughly ordered). IDs are in the range from 0 to `2^63-1` (maximum signed long value).
-For details, see the [FlakeIdGenerator section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#flakeidgenerator) in the
-Hazelcast IMDG Reference Manual.
+For details, see the [FlakeIdGenerator section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#flakeidgenerator) in the Hazelcast IMDG Reference Manual.

```python
# Get a Flake ID Generator called 'flake-id-generator'
@@ -1711,9 +1514,30 @@ generator = client.get_flake_id_generator("flake-id-generator").blocking()
print("ID: {}".format(generator.new_id()))
```

+#### 7.4.11.1. Configuring Flake ID Generator
+
+You may configure Flake ID Generators as follows:
+
+```python
+from hazelcast.config import FlakeIdGeneratorConfig
+
+generator_config = FlakeIdGeneratorConfig()
+generator_config.name = "flake-id-generator"
+generator_config.prefetch_count = 123
+generator_config.prefetch_validity_in_millis = 150000
+config.add_flake_id_generator_config(generator_config)
+```
+
+The following are the descriptions of the configuration elements and attributes:
+
+* `name`: Name of your Flake ID Generator.
+* `prefetch_count`: Count of IDs that are pre-fetched in the background when one call to `new_id()` is made. Its value must be in the range `1` - `100,000`. Its default value is `100`.
+* `prefetch_validity_in_millis`: Specifies for how long the pre-fetched IDs can be used. After this time elapses, a new batch of IDs is fetched. Time unit is milliseconds. Its default value is `600,000` milliseconds (`10` minutes). The IDs contain a timestamp component, which ensures a rough global ordering of them. If an ID is assigned to an object that was created later, it will be out of order. If ordering is not important, set this value to `0`.
+
+> **NOTE: When you use `default` as the Flake ID Generator configuration key, it has a special meaning. Hazelcast client will use that configuration as the default one for all Flake ID Generators, unless there is a specific configuration for the generator.**

## 7.5. Distributed Events

-This chapter explains when various events are fired and describes how you can add event listeners on a Hazelcast Python client. These events can be categorized as cluster and distributed data structure events.
+This chapter explains when various events are fired and describes how you can add event listeners on a Hazelcast Python client.
+These events can be categorized as cluster and distributed data structure events.

### 7.5.1. Cluster Events

@@ -1721,7 +1545,7 @@ You can add event listeners to a Hazelcast Python client. You can configure the

* Membership Listener: Notifies when a member joins to/leaves the cluster.

-* Lifecycle Listener: Notifies when the client is starting, started, shutting down and shutdown.
+* Lifecycle Listener: Notifies when the client is starting, started, connected, disconnected, shutting down and shutdown.

#### 7.5.1.1. Listening for Member Events

@@ -1735,14 +1559,24 @@ The `ClusterService` class exposes an `add_listener()` method that allows one or

The following is a membership listener registration by using the `add_listener()` method.
```python -client.cluster.add_listener(member_added=lambda m: print("Member Added: The address is {}".format(m.address))) +def added_listener(member): + print("Member Added: The address is {}".format(member.address)) + +def removed_listener(member): + print("Member Removed. The address is {}".format(member.address)) + +client.cluster_service.add_listener(member_added=added_listener, member_removed=removed_listener, fire_for_existing=True) ``` +Also, you can set the `fire_for_existing` flag to `True` to receive the events for list of available members when the +listener is registered. + #### 7.5.1.2. Listening for Distributed Object Events The events for distributed objects are invoked when they are created and destroyed in the cluster. When an event is received, listener function will be called. The parameter passed into the listener function will be of the type ``DistributedObjectEvent``. A ``DistributedObjectEvent`` contains the following fields: + * ``name``: Name of the distributed object. * ``service_name``: Service name of the distributed object. * ``event_type``: Type of the invoked event. It is either ``CREATED`` or ``DESTROYED``. @@ -1778,9 +1612,11 @@ Distributed object event >>> test_map hz:impl:mapService DESTROYED The `Lifecycle Listener` notifies for the following events: * `STARTING`: The client is starting. -* `CONNECTED`: The client is connected. +* `STARTED`: The client has started. +* `CONNECTED`: The client connected to a member. * `SHUTTING_DOWN`: The client is shutting down. -* `SHUTDOWN`: The client’s shutdown has completed. +* `DISCONNECTED`: The client disconnected from a member. +* `SHUTDOWN`: The client has shutdown. The following is an example of the `Lifecycle listener` that is added to the `ClientConfig` object and its output. @@ -1795,10 +1631,41 @@ client.shutdown() **Output:** ``` +Sep 03, 2020 05:00:29 PM HazelcastClient.LifecycleService +INFO: [4.0.0] [dev] [hz.client_0] (20190802 - 85a237d) HazelcastClient is STARTING Lifecycle Event >>> STARTING +Sep 03, 2020 05:00:29 PM HazelcastClient.LifecycleService +INFO: [4.0.0] [dev] [hz.client_0] (20190802 - 85a237d) HazelcastClient is STARTED +Lifecycle Event >>> STARTED +Sep 03, 2020 05:00:29 PM HazelcastClient.ConnectionManager +INFO: [4.0.0] [dev] [hz.client_0] Trying to connect to Address(host=127.0.0.1, port=5701) +Sep 03, 2020 05:00:29 PM HazelcastClient.LifecycleService +INFO: [4.0.0] [dev] [hz.client_0] (20190802 - 85a237d) HazelcastClient is CONNECTED Lifecycle Event >>> CONNECTED +Sep 03, 2020 05:00:29 PM HazelcastClient.ConnectionManager +INFO: [4.0.0] [dev] [hz.client_0] Authenticated with server Address(host=192.168.1.10, port=5701):7362c66f-ef9f-4a6a-a003-f8b33dfd292a, server version: 4.1-SNAPSHOT, local address: Address(host=127.0.0.1, port=36302) +Sep 03, 2020 05:00:29 PM HazelcastClient.ClusterService +INFO: [4.0.0] [dev] [hz.client_0] + +Members [1] { + Member [192.168.1.10]:5701 - 7362c66f-ef9f-4a6a-a003-f8b33dfd292a +} + +Sep 03, 2020 05:00:29 PM HazelcastClient +INFO: [4.0.0] [dev] [hz.client_0] Client started. 
+Sep 03, 2020 05:00:29 PM HazelcastClient.LifecycleService +INFO: [4.0.0] [dev] [hz.client_0] (20190802 - 85a237d) HazelcastClient is SHUTTING_DOWN Lifecycle Event >>> SHUTTING_DOWN +Sep 03, 2020 05:00:29 PM HazelcastClient.ConnectionManager +INFO: [4.0.0] [dev] [hz.client_0] Removed connection to Address(host=127.0.0.1, port=5701):7362c66f-ef9f-4a6a-a003-f8b33dfd292a, connection: Connection(id=0, live=False, remote_address=Address(host=192.168.1.10, port=5701)) +Sep 03, 2020 05:00:29 PM HazelcastClient.LifecycleService +INFO: [4.0.0] [dev] [hz.client_0] (20190802 - 85a237d) HazelcastClient is DISCONNECTED +Lifecycle Event >>> DISCONNECTED +Sep 03, 2020 05:00:29 PM HazelcastClient.LifecycleService +INFO: [4.0.0] [dev] [hz.client_0] (20190802 - 85a237d) HazelcastClient is SHUTDOWN Lifecycle Event >>> SHUTDOWN +Sep 03, 2020 05:00:29 PM HazelcastClient +INFO: [4.0.0] [dev] [hz.client_0] Client shutdown. ``` ### 7.5.2. Distributed Data Structure Events @@ -1852,9 +1719,11 @@ This chapter explains how you can use Hazelcast IMDG's entry processor implement Hazelcast supports entry processing. An entry processor is a function that executes your code on a map entry in an atomic way. -An entry processor is a good option if you perform bulk processing on a `Map`. Usually you perform a loop of keys -- executing `Map.get(key)`, mutating the value, and finally putting the entry back in the map using `Map.put(key,value)`. If you perform this process from a client or from a member where the keys do not exist, you effectively perform two network hops for each update: the first to retrieve the data and the second to update the mutated value. +An entry processor is a good option if you perform bulk processing on a `Map`. Usually you perform a loop of keys -- executing `Map.get(key)`, mutating the value, and finally putting the entry back in the map using `Map.put(key,value)`. +If you perform this process from a client or from a member where the keys do not exist, you effectively perform two network hops for each update: the first to retrieve the data and the second to update the mutated value. -If you are doing the process described above, you should consider using entry processors. An entry processor executes a read and updates upon the member where the data resides. This eliminates the costly network hops described above. +If you are doing the process described above, you should consider using entry processors. An entry processor executes a read and updates upon the member where the data resides. +This eliminates the costly network hops described above. > **NOTE: Entry processor is meant to process a single entry per call. Processing multiple entries and data structures in an entry processor is not supported as it may result in deadlocks on the server side.** @@ -1865,9 +1734,7 @@ Hazelcast sends the entry processor to each cluster member and these members app The `Map` class provides the following methods for entry processing: * `execute_on_key` processes an entry mapped by a key. - * `execute_on_keys` processes entries mapped by a list of keys. - * `execute_on_entries` can process all entries in a map with a defined predicate. Predicate is optional. In the Python client, an `EntryProcessor` should be `IdentifiedDataSerializable` or `Portable` because the server should be able to deserialize it to process. @@ -1894,7 +1761,8 @@ class IdentifiedEntryProcessor(IdentifiedDataSerializable): return 1 ``` -Now, you need to make sure that the Hazelcast member recognizes the entry processor. 
For this, you need to implement the Java equivalent of your entry processor and its factory, and create your own compiled class or JAR files. For adding your own compiled class or JAR files to the server's `CLASSPATH`, see the [Adding User Library to CLASSPATH section](#adding-user-library-to-classpath). +Now, you need to make sure that the Hazelcast member recognizes the entry processor. For this, you need to implement the Java equivalent of your entry processor and its factory, and create your own compiled class or JAR files. +For adding your own compiled class or JAR files to the server's `CLASSPATH`, see the [Adding User Library to CLASSPATH section](#1212-adding-user-library-to-classpath). The following is the Java equivalent of the entry processor in Python client given above: @@ -1974,9 +1842,11 @@ Now you need to configure the `hazelcast.xml` to add your factory as shown below ``` -The code that runs on the entries is implemented in Java on the server side. The client side entry processor is used to specify which entry processor should be called. For more details about the Java implementation of the entry processor, see the [Entry Processor section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#entry-processor) in the Hazelcast IMDG Reference Manual. +The code that runs on the entries is implemented in Java on the server side. The client side entry processor is used to specify which entry processor should be called. +For more details about the Java implementation of the entry processor, see the [Entry Processor section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#entry-processor) in the Hazelcast IMDG Reference Manual. -After the above implementations and configuration are done and you start the server where your library is added to its `CLASSPATH`, you can use the entry processor in the `Map` methods. See the following example. +After the above implementations and configuration are done and you start the server where your library is added to its `CLASSPATH`, you can use the entry processor in the `Map` methods. +See the following example. ```python distributed_map = client.get_map("my-distributed-map").blocking() @@ -1989,7 +1859,9 @@ print(distributed_map.get("key")) # Outputs 'processed' ## 7.7. Distributed Query -Hazelcast partitions your data and spreads it across cluster of members. You can iterate over the map entries and look for certain entries (specified by predicates) you are interested in. However, this is not very efficient because you will have to bring the entire entry set and iterate locally. Instead, Hazelcast allows you to run distributed queries on your distributed map. +Hazelcast partitions your data and spreads it across cluster of members. You can iterate over the map entries and look for certain entries (specified by predicates) you are interested in. +However, this is not very efficient because you will have to bring the entire entry set and iterate locally. +Instead, Hazelcast allows you to run distributed queries on your distributed map. ### 7.7.1. How Distributed Query Works @@ -1997,7 +1869,8 @@ Hazelcast partitions your data and spreads it across cluster of members. You can 2. Each member looks at its own local entries and filters them according to the predicate. At this stage, key-value pairs of the entries are deserialized and then passed to the predicate. 3. The predicate requester merges all the results coming from each member into a single set. -Distributed query is highly scalable. 
If you add new members to the cluster, the partition count for each member is reduced and thus the time spent by each member on iterating its entries is reduced. In addition, the pool of partition threads evaluates the entries concurrently in each member, and the network traffic is also reduced since only filtered data is sent to the requester. +Distributed query is highly scalable. If you add new members to the cluster, the partition count for each member is reduced and thus the time spent by each member on iterating its entries is reduced. +In addition, the pool of partition threads evaluates the entries concurrently in each member, and the network traffic is also reduced since only filtered data is sent to the requester. **Predicate Module Operators** @@ -2060,7 +1933,9 @@ class Employee(Portable): Note that `Employee` extends `Portable`. As portable types are not deserialized on the server side for querying, you don’t need to implement its Java equivalent on the server side. -For types that are not portable, you need to implement its Java equivalent and its data serializable factory on the server side for server to reconstitute the objects from binary formats. In this case, you need to compile the `Employee` and related factory classes with server's `CLASSPATH` and add them to the `user-lib` directory in the extracted `hazelcast-.zip` (or `tar`) before starting the server. See the [Adding User Library to CLASSPATH section](#adding-user-library-to-classpath). +For types that are not portable, you need to implement its Java equivalent and its data serializable factory on the server side for server to reconstitute the objects from binary formats. +In this case, you need to compile the `Employee` and related factory classes with server's `CLASSPATH` and add them to the `user-lib` directory in the extracted `hazelcast-.zip` (or `tar`) before starting the server. +See the [Adding User Library to CLASSPATH section](#1212-adding-user-library-to-classpath). > **NOTE: Querying with `Portable` class is faster as compared to `IdentifiedDataSerializable`.** @@ -2078,7 +1953,8 @@ predicate = and_(is_equal_to('active', True), is_less_than('age', 30)) employees = employee_map.values(predicate).result() ``` -In the above example code, `predicate` verifies whether the entry is active and its `age` value is less than 30. This `predicate` is applied to the `employee` map using the `Map.values` method. This method sends the predicate to all cluster members and merges the results coming from them. +In the above example code, `predicate` verifies whether the entry is active and its `age` value is less than 30. +This `predicate` is applied to the `employee` map using the `Map.values` method. This method sends the predicate to all cluster members and merges the results coming from them. > **NOTE: Predicates can also be applied to `key_set` and `entry_set` of the Hazelcast IMDG's distributed map.** @@ -2191,7 +2067,6 @@ You can use ``HazelcastJsonValue``s both as keys and values in the distributed d possible to query these objects using the Hazelcast query methods explained in this section. 
```python -from hazelcast.core import HazelcastJsonValue person1 = "{ \"name\": \"John\", \"age\": 35 }" person2 = "{ \"name\": \"Jane\", \"age\": 24 }" person3 = {"name": "Trey", "age": 17} @@ -2241,13 +2116,41 @@ as querying other Hazelcast objects using the `Predicate`s.` department_with_peter = departments.values(is_equal_to("people[any].name", "Peter")) ``` -`HazelcastJsonValue` is a lightweight wrapper around your JSON strings. It is used merely as a way to indicate that the contained string should be treated as a valid JSON value. Hazelcast does not check the validity of JSON strings put into to the maps. Putting an invalid JSON string into a map is permissible. However, in that case whether such an entry is going to be returned or not from a query is not defined. +`HazelcastJsonValue` is a lightweight wrapper around your JSON strings. It is used merely as a way to indicate that the contained string should be treated as a valid JSON value. +Hazelcast does not check the validity of JSON strings put into to the maps. Putting an invalid JSON string into a map is permissible. +However, in that case whether such an entry is going to be returned or not from a query is not defined. + +##### Metadata Creation for JSON Querying + +Hazelcast stores a metadata object per JSON serialized object stored. This metadata object is created every time a JSON serialized object is put into an `Map`. +Metadata is later used to speed up the query operations. Metadata creation is on by default. Depending on your application’s needs, you may want to turn off the metadata creation to decrease the put latency and increase the throughput. + +You can configure this using `metadata-policy` element for the map configuration on the member side as follows: + +```xml + + ... + + + OFF + + ... + +``` + ## 7.8. Performance ### 7.8.1. Near Cache -Map entries in Hazelcast are partitioned across the cluster members. Hazelcast clients do not have local data at all. Suppose you read the key `k` a number of times from a Hazelcast client and `k` is owned by a member in your cluster. Then each `map.get(k)` will be a remote operation, which creates a lot of network trips. If you have a map that is mostly read, then you should consider creating a local Near Cache, so that reads are sped up and less network traffic is created. +Map entries in Hazelcast are partitioned across the cluster members. Hazelcast clients do not have local data at all. +Suppose you read the key `k` a number of times from a Hazelcast client and `k` is owned by a member in your cluster. +Then each `map.get(k)` will be a remote operation, which creates a lot of network trips. +If you have a map that is mostly read, then you should consider creating a local Near Cache, so that reads are sped up and less network traffic is created. These benefits do not come for free, please consider the following trade-offs: @@ -2327,7 +2230,8 @@ The actual expiration is performed when a record is accessed: it is checked if t #### 7.8.1.5. Near Cache Invalidation -Invalidation is the process of removing an entry from the Near Cache when its value is updated or it is removed from the original map (to prevent stale reads). See the [Near Cache Invalidation section](https://docs.hazelcast.org/docs/latest/manual/html-single/#near-cache-invalidation) in the Hazelcast IMDG Reference Manual. +Invalidation is the process of removing an entry from the Near Cache when its value is updated or it is removed from the original map (to prevent stale reads). 
+See the [Near Cache Invalidation section](https://docs.hazelcast.org/docs/latest/manual/html-single/#near-cache-invalidation) in the Hazelcast IMDG Reference Manual.

## 7.9. Monitoring and Logging

### 7.9.1. Enabling Client Statistics

You can monitor your clients using Hazelcast Management Center.

-As a prerequisite, you need to enable the client statistics before starting your clients. This can be done by setting the `hazelcast.client.statistics.enabled` system property to `true` on the **member** as the following:
-
-```xml
-<hazelcast>
-    ...
-    <properties>
-        <property name="hazelcast.client.statistics.enabled">true</property>
-    </properties>
-    ...
-</hazelcast>
-```
-
-Also, you need to enable the client statistics in the Python client. There are two properties related to client statistics:
+As a prerequisite, you need to enable the client statistics before starting your clients. There are two properties related to client statistics:

- `hazelcast.client.statistics.enabled`: If set to `True`, it enables collecting the client statistics and sending them to the cluster. When it is `True` you can monitor the clients that are connected to your Hazelcast cluster, using Hazelcast Management Center. Its default value is `False`.

@@ -2363,7 +2255,8 @@ config.set_property(ClientProperties.STATISTICS_PERIOD_SECONDS.name, 4)

Hazelcast Python client can collect statistics related to the client and Near Caches without an extra dependency. However, to get the statistics about the runtime and operating system, [psutil](https://pypi.org/project/psutil/) is used as an extra dependency.

-If the `psutil` is installed, runtime and operating system statistics will be sent to cluster along with statistics related to the client and Near Caches. If not, only the client and Near Cache statistics will be sent.
+If `psutil` is installed, runtime and operating system statistics will be sent to the cluster along with statistics related to the client and Near Caches.
+If not, only the client and Near Cache statistics will be sent.

`psutil` can be installed independently or with the Hazelcast Python client as follows:

@@ -2372,7 +2265,19 @@

pip install hazelcast-python-client[stats]
```

**From source**

```
pip install -e .[stats]
```

@@ -2399,33 +2304,35 @@ client.shutdown()

**Output to the `sys.stderr`**

```
-Feb 15, 2019 12:57:13 PM HazelcastClient
-INFO: [3.10] [dev] [hz.client_0] A non-empty group password is configured for the Hazelcast client. Starting with Hazelcast IMDG version 3.11, clients with the same group name, but with different group passwords (that do not use authentication) will be accepted to a cluster. The group password configuration will be removed completely in a future release.
-Feb 15, 2019 12:57:13 PM HazelcastClient.LifecycleService -INFO: [3.10] [dev] [hz.client_0] (20181119 - 9080a46) HazelcastClient is STARTING -Feb 15, 2019 12:57:13 PM HazelcastClient.ClusterService -INFO: [3.10] [dev] [hz.client_0] Connecting to Address(host=127.0.0.1, port=5701) -Feb 15, 2019 12:57:13 PM HazelcastClient.ConnectionManager -INFO: [3.10] [dev] [hz.client_0] Authenticated with Connection(address=('127.0.0.1', 5701), id=0) -Feb 15, 2019 12:57:13 PM HazelcastClient.ClusterService -INFO: [3.10] [dev] [hz.client_0] New member list: +Sep 03, 2020 05:41:35 PM HazelcastClient.LifecycleService +INFO: [4.0.0] [dev] [hz.client_0] (20190802 - 85a237d) HazelcastClient is STARTING +Sep 03, 2020 05:41:35 PM HazelcastClient.LifecycleService +INFO: [4.0.0] [dev] [hz.client_0] (20190802 - 85a237d) HazelcastClient is STARTED +Sep 03, 2020 05:41:35 PM HazelcastClient.ConnectionManager +INFO: [4.0.0] [dev] [hz.client_0] Trying to connect to Address(host=127.0.0.1, port=5701) +Sep 03, 2020 05:41:35 PM HazelcastClient.LifecycleService +INFO: [4.0.0] [dev] [hz.client_0] (20190802 - 85a237d) HazelcastClient is CONNECTED +Sep 03, 2020 05:41:35 PM HazelcastClient.ConnectionManager +INFO: [4.0.0] [dev] [hz.client_0] Authenticated with server Address(host=192.168.1.10, port=5701):7362c66f-ef9f-4a6a-a003-f8b33dfd292a, server version: 4.1-SNAPSHOT, local address: Address(host=127.0.0.1, port=37026) +Sep 03, 2020 05:41:35 PM HazelcastClient.ClusterService +INFO: [4.0.0] [dev] [hz.client_0] Members [1] { - Member [10.216.1.49]:5701 - 1f4bb35d-b68f-46eb-bd65-61e3f4bc9922 + Member [192.168.1.10]:5701 - 7362c66f-ef9f-4a6a-a003-f8b33dfd292a } -Feb 15, 2019 12:57:13 PM HazelcastClient.LifecycleService -INFO: [3.10] [dev] [hz.client_0] (20181119 - 9080a46) HazelcastClient is CONNECTED -Feb 15, 2019 12:57:13 PM HazelcastClient -INFO: [3.10] [dev] [hz.client_0] Client started. -Feb 15, 2019 12:57:13 PM HazelcastClient.LifecycleService -INFO: [3.10] [dev] [hz.client_0] (20181119 - 9080a46) HazelcastClient is SHUTTING_DOWN -Feb 15, 2019 12:57:13 PM HazelcastClient.AsyncoreReactor -WARNING: [3.10] [dev] [hz.client_0] Connection closed by server -Feb 15, 2019 12:57:13 PM HazelcastClient.LifecycleService -INFO: [3.10] [dev] [hz.client_0] (20181119 - 9080a46) HazelcastClient is SHUTDOWN -Feb 15, 2019 12:57:13 PM HazelcastClient -INFO: [3.10] [dev] [hz.client_0] Client shutdown. +Sep 03, 2020 05:41:35 PM HazelcastClient +INFO: [4.0.0] [dev] [hz.client_0] Client started. +Sep 03, 2020 05:41:35 PM HazelcastClient.LifecycleService +INFO: [4.0.0] [dev] [hz.client_0] (20190802 - 85a237d) HazelcastClient is SHUTTING_DOWN +Sep 03, 2020 05:41:35 PM HazelcastClient.ConnectionManager +INFO: [4.0.0] [dev] [hz.client_0] Removed connection to Address(host=127.0.0.1, port=5701):7362c66f-ef9f-4a6a-a003-f8b33dfd292a, connection: Connection(id=0, live=False, remote_address=Address(host=192.168.1.10, port=5701)) +Sep 03, 2020 05:41:35 PM HazelcastClient.LifecycleService +INFO: [4.0.0] [dev] [hz.client_0] (20190802 - 85a237d) HazelcastClient is DISCONNECTED +Sep 03, 2020 05:41:35 PM HazelcastClient.LifecycleService +INFO: [4.0.0] [dev] [hz.client_0] (20190802 - 85a237d) HazelcastClient is SHUTDOWN +Sep 03, 2020 05:41:35 PM HazelcastClient +INFO: [4.0.0] [dev] [hz.client_0] Client shutdown. ``` Let's go over the `LoggerConfig` options one by one. @@ -2449,10 +2356,10 @@ For example, setting the logging level to `logging.DEBUG` will cause all the log By default, the logging level is set to `logging.INFO`. 
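For example, a minimal sketch (assuming the same `config.logger.level` option shown below) that makes the client more verbose by lowering the level to `logging.DEBUG`:

```python
import logging

import hazelcast

config = hazelcast.ClientConfig()

# Emit DEBUG and higher-severity records from the client's default handler
config.logger.level = logging.DEBUG

client = hazelcast.HazelcastClient(config)
client.shutdown()
```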
-To turn off the logging, you can set `ClientConfig.logger_config.level` to a value greater than the numeric value of `logging.CRITICAL`. For example, the configuration below turns off the logging for the Hazelcast Python client. +To turn off the logging, you can set `ClientConfig.logger.level` to a value greater than the numeric value of `logging.CRITICAL`. For example, the configuration below turns off the logging for the Hazelcast Python client. ```python -config.logger_config.level = 100 # Any value greater than 50 will turn off the logging +config.logger.level = 100 # Any value greater than 50 will turn off the logging client = hazelcast.HazelcastClient(config) ``` @@ -2527,7 +2434,7 @@ class HazelcastFormatter(logging.Formatter): import hazelcast config = hazelcast.ClientConfig() -config.logger_config.config_file = "/home/hazelcast/config.json" +config.logger.config_file = "/home/hazelcast/config.json" client = hazelcast.HazelcastClient(config) @@ -2538,11 +2445,237 @@ client.shutdown() To learn more about the `logging` module and its capabilities, please see the [logging cookbook](https://docs.python.org/3/howto/logging-cookbook.html) and [documentation](https://docs.python.org/3/library/logging.html) of the `logging` module. -# 8. Development and Testing + +## 7.10. Defining Client Labels + +Through the client labels, you can assign special roles for your clients and use these roles to perform some actions +specific to those client connections. + +You can also group your clients using the client labels. These client groups can be blacklisted in Hazelcast Management Center so that they can be prevented from connecting to a cluster. +See the [related section](https://docs.hazelcast.org/docs/management-center/latest/manual/html/index.html#changing-cluster-client-filtering) in the Hazelcast Management Center Reference Manual for more information on this topic. + +You can define the client labels using the `labels` config option. See the below example. + +```python +config.labels.add("role admin") +config.labels.add("region foo") +``` + +## 7.11. Defining Client Name + +Each client has a name associated with it. By default, it is set to `hz.client_${CLIENT_ID}`. +Here `CLIENT_ID` starts from `0` and it is incremented by `1` for each new client. +This id is incremented and set by the client, so it may not be unique between different clients used by different applications. + +You can set the client name using the `client_name` configuration element. + +```python +config.client_name = "blue_client_0" +``` + +## 7.12. Configuring Load Balancer + +Load Balancer configuration allows you to specify which cluster member to send next operation when queried. + +If it is a [smart client](#721-smart-client), only the operations that are not key-based are routed to the member +that is returned by the `LoadBalancer`. If it is not a smart client, `LoadBalancer` is ignored. + +By default, client uses round robin load balancer which picks each cluster member in turn. +Also, the client provides random load balancer which picks the next member randomly as the name suggests. +You can use one of them by setting the `load_balancer` config option. + +The following are example configurations. + +```javascript +from hazelcast.cluster import RandomLB + +config.load_balancer = RandomLB() +``` + +You can also provide a custom load balancer implementation to use different load balancing policies. 
+To do so, you should provide a class that implements the `AbstractLoadBalancer`s interface or extend the `AbstractLoadBalancer` class for that purpose and provide the load balancer object into the `load_balancer` config option. + +# 8. Securing Client Connection + +This chapter describes the security features of Hazelcast Python client. +These include using TLS/SSL for connections between members and between clients and members, and mutual authentication. +These security features require **Hazelcast IMDG Enterprise** edition. + +### 8.1. TLS/SSL + +One of the offers of Hazelcast is the TLS/SSL protocol which you can use to establish an encrypted communication across your cluster with key stores and trust stores. + +* A Java `keyStore` is a file that includes a private key and a public certificate. The equivalent of a key store is the combination of `keyfile` and `certfile` at the Python client side. + +* A Java `trustStore` is a file that includes a list of certificates trusted by your application which is named certificate authority. The equivalent of a trust store is a `cafile` at the Python client side. + +You should set `keyStore` and `trustStore` before starting the members. See the next section on how to set `keyStore` and `trustStore` on the server side. + +#### 8.1.1. TLS/SSL for Hazelcast Members + +Hazelcast allows you to encrypt socket level communication between Hazelcast members and between Hazelcast clients and members, for end to end encryption. +To use it, see the [TLS/SSL for Hazelcast Members section](http://docs.hazelcast.org/docs/latest/manual/html-single/index.html#tls-ssl-for-hazelcast-members). + +#### 8.1.2. TLS/SSL for Hazelcast Python Clients + +TLS/SSL for the Hazelcast Python client can be configured using the `SSLConfig` class. +Let's first give an example of a sample configuration and then go over the configuration options one by one: + +```python +import hazelcast +from hazelcast.config import PROTOCOL + +config = hazelcast.ClientConfig() +config.network.ssl.enabled = True +config.network.ssl.cafile = "/home/hazelcast/cafile.pem" +config.network.ssl.certfile = "/home/hazelcast/certfile.pem" +config.network.ssl.keyfile = "/home/hazelcast/keyfile.pem" +config.network.ssl.password = "hazelcast" +config.network.ssl.protocol = PROTOCOL.TLSv1_3 +config.network.ssl.ciphers = "DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA" +``` + +##### Enabling TLS/SSL + +TLS/SSL for the Hazelcast Python client can be enabled/disabled using the `enabled` option. When this option is set to `True`, TLS/SSL will be configured with respect to the other `SSLConfig` options. +Setting this option to `False` will result in discarding the other `SSLConfig` options. + +The following is an example configuration: + +```python +config.network.ssl.enabled = True +``` + +Default value is `False` (disabled). + +##### Setting CA File + +Certificates of the Hazelcast members can be validated against `cafile`. This option should point to the absolute path of the concatenated CA certificates in PEM format. +When SSL is enabled and `cafile` is not set, a set of default CA certificates from default locations will be used. + +The following is an example configuration: + +```python +config.network.ssl.cafile = "/home/hazelcast/cafile.pem" +``` + +##### Setting Client Certificate + +When mutual authentication is enabled on the member side, clients or other members should also provide a certificate file that identifies themselves. 
+Then, Hazelcast members can use these certificates to validate the identity of their peers. + +Client certificate can be set using the `certfile`. This option should point to the absolute path of the client certificate in PEM format. + +The following is an example configuration: + +```python +config.network.ssl.certfile = "/home/hazelcast/certfile.pem" +``` + +##### Setting Private Key + +Private key of the `certfile` can be set using the `keyfile`. This option should point to the absolute path of the private key file for the client certificate in the PEM format. + +If this option is not set, private key will be taken from `certfile`. In this case, `certfile` should be in the following format. + +``` +-----BEGIN RSA PRIVATE KEY----- +... (private key in base64 encoding) ... +-----END RSA PRIVATE KEY----- +-----BEGIN CERTIFICATE----- +... (certificate in base64 PEM encoding) ... +-----END CERTIFICATE----- +``` + +The following is an example configuration: + +```python +config.network.ssl.keyfile = "/home/hazelcast/keyfile.pem" +``` + +##### Setting Password of the Private Key + +If the private key is encrypted using a password, `password` will be used to decrypt it. The `password` may be a function to call to get the password. +In that case, it will be called with no arguments, and it should return a string, bytes or bytearray. If the return value is a string it will be encoded as UTF-8 before using it to decrypt the key. + +Alternatively a string, bytes or bytearray value may be supplied directly as the password. + +The following is an example configuration: + +```python +config.network.ssl.password = "hazelcast" +``` + +##### Setting the Protocol + +`protocol` can be used to select the protocol that will be used in the TLS/SSL communication. Hazelcast Python client offers the following protocols: + +* **SSLv2** : SSL 2.0 Protocol. *RFC 6176 prohibits the usage of SSL 2.0.* +* **SSLv3** : SSL 3.0 Protocol. *RFC 7568 prohibits the usage of SSL 3.0.* +* **SSL** : Alias for SSL 3.0 +* **TLSv1** : TLS 1.0 Protocol described in RFC 2246 +* **TLSv1_1** : TLS 1.1 Protocol described in RFC 4346 +* **TLSv1_2** : TLS 1.2 Protocol described in RFC 5246 +* **TLSv1_3** : TLS 1.3 Protocol described in RFC 8446 +* **TLS** : Alias for TLS 1.2 + +> Note that TLSv1+ requires at least Python 2.7.9 or Python 3.4 built with OpenSSL 1.0.1+, and TLSv1_3 requires at least Python 2.7.15 or Python 3.7 built with OpenSSL 1.1.1+. + +These protocol versions can be selected using the `hazelcast.config.PROTOCOL` as follows: + +```python +from hazelcast.config import PROTOCOL + +config.network.ssl.protocol = PROTOCOL.TLSv1_3 +``` + +> Note that the Hazelcast Python client and the Hazelcast members should have the same protocol version in order for TLS/SSL to work. In case of the protocol mismatch, connection attempts will be refused. + +Default value is `PROTOCOL.TLS` which is an alias for `PROTOCOL.TLSv1_2`. + +##### Setting Cipher Suites + +Cipher suites that will be used in the TLS/SSL communication can be set using the `ciphers` option. Cipher suites should be in the +OpenSSL cipher list format. More than one cipher suite can be set by separating them with a colon. + +TLS/SSL implementation will honor the cipher suite order. So, Hazelcast Python client will offer the ciphers to the Hazelcast members with the given order. + +Note that, when this option is not set, all the available ciphers will be offered to the Hazelcast members with their default order. 
+ +The following is an example configuration: + +```python +config.network.ssl.ciphers = "DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA" +``` + +#### 8.1.3. Mutual Authentication + +As explained above, Hazelcast members have key stores used to identify themselves (to other members) and Hazelcast clients have trust stores used to define which members they can trust. + +Using mutual authentication, the clients also have their key stores and members have their trust stores so that the members can know which clients they can trust. + +To enable mutual authentication, firstly, you need to set the following property on the server side in the `hazelcast.xml` file: + +```xml + + + + REQUIRED + + + +``` + +You can see the details of setting mutual authentication on the server side in the [Mutual Authentication section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#mutual-authentication) of the Hazelcast IMDG Reference Manual. + +On the client side, you have to provide `SSLConfig.cafile`, `SSLConfig.certfile` and `SSLConfig.keyfile` on top of the other TLS/SSL configurations. See the [TLS/SSL for Hazelcast Python Clients](#812-tlsssl-for-hazelcast-python-clients) for the details of these options. + + +# 9. Development and Testing If you want to help with bug fixes, develop new features or tweak the implementation to your application's needs, you can follow the steps in this section. -## 8.1. Building and Using Client From Sources +## 9.1. Building and Using Client From Sources Follow the below steps to build and install Hazelcast Python client from its source: @@ -2551,17 +2684,17 @@ Follow the below steps to build and install Hazelcast Python client from its sou If you are planning to contribute, please make sure that it fits the guidelines described in [PEP8](https://www.python.org/dev/peps/pep-0008/). -## 8.2. Testing +## 9.2. Testing In order to test Hazelcast Python client locally, you will need the following: -* Java 6 or newer +* Java 8 or newer * Maven Following commands starts the tests according to your operating system: ```bash -sh run-tests.sh +bash run-tests.sh ``` or @@ -2572,7 +2705,7 @@ PS> .\run-tests.ps1 Test script automatically downloads `hazelcast-remote-controller` and Hazelcast IMDG. The script uses Maven to download those. -# 9. Getting Help +# 10. Getting Help You can use the following channels for your questions and development/usage issues: @@ -2580,15 +2713,15 @@ You can use the following channels for your questions and development/usage issu * Our Google Groups directory: https://groups.google.com/forum/#!forum/hazelcast * Stack Overflow: https://stackoverflow.com/questions/tagged/hazelcast -# 10. Contributing +# 11. Contributing -Besides your development contributions as explained in the [Development and Testing chapter](#8-development-and-testing) above, you can always open a pull request on this repository for your other requests. +Besides your development contributions as explained in the [Development and Testing chapter](#9-development-and-testing) above, you can always open a pull request on this repository for your other requests. -# 11. License +# 12. License [Apache 2 License](https://github.com/hazelcast/hazelcast-python-client/blob/master/LICENSE.txt). -# 12. Copyright +# 13. Copyright Copyright (c) 2008-2020, Hazelcast, Inc. All Rights Reserved. 
diff --git a/benchmarks/map_async_bench.py b/benchmarks/map_async_bench.py index 0e77469ea7..91c21f0a14 100644 --- a/benchmarks/map_async_bench.py +++ b/benchmarks/map_async_bench.py @@ -38,9 +38,9 @@ def do_benchmark(): logger.info("Remote Controller Server OK...") rc_cluster = rc.createCluster(None, None) rc_member = rc.startMember(rc_cluster.id) - config.network_config.addresses.append('{}:{}'.format(rc_member.host, rc_member.port)) + config.network.addresses.append('{}:{}'.format(rc_member.host, rc_member.port)) except (ImportError, NameError): - config.network_config.addresses.append('127.0.0.1') + config.network.addresses.append('127.0.0.1') client = hazelcast.HazelcastClient(config) diff --git a/benchmarks/map_bench.py b/benchmarks/map_bench.py index eea9240221..3225b7f891 100644 --- a/benchmarks/map_bench.py +++ b/benchmarks/map_bench.py @@ -36,9 +36,9 @@ def do_benchmark(): logger.info("Remote Controller Server OK...") rc_cluster = rc.createCluster(None, None) rc_member = rc.startMember(rc_cluster.id) - config.network_config.addresses.append('{}:{}'.format(rc_member.host, rc_member.port)) + config.network.addresses.append('{}:{}'.format(rc_member.host, rc_member.port)) except (ImportError, NameError): - config.network_config.addresses.append('127.0.0.1') + config.network.addresses.append('127.0.0.1') client = hazelcast.HazelcastClient(config) my_map = client.get_map("default") diff --git a/benchmarks/simple_map_bench.py b/benchmarks/simple_map_bench.py index 2cfaef229e..51c995fbdb 100644 --- a/benchmarks/simple_map_bench.py +++ b/benchmarks/simple_map_bench.py @@ -40,9 +40,9 @@ def do_benchmark(): logger.info("Remote Controller Server OK...") rc_cluster = rc.createCluster(None, None) rc_member = rc.startMember(rc_cluster.id) - config.network_config.addresses.append('{}:{}'.format(rc_member.host, rc_member.port)) + config.network.addresses.append('{}:{}'.format(rc_member.host, rc_member.port)) except (ImportError, NameError): - config.network_config.addresses.append('127.0.0.1') + config.network.addresses.append('127.0.0.1') client = hazelcast.HazelcastClient(config) diff --git a/benchmarks/simple_map_bench_multiprocess.py b/benchmarks/simple_map_bench_multiprocess.py index 67b716231c..23feab6489 100644 --- a/benchmarks/simple_map_bench_multiprocess.py +++ b/benchmarks/simple_map_bench_multiprocess.py @@ -38,9 +38,9 @@ def do_benchmark(): logger.info("Remote Controller Server OK...") rc_cluster = rc.createCluster(None, None) rc_member = rc.startMember(rc_cluster.id) - config.network_config.addresses.append('{}:{}'.format(rc_member.host, rc_member.port)) + config.network.addresses.append('{}:{}'.format(rc_member.host, rc_member.port)) except (ImportError, NameError): - config.network_config.addresses.append('127.0.0.1') + config.network.addresses.append('127.0.0.1') client = hazelcast.HazelcastClient(config) diff --git a/benchmarks/simple_map_nearcache_bench.py b/benchmarks/simple_map_nearcache_bench.py index 5433a90db5..d8a68eeb57 100644 --- a/benchmarks/simple_map_nearcache_bench.py +++ b/benchmarks/simple_map_nearcache_bench.py @@ -29,7 +29,7 @@ def init(): config = hazelcast.ClientConfig() config.group_config.name = "dev" config.group_config.password = "dev-pass" - config.network_config.addresses.append("127.0.0.1") + config.network.addresses.append("127.0.0.1") near_cache_config = NearCacheConfig(MAP_NAME) near_cache_config.in_memory_format = IN_MEMORY_FORMAT.OBJECT @@ -46,9 +46,9 @@ def init(): logger.info("Remote Controller Server OK...") rc_cluster = rc.createCluster(None, 
None) rc_member = rc.startMember(rc_cluster.id) - config.network_config.addresses.append('{}:{}'.format(rc_member.host, rc_member.port)) + config.network.addresses.append('{}:{}'.format(rc_member.host, rc_member.port)) except (ImportError, NameError): - config.network_config.addresses.append('127.0.0.1') + config.network.addresses.append('127.0.0.1') client = hazelcast.HazelcastClient(config) diff --git a/examples/__init__.py b/examples/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/cloud-discovery/__init__.py b/examples/cloud-discovery/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/cloud-discovery/hazelcast_cloud_discovery_example.py b/examples/cloud-discovery/hazelcast_cloud_discovery_example.py index be295ed64b..25ea54035c 100644 --- a/examples/cloud-discovery/hazelcast_cloud_discovery_example.py +++ b/examples/cloud-discovery/hazelcast_cloud_discovery_example.py @@ -1,25 +1,28 @@ import hazelcast -if __name__ == "__main__": - config = hazelcast.ClientConfig() +config = hazelcast.ClientConfig() - # Set up group name and password for authentication - config.group_config.name = "name" - config.group_config.password = "password" +# Set up cluster name for authentication +config.cluster_name.name = "YOUR_CLUSTER_NAME" - # Enable SSL for encryption. Default CA certificates will be used. - config.network_config.ssl_config.enabled = True +# Enable Hazelcast.Cloud configuration and set the token of your cluster. +config.network.cloud.enabled = True +config.network.cloud.discovery_token = "YOUR_CLUSTER_DISCOVERY_TOKEN" - # Enable Hazelcast.Cloud configuration and set the token of your cluster. - config.network_config.cloud_config.enabled = True - config.network_config.cloud_config.discovery_token = "token" +# If you have enabled encryption for your cluster, also configure TLS/SSL for the client. +# Otherwise, skip this step. +config.network.ssl.enabled = True +config.network.ssl.cafile = "/path/to/ca.pem" +config.network.ssl.certfile = "/path/to/cert.pem" +config.network.ssl.keyfile = "/path/to/key.pem" +config.network.ssl.password = "YOUR_KEY_STORE_PASSWORD" - # Start a new Hazelcast client with this configuration. - client = hazelcast.HazelcastClient(config) +# Start a new Hazelcast client with this configuration. 
+client = hazelcast.HazelcastClient(config) - my_map = client.get_map("map-on-the-cloud") - my_map.put("key", "hazelcast.cloud") +my_map = client.get_map("map-on-the-cloud").blocking() +my_map.put("key", "value") - print(my_map.get("key")) +print(my_map.get("key")) - client.shutdown() +client.shutdown() diff --git a/examples/flake-id-generator/__init__.py b/examples/flake-id-generator/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/flake-id-generator/flake_id_generator_example.py b/examples/flake-id-generator/flake_id_generator_example.py index cb5b39eaa7..b9e37cc03f 100644 --- a/examples/flake-id-generator/flake_id_generator_example.py +++ b/examples/flake-id-generator/flake_id_generator_example.py @@ -1,25 +1,20 @@ import hazelcast -import logging -if __name__ == "__main__": - logging.basicConfig() - logging.getLogger().setLevel(logging.INFO) +config = hazelcast.ClientConfig() +flake_id_generator_config = hazelcast.FlakeIdGeneratorConfig() - config = hazelcast.ClientConfig() - flake_id_generator_config = hazelcast.FlakeIdGeneratorConfig() +# Default value is 600000 (10 minutes) +flake_id_generator_config.prefetch_validity_in_millis = 30000 - # Default value is 600000 (10 minutes) - flake_id_generator_config.prefetch_validity_in_millis = 30000 +# Default value is 100 +flake_id_generator_config.prefetch_count = 50 - # Default value is 100 - flake_id_generator_config.prefetch_count = 50 +config.add_flake_id_generator_config(flake_id_generator_config) +client = hazelcast.HazelcastClient(config) - config.add_flake_id_generator_config(flake_id_generator_config) - client = hazelcast.HazelcastClient(config) +generator = client.get_flake_id_generator("id-generator").blocking() - generator = client.get_flake_id_generator("id-generator").blocking() +for _ in range(100): + print("Id: {}".format(generator.new_id())) - for _ in range(100): - print("Id: {}".format(generator.new_id())) - - client.shutdown() +client.shutdown() diff --git a/examples/hazelcast-json-value/__init__.py b/examples/hazelcast-json-value/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/hazelcast-json-value/hazelcast_json_value_example.py b/examples/hazelcast-json-value/hazelcast_json_value_example.py index 90c18f4a9c..bb5b359058 100644 --- a/examples/hazelcast-json-value/hazelcast_json_value_example.py +++ b/examples/hazelcast-json-value/hazelcast_json_value_example.py @@ -3,28 +3,27 @@ from hazelcast.core import HazelcastJsonValue from hazelcast.serialization.predicate import and_, is_greater_than, sql -if __name__ == "__main__": - client = hazelcast.HazelcastClient() - employees_map = client.get_map("employees").blocking() +client = hazelcast.HazelcastClient() +employees_map = client.get_map("employees").blocking() - alice = "{\"name\": \"Alice\", \"age\": 35}" - andy = "{\"name\": \"Andy\", \"age\": 22}" - bob = {"name": "Bob", "age": 37} +alice = "{\"name\": \"Alice\", \"age\": 35}" +andy = "{\"name\": \"Andy\", \"age\": 22}" +bob = {"name": "Bob", "age": 37} - # HazelcastJsonValue can be constructed from JSON strings - employees_map.put(0, HazelcastJsonValue(alice)) - employees_map.put(1, HazelcastJsonValue(andy)) +# HazelcastJsonValue can be constructed from JSON strings +employees_map.put(0, HazelcastJsonValue(alice)) +employees_map.put(1, HazelcastJsonValue(andy)) - # or from JSON serializable objects - employees_map.put(2, HazelcastJsonValue(bob)) +# or from JSON serializable objects +employees_map.put(2, HazelcastJsonValue(bob)) - # Employees 
whose name starts with 'A' and age is greater than 30 - predicate = and_(sql("name like A%"), is_greater_than("age", 30)) +# Employees whose name starts with 'A' and age is greater than 30 +predicate = and_(sql("name like A%"), is_greater_than("age", 30)) - values = employees_map.values(predicate) +values = employees_map.values(predicate) - for value in values: - print(value.to_string()) # As JSON string - print(value.loads()) # As Python object +for value in values: + print(value.to_string()) # As JSON string + print(value.loads()) # As Python object - client.shutdown() +client.shutdown() diff --git a/examples/learning-basics/1-configure_client.py b/examples/learning-basics/1-configure_client.py index 4f133dfdc6..4935e472ea 100644 --- a/examples/learning-basics/1-configure_client.py +++ b/examples/learning-basics/1-configure_client.py @@ -1,18 +1,16 @@ import hazelcast -if __name__ == "__main__": - # Create configuration for the client - config = hazelcast.ClientConfig() - print("Cluster name: {}".format(config.group_config.name)) +# Create configuration for the client +config = hazelcast.ClientConfig() +print("Cluster name: {}".format(config.cluster_name)) - # Add member's host:port to the configuration. - # For each member on your Hazelcast cluster, you should add its host:port pair to the configuration. - config.network_config.addresses.append("127.0.0.1:5701") - config.network_config.addresses.append("127.0.0.1:5702") +# Add member's host:port to the configuration. +# For each member on your Hazelcast cluster, you should add its host:port pair to the configuration. +config.network.addresses.append("127.0.0.1:5701") +config.network.addresses.append("127.0.0.1:5702") - # Create a client using the configuration above - client = hazelcast.HazelcastClient(config) - print("Client is {}".format(client.lifecycle.state)) +# Create a client using the configuration above +client = hazelcast.HazelcastClient(config) - # Disconnect the client and shutdown - client.shutdown() +# Disconnect the client and shutdown +client.shutdown() diff --git a/examples/learning-basics/2-create_a_map.py b/examples/learning-basics/2-create_a_map.py index 6bd65f8185..447051df9e 100644 --- a/examples/learning-basics/2-create_a_map.py +++ b/examples/learning-basics/2-create_a_map.py @@ -1,26 +1,25 @@ import hazelcast -if __name__ == "__main__": - # Connect - config = hazelcast.ClientConfig() - config.network_config.addresses.append("127.0.0.1:5701") - client = hazelcast.HazelcastClient(config) +# Connect +config = hazelcast.ClientConfig() +config.network.addresses.append("127.0.0.1:5701") +client = hazelcast.HazelcastClient(config) - # Get a map that is stored on the server side. We can access it from the client - greetings_map = client.get_map("greetings-map") +# Get a map that is stored on the server side. We can access it from the client +greetings_map = client.get_map("greetings-map") - # Map is empty on the first run. It will be non-empty if Hazelcast has data on this map - print("Map: {}, Size: {}".format(greetings_map.name, greetings_map.size().result())) +# Map is empty on the first run. It will be non-empty if Hazelcast has data on this map +print("Map: {}, Size: {}".format(greetings_map.name, greetings_map.size().result())) - # Write data to map. 
If there is a data with the same key already, it will be overwritten - greetings_map.put("English", "hello world") - greetings_map.put("Spanish", "hola mundo") - greetings_map.put("Italian", "ciao mondo") - greetings_map.put("German", "hallo welt") - greetings_map.put("French", "bonjour monde") +# Write data to map. If there is a data with the same key already, it will be overwritten +greetings_map.put("English", "hello world") +greetings_map.put("Spanish", "hola mundo") +greetings_map.put("Italian", "ciao mondo") +greetings_map.put("German", "hallo welt") +greetings_map.put("French", "bonjour monde") - # 5 data is added to the map. There should be at least 5 data on the server side - print("Map: {}, Size: {}".format(greetings_map.name, greetings_map.size().result())) +# 5 data is added to the map. There should be at least 5 data on the server side +print("Map: {}, Size: {}".format(greetings_map.name, greetings_map.size().result())) - # Shutdown the client - client.shutdown() +# Shutdown the client +client.shutdown() diff --git a/examples/learning-basics/3-read_from_a_map.py b/examples/learning-basics/3-read_from_a_map.py index a58a40ad22..f0ac79669c 100644 --- a/examples/learning-basics/3-read_from_a_map.py +++ b/examples/learning-basics/3-read_from_a_map.py @@ -1,20 +1,19 @@ import hazelcast -if __name__ == "__main__": - # Connect - config = hazelcast.ClientConfig() - config.network_config.addresses.append("127.0.0.1:5701") - client = hazelcast.HazelcastClient(config) +# Connect +config = hazelcast.ClientConfig() +config.network.addresses.append("127.0.0.1:5701") +client = hazelcast.HazelcastClient(config) - # We can access maps on the server from the client. Let's access the greetings map that we created already - greetings_map = client.get_map("greetings-map") +# We can access maps on the server from the client. 
Let's access the greetings map that we created already +greetings_map = client.get_map("greetings-map") - # Get the keys of the map - keys = greetings_map.key_set().result() +# Get the keys of the map +keys = greetings_map.key_set().result() - # Print key-value pairs - for key in keys: - print("{} -> {}".format(key, greetings_map.get(key).result())) +# Print key-value pairs +for key in keys: + print("{} -> {}".format(key, greetings_map.get(key).result())) - # Shutdown the client - client.shutdown() +# Shutdown the client +client.shutdown() diff --git a/examples/learning-basics/__init__.py b/examples/learning-basics/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/list/__init__.py b/examples/list/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/list/list_example.py b/examples/list/list_example.py index ab3cf89388..a4a1e0752b 100644 --- a/examples/list/list_example.py +++ b/examples/list/list_example.py @@ -1,22 +1,21 @@ import hazelcast -if __name__ == "__main__": - client = hazelcast.HazelcastClient() +client = hazelcast.HazelcastClient() - my_list = client.get_list("cities-list") +my_list = client.get_list("cities-list") - my_list.add("Tokyo") - my_list.add("Paris") - my_list.add("London") - my_list.add("New York") - my_list.add("Istanbul") +my_list.add("Tokyo") +my_list.add("Paris") +my_list.add("London") +my_list.add("New York") +my_list.add("Istanbul") - print("List size: {}".format(my_list.size().result())) - print("First element: {}".format(my_list.get(0).result())) - print("Contains Istanbul: {}".format(my_list.contains("Istanbul").result())) - print("Sublist: {}".format(my_list.sub_list(3, 5).result())) +print("List size: {}".format(my_list.size().result())) +print("First element: {}".format(my_list.get(0).result())) +print("Contains Istanbul: {}".format(my_list.contains("Istanbul").result())) +print("Sublist: {}".format(my_list.sub_list(3, 5).result())) - my_list.remove("Tokyo") - print("Final size: {}".format(my_list.size().result())) +my_list.remove("Tokyo") +print("Final size: {}".format(my_list.size().result())) - client.shutdown() +client.shutdown() diff --git a/examples/map/__init__.py b/examples/map/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/map/map_async_example.py b/examples/map/map_async_example.py index 5c907a0847..8e02bd603b 100644 --- a/examples/map/map_async_example.py +++ b/examples/map/map_async_example.py @@ -4,29 +4,30 @@ def fill_map(hz_map, count=10): - for i in range(count): - hz_map.put("key-" + str(i), "value-" + str(i)) + entries = {"key-" + str(i): "value-" + str(i) for i in range(count)} + hz_map.put_all(entries) -if __name__ == "__main__": - client = hazelcast.HazelcastClient() +def put_callback(future): + print("Map put: {}".format(future.result())) - my_map = client.get_map("async-map") - fill_map(my_map) - print("Map size: {}".format(my_map.size().result())) +def contains_callback(future): + print("Map contains: {}".format(future.result())) - def put_callback(future): - print("Map put: {}".format(future.result())) - my_map.put("key", "async-value").add_done_callback(put_callback) +client = hazelcast.HazelcastClient() - def contains_callback(future): - print("Map contains: {}".format(future.result())) +my_map = client.get_map("async-map") +fill_map(my_map) - key = random.random() - print("Random key: {}".format(key)) - my_map.contains_key(key).add_done_callback(contains_callback) +print("Map size: 
{}".format(my_map.size().result())) - time.sleep(10) - client.shutdown() +my_map.put("key", "async-value").add_done_callback(put_callback) + +key = random.random() +print("Random key: {}".format(key)) +my_map.contains_key(key).add_done_callback(contains_callback) + +time.sleep(3) +client.shutdown() diff --git a/examples/map/map_basic_example.py b/examples/map/map_basic_example.py index 3e14bf2f15..2d1ea1dd98 100644 --- a/examples/map/map_basic_example.py +++ b/examples/map/map_basic_example.py @@ -1,24 +1,23 @@ import hazelcast -if __name__ == "__main__": - client = hazelcast.HazelcastClient() +client = hazelcast.HazelcastClient() - my_map = client.get_map("my-map") +my_map = client.get_map("my-map") - # Fill the map - my_map.put("1", "Tokyo") - my_map.put("2", "Paris") - my_map.put("3", "Istanbul") +# Fill the map +my_map.put("1", "Tokyo") +my_map.put("2", "Paris") +my_map.put("3", "Istanbul") - print("Entry with key 3: {}".format(my_map.get("3").result())) +print("Entry with key 3: {}".format(my_map.get("3").result())) - print("Map size: {}".format(my_map.size().result())) +print("Map size: {}".format(my_map.size().result())) - # Print the map - print("\nIterating over the map: \n") +# Print the map +print("\nIterating over the map: \n") - entries = my_map.entry_set().result() - for key, value in entries: - print("{} -> {}".format(key, value)) +entries = my_map.entry_set().result() +for key, value in entries: + print("{} -> {}".format(key, value)) - client.shutdown() +client.shutdown() diff --git a/examples/map/map_blocking_example.py b/examples/map/map_blocking_example.py index d12b1ce67b..97f634ed87 100644 --- a/examples/map/map_blocking_example.py +++ b/examples/map/map_blocking_example.py @@ -4,31 +4,29 @@ def fill_map(hz_map, count=10): - for i in range(count): - hz_map.put("key-" + str(i), "value-" + str(i)) + entries = {"key-" + str(i): "value-" + str(i) for i in range(count)} + hz_map.put_all(entries) -if __name__ == "__main__": - client = hazelcast.HazelcastClient() +client = hazelcast.HazelcastClient() - my_map = client.get_map("sync-map").blocking() - fill_map(my_map) +my_map = client.get_map("sync-map").blocking() +fill_map(my_map) - print("Map size: {}".format(my_map.size())) +print("Map size: {}".format(my_map.size())) - random_key = random.random() - my_map.put(random_key, "value") - print("Map contains {}: {}".format(random_key, my_map.contains_key(random_key))) - print("Map size: {}".format(my_map.size())) +random_key = random.random() +my_map.put(random_key, "value") +print("Map contains {}: {}".format(random_key, my_map.contains_key(random_key))) +print("Map size: {}".format(my_map.size())) - my_map.remove(random_key) - print("Map contains {}: {}".format(random_key, my_map.contains_key(random_key))) - print("Map size: {}".format(my_map.size())) +my_map.remove(random_key) +print("Map contains {}: {}".format(random_key, my_map.contains_key(random_key))) +print("Map size: {}".format(my_map.size())) - print("\nIterate over the map\n") +print("\nIterate over the map\n") - for key, value in my_map.entry_set(): - print("Key: {} -> Value: {}".format(key, value)) +for key, value in my_map.entry_set(): + print("Key: {} -> Value: {}".format(key, value)) - time.sleep(10) - client.shutdown() +client.shutdown() diff --git a/examples/map/map_entry_processor_example.py b/examples/map/map_entry_processor_example.py index 51f7c264cc..a59c1fe8fe 100644 --- a/examples/map/map_entry_processor_example.py +++ b/examples/map/map_entry_processor_example.py @@ -21,19 +21,16 @@ def 
get_class_id(self): return self.CLASS_ID -if __name__ == "__main__": +client = hazelcast.HazelcastClient() - config = hazelcast.ClientConfig() - client = hazelcast.HazelcastClient(config) +my_map = client.get_map("processor-map") - my_map = client.get_map("processor-map") +my_map.put("test_key", 0) - my_map.put("test_key", 0) +# Entry Processor should be implemented on the server side +my_map.execute_on_key("test_key", EntryProcessor()) - # Entry Processor should be implemented on the server side - my_map.execute_on_key("test_key", EntryProcessor()) +value = my_map.get("test_key").result() +print(value) - value = my_map.get("test_key").result() - print(value) - - client.shutdown() +client.shutdown() diff --git a/examples/map/map_listener_example.py b/examples/map/map_listener_example.py index 677c81a804..96f04d82ef 100644 --- a/examples/map/map_listener_example.py +++ b/examples/map/map_listener_example.py @@ -1,3 +1,5 @@ +import time + import hazelcast @@ -15,15 +17,16 @@ def entry_updated(event): event.value)) -if __name__ == "__main__": - client = hazelcast.HazelcastClient() +client = hazelcast.HazelcastClient() + +my_map = client.get_map("listener-map").blocking() - my_map = client.get_map("listener-map").blocking() +my_map.add_entry_listener(True, added_func=entry_added, removed_func=entry_removed, updated_func=entry_updated) - my_map.add_entry_listener(True, added_func=entry_added, removed_func=entry_removed, updated_func=entry_updated) +my_map.put("key", "value") +my_map.put("key", "new value") +my_map.remove("key") - my_map.put("key", "value") - my_map.put("key", "new value") - my_map.remove("key") +time.sleep(3) - client.shutdown() +client.shutdown() diff --git a/examples/map/map_portable_query_example.py b/examples/map/map_portable_query_example.py index debdf67b80..b919947319 100644 --- a/examples/map/map_portable_query_example.py +++ b/examples/map/map_portable_query_example.py @@ -28,37 +28,37 @@ def get_class_id(self): return self.CLASS_ID def __str__(self): - return "Employee[ name:{} age:{} ]".format(self.name, self.age) + return "Employee(name:%s, age:%s)" % (self.name, self.age) def __eq__(self, other): - return type(self) == type(other) and self.name == other.name and self.age == other.age + return isinstance(other, Employee) and self.name == other.name and self.age == other.age -if __name__ == '__main__': - config = hazelcast.ClientConfig() +config = hazelcast.ClientConfig() - config.serialization_config.portable_factories[Employee.FACTORY_ID] = \ - {Employee.CLASS_ID: Employee} +config.serialization.portable_factories[Employee.FACTORY_ID] = {Employee.CLASS_ID: Employee} - client = hazelcast.HazelcastClient(config) +client = hazelcast.HazelcastClient(config) - my_map = client.get_map("employee-map") - # - my_map.put(0, Employee("Jack", 28)) - my_map.put(1, Employee("Jane", 29)) - my_map.put(2, Employee("Joe", 30)) +my_map = client.get_map("employee-map") - print("Map Size: {}".format(my_map.size().result())) +my_map.put(0, Employee("Jack", 28)) +my_map.put(1, Employee("Jane", 29)) +my_map.put(2, Employee("Joe", 30)) - predicate = sql("age <= 29") +print("Map Size: {}".format(my_map.size().result())) - def values_callback(f): - result_set = f.result() - print("Query Result Size: {}".format(len(result_set))) - for value in result_set: - print("value: {}".format(value)) +predicate = sql("age <= 29") - my_map.values(predicate).add_done_callback(values_callback) - time.sleep(10) - client.shutdown() +def values_callback(f): + result_set = f.result() + print("Query Result 
Size: {}".format(len(result_set))) + for value in result_set: + print("value: {}".format(value)) + + +my_map.values(predicate).add_done_callback(values_callback) + +time.sleep(3) +client.shutdown() diff --git a/examples/map/map_portable_versioning_example.py b/examples/map/map_portable_versioning_example.py index a05c183355..558d8f038b 100644 --- a/examples/map/map_portable_versioning_example.py +++ b/examples/map/map_portable_versioning_example.py @@ -1,4 +1,5 @@ import hazelcast +from hazelcast.errors import HazelcastSerializationError from hazelcast.serialization.api import Portable @@ -8,6 +9,7 @@ # versions of the same object, and Hazelcast will store both meta information and use the # correct one to serialize and deserialize portable objects depending on the member. + # Default (version 1) Employee class. class Employee(Portable): FACTORY_ID = 666 @@ -32,7 +34,7 @@ def get_class_id(self): return self.CLASS_ID def __str__(self): - return "Employee[ name:{} age:{} ]".format(self.name, self.age) + return "Employee(name:%s, age:%s)" % (self.name, self.age) def __eq__(self, other): return type(self) == type(other) and self.name == other.name and self.age == other.age @@ -41,11 +43,12 @@ def __eq__(self, other): # it is a good idea to upgrade the version of the class, rather than sticking to the global versioning # that is specified in the hazelcast.xml file. + # Version 2: Added new field manager name (str). class Employee2(Portable): FACTORY_ID = 666 CLASS_ID = 1 - CLASS_VERSION = 2 # specifies version different than the global version + CLASS_VERSION = 2 # specifies version different than the global version def __init__(self, name=None, age=None, manager=None): self.name = name @@ -73,10 +76,10 @@ def get_class_version(self): return self.CLASS_VERSION def __str__(self): - return "Employee[ name:{} age:{} manager:{} ]".format(self.name, self.age, self.manager) + return "Employee(name:%s, age:%s, manager:%s)" % (self.name, self.age, self.manager) def __eq__(self, other): - return type(self) == type(other) and self.name == other.name and self.age == other.age \ + return isinstance(other, Employee2) and self.name == other.name and self.age == other.age \ and self.manager == other.manager @@ -114,71 +117,66 @@ def get_class_version(self): return self.CLASS_VERSION def __str__(self): - return "Employee[ name:{} age:{} manager:{} ]".format(self.name, self.age, self.manager) + return "Employee(name:%s, age:%s manager:%s)" % (self.name, self.age, self.manager) def __eq__(self, other): - return type(self) == type(other) and self.name == other.name and self.age == other.age \ + return isinstance(other, Employee3) and self.name == other.name and self.age == other.age \ and self.manager == other.manager -if __name__ == '__main__': - - # Let's now configure 3 clients with 3 different versions of Employee. - config = hazelcast.ClientConfig() - config.serialization_config.portable_factories[Employee.FACTORY_ID] = \ - {Employee.CLASS_ID: Employee} - client = hazelcast.HazelcastClient(config) +# Let's now configure 3 clients with 3 different versions of Employee. 
+config = hazelcast.ClientConfig() +config.serialization.portable_factories[Employee.FACTORY_ID] = {Employee.CLASS_ID: Employee} +client = hazelcast.HazelcastClient(config) - config2 = hazelcast.ClientConfig() - config2.serialization_config.portable_factories[Employee2.FACTORY_ID] = \ - {Employee2.CLASS_ID: Employee2} - client2 = hazelcast.HazelcastClient(config2) +config2 = hazelcast.ClientConfig() +config2.serialization.portable_factories[Employee2.FACTORY_ID] = {Employee2.CLASS_ID: Employee2} +client2 = hazelcast.HazelcastClient(config2) - config3 = hazelcast.ClientConfig() - config3.serialization_config.portable_factories[Employee3.FACTORY_ID] = \ - {Employee3.CLASS_ID: Employee3} - client3 = hazelcast.HazelcastClient(config3) +config3 = hazelcast.ClientConfig() +config3.serialization.portable_factories[Employee3.FACTORY_ID] = {Employee3.CLASS_ID: Employee3} +client3 = hazelcast.HazelcastClient(config3) - # Assume that a member joins a cluster with a newer version of a class. - # If you modified the class by adding a new field, the new member's put operations include that - # new field. - my_map = client.get_map("employee-map").blocking() - my_map2 = client2.get_map("employee-map").blocking() +# Assume that a member joins a cluster with a newer version of a class. +# If you modified the class by adding a new field, the new member's put operations include that +# new field. +my_map = client.get_map("employee-map").blocking() +my_map2 = client2.get_map("employee-map").blocking() - my_map.clear() - my_map.put(0, Employee("Jack", 28)) - my_map2.put(1, Employee2("Jane", 29, "Josh")) +my_map.clear() +my_map.put(0, Employee("Jack", 28)) +my_map2.put(1, Employee2("Jane", 29, "Josh")) - print('Map Size: {}'.format(my_map.size())) +print('Map Size: {}'.format(my_map.size())) - # If this new member tries to get an object that was put from the older members, it - # gets null for the newly added field. - for v in my_map.values(): - print(v) +# If this new member tries to get an object that was put from the older members, it +# gets null for the newly added field. +for v in my_map.values(): + print(v) - for v in my_map2.values(): - print(v) +for v in my_map2.values(): + print(v) - # Let's try now to put a version 3 Employee object to the map and see what happens. - my_map3 = client3.get_map("employee-map").blocking() - my_map3.put(2, Employee3("Joe", "30", "Mary")) +# Let's try now to put a version 3 Employee object to the map and see what happens. +my_map3 = client3.get_map("employee-map").blocking() +my_map3.put(2, Employee3("Joe", "30", "Mary")) - print('Map Size: {}'.format(my_map.size())) +print('Map Size: {}'.format(my_map.size())) - # As clients with incompatible versions of the class try to access each other, a HazelcastSerializationError - # is raised (caused by a TypeError). - try: - # Client that has class with int type age field tries to read Employee3 object with String age field. - print(my_map.get(2)) - except hazelcast.exception.HazelcastSerializationError as ex: - print("Failed due to: {}".format(ex)) +# As clients with incompatible versions of the class try to access each other, a HazelcastSerializationError +# is raised (caused by a TypeError). +try: + # Client that has class with int type age field tries to read Employee3 object with String age field. + print(my_map.get(2)) +except HazelcastSerializationError as ex: + print("Failed due to: {}".format(ex)) - try: - # Client that has class with String type age field tries to read Employee object with int age field. 
- print(my_map3.get(0)) - except hazelcast.exception.HazelcastSerializationError as ex: - print("Failed due to: {}".format(ex)) +try: + # Client that has class with String type age field tries to read Employee object with int age field. + print(my_map3.get(0)) +except HazelcastSerializationError as ex: + print("Failed due to: {}".format(ex)) - client.shutdown() - client2.shutdown() - client3.shutdown() \ No newline at end of file +client.shutdown() +client2.shutdown() +client3.shutdown() diff --git a/examples/map/map_predicate_example.py b/examples/map/map_predicate_example.py index e97c088a3a..17389baffb 100644 --- a/examples/map/map_predicate_example.py +++ b/examples/map/map_predicate_example.py @@ -1,19 +1,18 @@ import hazelcast -from hazelcast.serialization.predicate import BetweenPredicate +from hazelcast.serialization.predicate import is_between -if __name__ == "__main__": - client = hazelcast.HazelcastClient() +client = hazelcast.HazelcastClient() - predicate_map = client.get_map("predicate-map") - for i in range(10): - predicate_map.put("key" + str(i), i) +predicate_map = client.get_map("predicate-map") +for i in range(10): + predicate_map.put("key" + str(i), i) - predicate = BetweenPredicate("this", 3, 5) +predicate = is_between("this", 3, 5) - entry_set = predicate_map.entry_set(predicate).result() +entry_set = predicate_map.entry_set(predicate).result() - for key, value in entry_set: - print("{} -> {}".format(key, value)) +for key, value in entry_set: + print("{} -> {}".format(key, value)) - client.shutdown() +client.shutdown() diff --git a/examples/monitoring/__init__.py b/examples/monitoring/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/monitoring/cluster_listener_example.py b/examples/monitoring/cluster_listener_example.py index ee2bdb3f72..3b5f2ca956 100644 --- a/examples/monitoring/cluster_listener_example.py +++ b/examples/monitoring/cluster_listener_example.py @@ -3,17 +3,16 @@ def member_added(member): - print("Member added: {}".format(member.address)) + print("Member added: {}".format(member)) def member_removed(member): - print("Member removed: {}".format(member.address)) + print("Member removed: {}".format(member)) -if __name__ == "__main__": - client = hazelcast.HazelcastClient() - client.cluster.add_listener(member_added, member_removed, True) +client = hazelcast.HazelcastClient() +client.cluster_service.add_listener(member_added, member_removed, True) - # Add/Remove member now to see the listeners in action - time.sleep(100) - client.shutdown() +# Add/Remove member now to see the listeners in action +time.sleep(100) +client.shutdown() diff --git a/examples/monitoring/distributed_object_listener.py b/examples/monitoring/distributed_object_listener.py index 59cff5920c..75deaade75 100644 --- a/examples/monitoring/distributed_object_listener.py +++ b/examples/monitoring/distributed_object_listener.py @@ -5,24 +5,23 @@ def distributed_object_listener(event): print("Distributed object event >>>", event.name, event.service_name, event.event_type) -if __name__ == "__main__": - client = hazelcast.HazelcastClient() +client = hazelcast.HazelcastClient() - # Register the listener - reg_id = client.add_distributed_object_listener(distributed_object_listener) +# Register the listener +reg_id = client.add_distributed_object_listener(distributed_object_listener) - map_name = "test_map" +map_name = "test_map" - # This call causes a CREATED event - test_map = client.get_map(map_name) +# This call causes a CREATED event +test_map = 
client.get_map(map_name) - # This causes no event because map was already created - test_map2 = client.get_map(map_name) +# This causes no event because map was already created +test_map2 = client.get_map(map_name) - # This causes a DESTROYED event - test_map.destroy() +# This causes a DESTROYED event +test_map.destroy() - # Deregister the listener - client.remove_distributed_object_listener(reg_id) +# Deregister the listener +client.remove_distributed_object_listener(reg_id) - client.shutdown() +client.shutdown() diff --git a/examples/monitoring/lifecycle_listener_example.py b/examples/monitoring/lifecycle_listener_example.py index 15e17722b8..ab9af95d4d 100644 --- a/examples/monitoring/lifecycle_listener_example.py +++ b/examples/monitoring/lifecycle_listener_example.py @@ -5,10 +5,9 @@ def on_state_change(state): print("State changed to {}".format(state)) -if __name__ == "__main__": - config = hazelcast.ClientConfig() - config.add_lifecycle_listener(on_state_change) +config = hazelcast.ClientConfig() +config.add_lifecycle_listener(on_state_change) - client = hazelcast.HazelcastClient(config) +client = hazelcast.HazelcastClient(config) - client.shutdown() +client.shutdown() diff --git a/examples/multi-map/__init__.py b/examples/multi-map/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/multi-map/multi_map_example.py b/examples/multi-map/multi_map_example.py index 9109c013d0..b5dddc71c3 100644 --- a/examples/multi-map/multi_map_example.py +++ b/examples/multi-map/multi_map_example.py @@ -1,28 +1,27 @@ import hazelcast -if __name__ == "__main__": - client = hazelcast.HazelcastClient() +client = hazelcast.HazelcastClient() - multi_map = client.get_multi_map("multi-map") +multi_map = client.get_multi_map("multi-map") - multi_map.put("key1", "value1") - multi_map.put("key1", "value2") - multi_map.put("key2", "value3") - multi_map.put("key3", "value4") +multi_map.put("key1", "value1") +multi_map.put("key1", "value2") +multi_map.put("key2", "value3") +multi_map.put("key3", "value4") - value = multi_map.get("key1").result() - print("Get: {}".format(value)) +value = multi_map.get("key1").result() +print("Get: {}".format(value)) - values = multi_map.values().result() - print("Values: {}".format(values)) +values = multi_map.values().result() +print("Values: {}".format(values)) - key_set = multi_map.key_set().result() - print("Key Set: {}".format(key_set)) +key_set = multi_map.key_set().result() +print("Key Set: {}".format(key_set)) - size = multi_map.size().result() - print("Size: {}".format(size)) +size = multi_map.size().result() +print("Size: {}".format(size)) - for key, value in multi_map.entry_set().result(): - print("{} -> {}".format(key, value)) +for key, value in multi_map.entry_set().result(): + print("{} -> {}".format(key, value)) - client.shutdown() +client.shutdown() diff --git a/examples/org-website/__init__.py b/examples/org-website/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/org-website/atomic_long_sample.py b/examples/org-website/atomic_long_sample.py index 21e998a508..6d91a20694 100644 --- a/examples/org-website/atomic_long_sample.py +++ b/examples/org-website/atomic_long_sample.py @@ -1,14 +1,16 @@ +# TODO Fix this when we add CP Atomic Long + import hazelcast -if __name__ == "__main__": - # Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 - hz = hazelcast.HazelcastClient() - # Get an Atomic Counter, we'll call it "counter" - counter = 
hz.get_atomic_long("counter").blocking() - # Add and Get the "counter" - counter.add_and_get(3) - # value is 3 - # Display the "counter" value - print("counter: {}".format(counter.get())) - # Shutdown this Hazelcast Client - hz.shutdown() +# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 +hz = hazelcast.HazelcastClient() +# Get an Atomic Counter, we'll call it "counter" +counter = hz.get_atomic_long("counter").blocking() +# Add and Get the "counter" +counter.add_and_get(3) +# value is 3 +# Display the "counter" value +print("counter: {}".format(counter.get())) +# Shutdown this Hazelcast Client +hz.shutdown() + diff --git a/examples/org-website/custom_serializer_sample.py b/examples/org-website/custom_serializer_sample.py index 0498d49d40..05886a5c9f 100644 --- a/examples/org-website/custom_serializer_sample.py +++ b/examples/org-website/custom_serializer_sample.py @@ -26,11 +26,10 @@ def destroy(self): pass -if __name__ == "__main__": - config = hazelcast.ClientConfig() - config.serialization_config.set_custom_serializer(CustomSerializableType, CustomSerializer) - - # Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 - hz = hazelcast.HazelcastClient(config) - # CustomSerializer will serialize/deserialize CustomSerializable objects - hz.shutdown() +config = hazelcast.ClientConfig() +config.serialization.set_custom_serializer(CustomSerializableType, CustomSerializer) + +# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 +hz = hazelcast.HazelcastClient(config) +# CustomSerializer will serialize/deserialize CustomSerializable objects +hz.shutdown() diff --git a/examples/org-website/entry_processor_sample.py b/examples/org-website/entry_processor_sample.py index c5cdce8f12..02a21c9340 100644 --- a/examples/org-website/entry_processor_sample.py +++ b/examples/org-website/entry_processor_sample.py @@ -20,16 +20,15 @@ def get_class_id(self): return self.CLASS_ID -if __name__ == "__main__": - # Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 - hz = hazelcast.HazelcastClient() - # Get the Distributed Map from Cluster. - map = hz.get_map("my-distributed-map").blocking() - # Put the integer value of 0 into the Distributed Map - map.put("key", 0) - # Run the IncEntryProcessor class on the Hazelcast Cluster Member holding the key called "key" - map.execute_on_key("key", IncEntryProcessor()) - # Show that the IncEntryProcessor updated the value. - print("new value: {}".format(map.get("key"))) - # Shutdown this Hazelcast Client - hz.shutdown() +# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 +hz = hazelcast.HazelcastClient() +# Get the Distributed Map from Cluster. +map = hz.get_map("my-distributed-map").blocking() +# Put the integer value of 0 into the Distributed Map +map.put("key", 0) +# Run the IncEntryProcessor class on the Hazelcast Cluster Member holding the key called "key" +map.execute_on_key("key", IncEntryProcessor()) +# Show that the IncEntryProcessor updated the value. 
+print("new value: {}".format(map.get("key"))) +# Shutdown this Hazelcast Client +hz.shutdown() diff --git a/examples/org-website/executor_service_sample.py b/examples/org-website/executor_service_sample.py index 0bc9dd13c2..a43ad1360c 100644 --- a/examples/org-website/executor_service_sample.py +++ b/examples/org-website/executor_service_sample.py @@ -23,18 +23,17 @@ def get_class_id(self): return self.CLASS_ID -if __name__ == "__main__": - # Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 - hz = hazelcast.HazelcastClient() - # Get the Distributed Executor Service - ex = hz.get_executor("my-distributed-executor") - # Get the an Hazelcast Cluster Member - member = hz.cluster.get_member_list()[0] - # Submit the MessagePrinter Runnable to the first Hazelcast Cluster Member - ex.execute_on_member(member, MessagePrinter("message to very first member of the cluster")) - # Submit the MessagePrinter Runnable to all Hazelcast Cluster Members - ex.execute_on_all_members(MessagePrinter("message to all members in the cluster")) - # Submit the MessagePrinter Runnable to the Hazelcast Cluster Member owning the key called "key" - ex.execute_on_key_owner("key", MessagePrinter("message to the member that owns the following key")) - # Shutdown this Hazelcast Client - hz.shutdown() +# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 +hz = hazelcast.HazelcastClient() +# Get the Distributed Executor Service +ex = hz.get_executor("my-distributed-executor") +# Get the an Hazelcast Cluster Member +member = hz.cluster_service.get_members()[0] +# Submit the MessagePrinter Runnable to the first Hazelcast Cluster Member +ex.execute_on_member(member, MessagePrinter("message to very first member of the cluster")) +# Submit the MessagePrinter Runnable to all Hazelcast Cluster Members +ex.execute_on_all_members(MessagePrinter("message to all members in the cluster")) +# Submit the MessagePrinter Runnable to the Hazelcast Cluster Member owning the key called "key" +ex.execute_on_key_owner("key", MessagePrinter("message to the member that owns the following key")) +# Shutdown this Hazelcast Client +hz.shutdown() diff --git a/examples/org-website/global_serializer_sample.py b/examples/org-website/global_serializer_sample.py index 85c3128a6d..a69710c47a 100644 --- a/examples/org-website/global_serializer_sample.py +++ b/examples/org-website/global_serializer_sample.py @@ -20,10 +20,9 @@ def destroy(self): pass -if __name__ == "__main__": - config = ClientConfig() - config.serialization_config.global_serializer = GlobalSerializer - # Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 - hz = hazelcast.HazelcastClient(config) - # GlobalSerializer will serialize/deserialize all non-builtin types - hz.shutdown() +config = ClientConfig() +config.serialization.global_serializer = GlobalSerializer +# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 +hz = hazelcast.HazelcastClient(config) +# GlobalSerializer will serialize/deserialize all non-builtin types +hz.shutdown() diff --git a/examples/org-website/identified_data_serializable_sample.py b/examples/org-website/identified_data_serializable_sample.py index a495481d04..4dc2e8545b 100644 --- a/examples/org-website/identified_data_serializable_sample.py +++ b/examples/org-website/identified_data_serializable_sample.py @@ -27,11 +27,10 @@ def get_class_id(self): return self.CLASS_ID -if __name__ == "__main__": - 
config = ClientConfig() - my_factory = {Employee.CLASS_ID: Employee} - config.serialization_config.add_data_serializable_factory(Employee.FACTORY_ID, my_factory) - # Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 - hz = hazelcast.HazelcastClient(config) - # Employee can be used here - hz.shutdown() +config = ClientConfig() +my_factory = {Employee.CLASS_ID: Employee} +config.serialization.add_data_serializable_factory(Employee.FACTORY_ID, my_factory) +# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 +hz = hazelcast.HazelcastClient(config) +# Employee can be used here +hz.shutdown() diff --git a/examples/org-website/list_sample.py b/examples/org-website/list_sample.py index 662fd868f8..50378c2fac 100644 --- a/examples/org-website/list_sample.py +++ b/examples/org-website/list_sample.py @@ -1,19 +1,18 @@ import hazelcast -if __name__ == "__main__": - # Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 - hz = hazelcast.HazelcastClient() - # Get the Distributed List from Cluster. - list = hz.get_list("my-distributed-list").blocking() - # Add element to the list - list.add("item1") - list.add("item2") +# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 +hz = hazelcast.HazelcastClient() +# Get the Distributed List from Cluster. +list = hz.get_list("my-distributed-list").blocking() +# Add element to the list +list.add("item1") +list.add("item2") - # Remove the first element - print("Removed: {}".format(list.remove_at(0))) - # There is only one element left - print("Current size is {}".format(list.size())) - # Clear the list - list.clear() - # Shutdown this Hazelcast Client - hz.shutdown() +# Remove the first element +print("Removed: {}".format(list.remove_at(0))) +# There is only one element left +print("Current size is {}".format(list.size())) +# Clear the list +list.clear() +# Shutdown this Hazelcast Client +hz.shutdown() diff --git a/examples/org-website/lock_sample.py b/examples/org-website/lock_sample.py index 2ff28393a8..a4403f9640 100644 --- a/examples/org-website/lock_sample.py +++ b/examples/org-website/lock_sample.py @@ -1,17 +1,18 @@ +# TODO Fix this when we add CP Lock + import hazelcast -if __name__ == "__main__": - # Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 - hz = hazelcast.HazelcastClient() - # Get a distributed lock called "my-distributed-lock" - lock = hz.get_lock("my-distributed-lock").blocking() - # Now create a lock and execute some guarded code. - lock.lock() - try: - # do something here - pass - finally: - lock.unlock() +# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 +hz = hazelcast.HazelcastClient() +# Get a distributed lock called "my-distributed-lock" +lock = hz.get_lock("my-distributed-lock").blocking() +# Now create a lock and execute some guarded code. 
+lock.lock() +try: + # do something here + pass +finally: + lock.unlock() - # Shutdown this Hazelcast Client - hz.shutdown() +# Shutdown this Hazelcast Client +hz.shutdown() diff --git a/examples/org-website/map_sample.py b/examples/org-website/map_sample.py index ad93efce83..a16d16f03d 100644 --- a/examples/org-website/map_sample.py +++ b/examples/org-website/map_sample.py @@ -1,15 +1,14 @@ import hazelcast -if __name__ == "__main__": - # Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 - hz = hazelcast.HazelcastClient() - # Get the Distributed Map from Cluster. - map = hz.get_map("my-distributed-map").blocking() - # Standard Put and Get - map.put("key", "value") - map.get("key") - # Concurrent Map methods, optimistic updating - map.put_if_absent("somekey", "somevalue") - map.replace_if_same("key", "value", "newvalue") - # Shutdown this Hazelcast Client - hz.shutdown() +# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 +hz = hazelcast.HazelcastClient() +# Get the Distributed Map from Cluster. +map = hz.get_map("my-distributed-map").blocking() +# Standard Put and Get +map.put("key", "value") +map.get("key") +# Concurrent Map methods, optimistic updating +map.put_if_absent("somekey", "somevalue") +map.replace_if_same("key", "value", "newvalue") +# Shutdown this Hazelcast Client +hz.shutdown() diff --git a/examples/org-website/multimap_sample.py b/examples/org-website/multimap_sample.py index 45188dd633..d25c462c75 100644 --- a/examples/org-website/multimap_sample.py +++ b/examples/org-website/multimap_sample.py @@ -1,18 +1,17 @@ import hazelcast -if __name__ == "__main__": - # Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 - hz = hazelcast.HazelcastClient() - # Get the Distributed MultiMap from Cluster. - multi_map = hz.get_multi_map("my-distributed-multimap").blocking() - # Put values in the map against the same key - multi_map.put("my-key", "value1") - multi_map.put("my-key", "value2") - multi_map.put("my-key", "value3") - # Print out all the values for associated with key called "my-key" - values = multi_map.get("my-key") - print(values) - # remove specific key/value pair - multi_map.remove("my-key", "value2") - # Shutdown this Hazelcast Client - hz.shutdown() +# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 +hz = hazelcast.HazelcastClient() +# Get the Distributed MultiMap from Cluster. 
+multi_map = hz.get_multi_map("my-distributed-multimap").blocking() +# Put values in the map against the same key +multi_map.put("my-key", "value1") +multi_map.put("my-key", "value2") +multi_map.put("my-key", "value3") +# Print out all the values associated with the key called "my-key" +values = multi_map.get("my-key") +print(values) +# remove specific key/value pair +multi_map.remove("my-key", "value2") +# Shutdown this Hazelcast Client +hz.shutdown() diff --git a/examples/org-website/portable_serializable_sample.py b/examples/org-website/portable_serializable_sample.py index a6532270fc..40442cb9af 100644 --- a/examples/org-website/portable_serializable_sample.py +++ b/examples/org-website/portable_serializable_sample.py @@ -30,11 +30,10 @@ def get_class_id(self): return self.CLASS_ID -if __name__ == "__main__": - config = ClientConfig() - my_factory = {Customer.CLASS_ID: Customer} - config.serialization_config.add_portable_factory(Customer.FACTORY_ID, my_factory) - # Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 - hz = hazelcast.HazelcastClient(config) - # Customer can be used here - hz.shutdown() +config = ClientConfig() +my_factory = {Customer.CLASS_ID: Customer} +config.serialization.add_portable_factory(Customer.FACTORY_ID, my_factory) +# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 +hz = hazelcast.HazelcastClient(config) +# Customer can be used here +hz.shutdown() diff --git a/examples/org-website/query_sample.py b/examples/org-website/query_sample.py index 80da5e6a17..de026737b3 100644 --- a/examples/org-website/query_sample.py +++ b/examples/org-website/query_sample.py @@ -2,7 +2,7 @@ from hazelcast import ClientConfig from hazelcast.serialization.api import Portable -from hazelcast.serialization.predicate import SqlPredicate, and_, is_between, is_equal_to +from hazelcast.serialization.predicate import sql, and_, is_between, is_equal_to class User(Portable): @@ -31,7 +31,7 @@ def get_class_id(self): return self.CLASS_ID def __repr__(self): - return "User[username='{}', age={}, active={}]".format(self.username, self.age, self.active) + return "User(username=%s, age=%s, active=%s)" % (self.username, self.age, self.active) def generate_users(users): @@ -40,25 +40,24 @@ def generate_users(users): users.put("Freddy", User("Freddy", 23, True)) -if __name__ == "__main__": - config = ClientConfig() - portable_factory = {User.CLASS_ID: User} - config.serialization_config.add_portable_factory(User.FACTORY_ID, portable_factory) - # Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 - hz = hazelcast.HazelcastClient(config) - # Get a Distributed Map called "users" - users = hz.get_map("users").blocking() - # Add some users to the Distributed Map - generate_users(users) - # Create a Predicate from a String (a SQL like Where clause) - sql_query = SqlPredicate("active AND age BETWEEN 18 AND 21)") - # Creating the same Predicate as above but with a builder - criteria_query = and_(is_equal_to("active", True), is_between("age", 18, 21)) - # Get result collections using the two different Predicates - result1 = users.values(sql_query) - result2 = users.values(criteria_query) - # Print out the results - print(result1) - print(result2) - # Shutdown this Hazelcast Client - hz.shutdown() +config = ClientConfig() +portable_factory = {User.CLASS_ID: User} +config.serialization.add_portable_factory(User.FACTORY_ID, portable_factory) +# Start the Hazelcast Client and connect to
an already running Hazelcast Cluster on 127.0.0.1 +hz = hazelcast.HazelcastClient(config) +# Get a Distributed Map called "users" +users = hz.get_map("users").blocking() +# Add some users to the Distributed Map +generate_users(users) +# Create a Predicate from a String (a SQL like Where clause) +sql_query = sql("active AND age BETWEEN 18 AND 21") +# Creating the same Predicate as above but with a builder +criteria_query = and_(is_equal_to("active", True), is_between("age", 18, 21)) +# Get result collections using the two different Predicates +result1 = users.values(sql_query) +result2 = users.values(criteria_query) +# Print out the results +print(result1) +print(result2) +# Shutdown this Hazelcast Client +hz.shutdown() diff --git a/examples/org-website/queue_sample.py b/examples/org-website/queue_sample.py index eca15dd185..c00aae227b 100644 --- a/examples/org-website/queue_sample.py +++ b/examples/org-website/queue_sample.py @@ -1,19 +1,18 @@ import hazelcast -if __name__ == "__main__": - # Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 - hz = hazelcast.HazelcastClient() - # Get a Blocking Queue called "my-distributed-queue" - queue = hz.get_queue("my-distributed-queue").blocking() - # Offer a String into the Distributed Queue - queue.offer("item") - # Poll the Distributed Queue and return the String - item = queue.poll() - # Timed blocking Operations - queue.offer("anotheritem", 0.5) - another_item = queue.poll(5) - # Indefinitely blocking Operations - queue.put("yetanotheritem") - print(queue.take()) - # Shutdown this Hazelcast Client - hz.shutdown() +# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 +hz = hazelcast.HazelcastClient() +# Get a Blocking Queue called "my-distributed-queue" +queue = hz.get_queue("my-distributed-queue").blocking() +# Offer a String into the Distributed Queue +queue.offer("item") +# Poll the Distributed Queue and return the String +item = queue.poll() +# Timed blocking Operations +queue.offer("anotheritem", 0.5) +another_item = queue.poll(5) +# Indefinitely blocking Operations +queue.put("yetanotheritem") +print(queue.take()) +# Shutdown this Hazelcast Client +hz.shutdown() diff --git a/examples/org-website/replicated_map_sample.py b/examples/org-website/replicated_map_sample.py index 4d15ededff..afac051456 100644 --- a/examples/org-website/replicated_map_sample.py +++ b/examples/org-website/replicated_map_sample.py @@ -1,17 +1,16 @@ import hazelcast -if __name__ == "__main__": - # Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 - hz = hazelcast.HazelcastClient() - # Get a Replicated Map called "my-replicated-map" - map = hz.get_replicated_map("my-replicated-map").blocking() - # Put and Get a value from the Replicated Map - replaced_value = map.put("key", "value") - # key/value replicated to all members - print("replaced value = {}".format(replaced_value)) - # Will be None as its first update - value = map.get("key") - # the value is retrieved from a random member in the cluster - print("value for key = {}".format(value)) - # Shutdown this Hazelcast Client - hz.shutdown() +# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 +hz = hazelcast.HazelcastClient() +# Get a Replicated Map called "my-replicated-map" +map = hz.get_replicated_map("my-replicated-map").blocking() +# Put and Get a value from the Replicated Map +replaced_value = map.put("key", "value") +# key/value replicated to all
members +print("replaced value = {}".format(replaced_value)) +# Will be None as its first update +value = map.get("key") +# the value is retrieved from a random member in the cluster +print("value for key = {}".format(value)) +# Shutdown this Hazelcast Client +hz.shutdown() diff --git a/examples/org-website/ringbuffer_sample.py b/examples/org-website/ringbuffer_sample.py index 6e0cb51493..fcae6ae6de 100644 --- a/examples/org-website/ringbuffer_sample.py +++ b/examples/org-website/ringbuffer_sample.py @@ -1,17 +1,16 @@ import hazelcast -if __name__ == "__main__": - # Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 - hz = hazelcast.HazelcastClient() - rb = hz.get_ringbuffer("rb").blocking() - # add two items into ring buffer - rb.add(100) - rb.add(200) - # we start from the oldest item. - # if you want to start from the next item, call rb.tailSequence()+1 - sequence = rb.head_sequence() - print(rb.read_one(sequence)) - sequence += 1 - print(rb.read_one(sequence)) - # Shutdown this Hazelcast Client - hz.shutdown() +# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 +hz = hazelcast.HazelcastClient() +rb = hz.get_ringbuffer("rb").blocking() +# add two items into ring buffer +rb.add(100) +rb.add(200) +# we start from the oldest item. +# if you want to start from the next item, call rb.tailSequence()+1 +sequence = rb.head_sequence() +print(rb.read_one(sequence)) +sequence += 1 +print(rb.read_one(sequence)) +# Shutdown this Hazelcast Client +hz.shutdown() diff --git a/examples/org-website/set_sample.py b/examples/org-website/set_sample.py index 13334f6ebc..97e80ac5de 100644 --- a/examples/org-website/set_sample.py +++ b/examples/org-website/set_sample.py @@ -1,19 +1,18 @@ import hazelcast -if __name__ == "__main__": - # Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 - hz = hazelcast.HazelcastClient() - # Get the Distributed Set from Cluster. - set = hz.get_set("my-distributed-set").blocking() - # Add items to the set with duplicates - set.add("item1") - set.add("item1") - set.add("item2") - set.add("item2") - set.add("item2") - set.add("item3") - # Get the items. Note that there are no duplicates. - for item in set.get_all(): - print(item) - # Shutdown this Hazelcast Client - hz.shutdown() +# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 +hz = hazelcast.HazelcastClient() +# Get the Distributed Set from Cluster. +set = hz.get_set("my-distributed-set").blocking() +# Add items to the set with duplicates +set.add("item1") +set.add("item1") +set.add("item2") +set.add("item2") +set.add("item2") +set.add("item3") +# Get the items. Note that there are no duplicates. 
+for item in set.get_all(): + print(item) +# Shutdown this Hazelcast Client +hz.shutdown() diff --git a/examples/org-website/topic_sample.py b/examples/org-website/topic_sample.py index 84d9c8d7ea..450b7687c9 100644 --- a/examples/org-website/topic_sample.py +++ b/examples/org-website/topic_sample.py @@ -5,14 +5,13 @@ def print_on_message(topic_message): print("Got message ", topic_message.message) -if __name__ == "__main__": - # Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 - hz = hazelcast.HazelcastClient() - # Get a Topic called "my-distributed-topic" - topic = hz.get_topic("my-distributed-topic") - # Add a Listener to the Topic - topic.add_listener(print_on_message) - # Publish a message to the Topic - topic.publish("Hello to distributed world") - # Shutdown this Hazelcast Client - hz.shutdown() +# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1 +hz = hazelcast.HazelcastClient() +# Get a Topic called "my-distributed-topic" +topic = hz.get_topic("my-distributed-topic") +# Add a Listener to the Topic +topic.add_listener(print_on_message) +# Publish a message to the Topic +topic.publish("Hello to distributed world") +# Shutdown this Hazelcast Client +hz.shutdown() diff --git a/examples/pn-counter/__init__.py b/examples/pn-counter/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/pn-counter/pn_counter_example.py b/examples/pn-counter/pn_counter_example.py index 4ed484b5c2..36eea24e89 100644 --- a/examples/pn-counter/pn_counter_example.py +++ b/examples/pn-counter/pn_counter_example.py @@ -1,20 +1,15 @@ import hazelcast -import logging -if __name__ == "__main__": - logging.basicConfig() - logging.getLogger().setLevel(logging.INFO) +client = hazelcast.HazelcastClient() - client = hazelcast.HazelcastClient() +pn_counter = client.get_pn_counter("pn-counter").blocking() - pn_counter = client.get_pn_counter("pn-counter").blocking() +print("Counter is initialized with {}".format(pn_counter.get())) - print("Counter is initialized with {}".format(pn_counter.get())) +for i in range(10): + print("Added {} to the counter. Current value is {}".format(i, pn_counter.add_and_get(i))) - for i in range(10): - print("Added {} to the counter. Current value is {}".format(i, pn_counter.add_and_get(i))) +print("Incremented the counter after getting the current value. " + "Previous value is {}".format(pn_counter.get_and_increment())) - print("Incremented the counter after getting the current value. 
" - "Previous value is {}".format(pn_counter.get_and_increment())) - - print("Final value is {}".format(pn_counter.get())) +print("Final value is {}".format(pn_counter.get())) diff --git a/examples/queue/__init__.py b/examples/queue/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/queue/queue.py b/examples/queue/queue.py deleted file mode 100644 index f06a557e5d..0000000000 --- a/examples/queue/queue.py +++ /dev/null @@ -1,29 +0,0 @@ -import hazelcast -import threading - -if __name__ == "__main__": - client = hazelcast.HazelcastClient() - - queue = client.get_queue("queue") - - def produce(): - for i in range(100): - queue.offer("value-" + str(i)) - - def consume(): - consumed_count = 0 - while consumed_count < 100: - head = queue.take().result() - print("Consuming {}".format(head)) - consumed_count += 1 - - produce_thread = threading.Thread(target=produce) - consume_thread = threading.Thread(target=consume) - - produce_thread.start() - consume_thread.start() - - produce_thread.join() - consume_thread.join() - - client.shutdown() diff --git a/examples/queue/queue_example.py b/examples/queue/queue_example.py new file mode 100644 index 0000000000..fcbe51e5e5 --- /dev/null +++ b/examples/queue/queue_example.py @@ -0,0 +1,31 @@ +import hazelcast +import threading + +client = hazelcast.HazelcastClient() + +queue = client.get_queue("queue") + + +def produce(): + for i in range(100): + queue.offer("value-" + str(i)) + + +def consume(): + consumed_count = 0 + while consumed_count < 100: + head = queue.take().result() + print("Consuming {}".format(head)) + consumed_count += 1 + + +producer_thread = threading.Thread(target=produce) +consumer_thread = threading.Thread(target=consume) + +producer_thread.start() +consumer_thread.start() + +producer_thread.join() +consumer_thread.join() + +client.shutdown() diff --git a/examples/ring-buffer/__init__.py b/examples/ring-buffer/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/ring-buffer/ring_buffer_example.py b/examples/ring-buffer/ring_buffer_example.py index a2a0173a11..b7c5524e88 100644 --- a/examples/ring-buffer/ring_buffer_example.py +++ b/examples/ring-buffer/ring_buffer_example.py @@ -1,15 +1,14 @@ import hazelcast -if __name__ == "__main__": - client = hazelcast.HazelcastClient() +client = hazelcast.HazelcastClient() - ring_buffer = client.get_ringbuffer("ring-buffer") - print("Capacity of the ring buffer: {}".format(ring_buffer.capacity().result())) +ring_buffer = client.get_ringbuffer("ring-buffer") +print("Capacity of the ring buffer: {}".format(ring_buffer.capacity().result())) - sequence = ring_buffer.add("First item").result() - print("Size: {}".format(ring_buffer.size().result())) +sequence = ring_buffer.add("First item").result() +print("Size: {}".format(ring_buffer.size().result())) - item = ring_buffer.read_one(sequence).result() - print("The item at the sequence {} is {}".format(sequence, item)) +item = ring_buffer.read_one(sequence).result() +print("The item at the sequence {} is {}".format(sequence, item)) - client.shutdown() +client.shutdown() diff --git a/examples/serialization/__init__.py b/examples/serialization/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/serialization/custom_serialization_example.py b/examples/serialization/custom_serialization_example.py index efca6b9140..20cb884e8c 100644 --- a/examples/serialization/custom_serialization_example.py +++ b/examples/serialization/custom_serialization_example.py 
@@ -33,17 +33,16 @@ def destroy(self): pass -if __name__ == "__main__": - config = hazelcast.ClientConfig() - config.serialization_config.set_custom_serializer(type(TimeOfDay), CustomSerializer) +config = hazelcast.ClientConfig() +config.serialization.set_custom_serializer(type(TimeOfDay), CustomSerializer) - client = hazelcast.HazelcastClient(config) +client = hazelcast.HazelcastClient(config) - my_map = client.get_map("map") - time_of_day = TimeOfDay(13, 36, 59) - my_map.put("time", time_of_day) +my_map = client.get_map("map") +time_of_day = TimeOfDay(13, 36, 59) +my_map.put("time", time_of_day) - time = my_map.get("time").result() - print("Time is {}:{}:{}".format(time.hour, time.minute, time.second)) +time = my_map.get("time").result() +print("Time is {}:{}:{}".format(time.hour, time.minute, time.second)) - client.shutdown() +client.shutdown() diff --git a/examples/serialization/global_serialization_example.py b/examples/serialization/global_serialization_example.py index 6f28fdf090..411e4f4a00 100644 --- a/examples/serialization/global_serialization_example.py +++ b/examples/serialization/global_serialization_example.py @@ -34,24 +34,22 @@ def destroy(self): pass -if __name__ == "__main__": - config = hazelcast.ClientConfig() - config.serialization_config.global_serializer = GlobalSerializer +config = hazelcast.ClientConfig() +config.serialization.global_serializer = GlobalSerializer - client = hazelcast.HazelcastClient(config) +client = hazelcast.HazelcastClient(config) - group = ColorGroup(id=1, - name="Reds", - colors=["Crimson", "Red", "Ruby", "Maroon"]) +group = ColorGroup(id=1, name="Reds", + colors=["Crimson", "Red", "Ruby", "Maroon"]) - my_map = client.get_map("map") +my_map = client.get_map("map") - my_map.put("group1", group) +my_map.put("group1", group) - color_group = my_map.get("group1").result() +color_group = my_map.get("group1").result() - print("ID: {}\nName: {}\nColor: {}".format(color_group.id, - color_group.name, - color_group.colors)) +print("ID: {}\nName: {}\nColor: {}".format(color_group.id, + color_group.name, + color_group.colors)) - client.shutdown() +client.shutdown() diff --git a/examples/serialization/identified_data_serializable_example.py b/examples/serialization/identified_data_serializable_example.py index 0839ca093d..2b16b48ec5 100644 --- a/examples/serialization/identified_data_serializable_example.py +++ b/examples/serialization/identified_data_serializable_example.py @@ -29,23 +29,22 @@ def get_class_id(self): return self.CLASS_ID -if __name__ == "__main__": - config = hazelcast.ClientConfig() - factory = {Student.CLASS_ID: Student} - config.serialization_config.add_data_serializable_factory(Student.FACTORY_ID, factory) +config = hazelcast.ClientConfig() +factory = {Student.CLASS_ID: Student} +config.serialization.add_data_serializable_factory(Student.FACTORY_ID, factory) - client = hazelcast.HazelcastClient(config) +client = hazelcast.HazelcastClient(config) - my_map = client.get_map("map") +my_map = client.get_map("map") - student = Student(1, "John Doe", 3.0) +student = Student(1, "John Doe", 3.0) - my_map.put("student1", student) +my_map.put("student1", student) - returned_student = my_map.get("student1").result() +returned_student = my_map.get("student1").result() - print("ID: {}\nName: {}\nGPA: {}".format(returned_student.id, - returned_student.name, - returned_student.gpa)) +print("ID: {}\nName: {}\nGPA: {}".format(returned_student.id, + returned_student.name, + returned_student.gpa)) - client.shutdown() +client.shutdown() diff --git 
a/examples/serialization/portable_example.py b/examples/serialization/portable_example.py index 9fdabe314f..2cb9c93c05 100644 --- a/examples/serialization/portable_example.py +++ b/examples/serialization/portable_example.py @@ -29,23 +29,22 @@ def get_class_id(self): return self.CLASS_ID -if __name__ == "__main__": - config = hazelcast.ClientConfig() - factory = {Engineer.CLASS_ID: Engineer} - config.serialization_config.add_portable_factory(Engineer.FACTORY_ID, factory) +config = hazelcast.ClientConfig() +factory = {Engineer.CLASS_ID: Engineer} +config.serialization.add_portable_factory(Engineer.FACTORY_ID, factory) - client = hazelcast.HazelcastClient(config) +client = hazelcast.HazelcastClient(config) - my_map = client.get_map("map") +my_map = client.get_map("map") - engineer = Engineer("John Doe", 30, ["Python", "Java", "C#", "C++", "Node.js", "Go"]) +engineer = Engineer("John Doe", 30, ["Python", "Java", "C#", "C++", "Node.js", "Go"]) - my_map.put("engineer1", engineer) +my_map.put("engineer1", engineer) - returned_engineer = my_map.get("engineer1").result() +returned_engineer = my_map.get("engineer1").result() - print("Name: {}\nAge: {}\nLanguages: {}".format(returned_engineer.name, - returned_engineer.age, - returned_engineer.languages)) +print("Name: {}\nAge: {}\nLanguages: {}".format(returned_engineer.name, + returned_engineer.age, + returned_engineer.languages)) - client.shutdown() +client.shutdown() diff --git a/examples/set/__init__.py b/examples/set/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/set/set_example.py b/examples/set/set_example.py index 656a26126c..daa3a2f1e6 100644 --- a/examples/set/set_example.py +++ b/examples/set/set_example.py @@ -1,22 +1,21 @@ import hazelcast -if __name__ == "__main__": - client = hazelcast.HazelcastClient() +client = hazelcast.HazelcastClient() - my_set = client.get_set("set") +my_set = client.get_set("set") - my_set.add("Item1") - my_set.add("Item1") - my_set.add("Item2") +my_set.add("Item1") +my_set.add("Item1") +my_set.add("Item2") - found = my_set.contains("Item2").result() - print("Set contains Item2: {}".format(found)) +found = my_set.contains("Item2").result() +print("Set contains Item2: {}".format(found)) - items = my_set.get_all().result() - print("Size of set: {}".format(len(items))) +items = my_set.get_all().result() +print("Size of set: {}".format(len(items))) - print("\nAll Items:") - for item in items: - print(item) +print("\nAll Items:") +for item in items: + print(item) - client.shutdown() +client.shutdown() diff --git a/examples/ssl/__init__.py b/examples/ssl/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/ssl/ssl_example.py b/examples/ssl/ssl_example.py index 5f20924d5d..3f7b4ddb4f 100644 --- a/examples/ssl/ssl_example.py +++ b/examples/ssl/ssl_example.py @@ -3,29 +3,28 @@ from hazelcast.config import PROTOCOL # Hazelcast server should be started with SSL enabled to use SSLConfig -if __name__ == "__main__": - config = hazelcast.ClientConfig() +config = hazelcast.ClientConfig() - # SSL Config - ssl_config = hazelcast.SSLConfig() - ssl_config.enabled = True +# SSL Config +ssl_config = hazelcast.SSLConfig() +ssl_config.enabled = True - # Absolute path of PEM file should be given - ssl_config.cafile = os.path.abspath("server.pem") +# Absolute path of PEM file should be given +ssl_config.cafile = os.path.abspath("server.pem") - # Select the protocol used in SSL communication. This step is optional. 
Default is TLSv1_2 - ssl_config.protocol = PROTOCOL.TLSv1_3 +# Select the protocol used in SSL communication. This step is optional. Default is TLSv1_2 +ssl_config.protocol = PROTOCOL.TLSv1_3 - config.network_config.ssl_config = ssl_config +config.network.ssl = ssl_config - config.network_config.addresses.append("foo.bar.com:8888") +config.network.addresses.append("foo.bar.com:8888") - # Start a new Hazelcast client with SSL configuration. - client = hazelcast.HazelcastClient(config) +# Start a new Hazelcast client with SSL configuration. +client = hazelcast.HazelcastClient(config) - hz_map = client.get_map("ssl-map") - hz_map.put("key", "value") +hz_map = client.get_map("ssl-map") +hz_map.put("key", "value") - print(hz_map.get("key").result()) +print(hz_map.get("key").result()) - client.shutdown() +client.shutdown() diff --git a/examples/ssl/ssl_mutual_authentication_example.py b/examples/ssl/ssl_mutual_authentication_example.py index 255a53e7f8..5c91ddc8be 100644 --- a/examples/ssl/ssl_mutual_authentication_example.py +++ b/examples/ssl/ssl_mutual_authentication_example.py @@ -4,36 +4,35 @@ # To use SSLConfig with mutual authentication, Hazelcast server should be started with # SSL and mutual authentication enabled -if __name__ == "__main__": - config = hazelcast.ClientConfig() +config = hazelcast.ClientConfig() - # SSL Config - ssl_config = hazelcast.SSLConfig() - ssl_config.enabled = True +# SSL Config +ssl_config = hazelcast.SSLConfig() +ssl_config.enabled = True - # Absolute path of PEM files should be given - ssl_config.cafile = os.path.abspath("server.pem") +# Absolute path of PEM files should be given +ssl_config.cafile = os.path.abspath("server.pem") - # To use mutual authentication client certificate and private key should be provided - ssl_config.certfile = os.path.abspath("client.pem") - ssl_config.keyfile = os.path.abspath("client-key.pem") +# To use mutual authentication client certificate and private key should be provided +ssl_config.certfile = os.path.abspath("client.pem") +ssl_config.keyfile = os.path.abspath("client-key.pem") - # If private key file is encrypted, password is required to decrypt it - ssl_config.password = "key-file-password" +# If private key file is encrypted, password is required to decrypt it +ssl_config.password = "key-file-password" - # Select the protocol used in SSL communication. This step is optional. Default is TLSv1_2 - ssl_config.protocol = PROTOCOL.TLSv1_3 +# Select the protocol used in SSL communication. This step is optional. Default is TLSv1_2 +ssl_config.protocol = PROTOCOL.TLSv1_3 - config.network_config.ssl_config = ssl_config +config.network.ssl = ssl_config - config.network_config.addresses.append("foo.bar.com:8888") +config.network.addresses.append("foo.bar.com:8888") - # Start a new Hazelcast client with SSL configuration. - client = hazelcast.HazelcastClient(config) +# Start a new Hazelcast client with SSL configuration. 
+client = hazelcast.HazelcastClient(config) - hz_map = client.get_map("ssl-map") - hz_map.put("key", "value") +hz_map = client.get_map("ssl-map") +hz_map.put("key", "value") - print(hz_map.get("key").result()) +print(hz_map.get("key").result()) - client.shutdown() +client.shutdown() diff --git a/examples/topic/__init__.py b/examples/topic/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/topic/topic_example.py b/examples/topic/topic_example.py index 669f146bbf..4290ad1e78 100644 --- a/examples/topic/topic_example.py +++ b/examples/topic/topic_example.py @@ -7,14 +7,13 @@ def on_message(event): print("Publish time: {}\n".format(event.publish_time)) -if __name__ == "__main__": - client = hazelcast.HazelcastClient() +client = hazelcast.HazelcastClient() - topic = client.get_topic("topic") - topic.add_listener(on_message) +topic = client.get_topic("topic") +topic.add_listener(on_message) - for i in range(10): - topic.publish("Message " + str(i)) - time.sleep(0.1) +for i in range(10): + topic.publish("Message " + str(i)) + time.sleep(0.1) - client.shutdown() +client.shutdown() diff --git a/examples/transactions/__init__.py b/examples/transactions/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/examples/transactions/transaction_basic_example.py b/examples/transactions/transaction_basic_example.py index 52fa4c5d77..af045e69ac 100644 --- a/examples/transactions/transaction_basic_example.py +++ b/examples/transactions/transaction_basic_example.py @@ -1,19 +1,18 @@ import hazelcast import time -if __name__ == "__main__": - client = hazelcast.HazelcastClient() - transaction = client.new_transaction(timeout=10) - try: - transaction.begin() - transactional_map = transaction.get_map("transaction-map") - print("Map: {}".format(transactional_map)) +client = hazelcast.HazelcastClient() +transaction = client.new_transaction(timeout=10) +try: + transaction.begin() + transactional_map = transaction.get_map("transaction-map") + print("Map: {}".format(transactional_map)) - transactional_map.put("1", "1") - time.sleep(0.1) - transactional_map.put("2", "2") + transactional_map.put("1", "1") + time.sleep(0.1) + transactional_map.put("2", "2") - transaction.commit() - except Exception as ex: - transaction.rollback() - print("Transaction failed! {}".format(ex.args)) + transaction.commit() +except Exception as ex: + transaction.rollback() + print("Transaction failed! 
{}".format(ex.args)) diff --git a/hazelcast/__init__.py b/hazelcast/__init__.py index 7a6615a1de..923daf1629 100644 --- a/hazelcast/__init__.py +++ b/hazelcast/__init__.py @@ -1,5 +1,5 @@ from hazelcast.client import HazelcastClient -from hazelcast.config import ClientConfig, ClientNetworkConfig, SerializationConfig, GroupConfig, SSLConfig, \ +from hazelcast.config import ClientConfig, ClientNetworkConfig, SerializationConfig, SSLConfig, \ ClientCloudConfig, FlakeIdGeneratorConfig from hazelcast.version import CLIENT_VERSION_INFO as __version_info__ from hazelcast.version import CLIENT_VERSION as __version__ diff --git a/hazelcast/client.py b/hazelcast/client.py index d7a0cff47c..2e10bf85c8 100644 --- a/hazelcast/client.py +++ b/hazelcast/client.py @@ -2,106 +2,112 @@ import logging.config import sys import json +import threading -from hazelcast.cluster import ClusterService, RandomLoadBalancer +from hazelcast.cluster import ClusterService, RoundRobinLB, _InternalClusterService from hazelcast.config import ClientConfig, ClientProperties -from hazelcast.connection import ConnectionManager, Heartbeat, DefaultAddressProvider, DefaultAddressTranslator -from hazelcast.core import DistributedObjectInfo -from hazelcast.invocation import InvocationService -from hazelcast.listener import ListenerService -from hazelcast.lifecycle import LifecycleService, LIFECYCLE_STATE_SHUTTING_DOWN, LIFECYCLE_STATE_SHUTDOWN -from hazelcast.partition import PartitionService -from hazelcast.protocol.codec import client_get_distributed_objects_codec +from hazelcast.connection import ConnectionManager, DefaultAddressProvider +from hazelcast.core import DistributedObjectInfo, DistributedObjectEvent +from hazelcast.invocation import InvocationService, Invocation +from hazelcast.listener import ListenerService, ClusterViewListenerService +from hazelcast.lifecycle import LifecycleService, LifecycleState, _InternalLifecycleService +from hazelcast.partition import PartitionService, _InternalPartitionService +from hazelcast.protocol.codec import client_get_distributed_objects_codec, \ + client_add_distributed_object_listener_codec, client_remove_distributed_object_listener_codec from hazelcast.proxy import ProxyManager, MAP_SERVICE, QUEUE_SERVICE, LIST_SERVICE, SET_SERVICE, MULTI_MAP_SERVICE, \ - REPLICATED_MAP_SERVICE, ATOMIC_LONG_SERVICE, ATOMIC_REFERENCE_SERVICE, RINGBUFFER_SERVICE, COUNT_DOWN_LATCH_SERVICE, \ - TOPIC_SERVICE, RELIABLE_TOPIC_SERVICE, SEMAPHORE_SERVICE, LOCK_SERVICE, ID_GENERATOR_SERVICE, \ - ID_GENERATOR_ATOMIC_LONG_PREFIX, EXECUTOR_SERVICE, PN_COUNTER_SERVICE, FLAKE_ID_GENERATOR_SERVICE + REPLICATED_MAP_SERVICE, RINGBUFFER_SERVICE, \ + TOPIC_SERVICE, RELIABLE_TOPIC_SERVICE, \ + EXECUTOR_SERVICE, PN_COUNTER_SERVICE, FLAKE_ID_GENERATOR_SERVICE from hazelcast.near_cache import NearCacheManager from hazelcast.reactor import AsyncoreReactor from hazelcast.serialization import SerializationServiceV1 from hazelcast.statistics import Statistics from hazelcast.transaction import TWO_PHASE, TransactionManager from hazelcast.util import AtomicInteger, DEFAULT_LOGGING -from hazelcast.discovery import HazelcastCloudAddressProvider, HazelcastCloudAddressTranslator, HazelcastCloudDiscovery -from hazelcast.exception import HazelcastIllegalStateError +from hazelcast.discovery import HazelcastCloudAddressProvider, HazelcastCloudDiscovery +from hazelcast.errors import IllegalStateError class HazelcastClient(object): """ Hazelcast Client. 
""" - CLIENT_ID = AtomicInteger() - _config = None + _CLIENT_ID = AtomicInteger() logger = logging.getLogger("HazelcastClient") def __init__(self, config=None): + self._context = _ClientContext() self.config = config or ClientConfig() self.properties = ClientProperties(self.config.get_properties()) - self.id = HazelcastClient.CLIENT_ID.get_and_increment() + self._id = HazelcastClient._CLIENT_ID.get_and_increment() self.name = self._create_client_name() self._init_logger() - self._logger_extras = {"client_name": self.name, "group_name": self.config.group_config.name} - self._log_group_password_info() - self.lifecycle = LifecycleService(self.config, self._logger_extras) - self.reactor = AsyncoreReactor(self._logger_extras) - self._address_providers = self._create_address_providers() - self._address_translator = self._create_address_translator() - self.connection_manager = ConnectionManager(self, self.reactor.new_connection, self._address_translator) - self.heartbeat = Heartbeat(self) - self.invoker = InvocationService(self) - self.listener = ListenerService(self) - self.cluster = ClusterService(self.config, self, self._address_providers) - self.partition_service = PartitionService(self) - self.proxy = ProxyManager(self) - self.load_balancer = RandomLoadBalancer(self.cluster) - self.serialization_service = SerializationServiceV1(serialization_config=self.config.serialization_config, properties=self.properties) - self.transaction_manager = TransactionManager(self) - self.lock_reference_id_generator = AtomicInteger(1) - self.near_cache_manager = NearCacheManager(self) - self.statistics = Statistics(self) + self._logger_extras = {"client_name": self.name, "cluster_name": self.config.cluster_name} + self._reactor = AsyncoreReactor(self._logger_extras) + self._serialization_service = SerializationServiceV1(serialization_config=self.config.serialization) + self._near_cache_manager = NearCacheManager(self, self._serialization_service) + self._internal_lifecycle_service = _InternalLifecycleService(self, self._logger_extras) + self.lifecycle_service = LifecycleService(self._internal_lifecycle_service) + self._invocation_service = InvocationService(self, self._reactor, self._logger_extras) + self._address_provider = self._create_address_provider() + self._internal_partition_service = _InternalPartitionService(self, self._logger_extras) + self.partition_service = PartitionService(self._internal_partition_service) + self._internal_cluster_service = _InternalClusterService(self, self._logger_extras) + self.cluster_service = ClusterService(self._internal_cluster_service) + self._connection_manager = ConnectionManager(self, self._reactor, self._address_provider, + self._internal_lifecycle_service, + self._internal_partition_service, + self._internal_cluster_service, + self._invocation_service, + self._near_cache_manager, + self._logger_extras) + self._load_balancer = self._init_load_balancer(self.config) + self._listener_service = ListenerService(self, self._connection_manager, + self._invocation_service, + self._logger_extras) + self._proxy_manager = ProxyManager(self._context) + self._transaction_manager = TransactionManager(self._context, self._logger_extras) + self._lock_reference_id_generator = AtomicInteger(1) + self._statistics = Statistics(self, self._reactor, self._connection_manager, + self._invocation_service, self._near_cache_manager, + self._logger_extras) + self._cluster_view_listener = ClusterViewListenerService(self, self._connection_manager, + self._internal_partition_service, + 
self._internal_cluster_service, + self._invocation_service) + self._shutdown_lock = threading.RLock() + self._init_context() self._start() + def _init_context(self): + self._context.init_context(self.config, self._invocation_service, self._internal_partition_service, + self._internal_cluster_service, self._connection_manager, + self._serialization_service, self._listener_service, self._proxy_manager, + self._near_cache_manager, self._lock_reference_id_generator, self._logger_extras) + def _start(self): - self.reactor.start() + self._reactor.start() try: - self.invoker.start() - self.cluster.start() - self.heartbeat.start() - self.listener.start() - self.partition_service.start() - self.statistics.start() + self._internal_lifecycle_service.start() + self._invocation_service.start(self._internal_partition_service, self._connection_manager, + self._listener_service) + self._load_balancer.init(self.cluster_service, self.config) + membership_listeners = self.config.membership_listeners + self._internal_cluster_service.start(self._connection_manager, membership_listeners) + self._cluster_view_listener.start() + self._connection_manager.start(self._load_balancer) + connection_strategy = self.config.connection_strategy + if not connection_strategy.async_start: + self._internal_cluster_service.wait_initial_member_list_fetched() + self._connection_manager.connect_to_all_cluster_members() + + self._listener_service.start() + self._statistics.start() except: - self.reactor.shutdown() + self.shutdown() raise self.logger.info("Client started.", extra=self._logger_extras) - def get_atomic_long(self, name): - """ - Creates cluster-wide :class:`~hazelcast.proxy.atomic_long.AtomicLong`. - - :param name: (str), name of the AtomicLong proxy. - :return: (:class:`~hazelcast.proxy.atomic_long.AtomicLong`), AtomicLong proxy for the given name. - """ - return self.proxy.get_or_create(ATOMIC_LONG_SERVICE, name) - - def get_atomic_reference(self, name): - """ - Creates cluster-wide :class:`~hazelcast.proxy.atomic_reference.AtomicReference`. - - :param name: (str), name of the AtomicReference proxy. - :return: (:class:`~hazelcast.proxy.atomic_reference.AtomicReference`), AtomicReference proxy for the given name. - """ - return self.proxy.get_or_create(ATOMIC_REFERENCE_SERVICE, name) - - def get_count_down_latch(self, name): - """ - Creates cluster-wide :class:`~hazelcast.proxy.count_down_latch.CountDownLatch`. - - :param name: (str), name of the CountDownLatch proxy. - :return: (:class:`~hazelcast.proxy.count_down_latch.CountDownLatch`), CountDownLatch proxy for the given name. - """ - return self.proxy.get_or_create(COUNT_DOWN_LATCH_SERVICE, name) - def get_executor(self, name): """ Creates cluster-wide :class:`~hazelcast.proxy.executor.Executor`. @@ -109,7 +115,7 @@ def get_executor(self, name): :param name: (str), name of the Executor proxy. :return: (:class:`~hazelcast.proxy.executor.Executor`), Executor proxy for the given name. """ - return self.proxy.get_or_create(EXECUTOR_SERVICE, name) + return self._proxy_manager.get_or_create(EXECUTOR_SERVICE, name) def get_flake_id_generator(self, name): """ @@ -118,17 +124,7 @@ def get_flake_id_generator(self, name): :param name: (str), name of the FlakeIdGenerator proxy. 
:return: (:class:`~hazelcast.proxy.flake_id_generator.FlakeIdGenerator`), FlakeIdGenerator proxy for the given name """ - return self.proxy.get_or_create(FLAKE_ID_GENERATOR_SERVICE, name) - - def get_id_generator(self, name): - """ - Creates cluster-wide :class:`~hazelcast.proxy.id_generator.IdGenerator`. - - :param name: (str), name of the IdGenerator proxy. - :return: (:class:`~hazelcast.proxy.id_generator.IdGenerator`), IdGenerator proxy for the given name. - """ - atomic_long = self.get_atomic_long(ID_GENERATOR_ATOMIC_LONG_PREFIX + name) - return self.proxy.get_or_create(ID_GENERATOR_SERVICE, name, atomic_long=atomic_long) + return self._proxy_manager.get_or_create(FLAKE_ID_GENERATOR_SERVICE, name) def get_queue(self, name): """ @@ -137,7 +133,7 @@ def get_queue(self, name): :param name: (str), name of the distributed queue. :return: (:class:`~hazelcast.proxy.queue.Queue`), distributed queue instance with the specified name. """ - return self.proxy.get_or_create(QUEUE_SERVICE, name) + return self._proxy_manager.get_or_create(QUEUE_SERVICE, name) def get_list(self, name): """ @@ -146,16 +142,7 @@ def get_list(self, name): :param name: (str), name of the distributed list. :return: (:class:`~hazelcast.proxy.list.List`), distributed list instance with the specified name. """ - return self.proxy.get_or_create(LIST_SERVICE, name) - - def get_lock(self, name): - """ - Returns the distributed lock instance with the specified name. - - :param name: (str), name of the distributed lock. - :return: (:class:`~hazelcast.proxy.lock.Lock`), distributed lock instance with the specified name. - """ - return self.proxy.get_or_create(LOCK_SERVICE, name) + return self._proxy_manager.get_or_create(LIST_SERVICE, name) def get_map(self, name): """ @@ -164,7 +151,7 @@ def get_map(self, name): :param name: (str), name of the distributed map. :return: (:class:`~hazelcast.proxy.map.Map`), distributed map instance with the specified name. """ - return self.proxy.get_or_create(MAP_SERVICE, name) + return self._proxy_manager.get_or_create(MAP_SERVICE, name) def get_multi_map(self, name): """ @@ -173,7 +160,7 @@ def get_multi_map(self, name): :param name: (str), name of the distributed MultiMap. :return: (:class:`~hazelcast.proxy.multi_map.MultiMap`), distributed MultiMap instance with the specified name. """ - return self.proxy.get_or_create(MULTI_MAP_SERVICE, name) + return self._proxy_manager.get_or_create(MULTI_MAP_SERVICE, name) def get_pn_counter(self, name): """ @@ -182,7 +169,7 @@ def get_pn_counter(self, name): :param name: (str), name of the PN Counter. :return: (:class:`~hazelcast.proxy.pn_counter.PNCounter`), the PN Counter. """ - return self.proxy.get_or_create(PN_COUNTER_SERVICE, name) + return self._proxy_manager.get_or_create(PN_COUNTER_SERVICE, name) def get_reliable_topic(self, name): """ @@ -191,7 +178,7 @@ def get_reliable_topic(self, name): :param name: (str), name of the ReliableTopic. :return: (:class:`~hazelcast.proxy.reliable_topic.ReliableTopic`), the ReliableTopic. """ - return self.proxy.get_or_create(RELIABLE_TOPIC_SERVICE, name) + return self._proxy_manager.get_or_create(RELIABLE_TOPIC_SERVICE, name) def get_replicated_map(self, name): """ @@ -200,7 +187,7 @@ def get_replicated_map(self, name): :param name: (str), name of the distributed ReplicatedMap. :return: (:class:`~hazelcast.proxy.replicated_map.ReplicatedMap`), distributed ReplicatedMap instance with the specified name. 
""" - return self.proxy.get_or_create(REPLICATED_MAP_SERVICE, name) + return self._proxy_manager.get_or_create(REPLICATED_MAP_SERVICE, name) def get_ringbuffer(self, name): """ @@ -210,16 +197,7 @@ def get_ringbuffer(self, name): :return: (:class:`~hazelcast.proxy.ringbuffer.RingBuffer`), distributed RingBuffer instance with the specified name. """ - return self.proxy.get_or_create(RINGBUFFER_SERVICE, name) - - def get_semaphore(self, name): - """ - Returns the distributed Semaphore instance with the specified name. - - :param name: (str), name of the distributed Semaphore. - :return: (:class:`~hazelcast.proxy.semaphore.Semaphore`), distributed Semaphore instance with the specified name. - """ - return self.proxy.get_or_create(SEMAPHORE_SERVICE, name) + return self._proxy_manager.get_or_create(RINGBUFFER_SERVICE, name) def get_set(self, name): """ @@ -228,7 +206,7 @@ def get_set(self, name): :param name: (str), name of the distributed Set. :return: (:class:`~hazelcast.proxy.set.Set`), distributed Set instance with the specified name. """ - return self.proxy.get_or_create(SET_SERVICE, name) + return self._proxy_manager.get_or_create(SET_SERVICE, name) def get_topic(self, name): """ @@ -237,7 +215,7 @@ def get_topic(self, name): :param name: (str), name of the Topic. :return: (:class:`~hazelcast.proxy.topic.Topic`), the Topic. """ - return self.proxy.get_or_create(TOPIC_SERVICE, name) + return self._proxy_manager.get_or_create(TOPIC_SERVICE, name) def new_transaction(self, timeout=120, durability=1, type=TWO_PHASE): """ @@ -251,7 +229,7 @@ def new_transaction(self, timeout=120, durability=1, type=TWO_PHASE): :param type: (Transaction Type), the transaction type which can be :const:`~hazelcast.transaction.TWO_PHASE` or :const:`~hazelcast.transaction.ONE_PHASE` :return: (:class:`~hazelcast.transaction.Transaction`), new Transaction associated with the current thread. """ - return self.transaction_manager.new_transaction(timeout, durability, type) + return self._transaction_manager.new_transaction(timeout, durability, type) def add_distributed_object_listener(self, listener_func): """ @@ -260,7 +238,24 @@ def add_distributed_object_listener(self, listener_func): :param listener_func: Function to be called when a distributed object is created or destroyed. :return: (str), a registration id which is used as a key to remove the listener. """ - return self.proxy.add_distributed_object_listener(listener_func) + is_smart = self.config.network.smart_routing + request = client_add_distributed_object_listener_codec.encode_request(is_smart) + + def handle_distributed_object_event(name, service_name, event_type, source): + event = DistributedObjectEvent(name, service_name, event_type, source) + listener_func(event) + + def event_handler(client_message): + return client_add_distributed_object_listener_codec.handle(client_message, handle_distributed_object_event) + + def decode_add_listener(response): + return client_add_distributed_object_listener_codec.decode_response(response) + + def encode_remove_listener(registration_id): + return client_remove_distributed_object_listener_codec.encode_request(registration_id) + + return self._listener_service.register_listener(request, decode_add_listener, + encode_remove_listener, event_handler) def remove_distributed_object_listener(self, registration_id): """ @@ -268,7 +263,7 @@ def remove_distributed_object_listener(self, registration_id): :param registration_id: (str), id of registered listener. 
:return: (bool), ``true`` if registration is removed, ``false`` otherwise. """ - return self.proxy.remove_distributed_object_listener(registration_id) + return self._listener_service.deregister_listener(registration_id) def get_distributed_objects(self): """ @@ -277,50 +272,54 @@ def get_distributed_objects(self): :return:(Sequence), List of instances created by Hazelcast. """ request = client_get_distributed_objects_codec.encode_request() - to_object = self.serialization_service.to_object - future = self.invoker.invoke_on_random_target(request) - response = client_get_distributed_objects_codec.decode_response(future.result(), to_object)["response"] + invocation = Invocation(request, response_handler=lambda m: m) + self._invocation_service.invoke(invocation) + response = client_get_distributed_objects_codec.decode_response(invocation.future.result()) - distributed_objects = self.proxy.get_distributed_objects() + distributed_objects = self._proxy_manager.get_distributed_objects() local_distributed_object_infos = set() for dist_obj in distributed_objects: - local_distributed_object_infos.add(DistributedObjectInfo(dist_obj.name, dist_obj.service_name)) + local_distributed_object_infos.add(DistributedObjectInfo(dist_obj.service_name, dist_obj.name)) for dist_obj_info in response: local_distributed_object_infos.discard(dist_obj_info) - self.proxy.get_or_create(dist_obj_info.service_name, dist_obj_info.name, create_on_remote=False) + self._proxy_manager.get_or_create(dist_obj_info.service_name, dist_obj_info.name, create_on_remote=False) for dist_obj_info in local_distributed_object_infos: - self.proxy.destroy_proxy(dist_obj_info.service_name, dist_obj_info.name, destroy_on_remote=False) + self._proxy_manager.destroy_proxy(dist_obj_info.service_name, dist_obj_info.name, destroy_on_remote=False) - return self.proxy.get_distributed_objects() + return self._proxy_manager.get_distributed_objects() def shutdown(self): """ Shuts down this HazelcastClient. """ - if self.lifecycle.is_live: - self.lifecycle.fire_lifecycle_event(LIFECYCLE_STATE_SHUTTING_DOWN) - self.near_cache_manager.destroy_all_near_caches() - self.statistics.shutdown() - self.partition_service.shutdown() - self.heartbeat.shutdown() - self.cluster.shutdown() - self.reactor.shutdown() - self.lifecycle.fire_lifecycle_event(LIFECYCLE_STATE_SHUTDOWN) - self.logger.info("Client shutdown.", extra=self._logger_extras) - - def _create_address_providers(self): - network_config = self.config.network_config - address_providers = [] - - cloud_config = network_config.cloud_config + with self._shutdown_lock: + if self._internal_lifecycle_service.running: + self._internal_lifecycle_service.fire_lifecycle_event(LifecycleState.SHUTTING_DOWN) + self._internal_lifecycle_service.shutdown() + self._near_cache_manager.destroy_near_caches() + self._connection_manager.shutdown() + self._invocation_service.shutdown() + self._statistics.shutdown() + self._reactor.shutdown() + self._internal_lifecycle_service.fire_lifecycle_event(LifecycleState.SHUTDOWN) + + def _create_address_provider(self): + network_config = self.config.network + address_list_provided = len(network_config.addresses) != 0 + cloud_config = network_config.cloud + cloud_enabled = cloud_config.enabled or cloud_config.discovery_token != "" + if address_list_provided and cloud_enabled: + raise IllegalStateError("Only one discovery method can be enabled at a time. 
" + "Cluster members given explicitly: %s, Hazelcast Cloud enabled: %s" + % (address_list_provided, cloud_enabled)) + cloud_address_provider = self._init_cloud_address_provider(cloud_config) if cloud_address_provider: - address_providers.append(cloud_address_provider) + return cloud_address_provider - address_providers.append(DefaultAddressProvider(network_config)) - return address_providers + return DefaultAddressProvider(network_config.addresses) def _init_cloud_address_provider(self, cloud_config): if cloud_config.enabled: @@ -335,55 +334,18 @@ def _init_cloud_address_provider(self, cloud_config): return None - def _create_address_translator(self): - network_config = self.config.network_config - cloud_config = network_config.cloud_config - cloud_discovery_token = self.properties.get(self.properties.HAZELCAST_CLOUD_DISCOVERY_TOKEN) - - address_list_provided = len(network_config.addresses) != 0 - if cloud_discovery_token != "" and cloud_config.enabled: - raise HazelcastIllegalStateError("Ambiguous Hazelcast.cloud configuration. " - "Both property based and client configuration based settings are provided " - "for Hazelcast cloud discovery together. Use only one.") - - hazelcast_cloud_enabled = cloud_discovery_token != "" or cloud_config.enabled - self._is_discovery_configuration_consistent(address_list_provided, hazelcast_cloud_enabled) - - if hazelcast_cloud_enabled: - if cloud_config.enabled: - discovery_token = cloud_config.discovery_token - else: - discovery_token = cloud_discovery_token - host, url = HazelcastCloudDiscovery.get_host_and_url(self.config.get_properties(), discovery_token) - return HazelcastCloudAddressTranslator(host, url, self._get_connection_timeout(), self._logger_extras) - - return DefaultAddressTranslator() - def _get_connection_timeout(self): - network_config = self.config.network_config + network_config = self.config.network conn_timeout = network_config.connection_timeout return sys.maxsize if conn_timeout == 0 else conn_timeout - def _is_discovery_configuration_consistent(self, address_list_provided, hazelcast_cloud_enabled): - count = 0 - if address_list_provided: - count += 1 - if hazelcast_cloud_enabled: - count += 1 - - if count > 1: - raise HazelcastIllegalStateError("Only one discovery method can be enabled at a time. " - "Cluster members given explicitly: {}" - ", Hazelcast.cloud enabled: {}".format(address_list_provided, - hazelcast_cloud_enabled)) - def _create_client_name(self): if self.config.client_name: return self.config.client_name - return "hz.client_" + str(self.id) + return "hz.client_" + str(self._id) def _init_logger(self): - logger_config = self.config.logger_config + logger_config = self.config.logger if logger_config.config_file is not None: with open(logger_config.config_file, "r") as f: json_config = json.loads(f.read()) @@ -392,10 +354,44 @@ def _init_logger(self): logging.config.dictConfig(DEFAULT_LOGGING) self.logger.setLevel(logger_config.level) - def _log_group_password_info(self): - if self.config.group_config.password: - self.logger.info("A non-empty group password is configured for the Hazelcast client. " - "Starting with Hazelcast IMDG version 3.11, clients with the same group name, " - "but with different group passwords (that do not use authentication) will be " - "accepted to a cluster. 
The group password configuration will be removed " - "completely in a future release.", extra=self._logger_extras) + @staticmethod + def _init_load_balancer(config): + load_balancer = config.load_balancer + if not load_balancer: + load_balancer = RoundRobinLB() + return load_balancer + + +class _ClientContext(object): + """ + Context holding all the required services, managers and the configuration for a Hazelcast client. + """ + + def __init__(self): + self.config = None + self.invocation_service = None + self.partition_service = None + self.cluster_service = None + self.connection_manager = None + self.serialization_service = None + self.listener_service = None + self.proxy_manager = None + self.near_cache_manager = None + self.lock_reference_id_generator = None + self.logger_extras = None + + def init_context(self, config, invocation_service, partition_service, + cluster_service, connection_manager, serialization_service, + listener_service, proxy_manager, near_cache_manager, + lock_reference_id_generator, logger_extras): + self.config = config + self.invocation_service = invocation_service + self.partition_service = partition_service + self.cluster_service = cluster_service + self.connection_manager = connection_manager + self.serialization_service = serialization_service + self.listener_service = listener_service + self.proxy_manager = proxy_manager + self.near_cache_manager = near_cache_manager + self.lock_reference_id_generator = lock_reference_id_generator + self.logger_extras = logger_extras diff --git a/hazelcast/cluster.py b/hazelcast/cluster.py index cb95b9b0a7..daff7de985 100644 --- a/hazelcast/cluster.py +++ b/hazelcast/cluster.py @@ -1,55 +1,143 @@ import logging import random import threading -import time import uuid +from collections import OrderedDict -from hazelcast.exception import HazelcastError, AuthenticationError, TargetDisconnectedError -from hazelcast.lifecycle import LIFECYCLE_STATE_CONNECTED, LIFECYCLE_STATE_DISCONNECTED -from hazelcast.protocol.codec import client_add_membership_listener_codec, client_authentication_codec -from hazelcast.util import get_possible_addresses, get_provider_addresses, calculate_version -from hazelcast.version import CLIENT_TYPE, CLIENT_VERSION, SERIALIZATION_VERSION +from hazelcast import six +from hazelcast.errors import TargetDisconnectedError, IllegalStateError +from hazelcast.util import check_not_none -# Membership Event Types -MEMBER_ADDED = 1 -MEMBER_REMOVED = 2 + +class _MemberListSnapshot(object): + __slots__ = ("version", "members") + + def __init__(self, version, members): + self.version = version + self.members = members + + +class ClientInfo(object): + """ + Local information of the client. + """ + + __slots__ = ("uuid", "address", "name", "labels") + + def __init__(self, client_uuid, address, name, labels): + self.uuid = client_uuid + """Unique id of this client instance.""" + + self.address = address + """Local address that is used to communicate with cluster.""" + + self.name = name + """Name of the client.""" + + self.labels = labels + """Read-only set of all labels of this client.""" + + def __repr__(self): + return "ClientInfo(uuid=%s, address=%s, name=%s, labels=%s)" % (self.uuid, self.address, self.name, self.labels) + + +_EMPTY_SNAPSHOT = _MemberListSnapshot(-1, OrderedDict()) +_INITIAL_MEMBERS_TIMEOUT_SECONDS = 120 class ClusterService(object): """ - Hazelcast cluster service. It provides access to the members in the cluster and the client can register for changes - in the cluster members. 
+ Cluster service for Hazelcast clients. - All the methods on the Cluster are thread-safe. + It provides access to the members in the cluster + and one can register for changes in the cluster members. """ + + def __init__(self, internal_cluster_service): + self._service = internal_cluster_service + + def add_listener(self, member_added=None, member_removed=None, fire_for_existing=False): + """ + Adds a membership listener to listen for membership updates. + + It will be notified when a member is added to cluster or removed from cluster. + There is no check for duplicate registrations, so if you register the listener + twice, it will get events twice. + + :param member_added: Function to be called when a member is added to the cluster. + :type member_added: function + :param member_removed: Function to be called when a member is removed from the cluster. + :type member_removed: function + :param fire_for_existing: Whether or not fire member_added for existing members. + :type fire_for_existing: bool + + :return: Registration id of the listener which will be used for removing this listener. + :rtype: str + """ + return self._service.add_listener(member_added, member_removed, fire_for_existing) + + def remove_listener(self, registration_id): + """ + Removes the specified membership listener. + + :param registration_id: Registration id of the listener to be removed. + :type registration_id: str + + :return: ``True`` if the registration is removed, ``False`` otherwise. + :rtype: bool + """ + return self._service.remove_listener(registration_id) + + def get_members(self, member_selector=None): + """ + Lists the current members in the cluster. + + Every member in the cluster returns the members in the same order. + To obtain the oldest member in the cluster, you can retrieve the first item in the list. + + :param member_selector: Function to filter members to return. + If not provided, the returned list will contain all the available cluster members. 
+ :type member_selector: function + + :return: Current members in the cluster + :rtype: list[:class:`~hazelcast.core.MemberInfo`] + """ + return self._service.get_members(member_selector) + + +class _InternalClusterService(object): logger = logging.getLogger("HazelcastClient.ClusterService") - def __init__(self, config, client, address_providers): - self._config = config + def __init__(self, client, logger_extras): self._client = client - self._logger_extras = {"client_name": client.name, "group_name": config.group_config.name} - self._members = {} - self.owner_connection_address = None - self.owner_uuid = None - self.uuid = None - self.listeners = {} - - for listener in config.membership_listeners: + self._connection_manager = None + self._logger_extras = logger_extras + config = client.config + self._labels = frozenset(config.labels) + self._listeners = {} + self._member_list_snapshot = _EMPTY_SNAPSHOT + self._initial_list_fetched = threading.Event() + + def start(self, connection_manager, membership_listeners): + self._connection_manager = connection_manager + for listener in membership_listeners: self.add_listener(*listener) - self._address_providers = address_providers - self._initial_list_fetched = threading.Event() - self._client.connection_manager.add_listener(on_connection_closed=self._connection_closed) - self._client.heartbeat.add_listener(on_heartbeat_stopped=self._heartbeat_stopped) + def get_member(self, member_uuid): + check_not_none(uuid, "UUID must not be null") + snapshot = self._member_list_snapshot + return snapshot.members.get(member_uuid, None) - def start(self): - """ - Connects to cluster. - """ - self._connect_to_cluster() + def get_members(self, member_selector=None): + snapshot = self._member_list_snapshot + if not member_selector: + return list(snapshot.members.values()) - def shutdown(self): - pass + members = [] + for member in six.itervalues(snapshot.members): + if member_selector(member): + members.append(member) + return members def size(self): """ @@ -57,25 +145,27 @@ def size(self): :return: (int), size of the cluster. """ - return len(self._members) + snapshot = self._member_list_snapshot + return len(snapshot.members) - def add_listener(self, member_added=None, member_removed=None, fire_for_existing=False): + def get_local_client(self): """ - Adds a membership listener to listen for membership updates, it will be notified when a member is added to - cluster or removed from cluster. There is no check for duplicate registrations, so if you register the listener - twice, it will get events twice. - + Returns the info representing the local client. - :param member_added: (Function), function to be called when a member is added to the cluster (optional). - :param member_removed: (Function), function to be called when a member is removed to the cluster (optional). - :param fire_for_existing: (bool), (optional). - :return: (str), registration id of the listener which will be used for removing this listener. 
+ :return: (:class: `~hazelcast.cluster.ClientInfo`), client info """ + connection_manager = self._connection_manager + connection = connection_manager.get_random_connection() + local_address = None if not connection else connection.local_address + return ClientInfo(connection_manager.client_uuid, local_address, self._client.name, self._labels) + + def add_listener(self, member_added=None, member_removed=None, fire_for_existing=False): registration_id = str(uuid.uuid4()) - self.listeners[registration_id] = (member_added, member_removed) + self._listeners[registration_id] = (member_added, member_removed) - if fire_for_existing: - for member in self.get_member_list(): + if fire_for_existing and member_added: + snapshot = self._member_list_snapshot + for member in six.itervalues(snapshot.members): member_added(member) return registration_id @@ -88,227 +178,167 @@ def remove_listener(self, registration_id): :return: (bool), if the registration is removed, ``false`` otherwise. """ try: - self.listeners.pop(registration_id) + self._listeners.pop(registration_id) return True except KeyError: return False - @property - def members(self): + def wait_initial_member_list_fetched(self): """ - Returns the members in the cluster. - :return: (list), List of members. - """ - return self.get_member_list() + Blocks until the initial member list is fetched from the cluster. + If it is not received within the timeout, an error is raised. - def _reconnect(self): - try: - self.logger.warning("Connection closed to owner node. Trying to reconnect.", extra=self._logger_extras) - self._connect_to_cluster() - except: - self.logger.exception("Could not reconnect to cluster. Shutting down client.", extra=self._logger_extras) - self._client.shutdown() - - def _connect_to_cluster(self): - current_attempt = 1 - attempt_limit = self._config.network_config.connection_attempt_limit - retry_delay = self._config.network_config.connection_attempt_period - while current_attempt <= attempt_limit: - provider_addresses = get_provider_addresses(self._address_providers) - addresses = get_possible_addresses(provider_addresses, self.get_member_list()) - - for address in addresses: - try: - self.logger.info("Connecting to %s", address, extra=self._logger_extras) - self._connect_to_address(address) - return - except: - self.logger.warning("Error connecting to %s ", address, exc_info=True, extra=self._logger_extras) - - if current_attempt >= attempt_limit: - self.logger.warning( - "Unable to get alive cluster connection, attempt %d of %d", - current_attempt, attempt_limit, extra=self._logger_extras) - break - - self.logger.warning( - "Unable to get alive cluster connection, attempt %d of %d, trying again in %d seconds", - current_attempt, attempt_limit, retry_delay, extra=self._logger_extras) - current_attempt += 1 - time.sleep(retry_delay) - - error_msg = "Could not connect to any of %s after %d tries" % (addresses, attempt_limit) - raise HazelcastError(error_msg) - - def _authenticate_manager(self, connection): - request = client_authentication_codec.encode_request( - username=self._config.group_config.name, - password=self._config.group_config.password, - uuid=self.uuid, - owner_uuid=self.owner_uuid, - is_owner_connection=True, - client_type=CLIENT_TYPE, - serialization_version=SERIALIZATION_VERSION, - client_hazelcast_version=CLIENT_VERSION) - - def callback(f): - parameters = client_authentication_codec.decode_response(f.result()) - if parameters["status"] != 0: # TODO: handle other statuses - raise 
AuthenticationError("Authentication failed.") - connection.endpoint = parameters["address"] - connection.is_owner = True - self.owner_uuid = parameters["owner_uuid"] - self.uuid = parameters["uuid"] - connection.server_version_str = parameters.get("server_hazelcast_version", "") - connection.server_version = calculate_version(connection.server_version_str) - return connection - - return self._client.invoker.invoke_on_connection(request, connection).continue_with(callback) - - def _connect_to_address(self, address): - f = self._client.connection_manager.get_or_connect(address, self._authenticate_manager) - connection = f.result() - if not connection.is_owner: - self._authenticate_manager(connection).result() - self.owner_connection_address = connection.endpoint - self._init_membership_listener(connection) - self._client.lifecycle.fire_lifecycle_event(LIFECYCLE_STATE_CONNECTED) - - def _init_membership_listener(self, connection): - request = client_add_membership_listener_codec.encode_request(False) - - def handler(m): - client_add_membership_listener_codec.handle(m, self._handle_member, self._handle_member_list) - - response = self._client.invoker.invoke_on_connection(request, connection, True, handler).result() - registration_id = client_add_membership_listener_codec.decode_response(response)["response"] - self.logger.debug("Registered membership listener with ID " + registration_id, extra=self._logger_extras) - self._initial_list_fetched.wait() - - def _handle_member(self, member, event_type): - self.logger.debug("Got member event: %s, %s", member, event_type, extra=self._logger_extras) - if event_type == MEMBER_ADDED: - self._member_added(member) - elif event_type == MEMBER_REMOVED: - self._member_removed(member) - - self._log_member_list() - self._client.partition_service.refresh() - - def _handle_member_list(self, members): - self.logger.debug("Got initial member list: %s", members, extra=self._logger_extras) - - for m in self.get_member_list(): - try: - members.remove(m) - except ValueError: - self._member_removed(m) - for m in members: - self._member_added(m) - - self._log_member_list() - self._client.partition_service.refresh() - self._initial_list_fetched.set() - - def _member_added(self, member): - self._members[member.address] = member - for added, _ in list(self.listeners.values()): - if added: - try: - added(member) - except: - self.logger.exception("Exception in membership listener", extra=self._logger_extras) - - def _member_removed(self, member): - self._members.pop(member.address, None) - self._client.connection_manager.close_connection(member.address, TargetDisconnectedError( - "%s is no longer a member of the cluster" % member)) - for _, removed in list(self.listeners.values()): - if removed: - try: - removed(member) - except: - self.logger.exception("Exception in membership listener", extra=self._logger_extras) - - def _log_member_list(self): - self.logger.info("New member list:\n\nMembers [%d] {\n%s\n}\n", self.size(), - "\n".join(["\t" + str(x) for x in self.get_member_list()]), extra=self._logger_extras) - - def _connection_closed(self, connection, _): - if connection.endpoint and connection.endpoint == self.owner_connection_address \ - and self._client.lifecycle.is_live: - self._client.lifecycle.fire_lifecycle_event(LIFECYCLE_STATE_DISCONNECTED) - self.owner_connection_address = None - - # try to reconnect, on new thread - reconnect_thread = threading.Thread(target=self._reconnect, - name="hazelcast-cluster-reconnect-{:.4}".format(str(uuid.uuid4()))) - 
reconnect_thread.daemon = True - reconnect_thread.start() - - def _heartbeat_stopped(self, connection): - if connection.endpoint == self.owner_connection_address: - self._client.connection_manager.close_connection(connection.endpoint, TargetDisconnectedError( - "%s stopped heart beating." % connection)) - - def get_member_by_uuid(self, member_uuid): + :raises IllegalStateError: If the member list could not be fetched """ - Returns the member with specified member uuid. + fetched = self._initial_list_fetched.wait(_INITIAL_MEMBERS_TIMEOUT_SECONDS) + if not fetched: + raise IllegalStateError("Could not get initial member list from cluster!") + + def clear_member_list_version(self): + if self.logger.isEnabledFor(logging.DEBUG): + self.logger.debug("Resetting the member list version", extra=self._logger_extras) + + current = self._member_list_snapshot + if current is not _EMPTY_SNAPSHOT: + self._member_list_snapshot = _MemberListSnapshot(0, current.members) + + def handle_members_view_event(self, version, member_infos): + snapshot = self._create_snapshot(version, member_infos) + if self.logger.isEnabledFor(logging.DEBUG): + self.logger.debug("Handling new snapshot with membership version: %s, member string: %s" + % (version, self._members_string(snapshot)), extra=self._logger_extras) + + current = self._member_list_snapshot + if version >= current.version: + self._apply_new_state_and_fire_events(current, snapshot) + + if current is _EMPTY_SNAPSHOT: + self._initial_list_fetched.set() + + def _apply_new_state_and_fire_events(self, current, snapshot): + self._member_list_snapshot = snapshot + removals, additions = self._detect_membership_events(current, snapshot) + + # Removal events should be fired first + for removed_member in removals: + for _, handler in six.itervalues(self._listeners): + if handler: + try: + handler(removed_member) + except: + self.logger.exception("Exception in membership lister", extra=self._logger_extras) + + for added_member in additions: + for handler, _ in six.itervalues(self._listeners): + if handler: + try: + handler(added_member) + except: + self.logger.exception("Exception in membership lister", extra=self._logger_extras) + + def _detect_membership_events(self, old, new): + new_members = [] + dead_members = set(six.itervalues(old.members)) + for member in six.itervalues(new.members): + try: + dead_members.remove(member) + except KeyError: + new_members.append(member) + + for dead_member in dead_members: + connection = self._connection_manager.get_connection(dead_member.uuid) + if connection: + connection.close(None, TargetDisconnectedError("The client has closed the connection to this member, " + "after receiving a member left event from the cluster. " + "%s" % connection)) + + if (len(new_members) + len(dead_members)) > 0: + if len(new.members) > 0: + self.logger.info(self._members_string(new), extra=self._logger_extras) + + return dead_members, new_members + + @staticmethod + def _members_string(snapshot): + members = snapshot.members + n = len(members) + return "\n\nMembers [%s] {\n\t%s\n}\n" % (n, "\n\t".join(map(str, six.itervalues(members)))) + + @staticmethod + def _create_snapshot(version, member_infos): + new_members = OrderedDict() + for member_info in member_infos: + new_members[member_info.uuid] = member_info + return _MemberListSnapshot(version, new_members) + + +class AbstractLoadBalancer(object): + """Load balancer allows you to send operations to one of a number of endpoints (Members). 
+ It is up to the implementation to use different load balancing policies. + + If the client is configured with smart routing, + only the operations that are not key based will be routed to the endpoint + returned by the load balancer. If it is not, the load balancer will not be used. + """ + def __init__(self): + self._cluster_service = None + self._members = [] - :param member_uuid: (int), uuid of the desired member. - :return: (:class:`~hazelcast.core.Member`), the corresponding member. + def init(self, cluster_service, config): """ - for member in self.get_member_list(): - if member.uuid == member_uuid: - return member + Initializes the load balancer. - def get_member_by_address(self, address): + :param cluster_service: (:class:`~hazelcast.cluster.ClusterService`), The cluster service to select members from + :param config: (:class:`~hazelcast.config.ClientConfig`), The client config + :return: """ - Returns the member with the specified address if it is in the - cluster, None otherwise. + self._cluster_service = cluster_service + cluster_service.add_listener(self._listener, self._listener, True) - :param address: (:class:`~hazelcast.core.Address`), address of the desired member. - :return: (:class:`~hazelcast.core.Member`), the corresponding member. + def next(self): """ - return self._members.get(address, None) - - def get_members(self, selector): + Returns the next member to route to. + :return: (:class:`~hazelcast.core.Member`), Returns the next member or None if no member is available """ - Returns the members that satisfy the given selector. + raise NotImplementedError("next") - :param selector: (:class:`~hazelcast.core.MemberSelector`), Selector to be applied to the members. - :return: (List), List of members. - """ - members = [] - for member in self.get_member_list(): - if selector.select(member): - members.append(member) + def _listener(self, _): + self._members = self._cluster_service.get_members() - return members - def get_member_list(self): - """ - Returns all the members as a list. +class RoundRobinLB(AbstractLoadBalancer): + """A load balancer implementation that relies on using round robin + to a next member to send a request to. - :return: (List), List of members. - """ - return list(self._members.values()) + Round robin is done based on best effort basis, the order of members for concurrent calls to + the next() is not guaranteed. + """ + def __init__(self): + super(RoundRobinLB, self).__init__() + self._idx = 0 -class RandomLoadBalancer(object): - """ - RandomLoadBalancer make the Client send operations randomly on members not to increase the load on a specific - member. - """ + def next(self): + members = self._members + if not members: + return None - def __init__(self, cluster): - self._cluster = cluster + n = len(members) + idx = self._idx % n + self._idx += 1 + return members[idx] - def next_address(self): - try: - return random.choice(self._cluster.get_member_list()).address - except IndexError: + +class RandomLB(AbstractLoadBalancer): + """A load balancer that selects a random member to route to. 
+ """ + + def next(self): + members = self._members + if not members: return None + idx = random.randrange(0, len(members)) + return members[idx] class VectorClock(object): diff --git a/hazelcast/config.py b/hazelcast/config.py index cba15d9e34..0ebe61d36f 100644 --- a/hazelcast/config.py +++ b/hazelcast/config.py @@ -4,19 +4,10 @@ """ import logging import os +import re from hazelcast.serialization.api import StreamSerializer -from hazelcast.util import validate_type, validate_serializer, enum, TimeUnit - -DEFAULT_GROUP_NAME = "dev" -""" -Default group name of the connected Hazelcast cluster -""" - -DEFAULT_GROUP_PASSWORD = "dev-pass" -""" -Default password of connected Hazelcast cluster -""" +from hazelcast.util import validate_type, validate_serializer, enum, TimeUnit, check_not_none INTEGER_TYPE = enum(VAR=0, BYTE=1, SHORT=2, INT=3, LONG=4, BIG_INT=5) """ @@ -28,14 +19,13 @@ * INT: Python int will be interpreted as a four byte int * LONG: Python int will be interpreted as an eight byte int * BIG_INT: Python int will be interpreted as Java BigInteger. This option can handle python long values with "bit_length > 64" - """ EVICTION_POLICY = enum(NONE=0, LRU=1, LFU=2, RANDOM=3) """ Near Cache eviction policy options -* NONE : No evcition +* NONE : No eviction * LRU : Least Recently Used items will be evicted * LFU : Least frequently Used items will be evicted * RANDOM : Items will be evicted randomly @@ -46,9 +36,8 @@ """ Near Cache in memory format of the values. -* BINARY : Binary format, hazelcast serializated bytearray format +* BINARY : Binary format, hazelcast serialized bytearray format * OBJECT : The actual objects used - """ PROTOCOL = enum(SSLv2=0, SSLv3=1, SSL=2, TLSv1=3, TLSv1_1=4, TLSv1_2=5, TLSv1_3=6, TLS=7) @@ -67,11 +56,41 @@ * TLSv1_3 requires at least Python 2.7.15 or Python 3.7 build with OpenSSL 1.1.1+ """ -DEFAULT_MAX_ENTRY_COUNT = 10000 -DEFAULT_SAMPLING_COUNT = 8 -DEFAULT_SAMPLING_POOL_SIZE = 16 +QUERY_CONSTANTS = enum(KEY_ATTRIBUTE_NAME="__key", THIS_ATTRIBUTE_NAME="this") +""" +Contains constants for Query. +* KEY_ATTRIBUTE_NAME : Attribute name of the key. +* THIS_ATTRIBUTE_NAME : Attribute name of the "this" +""" + +UNIQUE_KEY_TRANSFORMATION = enum(OBJECT=0, LONG=1, RAW=2) +""" +Defines an assortment of transformations which can be applied to +BitmapIndexOptions#getUniqueKey() unique key values. +* OBJECT : Extracted unique key value is interpreted as an object value. + Non-negative unique ID is assigned to every distinct object value. +* LONG : Extracted unique key value is interpreted as a whole integer value of byte, short, int or long type. + The extracted value is upcasted to long (if necessary) and unique non-negative ID is assigned + to every distinct value. +* RAW : Extracted unique key value is interpreted as a whole integer value of byte, short, int or long type. + The extracted value is upcasted to long (if necessary) and the resulting value is used directly as an ID. +""" + +INDEX_TYPE = enum(SORTED=0, HASH=1, BITMAP=2) +""" +Type of the index. +* SORTED : Sorted index. Can be used with equality and range predicates. +* HASH : Hash index. Can be used with equality predicates. +* BITMAP : Bitmap index. Can be used with equality predicates. 
+""" + +_DEFAULT_CLUSTER_NAME = "dev" -MAXIMUM_PREFETCH_COUNT = 100000 +_DEFAULT_MAX_ENTRY_COUNT = 10000 +_DEFAULT_SAMPLING_COUNT = 8 +_DEFAULT_SAMPLING_POOL_SIZE = 16 + +_MAXIMUM_PREFETCH_COUNT = 100000 class ClientConfig(object): @@ -83,15 +102,27 @@ class ClientConfig(object): """ def __init__(self): - self._properties = {} - """Config properties""" + self.client_name = None + """Name of the client""" - self.group_config = GroupConfig() - """The group configuration""" + self.cluster_name = _DEFAULT_CLUSTER_NAME + """Name of the cluster to connect to. By default, set to `dev`.""" - self.network_config = ClientNetworkConfig() + self.network = ClientNetworkConfig() """The network configuration for addresses to connect, smart-routing, socket-options...""" + self.connection_strategy = ConnectionStrategyConfig() + """Connection strategy config of the client""" + + self.serialization = SerializationConfig() + """Hazelcast serialization configuration""" + + self.near_caches = {} # map_name:NearCacheConfig + """Near Cache configuration which maps "map-name : NearCacheConfig""" + + self._properties = {} + """Config properties""" + self.load_balancer = None """Custom load balancer used to distribute the operations to multiple Endpoints.""" @@ -101,20 +132,14 @@ def __init__(self): self.lifecycle_listeners = [] """ Lifecycle Listeners, an array of Functions of f(state)""" - self.near_cache_configs = {} # map_name:NearCacheConfig - """Near Cache configuration which maps "map-name : NearCacheConfig""" - - self.flake_id_generator_configs = {} + self.flake_id_generators = {} """Flake ID generator configuration which maps "config-name" : FlakeIdGeneratorConfig """ - self.serialization_config = SerializationConfig() - """Hazelcast serialization configuration""" - - self.logger_config = LoggerConfig() + self.logger = LoggerConfig() """Logger configuration.""" - self.client_name = "" - """Name of the client""" + self.labels = set() + """Labels for the client to be sent to the cluster.""" def add_membership_listener(self, member_added=None, member_removed=None, fire_for_existing=False): """ @@ -149,7 +174,7 @@ def add_near_cache_config(self, near_cache_config): :param near_cache_config: (NearCacheConfig), the near_cache config to add. :return: `self` for cascading configuration. """ - self.near_cache_configs[near_cache_config.name] = near_cache_config + self.near_caches[near_cache_config.name] = near_cache_config return self def add_flake_id_generator_config(self, flake_id_generator_config): @@ -159,7 +184,7 @@ def add_flake_id_generator_config(self, flake_id_generator_config): :param flake_id_generator_config: (FlakeIdGeneratorConfig), the configuration to add :return: `self` for cascading configuration """ - self.flake_id_generator_configs[flake_id_generator_config.name] = flake_id_generator_config + self.flake_id_generators[flake_id_generator_config.name] = flake_id_generator_config return self def get_property_or_default(self, key, default): @@ -195,18 +220,6 @@ def set_property(self, key, value): return self -class GroupConfig(object): - """ - The Group Configuration is the container class for name and password of the cluster. - """ - - def __init__(self): - self.name = DEFAULT_GROUP_NAME - """The group name of the cluster""" - self.password = DEFAULT_GROUP_PASSWORD - """The password of the cluster""" - - class ClientNetworkConfig(object): """ Network related configuration parameters. 
@@ -214,22 +227,18 @@ class ClientNetworkConfig(object): def __init__(self): self.addresses = [] - """The candidate address list that client will use to establish initial connection""" - """Example usage: addresses.append("127.0.0.1:5701") """ - self.connection_attempt_limit = 2 - """ - While client is trying to connect initially to one of the members in the addressList, all might be not - available. Instead of giving up, throwing Error and stopping client, it will attempt to retry as much as defined - by this parameter. + """The candidate address list that client will use to establish initial connection + + >>> addresses.append("127.0.0.1:5701") """ - self.connection_attempt_period = 3 - """Period for the next attempt to find a member to connect""" + self.connection_timeout = 5.0 """ Socket connection timeout is a float, giving in seconds, or None. Setting a timeout of None disables the timeout feature and is equivalent to block the socket until it connects. Setting a timeout of zero is the same as disables blocking on connect. """ + self.socket_options = [] """ Array of Unix socket options. @@ -243,6 +252,7 @@ def __init__(self): Please see the Unix manual for level and option. Level and option constant are in python std lib socket module """ + self.redo_operation = False """ If true, client will redo the operations that were executing on the server and client lost the connection. @@ -250,15 +260,18 @@ def __init__(self): application is performed or not. For idempotent operations this is harmless, but for non idempotent ones retrying can cause to undesirable effects. Note that the redo can perform on any member. """ + self.smart_routing = True """ If true, client will route the key based operations to owner of the key at the best effort. Note that it uses a cached value of partition count and doesn't guarantee that the operation will always be executed on the owner. The cached table is updated every 10 seconds. """ - self.ssl_config = SSLConfig() + + self.ssl = SSLConfig() """SSL configurations for the client.""" - self.cloud_config = ClientCloudConfig() + + self.cloud = ClientCloudConfig() """Hazelcast Cloud configuration to let the client connect the cluster via Hazelcast.cloud""" @@ -272,8 +285,10 @@ class SocketOption(object): def __init__(self, level, option, value): self.level = level """Option level. See the Unix manual for detail.""" + self.option = option """The actual socket option. The actual socket option.""" + self.value = value """Socket option value. The value argument can either be an integer or a string""" @@ -289,6 +304,7 @@ def __init__(self): Portable version will be used to differentiate two versions of the same class that have changes on the class, like adding/removing a field or changing a type of a field. """ + self.data_serializable_factories = {} """ Dictionary of factory-id and corresponding IdentifiedDataserializable factories. A Factory is a simple @@ -300,6 +316,7 @@ def __init__(self): >>> serialization_config.data_serializable_factories[FACTORY_ID] = my_factory """ + self.portable_factories = {} """ Dictionary of factory-id and corresponding portable factories. A Factory is a simple dictionary with entries of @@ -310,20 +327,25 @@ def __init__(self): >>> portable_factory = {PortableClass_0.CLASS_ID : PortableClass_0, PortableClass_1.CLASS_ID : PortableClass_1} >>> serialization_config.portable_factories[FACTORY_ID] = portable_factory """ + self.class_definitions = set() """ Set of all Portable class definitions. 
""" + self.check_class_def_errors = True """Configured Portable Class definitions should be validated for errors or not.""" + self.is_big_endian = True """Hazelcast Serialization is big endian or not.""" + self.default_integer_type = INTEGER_TYPE.INT """ Python has variable length int/long type. In order to match this with static fixed length Java server, this option defines the length of the int/long. One of the values of :const:`INTEGER_TYPE` can be assigned. Please see :const:`INTEGER_TYPE` documentation for details of the options. """ + self._global_serializer = None self._custom_serializers = {} @@ -395,14 +417,17 @@ class NearCacheConfig(object): def __init__(self, name="default"): self._name = name self.invalidate_on_change = True - """Should a value is invalidated and removed in case of any map data updating operations such as replace, remove etc.""" + """Should a value is invalidated and removed in case of any map data + updating operations such as replace, remove etc. + """ + self._in_memory_format = IN_MEMORY_FORMAT.BINARY self._time_to_live_seconds = None self._max_idle_seconds = None - self._eviction_policy = EVICTION_POLICY.NONE - self._eviction_max_size = DEFAULT_MAX_ENTRY_COUNT - self._eviction_sampling_count = DEFAULT_SAMPLING_COUNT - self._eviction_sampling_pool_size = DEFAULT_SAMPLING_POOL_SIZE + self._eviction_policy = EVICTION_POLICY.LRU + self._eviction_max_size = _DEFAULT_MAX_ENTRY_COUNT + self._eviction_sampling_count = _DEFAULT_SAMPLING_COUNT + self._eviction_sampling_pool_size = _DEFAULT_SAMPLING_POOL_SIZE @property def name(self): @@ -494,6 +519,74 @@ def eviction_sampling_pool_size(self, eviction_sampling_pool_size): self._eviction_sampling_pool_size = eviction_sampling_pool_size +RECONNECT_MODE = enum(OFF=0, ON=1, ASYNC=2) +""" +* OFF : Prevent reconnect to cluster after a disconnect. +* ON : Reconnect to cluster by blocking invocations. +* ASYNC : Reconnect to cluster without blocking invocations. Invocations will receive ClientOfflineError +""" + + +class ConnectionStrategyConfig(object): + """Connection strategy configuration is used for setting custom strategies and configuring strategy parameters.""" + + def __init__(self): + self.async_start = False + """Enables non-blocking start mode of HazelcastClient. When set to True, the client + creation will not wait to connect to cluster. The client instance will throw exceptions + until it connects to cluster and becomes ready. If set to False, HazelcastClient will block + until a cluster connection established and it is ready to use the client instance. + By default, set to False. + """ + + self.reconnect_mode = RECONNECT_MODE.ON + """Defines how a client reconnects to cluster after a disconnect.""" + + self.connection_retry = ConnectionRetryConfig() + """Connection retry config to be used by the client.""" + + +_DEFAULT_INITIAL_BACKOFF = 1 +_DEFAULT_MAX_BACKOFF = 30 +_DEFAULT_CLUSTER_CONNECT_TIMEOUT = 20 +_DEFAULT_MULTIPLIER = 1 +_DEFAULT_JITTER = 0 + + +class ConnectionRetryConfig(object): + """Connection retry config controls the period among connection establish retries + and defines when the client should give up retrying. Supports exponential behaviour + with jitter for wait periods. + """ + + def __init__(self): + self.initial_backoff = _DEFAULT_INITIAL_BACKOFF + """Defines wait period in seconds after the first failure before retrying. + Must be non-negative. By default, set to 1. + """ + + self.max_backoff = _DEFAULT_MAX_BACKOFF + """Defines an upper bound for the backoff interval in seconds. 
Must be non-negative. + By default, set to 30 seconds. + """ + + self.cluster_connect_timeout = _DEFAULT_CLUSTER_CONNECT_TIMEOUT + """Defines timeout value in seconds for the client to give up a connection + attempt to the cluster. Must be non-negative. By default, set to 20 seconds. + """ + + self.multiplier = _DEFAULT_MULTIPLIER + """Defines the factor with which to multiply backoff after a failed retry. + Must be greater than or equal to 1. By default, set to 1. + """ + + self.jitter = _DEFAULT_JITTER + """Defines how much to randomize backoffs. At each iteration the calculated + back-off is randomized via following method in pseudo-code + Random(-jitter * current_backoff, jitter * current_backoff). + Must be in range [0.0, 1.0]. By default, set to `0` (no randomization).""" + + class SSLConfig(object): """ SSL configuration. @@ -580,8 +673,8 @@ def prefetch_count(self): @prefetch_count.setter def prefetch_count(self, prefetch_count): - if not (0 < prefetch_count <= MAXIMUM_PREFETCH_COUNT): - raise ValueError("Prefetch count must be 1..{}, not {}".format(MAXIMUM_PREFETCH_COUNT, prefetch_count)) + if not (0 < prefetch_count <= _MAXIMUM_PREFETCH_COUNT): + raise ValueError("Prefetch count must be 1..{}, not {}".format(_MAXIMUM_PREFETCH_COUNT, prefetch_count)) self._prefetch_count = prefetch_count @property @@ -613,6 +706,7 @@ class ClientCloudConfig(object): def __init__(self): self.enabled = False """Enables/disables cloud config.""" + self.discovery_token = "" """Hazelcast Cloud Discovery token of your cluster.""" @@ -632,6 +726,7 @@ def __init__(self): ``Configuration dictionary schema`` described in the logging module of the standard library. """ + self.level = logging.INFO """ Sets the logging level for the default logging @@ -643,6 +738,162 @@ def __init__(self): """ +class BitmapIndexOptions(object): + """ + Configures indexing options specific to bitmap indexes + """ + + def __init__(self, unique_key=QUERY_CONSTANTS.KEY_ATTRIBUTE_NAME, + unique_key_transformation=UNIQUE_KEY_TRANSFORMATION.OBJECT): + self.unique_key = unique_key + """ + Source of values which uniquely identify each entry being inserted into an index. + """ + + self.unique_key_transformation = unique_key_transformation + """ + Unique key transformation configured in this index. The transformation is + applied to every value extracted from unique key attribute + """ + + def __repr__(self): + return "BitmapIndexOptions(unique_key=%s, unique_key_transformation=%s)" \ + % (self.unique_key, self.unique_key_transformation) + + +class IndexConfig(object): + """ + Configuration of an index. Hazelcast support two types of indexes: sorted index and hash index. + Sorted indexes could be used with equality and range predicates and have logarithmic search time. + Hash indexes could be used with equality predicates and have constant search time assuming the hash + function of the indexed field disperses the elements properly. + Index could be created on one or more attributes. 
+ """ + + def __init__(self, name=None, type=INDEX_TYPE.SORTED, attributes=None, bitmap_index_options=None): + self.name = name + """Name of the index""" + + self.type = type + """Type of the index""" + + self.attributes = attributes or [] + """Indexed attributes""" + + self.bitmap_index_options = bitmap_index_options or BitmapIndexOptions() + """Bitmap index options""" + + def add_attribute(self, attribute): + _IndexUtil.validate_attribute(attribute) + self.attributes.append(attribute) + + def __repr__(self): + return "IndexConfig(name=%s, type=%s, attributes=%s, bitmap_index_options=%s)" \ + % (self.name, self.type, self.attributes, self.bitmap_index_options) + + +class _IndexUtil(object): + _MAX_ATTRIBUTES = 255 + """Maximum number of attributes allowed in the index.""" + + _THIS_PATTERN = re.compile(r"^this\.") + """Pattern to stripe away "this." prefix.""" + + @staticmethod + def validate_attribute(attribute): + check_not_none(attribute, "Attribute name cannot be None") + + stripped_attribute = attribute.strip() + if not stripped_attribute: + raise ValueError("Attribute name cannot be empty") + + if stripped_attribute.endswith("."): + raise ValueError("Attribute name cannot end with dot: %s" % attribute) + + @staticmethod + def validate_and_normalize(map_name, index_config): + original_attributes = index_config.attributes + if not original_attributes: + raise ValueError("Index must have at least one attribute: %s" % index_config) + + if len(original_attributes) > _IndexUtil._MAX_ATTRIBUTES: + raise ValueError("Index cannot have more than %s attributes %s" % (_IndexUtil._MAX_ATTRIBUTES, index_config)) + + if index_config.type == INDEX_TYPE.BITMAP and len(original_attributes) > 1: + raise ValueError("Composite bitmap indexes are not supported: %s" % index_config) + + normalized_attributes = [] + for original_attribute in original_attributes: + _IndexUtil.validate_attribute(original_attribute) + + original_attribute = original_attribute.strip() + normalized_attribute = _IndexUtil.canonicalize_attribute(original_attribute) + + try: + idx = normalized_attributes.index(normalized_attribute) + except ValueError: + pass + else: + duplicate_original_attribute = original_attributes[idx] + if duplicate_original_attribute == original_attribute: + raise ValueError("Duplicate attribute name [attribute_name=%s, index_config=%s]" + % (original_attribute, index_config)) + else: + raise ValueError("Duplicate attribute names [attribute_name1=%s, attribute_name2=%s, " + "index_config=%s]" + % (duplicate_original_attribute, original_attribute, index_config)) + + normalized_attributes.append(normalized_attribute) + + name = index_config.name + if name and not name.strip(): + name = None + + normalized_config = _IndexUtil.build_normalized_config(map_name, index_config.type, name, + normalized_attributes) + if index_config.type == INDEX_TYPE.BITMAP: + unique_key = index_config.bitmap_index_options.unique_key + unique_key_transformation = index_config.bitmap_index_options.unique_key_transformation + _IndexUtil.validate_attribute(unique_key) + unique_key = _IndexUtil.canonicalize_attribute(unique_key) + normalized_config.bitmap_index_options.unique_key = unique_key + normalized_config.bitmap_index_options.unique_key_transformation = unique_key_transformation + + return normalized_config + + @staticmethod + def canonicalize_attribute(attribute): + return re.sub(_IndexUtil._THIS_PATTERN, "", attribute) + + @staticmethod + def build_normalized_config(map_name, index_type, index_name, normalized_attributes): + 
new_config = IndexConfig() + new_config.type = index_type + + name = map_name + "_" + _IndexUtil._index_type_to_name(index_type) if index_name is None else None + for normalized_attribute in normalized_attributes: + new_config.add_attribute(normalized_attribute) + if name: + name += "_" + normalized_attribute + + if name: + index_name = name + + new_config.name = index_name + return new_config + + @staticmethod + def _index_type_to_name(index_type): + if index_type == INDEX_TYPE.SORTED: + return "sorted" + elif index_type == INDEX_TYPE.HASH: + return "hash" + elif index_type == INDEX_TYPE.BITMAP: + return "bitmap" + else: + raise ValueError("Unsupported index type %s" % index_type) + + class ClientProperty(object): """ Client property holds the name, default value and time unit of Hazelcast client properties. @@ -706,10 +957,11 @@ class ClientProperties(object): Period in seconds to collect statistics. """ - SERIALIZATION_INPUT_RETURNS_BYTEARRAY = ClientProperty("hazelcast.serialization.input.returns.bytearray", False) + SHUFFLE_MEMBER_LIST = ClientProperty("hazelcast.client.shuffle.member.list", True) """ - Input#read_byte_array returns a List if property is False, otherwise it will return a byte-array. - Changing this to True, gives a considerable performance benefit. + Client shuffles the given member list to prevent all clients to connect to the same node when + this property is set to true. When it is set to false, the client tries to connect to the nodes + in the given order. """ def __init__(self, properties): diff --git a/hazelcast/connection.py b/hazelcast/connection.py index 357587d097..6d86716524 100644 --- a/hazelcast/connection.py +++ b/hazelcast/connection.py @@ -1,20 +1,72 @@ import logging +import random import struct import sys import threading import time - -from hazelcast.exception import AuthenticationError +import io +import uuid +from collections import OrderedDict + +from hazelcast.config import RECONNECT_MODE +from hazelcast.core import AddressHelper +from hazelcast.errors import AuthenticationError, TargetDisconnectedError, HazelcastClientNotActiveError, \ + InvalidConfigurationError, ClientNotAllowedInClusterError, IllegalStateError, ClientOfflineError from hazelcast.future import ImmediateFuture, ImmediateExceptionFuture -from hazelcast.protocol.client_message import BEGIN_END_FLAG, ClientMessage, ClientMessageBuilder +from hazelcast.invocation import Invocation +from hazelcast.lifecycle import LifecycleState +from hazelcast.protocol.client_message import SIZE_OF_FRAME_LENGTH_AND_FLAGS, Frame, InboundMessage, \ + ClientMessageBuilder from hazelcast.protocol.codec import client_authentication_codec, client_ping_codec -from hazelcast.serialization import INT_SIZE_IN_BYTES, FMT_LE_INT -from hazelcast.util import AtomicInteger, parse_addresses, calculate_version +from hazelcast.util import AtomicInteger, calculate_version, UNKNOWN_VERSION, enum from hazelcast.version import CLIENT_TYPE, CLIENT_VERSION, SERIALIZATION_VERSION from hazelcast import six -BUFFER_SIZE = 128000 -PROTOCOL_VERSION = 1 + +class _WaitStrategy(object): + logger = logging.getLogger("HazelcastClient.WaitStrategy") + + def __init__(self, initial_backoff, max_backoff, multiplier, + cluster_connect_timeout, jitter, logger_extras): + self._initial_backoff = initial_backoff + self._max_backoff = max_backoff + self._multiplier = multiplier + self._cluster_connect_timeout = cluster_connect_timeout + self._jitter = jitter + self._attempt = None + self._cluster_connect_attempt_begin = None + 
self._current_backoff = None + self._logger_extras = logger_extras + + def reset(self): + self._attempt = 0 + self._cluster_connect_attempt_begin = time.time() + self._current_backoff = min(self._max_backoff, self._initial_backoff) + + def sleep(self): + self._attempt += 1 + now = time.time() + time_passed = now - self._cluster_connect_attempt_begin + if time_passed > self._cluster_connect_timeout: + self.logger.warning("Unable to get live cluster connection, cluster connect timeout (%d) is reached. " + "Attempt %d." % (self._cluster_connect_timeout, self._attempt), + extra=self._logger_extras) + return False + + # random between (-jitter * current_backoff, jitter * current_backoff) + sleep_time = self._current_backoff + self._current_backoff * self._jitter * (2 * random.random() - 1) + sleep_time = min(sleep_time, self._cluster_connect_timeout - time_passed) + self.logger.warning("Unable to get live cluster connection, retry in %ds, attempt: %d, " + "cluster connect timeout: %ds, max backoff: %ds" + % (sleep_time, self._attempt, self._cluster_connect_timeout, self._max_backoff), + extra=self._logger_extras) + time.sleep(sleep_time) + self._current_backoff = min(self._current_backoff * self._multiplier, self._max_backoff) + return True + + +_AuthenticationStatus = enum(AUTHENTICATED=0, CREDENTIALS_FAILED=1, + SERIALIZATION_VERSION_MISMATCH=2, NOT_ALLOWED_IN_CLUSTER=3) class ConnectionManager(object): @@ -23,17 +75,39 @@ class ConnectionManager(object): """ logger = logging.getLogger("HazelcastClient.ConnectionManager") - def __init__(self, client, new_connection_func, address_translator): - self._new_connection_mutex = threading.RLock() - self._io_thread = None + def __init__(self, client, reactor, address_provider, lifecycle_service, + partition_service, cluster_service, invocation_service, + near_cache_manager, logger_extras): + self.live = False + self.active_connections = dict() + self.client_uuid = uuid.uuid4() + self._client = client - self.connections = {} - self._pending_connections = {} - self._socket_map = {} - self._new_connection_func = new_connection_func + self._reactor = reactor + self._address_provider = address_provider + self._lifecycle_service = lifecycle_service + self._partition_service = partition_service + self._cluster_service = cluster_service + self._invocation_service = invocation_service + self._near_cache_manager = near_cache_manager + self._logger_extras = logger_extras + config = self._client.config + self._smart_routing_enabled = config.network.smart_routing + self._wait_strategy = self._init_wait_strategy(config) + self._reconnect_mode = config.connection_strategy.reconnect_mode + self._heartbeat_manager = _HeartbeatManager(self, self._client, reactor, invocation_service, logger_extras) self._connection_listeners = [] - self._address_translator = address_translator - self._logger_extras = {"client_name": client.name, "group_name": client.config.group_config.name} + self._connect_all_members_timer = None + self._async_start = config.connection_strategy.async_start + self._connect_to_cluster_thread_running = False + self._pending_connections = dict() + props = self._client.properties + self._shuffle_member_list = props.get_bool(props.SHUFFLE_MEMBER_LIST) + self._lock = threading.RLock() + self._connection_id_generator = AtomicInteger() + self._labels = config.labels + self._cluster_id = None + self._load_balancer = None def add_listener(self, on_connection_opened=None, on_connection_closed=None): """ @@ -45,167 +119,406 @@ def add_listener(self, 
on_connection_opened=None, on_connection_closed=None): """ self._connection_listeners.append((on_connection_opened, on_connection_closed)) - def get_connection(self, address): - """ - Gets the existing connection for a given address or connects. This call is silent. + def get_connection(self, member_uuid): + return self.active_connections.get(member_uuid, None) - :param address: (:class:`~hazelcast.core.Address`), the address to connect to. - :return: (:class:`~hazelcast.connection.Connection`), the found connection, or None if no connection exists. - """ - try: - return self.connections[address] - except KeyError: - return None + def get_connection_from_address(self, address): + for connection in six.itervalues(self.active_connections): + if address == connection.remote_address: + return connection + return None + + def get_random_connection(self): + if self._smart_routing_enabled: + member = self._load_balancer.next() + if member: + connection = self.get_connection(member.uuid) + if connection: + return connection - def _cluster_authenticator(self, connection): - uuid = self._client.cluster.uuid - owner_uuid = self._client.cluster.owner_uuid - - request = client_authentication_codec.encode_request( - username=self._client.config.group_config.name, - password=self._client.config.group_config.password, - uuid=uuid, - owner_uuid=owner_uuid, - is_owner_connection=False, - client_type=CLIENT_TYPE, - serialization_version=SERIALIZATION_VERSION, - client_hazelcast_version=CLIENT_VERSION) - - def callback(f): - parameters = client_authentication_codec.decode_response(f.result()) - if parameters["status"] != 0: - raise AuthenticationError("Authentication failed.") - connection.endpoint = parameters["address"] - self.owner_uuid = parameters["owner_uuid"] - self.uuid = parameters["uuid"] - connection.server_version_str = parameters.get("server_hazelcast_version", "") - connection.server_version = calculate_version(connection.server_version_str) + for connection in six.itervalues(self.active_connections): return connection - return self._client.invoker.invoke_on_connection(request, connection).continue_with(callback) + return None - def get_or_connect(self, address, authenticator=None): - """ - Gets the existing connection for a given address. If it does not exist, the system will try to connect - asynchronously. In this case, it returns a Future. When the connection is established at some point in time, it - can be retrieved by using the get_connection(:class:`~hazelcast.core.Address`) or from Future. + def start(self, load_balancer): + if self.live: + return - :param address: (:class:`~hazelcast.core.Address`), the address to connect to. - :param authenticator: (Function), function to be used for authentication (optional). - :return: (:class:`~hazelcast.connection.Connection`), the existing connection or it returns a Future which includes asynchronously. 
- """ - if address in self.connections: - return ImmediateFuture(self.connections[address]) + self.live = True + self._load_balancer = load_balancer + self._heartbeat_manager.start() + self._connect_to_cluster() + if self._smart_routing_enabled: + self._start_connect_all_members_timer() + + def shutdown(self): + if not self.live: + return + + self.live = False + if self._connect_all_members_timer: + self._connect_all_members_timer.cancel() + + self._heartbeat_manager.shutdown() + for connection_future in six.itervalues(self._pending_connections): + connection_future.set_exception(HazelcastClientNotActiveError("Hazelcast client is shutting down")) + + # Need to create copy of connection values to avoid modification errors on runtime + for connection in list(six.itervalues(self.active_connections)): + connection.close("Hazelcast client is shutting down", None) + + self._connection_listeners = [] + self.active_connections.clear() + self._pending_connections.clear() + + def connect_to_all_cluster_members(self): + if not self._smart_routing_enabled: + return + + for member in self._cluster_service.get_members(): + try: + self._get_or_connect(member.address).result() + except: + pass + + def on_connection_close(self, closed_connection, cause): + connected_address = closed_connection.connected_address + remote_uuid = closed_connection.remote_uuid + + if not connected_address: + self.logger.debug("Destroying %s, but it has no remote address, hence nothing is " + "removed from the connection dictionary" % closed_connection, extra=self._logger_extras) + + with self._lock: + pending = self._pending_connections.pop(connected_address, None) + connection = self.active_connections.pop(remote_uuid, None) + + if pending: + pending.set_exception(cause) + + if connection: + self.logger.info("Removed connection to %s:%s, connection: %s" + % (connected_address, remote_uuid, connection), + extra=self._logger_extras) + if not self.active_connections: + self._lifecycle_service.fire_lifecycle_event(LifecycleState.DISCONNECTED) + self._trigger_cluster_reconnection() + + if connection: + for _, on_connection_closed in self._connection_listeners: + if on_connection_closed: + try: + on_connection_closed(connection, cause) + except: + self.logger.exception("Exception in connection listener", extra=self._logger_extras) else: - with self._new_connection_mutex: - if address in self._pending_connections: - return self._pending_connections[address] + if remote_uuid: + self.logger.debug("Destroying %s, but there is no mapping for %s in the connection dictionary" + % (closed_connection, remote_uuid), extra=self._logger_extras) + + def check_invocation_allowed(self): + if self.active_connections: + return + + if self._async_start or self._reconnect_mode == RECONNECT_MODE.ASYNC: + raise ClientOfflineError() + else: + raise IOError("No connection found to cluster") + + def _trigger_cluster_reconnection(self): + if self._reconnect_mode == RECONNECT_MODE.OFF: + self.logger.info("Reconnect mode is OFF. 
Shutting down the client", extra=self._logger_extras) + self._shutdown_client() + return + + if self._lifecycle_service.running: + self._start_connect_to_cluster_thread() + + def _init_wait_strategy(self, config): + retry_config = config.connection_strategy.connection_retry + return _WaitStrategy(retry_config.initial_backoff, retry_config.max_backoff, retry_config.multiplier, + retry_config.cluster_connect_timeout, retry_config.jitter, self._logger_extras) + + def _start_connect_all_members_timer(self): + connecting_addresses = set() + + def run(): + if not self._lifecycle_service.running: + return + + for member in self._cluster_service.get_members(): + address = member.address + + if not self.get_connection_from_address(address) and address not in connecting_addresses: + connecting_addresses.add(address) + if not self._lifecycle_service.running: + break + + if not self.get_connection(member.uuid): + self._get_or_connect(address).add_done_callback(lambda f: connecting_addresses.discard(address)) + + self._connect_all_members_timer = self._reactor.add_timer(1, run) + + self._connect_all_members_timer = self._reactor.add_timer(1, run) + + def _connect_to_cluster(self): + if self._async_start: + self._start_connect_to_cluster_thread() + else: + self._sync_connect_to_cluster() + + def _start_connect_to_cluster_thread(self): + with self._lock: + if self._connect_to_cluster_thread_running: + return + + self._connect_to_cluster_thread_running = True + + def run(): + try: + while True: + self._sync_connect_to_cluster() + with self._lock: + if self.active_connections: + self._connect_to_cluster_thread_running = False + return + except: + self.logger.exception("Could not connect to any cluster, shutting down the client", + extra=self._logger_extras) + self._shutdown_client() + + t = threading.Thread(target=run, name='hazelcast_async_connection') + t.daemon = True + t.start() + + def _shutdown_client(self): + try: + self._client.shutdown() + except: + self.logger.exception("Exception during client shutdown", extra=self._logger_extras) + + def _sync_connect_to_cluster(self): + tried_addresses = set() + self._wait_strategy.reset() + try: + while True: + for address in self._get_possible_addresses(): + self._check_client_active() + tried_addresses.add(address) + connection = self._connect(address) + if connection: + return + # If the address providers load no addresses (which seems to be possible), + # then the above loop is not entered and the lifecycle check is missing, + # hence we need to repeat the same check at this point. + self._check_client_active() + if not self._wait_strategy.sleep(): + break + except (ClientNotAllowedInClusterError, InvalidConfigurationError): + cluster_name = self._client.config.cluster_name + self.logger.exception("Stopped trying on cluster %s" % cluster_name, extra=self._logger_extras) + + cluster_name = self._client.config.cluster_name + self.logger.info("Unable to connect to any address from the cluster with name: %s. 
" + "The following addresses were tried: %s" % (cluster_name, tried_addresses), + extra=self._logger_extras) + if self._lifecycle_service.running: + msg = "Unable to connect to any cluster" + else: + msg = "Client is being shutdown" + raise IllegalStateError(msg) + + def _connect(self, address): + self.logger.info("Trying to connect to %s" % address, extra=self._logger_extras) + try: + return self._get_or_connect(address).result() + except (ClientNotAllowedInClusterError, InvalidConfigurationError) as e: + self.logger.warning("Error during initial connection to %s: %s" % (address, e), extra=self._logger_extras) + raise e + except Exception as e: + self.logger.warning("Error during initial connection to %s: %s" % (address, e), extra=self._logger_extras) + return None + + def _get_or_connect(self, address): + connection = self.get_connection_from_address(address) + if connection: + return ImmediateFuture(connection) + + with self._lock: + connection = self.get_connection_from_address(address) + if connection: + return ImmediateFuture(connection) + else: + pending = self._pending_connections.get(address, None) + if pending: + return pending else: - authenticator = authenticator or self._cluster_authenticator try: - translated_address = self._address_translator.translate(address) - if translated_address is None: - raise ValueError("Address translator could not translate address: {}".format(address)) - connection = self._new_connection_func(translated_address, - self._client.config.network_config.connection_timeout, - self._client.config.network_config.socket_options, - connection_closed_callback=self._connection_closed, - message_callback=self._client.invoker._handle_client_message, - network_config=self._client.config.network_config) + translated = self._address_provider.translate(address) + if not translated: + return ImmediateExceptionFuture( + ValueError("Address translator could not translate address %s" % address)) + + factory = self._reactor.connection_factory + connection = factory(self, self._connection_id_generator.get_and_increment(), + translated, self._client.config.network, + self._invocation_service.handle_client_message) except IOError: return ImmediateExceptionFuture(sys.exc_info()[1], sys.exc_info()[2]) - future = authenticator(connection).continue_with(self.on_auth, connection, address) - if not future.done(): - self._pending_connections[address] = future + future = self._authenticate(connection).continue_with(self._on_auth, connection, address) + self._pending_connections[address] = future return future - def on_auth(self, f, connection, address): - """ - Checks for authentication of a connection. 
+ def _authenticate(self, connection): + client = self._client + cluster_name = client.config.cluster_name + client_name = client.name + request = client_authentication_codec.encode_request(cluster_name, None, None, self.client_uuid, + CLIENT_TYPE, SERIALIZATION_VERSION, CLIENT_VERSION, + client_name, self._labels) + + invocation = Invocation(request, connection=connection, urgent=True, response_handler=lambda m: m) + self._invocation_service.invoke(invocation) + return invocation.future + + def _on_auth(self, response, connection, address): + if response.is_success(): + response = client_authentication_codec.decode_response(response.result()) + status = response["status"] + if status == _AuthenticationStatus.AUTHENTICATED: + return self._handle_successful_auth(response, connection, address) + + if status == _AuthenticationStatus.CREDENTIALS_FAILED: + err = AuthenticationError("Authentication failed. The configured cluster name on " + "the client does not match the one configured in the cluster.") + elif status == _AuthenticationStatus.NOT_ALLOWED_IN_CLUSTER: + err = ClientNotAllowedInClusterError("Client is not allowed in the cluster") + elif status == _AuthenticationStatus.SERIALIZATION_VERSION_MISMATCH: + err = IllegalStateError("Server serialization version does not match to client") + else: + err = AuthenticationError("Authentication status code not supported. status: %s" % status) - :param f: (:class:`~hazelcast.future.Future`), future that contains the result of authentication. - :param connection: (:class:`~hazelcast.connection.Connection`), newly established connection. - :param address: (:class:`~hazelcast.core.Address`), the adress of new connection. - :return: Result of authentication. - """ - if f.is_success(): - self.logger.info("Authenticated with %s", f.result(), extra=self._logger_extras) - with self._new_connection_mutex: - self.connections[connection.endpoint] = f.result() - try: - self._pending_connections.pop(address) - except KeyError: - pass - for on_connection_opened, _ in self._connection_listeners: - if on_connection_opened: - on_connection_opened(f.result()) - return f.result() + connection.close("Failed to authenticate connection", err) + raise err else: - self.logger.debug("Error opening %s", connection, extra=self._logger_extras) - with self._new_connection_mutex: + e = response.exception() + connection.close("Failed to authenticate connection", e) + self._pending_connections.pop(address, None) + six.reraise(e.__class__, e, response.traceback()) + + def _handle_successful_auth(self, response, connection, address): + self._check_partition_count(response["partition_count"]) + + server_version_str = response["server_hazelcast_version"] + remote_address = response["address"] + remote_uuid = response["member_uuid"] + + connection.remote_address = remote_address + connection.server_version = calculate_version(server_version_str) + connection.remote_uuid = remote_uuid + + new_cluster_id = response["cluster_id"] + + is_initial_connection = not self.active_connections + changed_cluster = is_initial_connection and self._cluster_id is not None and self._cluster_id != new_cluster_id + if changed_cluster: + self.logger.warning("Switching from current cluster: %s to new cluster: %s" + % (self._cluster_id, new_cluster_id), + extra=self._logger_extras) + self._on_cluster_restart() + + with self._lock: + self.active_connections[response["member_uuid"]] = connection + self._pending_connections.pop(address, None) + + if is_initial_connection: + self._cluster_id = new_cluster_id 
+ self._lifecycle_service.fire_lifecycle_event(LifecycleState.CONNECTED) + + self.logger.info("Authenticated with server %s:%s, server version: %s, local address: %s" + % (remote_address, remote_uuid, server_version_str, connection.local_address), + extra=self._logger_extras) + + for on_connection_opened, _ in self._connection_listeners: + if on_connection_opened: try: - self._pending_connections.pop(address) - except KeyError: - pass - six.reraise(f.exception().__class__, f.exception(), f.traceback()) - - def _connection_closed(self, connection, cause): - # if connection was authenticated, fire event - if connection.endpoint: - try: - self.connections.pop(connection.endpoint) - except KeyError: - pass - for _, on_connection_closed in self._connection_listeners: - if on_connection_closed: - on_connection_closed(connection, cause) - else: - # clean-up unauthenticated connection - self._client.invoker.cleanup_connection(connection, cause) + on_connection_opened(connection) + except: + self.logger.exception("Exception in connection listener", extra=self._logger_extras) - def close_connection(self, address, cause): - """ - Closes the connection with given address. + if not connection.live: + self.on_connection_close(connection, None) - :param address: (:class:`~hazelcast.core.Address`), address of the connection to be closed. - :param cause: (Exception), the cause for closing the connection. - :return: (bool), ``true`` if the connection is closed, ``false`` otherwise. - """ - try: - connection = self.connections[address] - connection.close(cause) - except KeyError: - self.logger.warning("No connection with %s was found to close.", address, extra=self._logger_extras) - return False + return connection + def _on_cluster_restart(self): + self._near_cache_manager.clear_near_caches() + self._cluster_service.clear_member_list_version() -class Heartbeat(object): - """ - HeartBeat Service. - """ + def _check_partition_count(self, partition_count): + if not self._partition_service.check_and_set_partition_count(partition_count): + raise ClientNotAllowedInClusterError("Client can not work with this cluster because it has a " + "different partition count. 
Expected partition count: %d, " + "Member partition count: %d" + % (self._partition_service.partition_count, partition_count)) + + def _check_client_active(self): + if not self._lifecycle_service.running: + raise HazelcastClientNotActiveError() + + def _get_possible_addresses(self): + member_addresses = list(map(lambda m: (m.address, None), self._cluster_service.get_members())) + + if self._shuffle_member_list: + random.shuffle(member_addresses) + + addresses = OrderedDict(member_addresses) + primaries, secondaries = self._address_provider.load_addresses() + if self._shuffle_member_list: + random.shuffle(primaries) + random.shuffle(secondaries) + + for address in primaries: + addresses[address] = None + + for address in secondaries: + addresses[address] = None + + return six.iterkeys(addresses) + + +class _HeartbeatManager(object): _heartbeat_timer = None - logger = logging.getLogger("HazelcastClient.HeartbeatService") + logger = logging.getLogger("HazelcastClient.HeartbeatManager") - def __init__(self, client): + def __init__(self, connection_manager, client, reactor, invocation_service, logger_extras): + self._connection_manager = connection_manager self._client = client - self._listeners = [] - self._logger_extras = {"client_name": client.name, "group_name": client.config.group_config.name} + self._reactor = reactor + self._invocation_service = invocation_service + self._logger_extras = logger_extras - self._heartbeat_timeout = client.properties.get_seconds_positive_or_default(client.properties.HEARTBEAT_TIMEOUT) - self._heartbeat_interval = client.properties.get_seconds_positive_or_default(client.properties.HEARTBEAT_INTERVAL) + props = client.properties + self._heartbeat_timeout = props.get_seconds_positive_or_default(props.HEARTBEAT_TIMEOUT) + self._heartbeat_interval = props.get_seconds_positive_or_default(props.HEARTBEAT_INTERVAL) def start(self): """ Starts sending periodic HeartBeat operations. """ + def _heartbeat(): - if not self._client.lifecycle.is_live: + if not self._connection_manager.live: return - self._heartbeat() - self._heartbeat_timer = self._client.reactor.add_timer(self._heartbeat_interval, _heartbeat) - self._heartbeat_timer = self._client.reactor.add_timer(self._heartbeat_interval, _heartbeat) + now = time.time() + for connection in list(self._connection_manager.active_connections.values()): + self._check_connection(now, connection) + self._heartbeat_timer = self._reactor.add_timer(self._heartbeat_interval, _heartbeat) + + self._heartbeat_timer = self._reactor.add_timer(self._heartbeat_interval, _heartbeat) def shutdown(self): """ @@ -214,155 +527,215 @@ def shutdown(self): if self._heartbeat_timer: self._heartbeat_timer.cancel() - def add_listener(self, on_heartbeat_restored=None, on_heartbeat_stopped=None): - """ - Registers a HeartBeat listener. Listener is invoked when a HeartBeat related event occurs. + def _check_connection(self, now, connection): + if not connection.live: + return - :param on_heartbeat_restored: (Function), function to be called when a HeartBeat is restored (optional). - :param on_heartbeat_stopped: (Function), function to be called when a HeartBeat is stopped (optional). 
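`_get_possible_addresses` above merges the current member list with the addresses reported by the address provider, shuffling optionally and deduplicating while preserving insertion order. A standalone sketch of that ordering, with plain strings standing in for `Address` objects:

```python
import random
from collections import OrderedDict


def possible_addresses(member_addresses, primaries, secondaries, shuffle=False):
    """Deduplicate addresses while preserving order: known member addresses
    first, then primary and secondary addresses from the address provider."""
    if shuffle:
        random.shuffle(member_addresses)
        random.shuffle(primaries)
        random.shuffle(secondaries)

    ordered = OrderedDict((address, None) for address in member_addresses)
    for address in primaries + secondaries:
        ordered[address] = None
    return list(ordered.keys())


# Duplicates between the member list and the configured addresses collapse.
print(possible_addresses(["10.0.0.1:5701"], ["10.0.0.1:5701", "10.0.0.2:5701"], []))
# -> ['10.0.0.1:5701', '10.0.0.2:5701']
```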
-        """
-        self._listeners.append((on_heartbeat_restored, on_heartbeat_stopped))
+        if (now - connection.last_read_time) > self._heartbeat_timeout:
+            if connection.live:
+                self.logger.warning("Heartbeat failed over the connection: %s" % connection, extra=self._logger_extras)
+                connection.close("Heartbeat timed out",
+                                 TargetDisconnectedError("Heartbeat timed out to connection %s" % connection))
 
-    def _heartbeat(self):
-        now = time.time()
-        for connection in list(self._client.connection_manager.connections.values()):
-            time_since_last_read = now - connection.last_read_in_seconds
-            time_since_last_write = now - connection.last_write_in_seconds
-            if time_since_last_read > self._heartbeat_timeout:
-                if connection.heartbeating:
-                    self.logger.warning(
-                        "Heartbeat: Did not hear back after %ss from %s" % (time_since_last_read, connection),
-                        extra=self._logger_extras)
-                    self._on_heartbeat_stopped(connection)
+        if (now - connection.last_write_time) > self._heartbeat_interval:
+            request = client_ping_codec.encode_request()
+            invocation = Invocation(request, connection=connection, urgent=True)
+            self._invocation_service.invoke(invocation)
+
+
+_frame_header = struct.Struct('<iH')
+
+
+class _Reader(object):
+    def __init__(self, builder):
+        self._buf = io.BytesIO()
+        self._builder = builder
+        self._bytes_read = 0
+        self._bytes_written = 0
+        self._frame_size = 0
+        self._frame_flags = 0
+        self._message = None
+
+    def read(self, data):
+        self._buf.seek(self._bytes_written)
+        self._buf.write(data)
+        self._bytes_written += len(data)
+
+    def _read_frame(self):
+        n = self.length
+        if n < SIZE_OF_FRAME_LENGTH_AND_FLAGS:
+            # Not enough data for the frame length and flags yet
+            return False
-            if time_since_last_write > self._heartbeat_interval:
-                request = client_ping_codec.encode_request()
-                self._client.invoker.invoke_on_connection(request, connection, ignore_heartbeat=True)
+        if self._frame_size == 0:
+            self._read_frame_size_and_flags()
+
+        if n < self._frame_size:
+            return False
+
+        self._buf.seek(self._bytes_read)
+        size = self._frame_size - SIZE_OF_FRAME_LENGTH_AND_FLAGS
+        data = self._buf.read(size)
+        self._bytes_read += size
+        self._frame_size = 0
+        # No need to reset flags since it will be overwritten on the next read_frame_size_and_flags call
+        frame = Frame(data, self._frame_flags)
+        if not self._message:
+            self._message = InboundMessage(frame)
+        else:
+            self._message.add_frame(frame)
+        return True
 
-    def _on_heartbeat_restored(self, connection):
-        self.logger.info("Heartbeat: Heartbeat restored for connection %s" % connection, extra=self._logger_extras)
-        connection.heartbeating = True
-        for callback, _ in self._listeners:
-            if callback:
-                callback(connection)
+    def _read_frame_size_and_flags(self):
+        self._buf.seek(self._bytes_read)
+        header_data = self._buf.read(SIZE_OF_FRAME_LENGTH_AND_FLAGS)
+        self._frame_size, self._frame_flags = _frame_header.unpack_from(header_data, 0)
+        self._bytes_read += SIZE_OF_FRAME_LENGTH_AND_FLAGS
 
-    def _on_heartbeat_stopped(self, connection):
-        connection.heartbeating = False
-        for _, callback in self._listeners:
-            if callback:
-                callback(connection)
+    def _reset(self):
+        if self._bytes_written == self._bytes_read:
+            self._buf.seek(0)
+            self._buf.truncate()
+            self._bytes_written = 0
+            self._bytes_read = 0
+        self._message = None
+
+    @property
+    def length(self):
+        return self._bytes_written - self._bytes_read
 
 
 class Connection(object):
     """
     Connection object which stores connection related information and operations.
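The reconstructed `_frame_header = struct.Struct('<iH')` above assumes the frame header is a little-endian 4-byte length followed by a 2-byte flags field; that layout is inferred from the `(frame_size, frame_flags)` pair unpacked in `_read_frame_size_and_flags` and is not spelled out elsewhere in this diff. A quick sketch of packing and unpacking such a header:

```python
import struct

# Assumed layout: little-endian signed int (frame length) + unsigned short (flags).
frame_header = struct.Struct("<iH")
SIZE_OF_FRAME_LENGTH_AND_FLAGS = frame_header.size  # 6 bytes

payload = b"hello"
flags = 0xC000  # arbitrary flags value for the sketch
header = frame_header.pack(SIZE_OF_FRAME_LENGTH_AND_FLAGS + len(payload), flags)

frame_size, frame_flags = frame_header.unpack_from(header, 0)
print(frame_size, hex(frame_flags))  # 11 0xc000
```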
""" - _closed = False - endpoint = None - heartbeating = True - is_owner = False - counter = AtomicInteger() - - def __init__(self, address, connection_closed_callback, message_callback, logger_extras=None): - self._address = (address.host, address.port) + + def __init__(self, connection_manager, connection_id, message_callback, logger_extras=None): + self.remote_address = None + self.remote_uuid = None + self.connected_address = None + self.local_address = None + self.last_read_time = 0 + self.last_write_time = 0 + self.start_time = 0 + self.server_version = UNKNOWN_VERSION + self.live = True + self.close_cause = None + self.logger = logging.getLogger("HazelcastClient.Connection[%s]" % connection_id) + + self._connection_manager = connection_manager self._logger_extras = logger_extras - self.id = self.counter.get_and_increment() - self.logger = logging.getLogger("HazelcastClient.Connection[%s](%s:%d)" % (self.id, address.host, address.port)) - self._connection_closed_callback = connection_closed_callback + self._id = connection_id self._builder = ClientMessageBuilder(message_callback) - self._read_buffer = bytearray() - self.last_read_in_seconds = 0 - self.last_write_in_seconds = 0 - self.start_time_in_seconds = 0 - self.server_version_str = "" - self.server_version = 0 - - def live(self): - """ - Determines whether this connection is live or not. - - :return: (bool), ``true`` if the connection is live, ``false`` otherwise. - """ - return not self._closed + self._reader = _Reader(self._builder) def send_message(self, message): """ Sends a message to this connection. :param message: (Message), message to be sent to this connection. + :return: (bool), """ - if not self.live(): - raise IOError("Connection is not live.") + if not self.live: + return False - message.add_flag(BEGIN_END_FLAG) - self.write(message.buffer) + self._write(message.buf) + return True - def receive_message(self): + def close(self, reason, cause): """ - Receives a message from this connection. - """ - # split frames - while len(self._read_buffer) >= INT_SIZE_IN_BYTES: - frame_length = struct.unpack_from(FMT_LE_INT, self._read_buffer, 0)[0] - if frame_length > len(self._read_buffer): - return - message = ClientMessage(memoryview(self._read_buffer)[:frame_length]) - self._read_buffer = self._read_buffer[frame_length:] - self._builder.on_message(message) + Closes the connection. - def write(self, data): + :param reason: (str), The reason this connection is going to be closed. Is allowed to be None. + :param cause: (Exception), The exception responsible for closing this connection. Is allowed to be None. """ - Writes data to this connection when sending messages. + if not self.live: + return - :param data: (Data), data to be written to connection. - """ - # must be implemented by subclass - raise NotImplementedError + self.live = False + self._log_close(reason, cause) + try: + self._inner_close() + except: + self.logger.exception("Error while closing the the connection %s" % self, extra=self._logger_extras) + self._connection_manager.on_connection_close(self, cause) + + def _log_close(self, reason, cause): + msg = "%s closed. Reason: %s" + if reason: + r = reason + elif cause: + r = cause + else: + r = "Socket explicitly closed" - def close(self, cause): - """ - Closes the connection. + if self._connection_manager.live: + self.logger.info(msg % (self, r), extra=self._logger_extras) + else: + self.logger.debug(msg % (self, r), extra=self._logger_extras) - :param cause: (Exception), the cause of closing the connection. 
- """ - raise NotImplementedError + def _inner_close(self): + raise NotImplementedError() - def __repr__(self): - return "Connection(address=%s, id=%s)" % (self._address, self.id) + def _write(self, buf): + raise NotImplementedError() + + def __eq__(self, other): + return isinstance(other, Connection) and self._id == other._id + + def __ne__(self, other): + return not self.__eq__(other) def __hash__(self): - return self.id + return self._id class DefaultAddressProvider(object): """ Provides initial addresses for client to find and connect to a node. - Loads addresses from the Hazelcast configuration. + It also provides a no-op translator. """ - def __init__(self, network_config): - self._network_config = network_config + + def __init__(self, addresses): + self._addresses = addresses def load_addresses(self): """ - :return: (Sequence), The possible member addresses to connect to. + :return: (Tuple), The possible primary and secondary member addresses to connect to. """ - return parse_addresses(self._network_config.addresses) + configured_addresses = self._addresses + if not configured_addresses: + configured_addresses = ["127.0.0.1"] + + primaries = [] + secondaries = [] + for address in configured_addresses: + p, s = AddressHelper.get_possible_addresses(address) + primaries.extend(p) + secondaries.extend(s) + + return primaries, secondaries -class DefaultAddressTranslator(object): - """ - DefaultAddressTranslator is a no-op. It always returns the given address. - """ def translate(self, address): """ - :param address: (:class:`~hazelcast.core.Address`), address to be translated. - :return: (:class:`~hazelcast.core.Address`), translated address. + No-op address translator. It is there to provide the same API + with other address providers. """ return address - - def refresh(self): - """Refreshes the internal lookup table if necessary.""" - pass diff --git a/hazelcast/core.py b/hazelcast/core.py index 9c38069dfe..f01c23723c 100644 --- a/hazelcast/core.py +++ b/hazelcast/core.py @@ -6,31 +6,42 @@ from hazelcast.util import enum -class Member(object): +class MemberInfo(object): + __slots__ = ("address", "uuid", "attributes", "lite_member", "version") + """ - Represents a member in the cluster with its address, uuid, lite member status and attributes. + Represents a member in the cluster with its address, uuid, lite member status, attributes and version. 
""" - def __init__(self, address, uuid, is_lite_member=False, attributes={}): + + def __init__(self, address, uuid, attributes, lite_member, version, *args): self.address = address self.uuid = uuid - self.is_lite_member = is_lite_member self.attributes = attributes + self.lite_member = lite_member + self.version = version def __str__(self): - return "Member [{}]:{} - {}".format(self.address.host, self.address.port, self.uuid) + return "Member [%s]:%s - %s" % (self.address.host, self.address.port, self.uuid) def __repr__(self): - return "Member(host={}, port={}, uuid={}, liteMember={}, attributes={})" \ - .format(self.address.host, self.address.port, self.uuid, self.is_lite_member, self.attributes) + return "Member(address=%s, uuid=%s, attributes=%s, lite_member=%s, version=%s)" \ + % (self.address, self.uuid, self.attributes, self.lite_member, self.version) + + def __hash__(self): + return hash((self.address, self.uuid)) def __eq__(self, other): - return isinstance(other, self.__class__) and self.address == other.address and self.uuid == other.uuid + return isinstance(other, MemberInfo) and self.address == other.address and self.uuid == other.uuid + + def __ne__(self, other): + return not self.__eq__(other) class Address(object): """ Represents an address of a member in the cluster. """ + def __init__(self, host, port): self.host = host self.port = port @@ -42,19 +53,63 @@ def __hash__(self): return hash((self.host, self.port)) def __eq__(self, other): - return isinstance(other, self.__class__) and (self.host, self.port) == (other.host, other.port) + return isinstance(other, Address) and self.host == other.host and self.port == other.port + + def __ne__(self, other): + return not self.__eq__(other) + + +class AddressHelper(object): + @staticmethod + def get_possible_addresses(address): + address = AddressHelper.address_from_str(address) + possible_port = address.port + port_try_count = 1 + if possible_port == -1: + port_try_count = 3 + possible_port = 5701 + + addresses = [] + for i in range(port_try_count): + addresses.append(Address(address.host, possible_port + i)) + + # primary, secondary + return [addresses.pop(0)], addresses + + @staticmethod + def address_from_str(address, port=-1): + bracket_start_idx = address.find("[") + bracket_end_idx = address.find("]", bracket_start_idx) + colon_idx = address.find(":") + last_colon_idx = address.rfind(":") + + if -1 < colon_idx < last_colon_idx: + # IPv6 + if bracket_start_idx == 0 and bracket_end_idx > bracket_start_idx: + host = address[bracket_start_idx + 1: bracket_end_idx] + if last_colon_idx == (bracket_end_idx + 1): + port = int(address[last_colon_idx + 1:]) + else: + host = address + elif colon_idx > 0 and colon_idx == last_colon_idx: + host = address[:colon_idx] + port = int(address[colon_idx + 1:]) + else: + host = address + return Address(host, port) class DistributedObjectInfo(object): """ Represents name of the Distributed Object and the name of service which it belongs to. 
""" - def __init__(self, name, service_name): - self.name = name + + def __init__(self, service_name, name): self.service_name = service_name + self.name = name def __repr__(self): - return "DistributedObjectInfo(name={}, serviceName={})".format(self.name, self.service_name) + return "DistributedObjectInfo(serviceName=%s, name=%s)" % (self.service_name, self.name) def __hash__(self): return hash((self.name, self.service_name)) @@ -79,76 +134,90 @@ class DistributedObjectEvent(object): Distributed Object Event """ - def __init__(self, name, service_name, event_type): + def __init__(self, name, service_name, event_type, source): self.name = name self.service_name = service_name self.event_type = DistributedObjectEventType.reverse.get(event_type, None) + self.source = source def __repr__(self): - return "DistributedObjectEvent[name={}, " \ - "service_name={}, " \ - "event_type={}]".format(self.name, self.service_name, self.event_type) + return "DistributedObjectEvent(name=%s, service_name=%s, event_type=%s, source=%s)" \ + % (self.name, self.service_name, self.event_type, self.source) -class EntryView(object): +class SimpleEntryView(object): """ EntryView represents a readonly view of a map entry. """ - key = None - """ - The key of the entry. - """ - value = None - """ - The value of the entry. - """ - cost = None - """ - The cost in bytes of the entry. - """ - creation_time = None - """ - The creation time of the entry. - """ - expiration_time = None - """ - The expiration time of the entry. - """ - hits = None - """ - Number of hits of the entry. - """ - last_access_time = None - """ - The last access time for the entry. - """ - last_stored_time = None - """ - The last store time for the value. - """ - last_update_time = None - """ - The last time the value was updated. - """ - version = None - """ - The version of the entry. - """ - eviction_criteria_number = None - """ - The criteria number for eviction. - """ - ttl = None - """ - The last set time to live second. - """ + def __init__(self, key, value, cost, creation_time, expiration_time, hits, last_access_time, + last_stored_time, last_update_time, version, ttl, max_idle): + self.key = key + """ + The key of the entry. + """ + + self.value = value + """ + The value of the entry. + """ + + self.cost = cost + """ + The cost in bytes of the entry. + """ + + self.creation_time = creation_time + """ + The creation time of the entry. + """ + + self.expiration_time = expiration_time + """ + The expiration time of the entry. + """ + + self.hits = hits + """ + Number of hits of the entry. + """ + + self.last_access_time = last_access_time + """ + The last access time for the entry. + """ + + self.last_stored_time = last_stored_time + """ + The last store time for the value. + """ + + self.last_update_time = last_update_time + """ + The last time the value was updated. + """ + + self.version = version + """ + The version of the entry. + """ + + self.ttl = ttl + """ + The last set time to live milliseconds. + """ + + self.max_idle = max_idle + """ + The last set max idle time in milliseconds. 
+        """
 
     def __repr__(self):
-        return "EntryView(key=%s, value=%s, cost=%s, creation_time=%s, expiration_time=%s, hits=%s, last_access_time=%s, " \
-               "last_stored_time=%s, last_update_time=%s, version=%s, eviction_criteria_number=%s, ttl=%s" % (
-                self.key, self.value, self.cost, self.creation_time, self.expiration_time, self.hits,
-                self.last_access_time, self.last_stored_time, self.last_update_time, self.version,
-                self.eviction_criteria_number, self.ttl)
+        return "SimpleEntryView(key=%s, value=%s, cost=%s, creation_time=%s, " \
+               "expiration_time=%s, hits=%s, last_access_time=%s, last_stored_time=%s, " \
+               "last_update_time=%s, version=%s, ttl=%s, max_idle=%s)" \
+               % (self.key, self.value, self.cost, self.creation_time, self.expiration_time, self.hits,
+                  self.last_access_time, self.last_stored_time, self.last_update_time, self.version,
+                  self.ttl, self.max_idle)
 
 
 class MemberSelector(object):
@@ -159,6 +228,7 @@ class MemberSelector(object):
     member in the cluster and it is up to the implementation to decide if the member
     is going to be used or not.
     """
+
     def select(self, member):
         """
         Decides if the given member will be part of an operation or not.
@@ -198,6 +268,7 @@ class HazelcastJsonValue(object):
 
     None values are not allowed.
     """
+
     def __init__(self, value):
         util.check_not_none(value, "JSON string or the object cannot be None.")
         if isinstance(value, six.string_types):
@@ -221,3 +292,12 @@ def loads(self):
         :return: (object), Python object represented by the original string
         """
         return json.loads(self._json_string)
+
+
+class MemberVersion(object):
+    __slots__ = ("major", "minor", "patch")
+
+    def __init__(self, major, minor, patch):
+        self.major = major
+        self.minor = minor
+        self.patch = patch
diff --git a/hazelcast/discovery.py b/hazelcast/discovery.py
index 33afb6e450..31f4867460 100644
--- a/hazelcast/discovery.py
+++ b/hazelcast/discovery.py
@@ -1,9 +1,8 @@
 import json
 import logging
 
-from hazelcast.exception import HazelcastCertificationError
-from hazelcast.util import _parse_address
-from hazelcast.core import Address
+from hazelcast.errors import HazelcastCertificationError
+from hazelcast.core import AddressHelper
 from hazelcast.config import ClientProperty
 from hazelcast.six.moves import http_client
 
@@ -15,58 +14,48 @@
 
 class HazelcastCloudAddressProvider(object):
     """
-    Provides initial addresses for client to find and connect to a node.
+    Provides initial addresses for client to find and connect to a node
+    and resolves private IP addresses of Hazelcast Cloud service.
     """
-
     logger = logging.getLogger("HazelcastClient.HazelcastCloudAddressProvider")
 
     def __init__(self, host, url, connection_timeout, logger_extras=None):
         self.cloud_discovery = HazelcastCloudDiscovery(host, url, connection_timeout)
+        self._private_to_public = dict()
         self._logger_extras = logger_extras
 
     def load_addresses(self):
         """
-        Loads member addresses from Hazelcast.cloud endpoint.
+        Loads member addresses from Hazelcast Cloud endpoint.
 
-        :return: (Sequence), The possible member addresses to connect to.
+        :return: (Tuple), The possible member addresses as primary addresses to connect to.
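`HazelcastJsonValue` above wraps either a ready-made JSON string or a Python object. A brief usage sketch, assuming the non-string branch of the constructor serializes the object with `json.dumps` (which is what the string check implies but is not fully visible in this diff):

```python
from hazelcast.core import HazelcastJsonValue

# From a JSON string: the string is stored as-is.
from_string = HazelcastJsonValue('{"name": "Jake", "age": 4}')
print(from_string.loads())  # {'name': 'Jake', 'age': 4}

# From a Python object: assumed to be serialized internally with json.dumps.
from_object = HazelcastJsonValue({"name": "Jake", "age": 4})
print(from_object.loads()["age"])  # 4
```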
""" try: - return list(self.cloud_discovery.discover_nodes().keys()) + nodes = self.cloud_discovery.discover_nodes() + # Every private address is primary + return list(nodes.keys()), [] except Exception as ex: - self.logger.warning("Failed to load addresses from Hazelcast.cloud: {}".format(ex.args[0]), + self.logger.warning("Failed to load addresses from Hazelcast Cloud: %s" % ex.args[0], extra=self._logger_extras) - return [] - - -class HazelcastCloudAddressTranslator(object): - """ - Resolves private IP addresses of Hazelcast.cloud service. - """ - - logger = logging.getLogger("HazelcastClient.HazelcastCloudAddressTranslator") - - def __init__(self, host, url, connection_timeout, logger_extras=None): - self.cloud_discovery = HazelcastCloudDiscovery(host, url, connection_timeout) - self._private_to_public = dict() - self._logger_extras = logger_extras + return [], [] def translate(self, address): """ Translates the given address to another address specific to network or service. :param address: (:class:`~hazelcast.core.Address`), private address to be translated - :return: (:class:`~hazelcast.core.Address`), new address if given address is known, otherwise returns null + :return: (:class:`~hazelcast.core.Address`), new address if given address is known, otherwise returns None """ if address is None: return None - public_address = self._private_to_public.get(address) + public_address = self._private_to_public.get(address, None) if public_address: return public_address self.refresh() - return self._private_to_public.get(address) + return self._private_to_public.get(address, None) def refresh(self): """ @@ -107,12 +96,6 @@ def discover_nodes(self): :return: (dict), Dictionary that maps private addresses to public addresses. """ - try: - return self._call_service() - except Exception as ex: - raise ex - - def _call_service(self): try: https_connection = http_client.HTTPSConnection(host=self._host, timeout=self._connection_timeout, @@ -137,21 +120,13 @@ def _parse_response(self, https_response): private_address = value[self._PRIVATE_ADDRESS_PROPERTY] public_address = value[self._PUBLIC_ADDRESS_PROPERTY] - public_addr = self._parse_address(public_address) - private_addr = self._parse_address(private_address) - if private_addr.port == -1: - # If not explicitly given, set the port of the private address to port of the public address - private_addr.port = public_addr.port + public_addr = AddressHelper.address_from_str(public_address) + # If not explicitly given, create the private address with the public addresses port + private_addr = AddressHelper.address_from_str(private_address, public_addr.port) private_to_public_addresses[private_addr] = public_addr return private_to_public_addresses - def _parse_address(self, address): - if ':' in address: - host, port = address.split(':') - return Address(host, int(port)) - return Address(address, -1) - @staticmethod def get_host_and_url(properties, cloud_token): """ diff --git a/hazelcast/errors.py b/hazelcast/errors.py new file mode 100644 index 0000000000..82585b6a36 --- /dev/null +++ b/hazelcast/errors.py @@ -0,0 +1,646 @@ +import socket + +EXCEPTION_MESSAGE_TYPE = 0 + + +def retryable(cls): + """ + Makes the given error retryable. + + :param cls: (:class:`~hazelcast.exception.HazelcastError`), the given error. + :return: (:class:`~hazelcast.exception.HazelcastError`), the given error with retryable property. + """ + cls.retryable = True + return cls + + +class HazelcastError(Exception): + """ + General HazelcastError class. 
+ """ + def __init__(self, message=None, cause=None): + super(HazelcastError, self).__init__(message, cause) + + def __str__(self): + message, cause = self.args + if cause: + return "%s\nCaused by: %s" % (message, str(cause)) + return message + + +class ArrayIndexOutOfBoundsError(HazelcastError): + pass + + +class ArrayStoreError(HazelcastError): + pass + + +class AuthenticationError(HazelcastError): + pass + + +class CacheNotExistsError(HazelcastError): + pass + + +@retryable +class CallerNotMemberError(HazelcastError): + pass + + +class CancellationError(HazelcastError): + pass + + +class ClassCastError(HazelcastError): + pass + + +class ClassNotFoundError(HazelcastError): + pass + + +class ConcurrentModificationError(HazelcastError): + pass + + +class ConfigMismatchError(HazelcastError): + pass + + +class ConfigurationError(HazelcastError): + pass + + +class DistributedObjectDestroyedError(HazelcastError): + pass + + +class DuplicateInstanceNameError(HazelcastError): + pass + + +class HazelcastEOFError(HazelcastError): + pass + + +class ExecutionError(HazelcastError): + pass + + +@retryable +class HazelcastInstanceNotActiveError(HazelcastError): + pass + + +class HazelcastOverloadError(HazelcastError): + pass + + +class HazelcastSerializationError(HazelcastError): + pass + + +class HazelcastIOError(HazelcastError): + pass + + +class IllegalArgumentError(HazelcastError): + pass + + +class IllegalAccessException(HazelcastError): + pass + + +class IllegalAccessError(HazelcastError): + pass + + +class IllegalMonitorStateError(HazelcastError): + pass + + +class IllegalStateError(HazelcastError): + pass + + +class IllegalThreadStateError(HazelcastError): + pass + + +class IndexOutOfBoundsError(HazelcastError): + pass + + +class HazelcastInterruptedError(HazelcastError): + pass + + +class InvalidAddressError(HazelcastError): + pass + + +class InvalidConfigurationError(HazelcastError): + pass + + +@retryable +class MemberLeftError(HazelcastError): + pass + + +class NegativeArraySizeError(HazelcastError): + pass + + +class NoSuchElementError(HazelcastError): + pass + + +class NotSerializableError(HazelcastError): + pass + + +class NullPointerError(HazelcastError): + pass + + +class OperationTimeoutError(HazelcastError): + pass + + +@retryable +class PartitionMigratingError(HazelcastError): + pass + + +class QueryError(HazelcastError): + pass + + +class QueryResultSizeExceededError(HazelcastError): + pass + + +class SplitBrainProtectionError(HazelcastError): + pass + + +class ReachedMaxSizeError(HazelcastError): + pass + + +class RejectedExecutionError(HazelcastError): + pass + + +class ResponseAlreadySentError(HazelcastError): + pass + + +@retryable +class RetryableHazelcastError(HazelcastError): + pass + + +@retryable +class RetryableIOError(HazelcastError): + pass + + +class HazelcastRuntimeError(HazelcastError): + pass + + +class SecurityError(HazelcastError): + pass + + +class SocketError(HazelcastError): + pass + + +class StaleSequenceError(HazelcastError): + pass + + +class TargetDisconnectedError(HazelcastError): + pass + + +@retryable +class TargetNotMemberError(HazelcastError): + pass + + +class HazelcastTimeoutError(HazelcastError): + pass + + +class TopicOverloadError(HazelcastError): + pass + + +class TransactionError(HazelcastError): + pass + + +class TransactionNotActiveError(HazelcastError): + pass + + +class TransactionTimedOutError(HazelcastError): + pass + + +class URISyntaxError(HazelcastError): + pass + + +class UTFDataFormatError(HazelcastError): + pass + + +class 
UnsupportedOperationError(HazelcastError): + pass + + +@retryable +class WrongTargetError(HazelcastError): + pass + + +class XAError(HazelcastError): + pass + + +class AccessControlError(HazelcastError): + pass + + +class LoginError(HazelcastError): + pass + + +class UnsupportedCallbackError(HazelcastError): + pass + + +class NoDataMemberInClusterError(HazelcastError): + pass + + +class ReplicatedMapCantBeCreatedOnLiteMemberError(HazelcastError): + pass + + +class MaxMessageSizeExceededError(HazelcastError): + pass + + +class WANReplicationQueueFullError(HazelcastError): + pass + + +class HazelcastAssertionError(HazelcastError): + pass + + +class OutOfMemoryError(HazelcastError): + pass + + +class StackOverflowError(HazelcastError): + pass + + +class NativeOutOfMemoryError(HazelcastError): + pass + + +class ServiceNotFoundError(HazelcastError): + pass + + +class StaleTaskIdError(HazelcastError): + pass + + +class DuplicateTaskError(HazelcastError): + pass + + +class StaleTaskError(HazelcastError): + pass + + +class LocalMemberResetError(HazelcastError): + pass + + +class IndeterminateOperationStateError(HazelcastError): + pass + + +class NodeIdOutOfRangeError(HazelcastError): + pass + + +@retryable +class TargetNotReplicaError(HazelcastError): + pass + + +class MutationDisallowedError(HazelcastError): + pass + + +class ConsistencyLostError(HazelcastError): + pass + + +class HazelcastClientNotActiveError(ValueError): + def __init__(self, message="Client is not active"): + super(HazelcastClientNotActiveError, self).__init__(message) + + +class HazelcastCertificationError(HazelcastError): + pass + + +class ClientOfflineError(HazelcastError): + def __init__(self): + super(ClientOfflineError, self).__init__("No connection found to cluster") + + +class ClientNotAllowedInClusterError(HazelcastError): + pass + + +class VersionMismatchError(HazelcastError): + pass + + +class NoSuchMethodError(HazelcastError): + pass + + +class NoSuchMethodException(HazelcastError): + pass + + +class NoSuchFieldError(HazelcastError): + pass + + +class NoSuchFieldException(HazelcastError): + pass + + +class NoClassDefFoundError(HazelcastError): + pass + + +class UndefinedErrorCodeError(HazelcastError): + pass + + +# Error Codes +_UNDEFINED = 0 +_ARRAY_INDEX_OUT_OF_BOUNDS = 1 +_ARRAY_STORE = 2 +_AUTHENTICATION = 3 +_CACHE = 4 +_CACHE_LOADER = 5 +_CACHE_NOT_EXISTS = 6 +_CACHE_WRITER = 7 +_CALLER_NOT_MEMBER = 8 +_CANCELLATION = 9 +_CLASS_CAST = 10 +_CLASS_NOT_FOUND = 11 +_CONCURRENT_MODIFICATION = 12 +_CONFIG_MISMATCH = 13 +_DISTRIBUTED_OBJECT_DESTROYED = 14 +_EOF = 15 +_ENTRY_PROCESSOR = 16 +_EXECUTION = 17 +_HAZELCAST = 18 +_HAZELCAST_INSTANCE_NOT_ACTIVE = 19 +_HAZELCAST_OVERLOAD = 20 +_HAZELCAST_SERIALIZATION = 21 +_IO = 22 +_ILLEGAL_ARGUMENT = 23 +_ILLEGAL_ACCESS_EXCEPTION = 24 +_ILLEGAL_ACCESS_ERROR = 25 +_ILLEGAL_MONITOR_STATE = 26 +_ILLEGAL_STATE = 27 +_ILLEGAL_THREAD_STATE = 28 +_INDEX_OUT_OF_BOUNDS = 29 +_INTERRUPTED = 30 +_INVALID_ADDRESS = 31 +_INVALID_CONFIGURATION = 32 +_MEMBER_LEFT = 33 +_NEGATIVE_ARRAY_SIZE = 34 +_NO_SUCH_ELEMENT = 35 +_NOT_SERIALIZABLE = 36 +_NULL_POINTER = 37 +_OPERATION_TIMEOUT = 38 +_PARTITION_MIGRATING = 39 +_QUERY = 40 +_QUERY_RESULT_SIZE_EXCEEDED = 41 +_SPLIT_BRAIN_PROTECTION = 42 +_REACHED_MAX_SIZE = 43 +_REJECTED_EXECUTION = 44 +_RESPONSE_ALREADY_SENT = 45 +_RETRYABLE_HAZELCAST = 46 +_RETRYABLE_IO = 47 +_RUNTIME = 48 +_SECURITY = 49 +_SOCKET = 50 +_STALE_SEQUENCE = 51 +_TARGET_DISCONNECTED = 52 +_TARGET_NOT_MEMBER = 53 +_TIMEOUT = 54 +_TOPIC_OVERLOAD = 55 +_TRANSACTION = 56 
+_TRANSACTION_NOT_ACTIVE = 57 +_TRANSACTION_TIMED_OUT = 58 +_URI_SYNTAX = 59 +_UTF_DATA_FORMAT = 60 +_UNSUPPORTED_OPERATION = 61 +_WRONG_TARGET = 62 +_XA = 63 +_ACCESS_CONTROL = 64 +_LOGIN = 65 +_UNSUPPORTED_CALLBACK = 66 +_NO_DATA_MEMBER = 67 +_REPLICATED_MAP_CANT_BE_CREATED = 68 +_MAX_MESSAGE_SIZE_EXCEEDED = 69 +_WAN_REPLICATION_QUEUE_FULL = 70 +_ASSERTION_ERROR = 71 +_OUT_OF_MEMORY_ERROR = 72 +_STACK_OVERFLOW_ERROR = 73 +_NATIVE_OUT_OF_MEMORY_ERROR = 74 +_SERVICE_NOT_FOUND = 75 +_STALE_TASK_ID = 76 +_DUPLICATE_TASK = 77 +_STALE_TASK = 78 +_LOCAL_MEMBER_RESET = 79 +_INDETERMINATE_OPERATION_STATE = 80 +_FLAKE_ID_NODE_ID_OUT_OF_RANGE_EXCEPTION = 81 +_TARGET_NOT_REPLICA_EXCEPTION = 82 +_MUTATION_DISALLOWED_EXCEPTION = 83 +_CONSISTENCY_LOST_EXCEPTION = 84 +_SESSION_EXPIRED_EXCEPTION = 85 +_WAIT_KEY_CANCELLED_EXCEPTION = 86 +_LOCK_ACQUIRE_LIMIT_REACHED_EXCEPTION = 87 +_LOCK_OWNERSHIP_LOST_EXCEPTION = 88 +_CP_GROUP_DESTROYED_EXCEPTION = 89 +_CANNOT_REPLICATE_EXCEPTION = 90 +_LEADER_DEMOTED_EXCEPTION = 91 +_STALE_APPEND_REQUEST_EXCEPTION = 92 +_NOT_LEADER_EXCEPTION = 93 +_VERSION_MISMATCH_EXCEPTION = 94 +_NO_SUCH_METHOD_ERROR = 95 +_NO_SUCH_METHOD_EXCEPTION = 96 +_NO_SUCH_FIELD_ERROR = 97 +_NO_SUCH_FIELD_EXCEPTION = 98 +_NO_CLASS_DEF_FOUND_ERROR = 99 + +_ERROR_CODE_TO_ERROR = { + _ARRAY_INDEX_OUT_OF_BOUNDS: ArrayIndexOutOfBoundsError, + _ARRAY_STORE: ArrayStoreError, + _AUTHENTICATION: AuthenticationError, + _CACHE_NOT_EXISTS: CacheNotExistsError, + _CALLER_NOT_MEMBER: CallerNotMemberError, + _CANCELLATION: CancellationError, + _CLASS_CAST: ClassCastError, + _CLASS_NOT_FOUND: ClassNotFoundError, + _CONCURRENT_MODIFICATION: ConcurrentModificationError, + _CONFIG_MISMATCH: ConfigMismatchError, + _DISTRIBUTED_OBJECT_DESTROYED: DistributedObjectDestroyedError, + _EOF: HazelcastEOFError, + _EXECUTION: ExecutionError, + _HAZELCAST: HazelcastError, + _HAZELCAST_INSTANCE_NOT_ACTIVE: HazelcastInstanceNotActiveError, + _HAZELCAST_OVERLOAD: HazelcastOverloadError, + _HAZELCAST_SERIALIZATION: HazelcastSerializationError, + _IO: HazelcastIOError, + _ILLEGAL_ARGUMENT: IllegalArgumentError, + _ILLEGAL_ACCESS_EXCEPTION: IllegalAccessException, + _ILLEGAL_ACCESS_ERROR: IllegalAccessError, + _ILLEGAL_MONITOR_STATE: IllegalMonitorStateError, + _ILLEGAL_STATE: IllegalStateError, + _ILLEGAL_THREAD_STATE: IllegalThreadStateError, + _INDEX_OUT_OF_BOUNDS: IndexOutOfBoundsError, + _INTERRUPTED: HazelcastInterruptedError, + _INVALID_ADDRESS: InvalidAddressError, + _INVALID_CONFIGURATION: InvalidConfigurationError, + _MEMBER_LEFT: MemberLeftError, + _NEGATIVE_ARRAY_SIZE: NegativeArraySizeError, + _NO_SUCH_ELEMENT: NoSuchElementError, + _NOT_SERIALIZABLE: NotSerializableError, + _NULL_POINTER: NullPointerError, + _OPERATION_TIMEOUT: OperationTimeoutError, + _PARTITION_MIGRATING: PartitionMigratingError, + _QUERY: QueryError, + _QUERY_RESULT_SIZE_EXCEEDED: QueryResultSizeExceededError, + _SPLIT_BRAIN_PROTECTION: SplitBrainProtectionError, + _REACHED_MAX_SIZE: ReachedMaxSizeError, + _REJECTED_EXECUTION: RejectedExecutionError, + _RESPONSE_ALREADY_SENT: ResponseAlreadySentError, + _RETRYABLE_HAZELCAST: RetryableHazelcastError, + _RETRYABLE_IO: RetryableIOError, + _RUNTIME: HazelcastRuntimeError, + _SECURITY: SecurityError, + _SOCKET: socket.error, + _STALE_SEQUENCE: StaleSequenceError, + _TARGET_DISCONNECTED: TargetDisconnectedError, + _TARGET_NOT_MEMBER: TargetNotMemberError, + _TIMEOUT: HazelcastTimeoutError, + _TOPIC_OVERLOAD: TopicOverloadError, + _TRANSACTION: TransactionError, + _TRANSACTION_NOT_ACTIVE: 
TransactionNotActiveError, + _TRANSACTION_TIMED_OUT: TransactionTimedOutError, + _URI_SYNTAX: URISyntaxError, + _UTF_DATA_FORMAT: UTFDataFormatError, + _UNSUPPORTED_OPERATION: UnsupportedOperationError, + _WRONG_TARGET: WrongTargetError, + _XA: XAError, + _ACCESS_CONTROL: AccessControlError, + _LOGIN: LoginError, + _UNSUPPORTED_CALLBACK: UnsupportedCallbackError, + _NO_DATA_MEMBER: NoDataMemberInClusterError, + _REPLICATED_MAP_CANT_BE_CREATED: ReplicatedMapCantBeCreatedOnLiteMemberError, + _MAX_MESSAGE_SIZE_EXCEEDED: MaxMessageSizeExceededError, + _WAN_REPLICATION_QUEUE_FULL: WANReplicationQueueFullError, + _ASSERTION_ERROR: HazelcastAssertionError, + _OUT_OF_MEMORY_ERROR: OutOfMemoryError, + _STACK_OVERFLOW_ERROR: StackOverflowError, + _NATIVE_OUT_OF_MEMORY_ERROR: NativeOutOfMemoryError, + _SERVICE_NOT_FOUND: ServiceNotFoundError, + _STALE_TASK_ID: StaleTaskIdError, + _DUPLICATE_TASK: DuplicateTaskError, + _STALE_TASK: StaleTaskError, + _LOCAL_MEMBER_RESET: LocalMemberResetError, + _INDETERMINATE_OPERATION_STATE: IndeterminateOperationStateError, + _FLAKE_ID_NODE_ID_OUT_OF_RANGE_EXCEPTION: NodeIdOutOfRangeError, + _TARGET_NOT_REPLICA_EXCEPTION: TargetNotReplicaError, + _MUTATION_DISALLOWED_EXCEPTION: MutationDisallowedError, + _CONSISTENCY_LOST_EXCEPTION: ConsistencyLostError, + _VERSION_MISMATCH_EXCEPTION: VersionMismatchError, + _NO_SUCH_METHOD_ERROR: NoSuchMethodError, + _NO_SUCH_METHOD_EXCEPTION: NoSuchMethodException, + _NO_SUCH_FIELD_ERROR: NoSuchFieldError, + _NO_SUCH_FIELD_EXCEPTION: NoSuchFieldException, + _NO_CLASS_DEF_FOUND_ERROR: NoClassDefFoundError, +} + +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.codec.custom.error_holder_codec import ErrorHolderCodec + + +class _ErrorsCodec(object): + @staticmethod + def decode(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, ErrorHolderCodec.decode) + + +def create_error_from_message(error_message): + """ + Creates an exception with given error codec. + + :param error_message: (ClientMessage), error message which includes the class name, message and exception trace. + :return: (Exception), the created exception. + """ + error_holders = _ErrorsCodec.decode(error_message) + return _create_error(error_holders, 0) + + +def _create_error(error_holders, idx): + if idx == len(error_holders): + return None + + error_holder = error_holders[idx] + error_class = _ERROR_CODE_TO_ERROR.get(error_holder.error_code, None) + + stack_trace = "\n".join( + ["\tat %s.%s(%s:%s)" % (x.class_name, x.method_name, x.file_name, x.line_number) for x in + error_holder.stack_trace_elements]) + message = "Exception from server: %s: %s\n %s" % (error_holder.class_name, error_holder.message, stack_trace) + if error_class: + return error_class(message, _create_error(error_holders, idx + 1)) + else: + return UndefinedErrorCodeError(message, error_holder.class_name) + + +def is_retryable_error(error): + """ + Determines whether the given error is retryable or not. + :param error: (:class:`~hazelcast.exception.HazelcastError`), the given error. + :return: (bool), ``true`` if the given error is retryable, ``false`` otherwise. + """ + return hasattr(error, 'retryable') diff --git a/hazelcast/exception.py b/hazelcast/exception.py deleted file mode 100644 index 92ebff81f7..0000000000 --- a/hazelcast/exception.py +++ /dev/null @@ -1,484 +0,0 @@ -from hazelcast.protocol.error_codes import * - - -def retryable(cls): - """ - Makes the given error retryable. 
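The `@retryable` marker used throughout the new `hazelcast.errors` module is just a class attribute that `is_retryable_error` detects with `hasattr`. A quick check against two of the classes defined above:

```python
from hazelcast.errors import (AuthenticationError, HazelcastInstanceNotActiveError,
                              is_retryable_error)

# Decorated with @retryable in hazelcast.errors, so the attribute is present.
print(is_retryable_error(HazelcastInstanceNotActiveError("member is shutting down")))  # True

# Not decorated, so retries are not allowed for it.
print(is_retryable_error(AuthenticationError("bad cluster name")))  # False
```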
- - :param cls: (:class:`~hazelcast.exception.HazelcastError`), the given error. - :return: (:class:`~hazelcast.exception.HazelcastError`), the given error with retryable property. - """ - cls.retryable = True - return cls - - -class HazelcastError(Exception): - """ - General HazelcastError class. - """ - pass - - -class ArrayIndexOutOfBoundsError(HazelcastError): - pass - - -class ArrayStoreError(HazelcastError): - pass - - -class AuthenticationError(HazelcastError): - pass - - -class CacheNotExistsError(HazelcastError): - pass - - -@retryable -class CallerNotMemberError(HazelcastError): - pass - - -class CancellationError(HazelcastError): - pass - - -class ClassCastError(HazelcastError): - pass - - -class ClassNotFoundError(HazelcastError): - pass - - -class ConcurrentModificationError(HazelcastError): - pass - - -class ConfigMismatchError(HazelcastError): - pass - - -class ConfigurationError(HazelcastError): - pass - - -class DistributedObjectDestroyedError(HazelcastError): - pass - - -class DuplicateInstanceNameError(HazelcastError): - pass - - -class HazelcastEOFError(HazelcastError): - pass - - -class ExecutionError(HazelcastError): - pass - - -@retryable -class HazelcastInstanceNotActiveError(HazelcastError): - pass - - -class HazelcastOverloadError(HazelcastError): - pass - - -class HazelcastSerializationError(HazelcastError): - pass - - -class HazelcastIOError(HazelcastError): - pass - - -class IllegalArgumentError(HazelcastError): - pass - - -class IllegalAccessException(HazelcastError): - pass - - -class IllegalAccessError(HazelcastError): - pass - - -class IllegalMonitorStateError(HazelcastError): - pass - - -class HazelcastIllegalStateError(HazelcastError): - pass - - -class IllegalThreadStateError(HazelcastError): - pass - - -class IndexOutOfBoundsError(HazelcastError): - pass - - -class HazelcastInterruptedError(HazelcastError): - pass - - -class InvalidAddressError(HazelcastError): - pass - - -class InvalidConfigurationError(HazelcastError): - pass - - -@retryable -class MemberLeftError(HazelcastError): - pass - - -class NegativeArraySizeError(HazelcastError): - pass - - -class NoSuchElementError(HazelcastError): - pass - - -class NotSerializableError(HazelcastError): - pass - - -class NullPointerError(HazelcastError): - pass - - -class OperationTimeoutError(HazelcastError): - pass - - -@retryable -class PartitionMigratingError(HazelcastError): - pass - - -class QueryError(HazelcastError): - pass - - -class QueryResultSizeExceededError(HazelcastError): - pass - - -class QuorumError(HazelcastError): - pass - - -class ReachedMaxSizeError(HazelcastError): - pass - - -class RejectedExecutionError(HazelcastError): - pass - - -class RemoteMapReduceError(HazelcastError): - pass - - -class ResponseAlreadySentError(HazelcastError): - pass - - -@retryable -class RetryableHazelcastError(HazelcastError): - pass - - -@retryable -class RetryableIOError(HazelcastError): - pass - - -class HazelcastRuntimeError(HazelcastError): - pass - - -class SecurityError(HazelcastError): - pass - - -class SocketError(HazelcastError): - pass - - -class StaleSequenceError(HazelcastError): - pass - - -class TargetDisconnectedError(HazelcastError): - pass - - -@retryable -class TargetNotMemberError(HazelcastError): - pass - - -class TimeoutError(HazelcastError): - pass - - -class TopicOverloadError(HazelcastError): - pass - - -class TopologyChangedError(HazelcastError): - pass - - -class TransactionError(HazelcastError): - pass - - -class TransactionNotActiveError(HazelcastError): - pass - - -class 
TransactionTimedOutError(HazelcastError): - pass - - -class URISyntaxError(HazelcastError): - pass - - -class UTFDataFormatError(HazelcastError): - pass - - -class UnsupportedOperationError(HazelcastError): - pass - - -@retryable -class WrongTargetError(HazelcastError): - pass - - -class XAError(HazelcastError): - pass - - -class AccessControlError(HazelcastError): - pass - - -class LoginError(HazelcastError): - pass - - -class UnsupportedCallbackError(HazelcastError): - pass - - -class NoDataMemberInClusterError(HazelcastError): - pass - - -class ReplicatedMapCantBeCreatedOnLiteMemberError(HazelcastError): - pass - - -class MaxMessageSizeExceededError(HazelcastError): - pass - - -class WANReplicationQueueFullError(HazelcastError): - pass - - -class HazelcastAssertionError(HazelcastError): - pass - - -class OutOfMemoryError(HazelcastError): - pass - - -class StackOverflowError(HazelcastError): - pass - - -class NativeOutOfMemoryError(HazelcastError): - pass - - -class ServiceNotFoundError(HazelcastError): - pass - - -class StaleTaskIdError(HazelcastError): - pass - - -class DuplicateTaskError(HazelcastError): - pass - - -class StaleTaskError(HazelcastError): - pass - - -class LocalMemberResetError(HazelcastError): - pass - - -class IndeterminateOperationStateError(HazelcastError): - pass - - -class NodeIdOutOfRangeError(HazelcastError): - pass - - -@retryable -class TargetNotReplicaError(HazelcastError): - pass - - -class MutationDisallowedError(HazelcastError): - pass - - -class ConsistencyLostError(HazelcastError): - pass - - -class HazelcastClientNotActiveException(ValueError): - pass - - -class HazelcastCertificationError(HazelcastError): - pass - - -ERROR_CODE_TO_ERROR = { - ARRAY_INDEX_OUT_OF_BOUNDS: ArrayIndexOutOfBoundsError, - ARRAY_STORE: ArrayStoreError, - AUTHENTICATION: AuthenticationError, - CACHE_NOT_EXISTS: CacheNotExistsError, - CALLER_NOT_MEMBER: CallerNotMemberError, - CANCELLATION: CancellationError, - CLASS_CAST: ClassCastError, - CLASS_NOT_FOUND: ClassNotFoundError, - CONCURRENT_MODIFICATION: ConcurrentModificationError, - CONFIG_MISMATCH: ConfigMismatchError, - CONFIGURATION: ConfigurationError, - DISTRIBUTED_OBJECT_DESTROYED: DistributedObjectDestroyedError, - DUPLICATE_INSTANCE_NAME: DuplicateInstanceNameError, - EOF: HazelcastEOFError, - EXECUTION: ExecutionError, - HAZELCAST: HazelcastError, - HAZELCAST_INSTANCE_NOT_ACTIVE: HazelcastInstanceNotActiveError, - HAZELCAST_OVERLOAD: HazelcastOverloadError, - HAZELCAST_SERIALIZATION: HazelcastSerializationError, - IO: HazelcastIOError, - ILLEGAL_ARGUMENT: IllegalArgumentError, - ILLEGAL_ACCESS_EXCEPTION: IllegalAccessException, - ILLEGAL_ACCESS_ERROR: IllegalAccessError, - ILLEGAL_MONITOR_STATE: IllegalMonitorStateError, - ILLEGAL_STATE: HazelcastIllegalStateError, - ILLEGAL_THREAD_STATE: IllegalThreadStateError, - INDEX_OUT_OF_BOUNDS: IndexOutOfBoundsError, - INTERRUPTED: HazelcastInterruptedError, - INVALID_ADDRESS: InvalidAddressError, - INVALID_CONFIGURATION: InvalidConfigurationError, - MEMBER_LEFT: MemberLeftError, - NEGATIVE_ARRAY_SIZE: NegativeArraySizeError, - NO_SUCH_ELEMENT: NoSuchElementError, - NOT_SERIALIZABLE: NotSerializableError, - NULL_POINTER: NullPointerError, - OPERATION_TIMEOUT: OperationTimeoutError, - PARTITION_MIGRATING: PartitionMigratingError, - QUERY: QueryError, - QUERY_RESULT_SIZE_EXCEEDED: QueryResultSizeExceededError, - QUORUM: QuorumError, - REACHED_MAX_SIZE: ReachedMaxSizeError, - REJECTED_EXECUTION: RejectedExecutionError, - REMOTE_MAP_REDUCE: RemoteMapReduceError, - 
RESPONSE_ALREADY_SENT: ResponseAlreadySentError, - RETRYABLE_HAZELCAST: RetryableHazelcastError, - RETRYABLE_IO: RetryableIOError, - RUNTIME: HazelcastRuntimeError, - SECURITY: SecurityError, - SOCKET: SocketError, - STALE_SEQUENCE: StaleSequenceError, - TARGET_DISCONNECTED: TargetDisconnectedError, - TARGET_NOT_MEMBER: TargetNotMemberError, - TIMEOUT: TimeoutError, - TOPIC_OVERLOAD: TopicOverloadError, - TOPOLOGY_CHANGED: TopologyChangedError, - TRANSACTION: TransactionError, - TRANSACTION_NOT_ACTIVE: TransactionNotActiveError, - TRANSACTION_TIMED_OUT: TransactionTimedOutError, - URI_SYNTAX: URISyntaxError, - UTF_DATA_FORMAT: UTFDataFormatError, - UNSUPPORTED_OPERATION: UnsupportedOperationError, - WRONG_TARGET: WrongTargetError, - XA: XAError, - ACCESS_CONTROL: AccessControlError, - LOGIN: LoginError, - UNSUPPORTED_CALLBACK: UnsupportedCallbackError, - NO_DATA_MEMBER: NoDataMemberInClusterError, - REPLICATED_MAP_CANT_BE_CREATED: ReplicatedMapCantBeCreatedOnLiteMemberError, - MAX_MESSAGE_SIZE_EXCEEDED: MaxMessageSizeExceededError, - WAN_REPLICATION_QUEUE_FULL: WANReplicationQueueFullError, - ASSERTION_ERROR: HazelcastAssertionError, - OUT_OF_MEMORY_ERROR: OutOfMemoryError, - STACK_OVERFLOW_ERROR: StackOverflowError, - NATIVE_OUT_OF_MEMORY_ERROR: NativeOutOfMemoryError, - SERVICE_NOT_FOUND: ServiceNotFoundError, - STALE_TASK_ID: StaleTaskIdError, - DUPLICATE_TASK: DuplicateTaskError, - STALE_TASK: StaleTaskError, - LOCAL_MEMBER_RESET: LocalMemberResetError, - INDETERMINATE_OPERATION_STATE: IndeterminateOperationStateError, - FLAKE_ID_NODE_ID_OUT_OF_RANGE_EXCEPTION: NodeIdOutOfRangeError, - TARGET_NOT_REPLICA_EXCEPTION: TargetNotReplicaError, - MUTATION_DISALLOWED_EXCEPTION: MutationDisallowedError, - CONSISTENCY_LOST_EXCEPTION: ConsistencyLostError, -} - - -def create_exception(error_codec): - """ - Creates an exception with given error codec. - - :param error_codec: (Error Codec), error codec which includes the class name, message and exception trace. - :return: (Exception), the created exception. - """ - if error_codec.error_code in ERROR_CODE_TO_ERROR: - return ERROR_CODE_TO_ERROR[error_codec.error_code](error_codec.message) - - stack_trace = "\n".join( - ["\tat %s.%s(%s:%s)" % (x.declaring_class, x.method_name, x.file_name, x.line_number) for x in - error_codec.stack_trace]) - message = "Got exception from server:\n %s: %s\n %s" % (error_codec.class_name, - error_codec.message, - stack_trace) - return HazelcastError(message) - - -def is_retryable_error(error): - """ - Determines whether the given error is retryable or not. - :param error: (:class:`~hazelcast.exception.HazelcastError`), the given error. - :return: (bool), ``true`` if the given error is retryable, ``false`` otherwise. 
- """ - return hasattr(error, 'retryable') diff --git a/hazelcast/invocation.py b/hazelcast/invocation.py index 7385160171..21c67452a0 100644 --- a/hazelcast/invocation.py +++ b/hazelcast/invocation.py @@ -1,141 +1,171 @@ import logging -import threading import time import functools -from hazelcast.exception import create_exception, HazelcastInstanceNotActiveError, is_retryable_error, TimeoutError, \ - TargetDisconnectedError, HazelcastClientNotActiveException, TargetNotMemberError +from hazelcast.errors import create_error_from_message, HazelcastInstanceNotActiveError, is_retryable_error, \ + HazelcastTimeoutError, TargetDisconnectedError, HazelcastClientNotActiveError, TargetNotMemberError, \ + EXCEPTION_MESSAGE_TYPE from hazelcast.future import Future -from hazelcast.lifecycle import LIFECYCLE_STATE_CONNECTED -from hazelcast.protocol.client_message import LISTENER_FLAG -from hazelcast.protocol.custom_codec import EXCEPTION_MESSAGE_TYPE, ErrorCodec from hazelcast.util import AtomicInteger -from hazelcast.six.moves import queue from hazelcast import six -class Invocation(object): - sent_connection = None - timer = None - - def __init__(self, invocation_service, request, partition_id=-1, address=None, connection=None, event_handler=None): - self._event = threading.Event() - self._invocation_timeout = invocation_service.invocation_timeout - self.timeout = self._invocation_timeout + time.time() - self.address = address - self.connection = connection - self.partition_id = partition_id - self.request = request - self.future = Future() - self.event_handler = event_handler +def _no_op_response_handler(_): + pass - def has_connection(self): - return self.connection is not None - def has_partition_id(self): - return self.partition_id >= 0 +class Invocation(object): + __slots__ = ("request", "timeout", "partition_id", "uuid", "connection", "event_handler", + "future", "sent_connection", "urgent", "response_handler") - def has_address(self): - return self.address is not None + def __init__(self, request, partition_id=-1, uuid=None, connection=None, + event_handler=None, urgent=False, timeout=None, response_handler=_no_op_response_handler): + self.request = request + self.partition_id = partition_id + self.uuid = uuid + self.connection = connection + self.event_handler = event_handler + self.urgent = urgent + self.timeout = timeout + self.future = Future() + self.timeout = None + self.sent_connection = None + self.response_handler = response_handler def set_response(self, response): - if self.timer: - self.timer.cancel() - self.future.set_result(response) + try: + result = self.response_handler(response) + self.future.set_result(result) + except Exception as e: + self.future.set_exception(e) def set_exception(self, exception, traceback=None): - if self.timer: - self.timer.cancel() self.future.set_exception(exception, traceback) - def set_timeout(self, timeout): - self._invocation_timeout = timeout - self.timeout = self._invocation_timeout + time.time() - - def on_timeout(self): - self.set_exception(TimeoutError("Request timed out after %d seconds." 
% self._invocation_timeout)) - class InvocationService(object): logger = logging.getLogger("HazelcastClient.InvocationService") - def __init__(self, client): - self._pending = {} - self._next_correlation_id = AtomicInteger(1) + def __init__(self, client, reactor, logger_extras): + config = client.config + if config.network.smart_routing: + self.invoke = self._invoke_smart + else: + self.invoke = self._invoke_non_smart + self._client = client - self._logger_extras = {"client_name": client.name, "group_name": client.config.group_config.name} - self._event_queue = queue.Queue() - self._is_redo_operation = client.config.network_config.redo_operation - self.invocation_retry_pause = self._init_invocation_retry_pause() - self.invocation_timeout = self._init_invocation_timeout() + self._reactor = reactor + self._logger_extras = logger_extras + self._partition_service = None + self._connection_manager = None self._listener_service = None + self._check_invocation_allowed_fn = None + self._pending = {} + self._next_correlation_id = AtomicInteger(1) + self._is_redo_operation = config.network.redo_operation + self._invocation_timeout = self._init_invocation_timeout() + self._invocation_retry_pause = self._init_invocation_retry_pause() + self._shutdown = False + + def start(self, partition_service, connection_manager, listener_service): + self._partition_service = partition_service + self._connection_manager = connection_manager + self._listener_service = listener_service + self._check_invocation_allowed_fn = connection_manager.check_invocation_allowed + + def handle_client_message(self, message): + correlation_id = message.get_correlation_id() - if client.config.network_config.smart_routing: - self.invoke = self.invoke_smart - else: - self.invoke = self.invoke_non_smart - - self._client.connection_manager.add_listener(on_connection_closed=self.cleanup_connection) - client.heartbeat.add_listener(on_heartbeat_stopped=self._heartbeat_stopped) - - def start(self): - self._listener_service = self._client.listener - - def invoke_on_connection(self, message, connection, ignore_heartbeat=False, event_handler=None): - return self.invoke(Invocation(self, message, connection=connection, event_handler=event_handler), - ignore_heartbeat) - - def invoke_on_partition(self, message, partition_id, invocation_timeout=None): - invocation = Invocation(self, message, partition_id=partition_id) - if invocation_timeout: - invocation.set_timeout(invocation_timeout) - return self.invoke(invocation) - - def invoke_on_random_target(self, message): - return self.invoke(Invocation(self, message)) - - def invoke_on_target(self, message, address): - return self.invoke(Invocation(self, message, address=address)) - - def invoke_smart(self, invocation, ignore_heartbeat=False): - if invocation.has_connection(): - self._send(invocation, invocation.connection, ignore_heartbeat) - elif invocation.has_partition_id(): - addr = self._client.partition_service.get_partition_owner(invocation.partition_id) - if addr is None: - self._handle_exception(invocation, IOError("Partition does not have an owner. 
" - "partition Id: ".format(invocation.partition_id))) - elif not self._is_member(addr): - self._handle_exception(invocation, TargetNotMemberError("Partition owner '{}' " - "is not a member.".format(addr))) - else: - self._send_to_address(invocation, addr) - elif invocation.has_address(): - if not self._is_member(invocation.address): - self._handle_exception(invocation, TargetNotMemberError("Target '{}' is not a member.".format - (invocation.address))) - else: - self._send_to_address(invocation, invocation.address) - else: # send to random address - addr = self._client.load_balancer.next_address() - if addr is None: - self._handle_exception(invocation, IOError("No address found to invoke")) + if message.start_frame.has_event_flag(): + self._listener_service.handle_client_message(message, correlation_id) + return + + invocation = self._pending.pop(correlation_id, None) + if not invocation: + self.logger.warning("Got message with unknown correlation id: %s", message, extra=self._logger_extras) + return + + if message.get_message_type() == EXCEPTION_MESSAGE_TYPE: + error = create_error_from_message(message) + return self._handle_exception(invocation, error) + + invocation.set_response(message) + + def shutdown(self): + self._shutdown = True + for invocation in list(six.itervalues(self._pending)): + self._handle_exception(invocation, HazelcastClientNotActiveError()) + + def _invoke_on_partition_owner(self, invocation, partition_id): + owner_uuid = self._partition_service.get_partition_owner(partition_id) + if not owner_uuid: + self.logger.debug("Partition owner is not assigned yet", extra=self._logger_extras) + return False + return self._invoke_on_target(invocation, owner_uuid) + + def _invoke_on_target(self, invocation, owner_uuid): + connection = self._connection_manager.get_connection(owner_uuid) + if not connection: + self.logger.debug("Client is not connected to target: %s" % owner_uuid, extra=self._logger_extras) + return False + return self._send(invocation, connection) + + def _invoke_on_random_connection(self, invocation): + connection = self._connection_manager.get_random_connection() + if not connection: + self.logger.debug("No connection found to invoke", extra=self._logger_extras) + return False + return self._send(invocation, connection) + + def _invoke_smart(self, invocation): + if not invocation.timeout: + invocation.timeout = self._invocation_timeout + time.time() + + try: + if not invocation.urgent: + self._check_invocation_allowed_fn() + + connection = invocation.connection + if connection: + invoked = self._send(invocation, connection) + if not invoked: + self._handle_exception(invocation, IOError("Could not invoke on connection %s" % connection)) + return + + if invocation.partition_id != -1: + invoked = self._invoke_on_partition_owner(invocation, invocation.partition_id) + elif invocation.uuid: + invoked = self._invoke_on_target(invocation, invocation.uuid) else: - self._send_to_address(invocation, addr) - return invocation.future + invoked = self._invoke_on_random_connection(invocation) - def invoke_non_smart(self, invocation, ignore_heartbeat=False): - if invocation.has_connection(): - self._send(invocation, invocation.connection, ignore_heartbeat) - else: - addr = self._client.cluster.owner_connection_address - self._send_to_address(invocation, addr) - return invocation.future + if not invoked: + invoked = self._invoke_on_random_connection(invocation) - def cleanup_connection(self, connection, cause): - for correlation_id, invocation in 
six.iteritems(dict(self._pending)): - if invocation.sent_connection == connection: - self._handle_exception(invocation, cause) + if not invoked: + self._handle_exception(invocation, IOError("No connection found to invoke")) + except Exception as e: + self._handle_exception(invocation, e) + + def _invoke_non_smart(self, invocation): + if not invocation.timeout: + invocation.timeout = self._invocation_timeout + time.time() + + try: + if not invocation.urgent: + self._check_invocation_allowed_fn() + + connection = invocation.connection + if connection: + invoked = self._send(invocation, connection) + if not invoked: + self._handle_exception(invocation, IOError("Could not invoke on connection %s" % connection)) + return + + if not self._invoke_on_random_connection(invocation): + self._handle_exception(invocation, IOError("No connection found to invoke")) + except Exception as e: + self._handle_exception(invocation, e) def _init_invocation_retry_pause(self): invocation_retry_pause = self._client.properties.get_seconds_positive_or_default( @@ -147,123 +177,64 @@ def _init_invocation_timeout(self): self._client.properties.INVOCATION_TIMEOUT_SECONDS) return invocation_timeout - def _heartbeat_stopped(self, connection): - for correlation_id, invocation in six.iteritems(dict(self._pending)): - if invocation.sent_connection == connection: - self._handle_exception(invocation, - TargetDisconnectedError("%s has stopped heart beating." % connection)) - - def _send_to_address(self, invocation, address, ignore_heartbeat=False): - try: - conn = self._client.connection_manager.connections[address] - self._send(invocation, conn, ignore_heartbeat) - except KeyError: - if self._client.lifecycle.state != LIFECYCLE_STATE_CONNECTED: - self._handle_exception(invocation, IOError("Client is not in connected state")) - else: - self._client.connection_manager.get_or_connect(address).continue_with(self.on_connect, invocation, - ignore_heartbeat) - - def on_connect(self, f, invocation, ignore_heartbeat): - if f.is_success(): - self._send(invocation, f.result(), ignore_heartbeat) - else: - self._handle_exception(invocation, f.exception(), f.traceback()) + def _send(self, invocation, connection): + if self._shutdown: + raise HazelcastClientNotActiveError() - def _send(self, invocation, connection, ignore_heartbeat): correlation_id = self._next_correlation_id.get_and_increment() message = invocation.request message.set_correlation_id(correlation_id) message.set_partition_id(invocation.partition_id) self._pending[correlation_id] = invocation - if not invocation.timer: - invocation.timer = self._client.reactor.add_timer_absolute(invocation.timeout, invocation.on_timeout) - if invocation.event_handler is not None: + if invocation.event_handler: self._listener_service.add_event_handler(correlation_id, invocation.event_handler) self.logger.debug("Sending %s to %s", message, connection, extra=self._logger_extras) - if not ignore_heartbeat and not connection.heartbeating: - self._handle_exception(invocation, TargetDisconnectedError("%s has stopped heart beating." 
% connection)) - return - - invocation.sent_connection = connection - try: - connection.send_message(message) - except IOError as e: - if invocation.event_handler is not None: + if not connection.send_message(message): + if invocation.event_handler: self._listener_service.remove_event_handler(correlation_id) - self._handle_exception(invocation, e) - - def _handle_client_message(self, message): - correlation_id = message.get_correlation_id() - if message.has_flags(LISTENER_FLAG): - self._listener_service.handle_client_message(message) - return - if correlation_id not in self._pending: - self.logger.warning("Got message with unknown correlation id: %s", message, extra=self._logger_extras) - return - invocation = self._pending.pop(correlation_id) - - if message.get_message_type() == EXCEPTION_MESSAGE_TYPE: - error = create_exception(ErrorCodec(message)) - return self._handle_exception(invocation, error) - - invocation.set_response(message) - - def _handle_event(self, invocation, message): - try: - invocation.event_handler(message) - except: - self.logger.warning("Error handling event %s", message, exc_info=True, extra=self._logger_extras) + return False + return True def _handle_exception(self, invocation, error, traceback=None): if self.logger.isEnabledFor(logging.DEBUG): - self.logger.debug("Got exception for request %s: %s: %s", invocation.request, type(error).__name__, error, + self.logger.debug("Got exception for request %s, error: %s" % (invocation.request, error), extra=self._logger_extras) - if not self._client.lifecycle.is_live: - invocation.set_exception(HazelcastClientNotActiveException(error.args[0]), traceback) - return - - if self._is_not_allowed_to_retry_on_selection(invocation, error): - invocation.set_exception(error, traceback) + if not self._client.lifecycle_service.is_running(): + invocation.set_exception(HazelcastClientNotActiveError(), traceback) + self._pending.pop(invocation.request.get_correlation_id(), None) return if not self._should_retry(invocation, error): invocation.set_exception(error, traceback) + self._pending.pop(invocation.request.get_correlation_id(), None) return if invocation.timeout < time.time(): - if self.logger.isEnabledFor(logging.DEBUG): - self.logger.debug('Error will not be retried because invocation timed out: %s', error, - extra=self._logger_extras) - invocation.set_exception(TimeoutError( - '%s timed out because an error occurred after invocation timeout: %s' % (invocation.request, error), - traceback)) + self.logger.debug("Error will not be retried because invocation timed out: %s", error, + extra=self._logger_extras) + invocation.set_exception(HazelcastTimeoutError("Request timed out because an error occurred after " + "invocation timeout: %s" % error, traceback)) + self._pending.pop(invocation.request.get_correlation_id(), None) return invoke_func = functools.partial(self.invoke, invocation) - self._client.reactor.add_timer(self.invocation_retry_pause, invoke_func) + self._reactor.add_timer(self._invocation_retry_pause, invoke_func) def _should_retry(self, invocation, error): - if isinstance(error, (IOError, HazelcastInstanceNotActiveError)) or is_retryable_error(error): + if invocation.connection and isinstance(error, (IOError, TargetDisconnectedError)): return True - if isinstance(error, TargetDisconnectedError): - return invocation.request.is_retryable() or self._is_redo_operation + if invocation.uuid and isinstance(error, TargetNotMemberError): + return False - return False - - def _is_not_allowed_to_retry_on_selection(self, invocation, 
error): - if invocation.connection is not None and isinstance(error, IOError): + if isinstance(error, (IOError, HazelcastInstanceNotActiveError)) or is_retryable_error(error): return True - # When invocation is sent over an address,error is the TargetNotMemberError and the - # member is not in the member list, we should not retry - return invocation.address is not None and isinstance(error, TargetNotMemberError) \ - and not self._is_member(invocation.address) + if isinstance(error, TargetDisconnectedError): + return invocation.request.retryable or self._is_redo_operation - def _is_member(self, address): - return self._client.cluster.get_member_by_address(address) is not None + return False diff --git a/hazelcast/lifecycle.py b/hazelcast/lifecycle.py index 75276c0dbb..2b943800bf 100644 --- a/hazelcast/lifecycle.py +++ b/hazelcast/lifecycle.py @@ -1,51 +1,93 @@ import logging import uuid -from hazelcast.util import create_git_info +from hazelcast import six +from hazelcast.util import create_git_info, enum -LIFECYCLE_STATE_STARTING = "STARTING" -LIFECYCLE_STATE_CONNECTED = "CONNECTED" -LIFECYCLE_STATE_DISCONNECTED = "DISCONNECTED" -LIFECYCLE_STATE_SHUTTING_DOWN = "SHUTTING_DOWN" -LIFECYCLE_STATE_SHUTDOWN = "SHUTDOWN" +LifecycleState = enum( + STARTING="STARTING", + STARTED="STARTED", + SHUTTING_DOWN="SHUTTING_DOWN", + SHUTDOWN="SHUTDOWN", + CONNECTED="CONNECTED", + DISCONNECTED="DISCONNECTED", +) class LifecycleService(object): """ - LifecycleService allows you to shutdown, terminate, and listen to LifecycleEvent's on HazelcastInstances. + Lifecycle service for the Hazelcast client. Allows to determine + state of the client and add or remove lifecycle listeners. """ - logger = logging.getLogger("HazelcastClient.LifecycleService") - state = None - def __init__(self, config, logger_extras=None): - self._listeners = {} - self._logger_extras = logger_extras + def __init__(self, internal_lifecycle_service): + self._service = internal_lifecycle_service - for listener in config.lifecycle_listeners: - self.add_listener(listener) + def is_running(self): + """ + Checks whether or not the instance is running. - self._git_info = create_git_info() - self.is_live = True - self.fire_lifecycle_event(LIFECYCLE_STATE_STARTING) + :return: ``True``, if the client is active and running, ``False`` otherwise. + :rtype: bool + """ + return self._service.running - def add_listener(self, on_lifecycle_change): + def add_listener(self, on_state_change): """ - Add a listener object to listen for lifecycle events. + Adds a listener to listen for lifecycle events. + + :param on_state_change: Function to be called when lifecycle state is changed. + :type on_state_change: function - :param on_lifecycle_change: (Function), function to be called when LifeCycle state is changed. - :return: (str), id of the listener. + :return: Registration id of the listener + :rtype: str """ - id = str(uuid.uuid4()) - self._listeners[id] = on_lifecycle_change - return id + return self._service.add_listener(on_state_change) def remove_listener(self, registration_id): """ Removes a lifecycle listener. - :param registration_id: (str), the id of the listener to be removed. - :return: (bool), ``true`` if the listener is removed successfully, ``false`` otherwise. + :param registration_id: The id of the listener to be removed. + :type registration_id: str + + :return: ``True`` if the listener is removed successfully, ``False`` otherwise. 
+ :rtype: bool """ + self._service.remove_listener(registration_id) + + +class _InternalLifecycleService(object): + logger = logging.getLogger("HazelcastClient.LifecycleService") + + def __init__(self, client, logger_extras): + self._client = client + self._logger_extras = logger_extras + self.running = False + self._listeners = {} + + for listener in client.config.lifecycle_listeners: + self.add_listener(listener) + + self._git_info = create_git_info() + + def start(self): + if self.running: + return + + self.fire_lifecycle_event(LifecycleState.STARTING) + self.running = True + self.fire_lifecycle_event(LifecycleState.STARTED) + + def shutdown(self): + self.running = False + + def add_listener(self, on_state_change): + listener_id = str(uuid.uuid4()) + self._listeners[listener_id] = on_state_change + return listener_id + + def remove_listener(self, registration_id): try: self._listeners.pop(registration_id) return True @@ -53,18 +95,10 @@ def remove_listener(self, registration_id): return False def fire_lifecycle_event(self, new_state): - """ - Called when instance's state changes. - - :param new_state: (Lifecycle State), the new state of the instance. - """ - if new_state == LIFECYCLE_STATE_SHUTTING_DOWN: - self.is_live = False - - self.state = new_state self.logger.info(self._git_info + "HazelcastClient is %s", new_state, extra=self._logger_extras) - for listener in list(self._listeners.values()): - try: - listener(new_state) - except: - self.logger.exception("Exception in lifecycle listener", extra=self._logger_extras) + for on_state_change in six.itervalues(self._listeners): + if on_state_change: + try: + on_state_change(new_state) + except: + self.logger.exception("Exception in lifecycle listener", extra=self._logger_extras) diff --git a/hazelcast/listener.py b/hazelcast/listener.py index 49eeb0963d..7569f2acbb 100644 --- a/hazelcast/listener.py +++ b/hazelcast/listener.py @@ -2,12 +2,18 @@ import threading from uuid import uuid4 -from hazelcast.exception import OperationTimeoutError, HazelcastError -from hazelcast.util import current_time_in_millis, check_not_none -from time import sleep +from hazelcast import six +from hazelcast.errors import HazelcastError +from hazelcast.future import combine_futures +from hazelcast.invocation import Invocation +from hazelcast.protocol.codec import client_add_cluster_view_listener_codec +from hazelcast.util import check_not_none -class ListenerRegistration(object): +class _ListenerRegistration(object): + __slots__ = ("registration_request", "decode_register_response", "encode_deregister_request", + "handler", "connection_registrations") + def __init__(self, registration_request, decode_register_response, encode_deregister_request, handler): self.registration_request = registration_request self.decode_register_response = decode_register_response @@ -16,7 +22,9 @@ def __init__(self, registration_request, decode_register_response, encode_deregi self.connection_registrations = {} # Dict of Connection, EventRegistration -class EventRegistration(object): +class _EventRegistration(object): + __slots__ = ("server_registration_id", "correlation_id") + def __init__(self, server_registration_id, correlation_id): self.server_registration_id = server_registration_id self.correlation_id = correlation_id @@ -25,141 +33,175 @@ def __init__(self, server_registration_id, correlation_id): class ListenerService(object): logger = logging.getLogger("HazelcastClient.ListenerService") - def __init__(self, client): + def __init__(self, client, connection_manager, 
invocation_service, logger_extras): self._client = client - self._invocation_service = client.invoker - self.is_smart = client.config.network_config.smart_routing - self._logger_extras = {"client_name": client.name, "group_name": client.config.group_config.name} + self._connection_manager = connection_manager + self._invocation_service = invocation_service + self._logger_extras = logger_extras + self._is_smart = client.config.network.smart_routing self._active_registrations = {} # Dict of user_registration_id, ListenerRegistration self._registration_lock = threading.RLock() self._event_handlers = {} - def try_sync_connect_to_all_members(self): - cluster_service = self._client.cluster - start_millis = current_time_in_millis() - while True: - last_failed_member = None - last_exception = None - for member in cluster_service.members: - try: - self._client.connection_manager.get_or_connect(member.address).result() - except Exception as e: - last_failed_member = member - last_exception = e - if last_exception is None: - break - self.time_out_or_sleep_before_next_try(start_millis, last_failed_member, last_exception) - if not self._client.lifecycle.is_live(): - break - - def time_out_or_sleep_before_next_try(self, start_millis, last_failed_member, last_exception): - now_in_millis = current_time_in_millis() - elapsed_millis = now_in_millis - start_millis - invocation_time_out_millis = self._invocation_service.invocation_timeout * 1000 - timed_out = elapsed_millis > invocation_time_out_millis - if timed_out: - raise OperationTimeoutError\ - ("Registering listeners is timed out. Last failed member: %s, Current time: %s, Start time: %s, " - "Client invocation timeout: %s, Elapsed time: %s ms, Cause: %s", last_failed_member, now_in_millis, - start_millis, invocation_time_out_millis, elapsed_millis, last_exception.args[0]) - else: - sleep(self._invocation_service.invocation_retry_pause) # sleep before next try + def start(self): + self._connection_manager.add_listener(self._connection_added, self._connection_removed) def register_listener(self, registration_request, decode_register_response, encode_deregister_request, handler): - if self.is_smart: - self.try_sync_connect_to_all_members() - with self._registration_lock: - user_registration_id = str(uuid4()) - listener_registration = ListenerRegistration(registration_request, decode_register_response, - encode_deregister_request, handler) - self._active_registrations[user_registration_id] = listener_registration - - active_connections = self._client.connection_manager.connections - for connection in active_connections.values(): - try: - self.register_listener_on_connection_async(user_registration_id, listener_registration, connection)\ - .result() - except: - if connection.live(): - self.deregister_listener(user_registration_id) - raise HazelcastError("Listener cannot be added ") - return user_registration_id + registration_id = str(uuid4()) + registration = _ListenerRegistration(registration_request, decode_register_response, + encode_deregister_request, handler) + self._active_registrations[registration_id] = registration - def register_listener_on_connection_async(self, user_registration_id, listener_registration, connection): - registration_map = listener_registration.connection_registrations - - if connection in registration_map: - return - - registration_request = listener_registration.registration_request.clone() - future = self._invocation_service.invoke_on_connection(registration_request, connection, - event_handler=listener_registration.handler) 
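
The registration flow above fans a single client-side registration out to every active connection and rolls it back if any member rejects it. The sketch below is a minimal, self-contained illustration of that bookkeeping; `ListenerRegistry` and the callback passed to it are hypothetical stand-ins, not types or APIs from the client.

```python
import uuid


class ListenerRegistry:
    def __init__(self):
        # client-side registration id -> {connection: server-assigned registration id}
        self._active = {}

    def register(self, connections, register_on_connection):
        registration_id = str(uuid.uuid4())
        per_connection = {}
        self._active[registration_id] = per_connection
        try:
            for connection in connections:
                # register_on_connection is expected to return the id the
                # member assigned to this listener on that connection
                per_connection[connection] = register_on_connection(connection)
        except Exception:
            # mirror the "deregister on partial failure" behaviour above
            self._active.pop(registration_id, None)
            raise
        return registration_id

    def deregister(self, registration_id):
        return self._active.pop(registration_id, None) is not None


registry = ListenerRegistry()
reg_id = registry.register(["conn-1", "conn-2"], lambda conn: "server-id-for-%s" % conn)
print(registry.deregister(reg_id))  # True
```

Keeping the per-connection server ids under a single client-side id is what allows a later `deregister_listener` call to undo the registration on every connection it reached.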
+ futures = [] + for connection in six.itervalues(self._connection_manager.active_connections): + future = self._register_on_connection_async(registration_id, registration, connection) + futures.append(future) - def callback(f): try: - response = f.result() - server_registration_id = listener_registration.decode_register_response(response) - correlation_id = registration_request.get_correlation_id() - registration = EventRegistration(server_registration_id, correlation_id) - registration_map[connection] = registration - except Exception as e: - if connection.live(): - self.logger.warning("Listener %s can not be added to a new connection: %s, reason: %s", - user_registration_id, connection, e.args[0], extra=self._logger_extras) - raise e + combine_futures(*futures).result() + except: + self.deregister_listener(registration_id) + raise HazelcastError("Listener cannot be added") - return future.continue_with(callback) + return registration_id def deregister_listener(self, user_registration_id): - check_not_none(user_registration_id, "None userRegistrationId is not allowed!") + check_not_none(user_registration_id, "None user_registration_id is not allowed!") with self._registration_lock: listener_registration = self._active_registrations.get(user_registration_id) - if listener_registration is None: + if not listener_registration: return False + successful = True - for connection, event_registration in list(listener_registration.connection_registrations.items()): + # Need to copy items to avoid getting runtime modification errors + for connection, event_registration in list(six.iteritems(listener_registration.connection_registrations)): try: server_registration_id = event_registration.server_registration_id deregister_request = listener_registration.encode_deregister_request(server_registration_id) - self._invocation_service.invoke_on_connection(deregister_request, connection).result() + invocation = Invocation(deregister_request, connection=connection) + self._invocation_service.invoke(invocation) + invocation.future.result() self.remove_event_handler(event_registration.correlation_id) listener_registration.connection_registrations.pop(connection) except: - if connection.live(): + if connection.live: successful = False self.logger.warning("Deregistration for listener with ID %s has failed to address %s ", user_registration_id, "address", exc_info=True, extra=self._logger_extras) if successful: self._active_registrations.pop(user_registration_id) + return successful - def connection_added(self, connection): + def handle_client_message(self, message, correlation_id): + handler = self._event_handlers.get(correlation_id, None) + if handler: + handler(message) + else: + self.logger.warning("Got event message with unknown correlation id: %s", message, extra=self._logger_extras) + + def add_event_handler(self, correlation_id, event_handler): + self._event_handlers[correlation_id] = event_handler + + def remove_event_handler(self, correlation_id): + self._event_handlers.pop(correlation_id, None) + + def _register_on_connection_async(self, user_registration_id, listener_registration, connection): + registration_map = listener_registration.connection_registrations + + if connection in registration_map: + return + + registration_request = listener_registration.registration_request.copy() + invocation = Invocation(registration_request, connection=connection, + event_handler=listener_registration.handler, response_handler=lambda m: m) + self._invocation_service.invoke(invocation) + + def callback(f): + 
try: + response = f.result() + server_registration_id = listener_registration.decode_register_response(response) + correlation_id = registration_request.get_correlation_id() + registration = _EventRegistration(server_registration_id, correlation_id) + registration_map[connection] = registration + except Exception as e: + if connection.live: + self.logger.exception("Listener %s can not be added to a new connection: %s", + user_registration_id, connection, extra=self._logger_extras) + raise e + + return invocation.future.continue_with(callback) + + def _connection_added(self, connection): with self._registration_lock: - for user_reg_id, listener_registration in self._active_registrations.items(): - self.register_listener_on_connection_async(user_reg_id, listener_registration, connection) + for user_reg_id, listener_registration in six.iteritems(self._active_registrations): + self._register_on_connection_async(user_reg_id, listener_registration, connection) - def connection_removed(self, connection, _): + def _connection_removed(self, connection, _): with self._registration_lock: - for listener_registration in self._active_registrations.values(): + for listener_registration in six.itervalues(self._active_registrations): event_registration = listener_registration.connection_registrations.pop(connection, None) - if event_registration is not None: + if event_registration: self.remove_event_handler(event_registration.correlation_id) + +class ClusterViewListenerService(object): + def __init__(self, client, connection_manager, partition_service, cluster_service, invocation_service): + self._client = client + self._partition_service = partition_service + self._connection_manager = connection_manager + self._cluster_service = cluster_service + self._invocation_service = invocation_service + self._listener_added_connection = None + def start(self): - self._client.connection_manager.add_listener(self.connection_added, self.connection_removed) + self._connection_manager.add_listener(self._connection_added, self._connection_removed) - def handle_client_message(self, message): - correlation_id = message.get_correlation_id() - if correlation_id not in self._event_handlers: - self.logger.warning("Got event message with unknown correlation id: %s", message, extra=self._logger_extras) - else: - event_handler = self._event_handlers.get(correlation_id) - event_handler(message) + def _connection_added(self, connection): + self._try_register(connection) + + def _connection_removed(self, connection, _): + self._try_register_to_random_connection(connection) + + def _try_register_to_random_connection(self, old_connection): + if self._listener_added_connection is not old_connection: + return + self._listener_added_connection = None + new_connection = self._connection_manager.get_random_connection() + if new_connection: + self._try_register(new_connection) + + def _try_register(self, connection): + if self._listener_added_connection: + return + + self._cluster_service.clear_member_list_version() + self._listener_added_connection = connection + request = client_add_cluster_view_listener_codec.encode_request() + invocation = Invocation(request, connection=connection, event_handler=self._handler(connection), urgent=True) + self._invocation_service.invoke(invocation) + + def callback(f): + try: + f.result() + except: + self._try_register_to_random_connection(connection) + + invocation.future.add_done_callback(callback) + + def _handler(self, connection): + def handle_partitions_view_event(version, partitions): + 
self._partition_service.handle_partitions_view_event(connection, partitions, version) + + def handle_members_view_event(member_list_version, member_infos): + self._cluster_service.handle_members_view_event(member_list_version, member_infos) + + def inner(message): + client_add_cluster_view_listener_codec.handle(message, handle_members_view_event, + handle_partitions_view_event) + + return inner - def add_event_handler(self, correlation_id, event_handler): - self._event_handlers[correlation_id] = event_handler - def remove_event_handler(self, correlation_id): - self._event_handlers.pop(correlation_id, None) \ No newline at end of file diff --git a/hazelcast/near_cache.py b/hazelcast/near_cache.py index 261d058fcc..8ac097a90d 100644 --- a/hazelcast/near_cache.py +++ b/hazelcast/near_cache.py @@ -1,6 +1,6 @@ -import logging import random +from hazelcast import six from hazelcast.config import EVICTION_POLICY, IN_MEMORY_FORMAT from hazelcast.util import current_time from hazelcast.six.moves import range @@ -241,19 +241,20 @@ def __repr__(self): class NearCacheManager(object): - def __init__(self, client): + def __init__(self, client, serialization_service): self._client = client + self._serialization_service = serialization_service self._caches = {} def get_or_create_near_cache(self, name): near_cache = self._caches.get(name, None) if not near_cache: - near_cache_config = self._client.config.near_cache_configs.get(name, None) + near_cache_config = self._client.config.near_caches.get(name, None) if not near_cache_config: raise ValueError("Cannot find a near cache configuration with the name '{}'".format(name)) near_cache = NearCache(near_cache_config.name, - self._client.serialization_service, + self._serialization_service, near_cache_config.in_memory_format, near_cache_config.time_to_live_seconds, near_cache_config.max_idle_seconds, @@ -267,6 +268,10 @@ def get_or_create_near_cache(self, name): return near_cache + def clear_near_caches(self): + for cache in six.itervalues(self._caches): + cache._clear() + def destroy_near_cache(self, name): try: near_cache = self._caches.pop(name) @@ -274,9 +279,9 @@ def destroy_near_cache(self, name): except KeyError: pass - def destroy_all_near_caches(self): + def destroy_near_caches(self): for key in list(self._caches.keys()): self.destroy_near_cache(key) - def list_all_near_caches(self): + def list_near_caches(self): return list(self._caches.values()) diff --git a/hazelcast/partition.py b/hazelcast/partition.py index 043c31cc06..37c3dc697d 100644 --- a/hazelcast/partition.py +++ b/hazelcast/partition.py @@ -1,119 +1,149 @@ import logging -import threading +from hazelcast.errors import ClientOfflineError from hazelcast.hash import hash_to_index -from hazelcast.protocol.codec import client_get_partitions_codec -from hazelcast import six -PARTITION_UPDATE_INTERVAL = 10 + +class _PartitionTable(object): + __slots__ = ("connection", "version", "partitions") + + def __init__(self, connection, version, partitions): + self.connection = connection + self.version = version + self.partitions = partitions + + def __repr__(self): + return "PartitionTable(connection=%s, version=%s)" % (self.connection, self.version) class PartitionService(object): """ - An SPI service for accessing partition related information. + Allows to retrieve information about the partition count, the partition owner or the partitionId of a key. 
""" - logger = logging.getLogger("HazelcastClient.PartitionService") - timer = None - def __init__(self, client): - self._client = client - self._logger_extras = {"client_name": client.name, "group_name": client.config.group_config.name} - self.partitions = {} + def __init__(self, internal_partition_service): + self._service = internal_partition_service - def start(self): - """ - Starts the partition service. + def get_partition_owner(self, partition_id): """ - self.logger.debug("Starting partition service", extra=self._logger_extras) + Returns the owner of the partition if it's set, ``None`` otherwise. - def partition_updater(): - self._do_refresh() - self.timer = self._client.reactor.add_timer(PARTITION_UPDATE_INTERVAL, partition_updater) + :param partition_id: The partition id. + :type partition_id: int - self.timer = self._client.reactor.add_timer(PARTITION_UPDATE_INTERVAL, partition_updater) - - def shutdown(self): - """ - Shutdowns the partition service. + :return: Owner of partition + :rtype: :class:`uuid.UUID` """ - if self.timer: - self.timer.cancel() + return self._service.get_partition_owner(partition_id) - def refresh(self): + def get_partition_id(self, key_data): """ - Refreshes the partition service. - """ - self._client.reactor.add_timer(0, self._do_refresh) + Returns the partition id for a key data. - def get_partition_owner(self, partition_id): - """ - Gets the owner of the partition if it's set. Otherwise it will trigger partition assignment. + :param key_data: The key data. + :type key_data: :class:`~hazelcast.serialization.data.Data` - :param partition_id: (int), the partition id. - :return: (:class:`~hazelcast.core.Address`), owner of partition or ``None`` if it's not set yet. + :return: The partition id. + :rtype: int """ - if partition_id not in self.partitions: - self._do_refresh() - return self.partitions.get(partition_id, None) + return self._service.get_partition_id(key_data) - def get_partition_id(self, key): + def get_partition_count(self): """ - Returns the partition id for a Data key. + Returns partition count of the connected cluster. - :param key: (object), the data key. - :return: (int), the partition id. - """ - data = self._client.serialization_service.to_data(key) - count = self.get_partition_count() - if count <= 0: - return 0 - return hash_to_index(data.get_partition_hash(), count) + If partition table is not fetched yet, this method returns ``0``. - def get_partition_count(self): + :return: The partition count + :rtype: int """ - Returns the number of partitions. + return self._service.partition_count + + +class _InternalPartitionService(object): + logger = logging.getLogger("HazelcastClient.PartitionService") + + def __init__(self, client, logger_extras): + self.partition_count = 0 + self._client = client + self._logger_extras = logger_extras + self._partition_table = _PartitionTable(None, -1, dict()) - :return: (int), the number of partitions. + def handle_partitions_view_event(self, connection, partitions, version): + """Handles the incoming partition view event and updates the partition table + if it is not empty, coming from a new connection or not stale. 
""" - if not self.partitions: - self._get_partition_count_blocking() - return len(self.partitions) - - def _get_partition_count_blocking(self): - event = threading.Event() - while not event.isSet(): - self._do_refresh(callback=lambda: event.set()) - event.wait(timeout=1) - - def _do_refresh(self, callback=None): - self.logger.debug("Start updating partitions", extra=self._logger_extras) - address = self._client.cluster.owner_connection_address - connection = self._client.connection_manager.get_connection(address) - if connection is None: - self.logger.debug("Could not update partition thread as owner connection is not available.", + should_log = self.logger.isEnabledFor(logging.DEBUG) + if should_log: + self.logger.debug("Handling new partition table with version: %s" % version, extra=self._logger_extras) - if callback: - callback() + + table = self._partition_table + if not self._should_be_applied(connection, partitions, version, table, should_log): return - request = client_get_partitions_codec.encode_request() - def cb(f): - if f.is_success(): - self.process_partition_response(f.result()) - if callback: - callback() + new_partitions = self._prepare_partitions(partitions) + new_table = _PartitionTable(connection, version, new_partitions) + self._partition_table = new_table + + def get_partition_owner(self, partition_id): + table = self._partition_table + return table.partitions.get(partition_id, None) - future = self._client.invoker.invoke_on_connection(request, connection) - future.add_done_callback(cb) + def get_partition_id(self, key): + count = self.partition_count + if count == 0: + # Partition count can not be zero for the SYNC mode. + # On the SYNC mode, we are waiting for the first connection to be established. + # We are initializing the partition count with the value coming from the server with authentication. + # This error is used only for ASYNC mode client. + raise ClientOfflineError() - def process_partition_response(self, message): - partitions = client_get_partitions_codec.decode_response(message)["partitions"] - partitions_dict = {} - for addr, partition_list in six.iteritems(partitions): + return hash_to_index(key.get_partition_hash(), count) + + def check_and_set_partition_count(self, partition_count): + """ + :param partition_count: (int) + :return: (bool), True if partition count can be set for the first time, + or it is equal to one that is already available, returns False otherwise + """ + if self.partition_count == 0: + self.partition_count = partition_count + return True + return self.partition_count == partition_count + + def _should_be_applied(self, connection, partitions, version, current, should_log): + if not partitions: + if should_log: + self.logger.debug("Partition view will not be applied since response is empty. " + "Sending connection: %s, version: %s, current table: %s" + % (connection, version, current), + extra=self._logger_extras) + return False + + if connection != current.connection: + if should_log: + self.logger.debug("Partition view event coming from a new connection. Old: %s, new: %s" + % (current.connection, connection), extra=self._logger_extras) + return True + + if version <= current.version: + if should_log: + self.logger.debug("Partition view will not be applied since response state version is older. 
" + "Sending connection: %s, version: %s, current table: %s" + % (connection, version, current), + extra=self._logger_extras) + return False + + return True + + @staticmethod + def _prepare_partitions(partitions): + new_partitions = dict() + for uuid, partition_list in partitions: for partition in partition_list: - partitions_dict[partition] = addr - self.partitions.update(partitions_dict) - self.logger.debug("Finished updating partitions", extra=self._logger_extras) + new_partitions[partition] = uuid + return new_partitions def string_partition_strategy(key): diff --git a/hazelcast/protocol/__init__.py b/hazelcast/protocol/__init__.py index e69de29bb2..4d02078e6d 100644 --- a/hazelcast/protocol/__init__.py +++ b/hazelcast/protocol/__init__.py @@ -0,0 +1,41 @@ +class ErrorHolder(object): + __slots__ = ("error_code", "class_name", "message", "stack_trace_elements") + + def __init__(self, error_code, class_name, message, stack_trace_elements): + self.error_code = error_code + self.class_name = class_name + self.message = message + self.stack_trace_elements = stack_trace_elements + + def __eq__(self, other): + return isinstance(other, ErrorHolder) and self.error_code == other.error_code \ + and self.class_name == other.class_name and self.message == other.message \ + and self.stack_trace_elements == other.stack_trace_elements + + def __ne__(self, other): + return not self.__eq__(other) + + +class StackTraceElement(object): + __slots__ = ("class_name", "method_name", "file_name", "line_number") + + def __init__(self, class_name, method_name, file_name, line_number): + self.class_name = class_name + self.method_name = method_name + self.file_name = file_name + self.line_number = line_number + + def __eq__(self, other): + return isinstance(other, StackTraceElement) and self.class_name == other.class_name \ + and self.method_name == other.method_name and self.file_name == other.file_name \ + and self.line_number == other.line_number + + def __ne__(self, other): + return not self.__eq__(other) + + +class EndpointQualifier(object): + __slots__ = () + + def __init__(self, type, identifier): + pass diff --git a/hazelcast/protocol/builtin.py b/hazelcast/protocol/builtin.py new file mode 100644 index 0000000000..a70d7bd1a6 --- /dev/null +++ b/hazelcast/protocol/builtin.py @@ -0,0 +1,477 @@ +import uuid + +from hazelcast import six +from hazelcast.protocol.client_message import NULL_FRAME_BUF, BEGIN_FRAME_BUF, END_FRAME_BUF, \ + SIZE_OF_FRAME_LENGTH_AND_FLAGS, _IS_FINAL_FLAG, NULL_FINAL_FRAME_BUF, END_FINAL_FRAME_BUF +from hazelcast.serialization import LONG_SIZE_IN_BYTES, UUID_SIZE_IN_BYTES, LE_INT, LE_LONG, BOOLEAN_SIZE_IN_BYTES, \ + INT_SIZE_IN_BYTES, LE_ULONG, LE_UINT16, LE_INT8 +from hazelcast.serialization.data import Data + + +class CodecUtil(object): + @staticmethod + def fast_forward_to_end_frame(msg): + # We are starting from 1 because of the BEGIN_FRAME we read + # in the beginning of the decode method + num_expected_end_frames = 1 + while num_expected_end_frames != 0: + frame = msg.next_frame() + if frame.is_end_frame(): + num_expected_end_frames -= 1 + elif frame.is_begin_frame(): + num_expected_end_frames += 1 + + @staticmethod + def encode_nullable(buf, value, encoder, is_final=False): + if value is None: + if is_final: + buf.extend(NULL_FINAL_FRAME_BUF) + else: + buf.extend(NULL_FRAME_BUF) + else: + encoder(buf, value, is_final) + + @staticmethod + def decode_nullable(msg, decoder): + if CodecUtil.next_frame_is_null_frame(msg): + return None + else: + return decoder(msg) + + 
@staticmethod + def next_frame_is_data_structure_end_frame(msg): + return msg.peek_next_frame().is_end_frame() + + @staticmethod + def next_frame_is_null_frame(msg): + """Returns whether the next frame is NULL_FRAME or not. + If it is, this method consumes the iterator + by calling msg.next_frame once to skip the NULL_FRAME. + """ + is_null = msg.peek_next_frame().is_null_frame() + if is_null: + msg.next_frame() + return is_null + + +class ByteArrayCodec(object): + @staticmethod + def encode(buf, value, is_final=False): + header = bytearray(SIZE_OF_FRAME_LENGTH_AND_FLAGS) + LE_INT.pack_into(header, 0, SIZE_OF_FRAME_LENGTH_AND_FLAGS + len(value)) + if is_final: + LE_UINT16.pack_into(header, INT_SIZE_IN_BYTES, _IS_FINAL_FLAG) + buf.extend(header) + buf.extend(value) + + @staticmethod + def decode(msg): + return msg.next_frame().buf + + +class DataCodec(object): + @staticmethod + def encode(buf, value, is_final=False): + value_bytes = value.to_bytes() + header = bytearray(SIZE_OF_FRAME_LENGTH_AND_FLAGS) + LE_INT.pack_into(header, 0, SIZE_OF_FRAME_LENGTH_AND_FLAGS + len(value_bytes)) + if is_final: + LE_UINT16.pack_into(header, INT_SIZE_IN_BYTES, _IS_FINAL_FLAG) + buf.extend(header) + buf.extend(value_bytes) + + @staticmethod + def decode(msg): + return Data(msg.next_frame().buf) + + @staticmethod + def encode_nullable(buf, value, is_final=False): + if value is None: + if is_final: + buf.extend(NULL_FINAL_FRAME_BUF) + else: + buf.extend(NULL_FRAME_BUF) + else: + DataCodec.encode(buf, value, is_final) + + @staticmethod + def decode_nullable(msg): + if CodecUtil.next_frame_is_null_frame(msg): + return None + else: + return DataCodec.decode(msg) + + +class EntryListCodec(object): + @staticmethod + def encode(buf, entries, key_encoder, value_encoder, is_final=False): + buf.extend(BEGIN_FRAME_BUF) + for key, value in entries: + key_encoder(buf, key) + value_encoder(buf, value) + if is_final: + buf.extend(END_FINAL_FRAME_BUF) + else: + buf.extend(END_FRAME_BUF) + + @staticmethod + def encode_nullable(buf, entries, key_encoder, value_encoder, is_final=False): + if entries is None: + if is_final: + buf.extend(NULL_FINAL_FRAME_BUF) + else: + buf.extend(NULL_FRAME_BUF) + else: + EntryListCodec.encode(buf, entries, key_encoder, value_encoder, is_final) + + @staticmethod + def decode(msg, key_decoder, value_decoder): + result = [] + msg.next_frame() + while not CodecUtil.next_frame_is_data_structure_end_frame(msg): + key = key_decoder(msg) + value = value_decoder(msg) + result.append((key, value)) + + msg.next_frame() + return result + + @staticmethod + def decode_nullable(msg, key_decoder, value_decoder): + if CodecUtil.next_frame_is_null_frame(msg): + return None + else: + return EntryListCodec.decode(msg, key_decoder, value_decoder) + + +_UUID_LONG_ENTRY_SIZE_IN_BYTES = UUID_SIZE_IN_BYTES + LONG_SIZE_IN_BYTES + + +class EntryListUUIDLongCodec(object): + + @staticmethod + def encode(buf, entries, is_final=False): + n = len(entries) + size = SIZE_OF_FRAME_LENGTH_AND_FLAGS + n * _UUID_LONG_ENTRY_SIZE_IN_BYTES + b = bytearray(size) + LE_INT.pack_into(b, 0, size) + if is_final: + LE_UINT16.pack_into(b, INT_SIZE_IN_BYTES, _IS_FINAL_FLAG) + for i in range(n): + key, value = entries[i] + o = SIZE_OF_FRAME_LENGTH_AND_FLAGS + i * _UUID_LONG_ENTRY_SIZE_IN_BYTES + FixSizedTypesCodec.encode_uuid(b, o, key) + FixSizedTypesCodec.encode_long(b, o + UUID_SIZE_IN_BYTES, value) + buf.extend(b) + + @staticmethod + def decode(msg): + b = msg.next_frame().buf + n = len(b) // _UUID_LONG_ENTRY_SIZE_IN_BYTES + result = [] + 
for i in range(n): + o = i * _UUID_LONG_ENTRY_SIZE_IN_BYTES + key = FixSizedTypesCodec.decode_uuid(b, o) + value = FixSizedTypesCodec.decode_long(b, o + UUID_SIZE_IN_BYTES) + result.append((key, value)) + return result + + +class EntryListUUIDListIntegerCodec(object): + @staticmethod + def encode(buf, entries, is_final=False): + keys = [] + buf.extend(BEGIN_FRAME_BUF) + for key, value in entries: + keys.append(key) + ListIntegerCodec.encode(buf, value) + buf.extend(END_FRAME_BUF) + ListUUIDCodec.encode(buf, keys, is_final) + + @staticmethod + def decode(msg): + values = ListMultiFrameCodec.decode(msg, ListIntegerCodec.decode) + keys = ListUUIDCodec.decode(msg) + result = [] + n = len(keys) + for i in range(n): + result.append((keys[i], values[i])) + return result + + +_UUID_MSB_SHIFT = 64 +_UUID_LSB_MASK = 0xFFFFFFFFFFFFFFFF + + +class FixSizedTypesCodec(object): + @staticmethod + def encode_int(buf, offset, value): + LE_INT.pack_into(buf, offset, value) + + @staticmethod + def decode_int(buf, offset): + return LE_INT.unpack_from(buf, offset)[0] + + @staticmethod + def encode_long(buf, offset, value): + LE_LONG.pack_into(buf, offset, value) + + @staticmethod + def decode_long(buf, offset): + return LE_LONG.unpack_from(buf, offset)[0] + + @staticmethod + def encode_boolean(buf, offset, value): + if value: + LE_INT8.pack_into(buf, offset, 1) + else: + LE_INT8.pack_into(buf, offset, 0) + + @staticmethod + def decode_boolean(buf, offset): + return LE_INT8.unpack_from(buf, offset)[0] == 1 + + @staticmethod + def encode_byte(buf, offset, value): + LE_INT8.pack_into(buf, offset, value) + + @staticmethod + def decode_byte(buf, offset): + return LE_INT8.unpack_from(buf, offset)[0] + + @staticmethod + def encode_uuid(buf, offset, value): + is_null = value is None + FixSizedTypesCodec.encode_boolean(buf, offset, is_null) + if is_null: + return + + o = offset + BOOLEAN_SIZE_IN_BYTES + LE_ULONG.pack_into(buf, o, value.int >> _UUID_MSB_SHIFT) + LE_ULONG.pack_into(buf, o + LONG_SIZE_IN_BYTES, value.int & _UUID_LSB_MASK) + + @staticmethod + def decode_uuid(buf, offset): + is_null = FixSizedTypesCodec.decode_boolean(buf, offset) + if is_null: + return None + + msb_offset = offset + BOOLEAN_SIZE_IN_BYTES + lsb_offset = msb_offset + LONG_SIZE_IN_BYTES + b = buf[lsb_offset - 1:msb_offset - 1:-1] + buf[lsb_offset + LONG_SIZE_IN_BYTES - 1:lsb_offset - 1:-1] + return uuid.UUID(bytes=bytes(b)) + + +class ListIntegerCodec(object): + @staticmethod + def encode(buf, arr, is_final=False): + n = len(arr) + size = SIZE_OF_FRAME_LENGTH_AND_FLAGS + n * INT_SIZE_IN_BYTES + b = bytearray(size) + LE_INT.pack_into(b, 0, size) + if is_final: + LE_UINT16.pack_into(b, INT_SIZE_IN_BYTES, _IS_FINAL_FLAG) + for i in range(n): + FixSizedTypesCodec.encode_int(b, SIZE_OF_FRAME_LENGTH_AND_FLAGS + i * INT_SIZE_IN_BYTES, arr[i]) + buf.extend(b) + + @staticmethod + def decode(msg): + b = msg.next_frame().buf + n = len(b) // INT_SIZE_IN_BYTES + result = [] + for i in range(n): + result.append(FixSizedTypesCodec.decode_int(b, i * INT_SIZE_IN_BYTES)) + return result + + +class ListLongCodec(object): + @staticmethod + def encode(buf, arr, is_final=False): + n = len(arr) + size = SIZE_OF_FRAME_LENGTH_AND_FLAGS + n * LONG_SIZE_IN_BYTES + b = bytearray(size) + LE_INT.pack_into(b, 0, size) + if is_final: + LE_UINT16.pack_into(b, INT_SIZE_IN_BYTES, _IS_FINAL_FLAG) + for i in range(n): + FixSizedTypesCodec.encode_long(b, SIZE_OF_FRAME_LENGTH_AND_FLAGS + i * LONG_SIZE_IN_BYTES, arr[i]) + buf.extend(b) + + @staticmethod + def decode(msg): + b = 
msg.next_frame().buf + n = len(b) // LONG_SIZE_IN_BYTES + result = [] + for i in range(n): + result.append(FixSizedTypesCodec.decode_long(b, i * LONG_SIZE_IN_BYTES)) + return result + + +class ListMultiFrameCodec(object): + @staticmethod + def encode(buf, arr, encoder, is_final=False): + buf.extend(BEGIN_FRAME_BUF) + for item in arr: + encoder(buf, item) + if is_final: + buf.extend(END_FINAL_FRAME_BUF) + else: + buf.extend(END_FRAME_BUF) + + @staticmethod + def encode_contains_nullable(buf, arr, encoder, is_final=False): + buf.extend(BEGIN_FRAME_BUF) + for item in arr: + if item is None: + buf.extend(NULL_FRAME_BUF) + else: + encoder(buf, item) + if is_final: + buf.extend(END_FINAL_FRAME_BUF) + else: + buf.extend(END_FRAME_BUF) + + @staticmethod + def encode_nullable(buf, arr, encoder, is_final=False): + if arr is None: + if is_final: + buf.extend(NULL_FINAL_FRAME_BUF) + else: + buf.extend(NULL_FRAME_BUF) + else: + ListMultiFrameCodec.encode(buf, arr, encoder, is_final) + + @staticmethod + def decode(msg, decoder): + result = [] + msg.next_frame() + while not CodecUtil.next_frame_is_data_structure_end_frame(msg): + result.append(decoder(msg)) + + msg.next_frame() + return result + + @staticmethod + def decode_contains_nullable(msg, decoder): + result = [] + msg.next_frame() + while not CodecUtil.next_frame_is_data_structure_end_frame(msg): + if CodecUtil.next_frame_is_null_frame(msg): + result.append(None) + else: + result.append(decoder(msg)) + + msg.next_frame() + return result + + @staticmethod + def decode_nullable(msg, decoder): + if CodecUtil.next_frame_is_null_frame(msg): + return None + else: + return ListMultiFrameCodec.decode(msg, decoder) + + +class ListUUIDCodec(object): + @staticmethod + def encode(buf, arr, is_final=False): + n = len(arr) + size = SIZE_OF_FRAME_LENGTH_AND_FLAGS + n * UUID_SIZE_IN_BYTES + b = bytearray(size) + LE_INT.pack_into(b, 0, size) + if is_final: + LE_UINT16.pack_into(b, INT_SIZE_IN_BYTES, _IS_FINAL_FLAG) + for i in range(n): + FixSizedTypesCodec.encode_uuid(b, SIZE_OF_FRAME_LENGTH_AND_FLAGS + i * UUID_SIZE_IN_BYTES, arr[i]) + buf.extend(b) + + @staticmethod + def decode(msg): + b = msg.next_frame().buf + n = len(b) // UUID_SIZE_IN_BYTES + result = [] + for i in range(n): + result.append(FixSizedTypesCodec.decode_uuid(b, i * UUID_SIZE_IN_BYTES)) + return result + + +class LongArrayCodec(object): + @staticmethod + def encode(buf, arr, is_final=False): + n = len(arr) + size = SIZE_OF_FRAME_LENGTH_AND_FLAGS + n * LONG_SIZE_IN_BYTES + b = bytearray(size) + LE_INT.pack_into(b, 0, size) + if is_final: + LE_UINT16.pack_into(b, INT_SIZE_IN_BYTES, _IS_FINAL_FLAG) + for i in range(n): + FixSizedTypesCodec.encode_long(b, SIZE_OF_FRAME_LENGTH_AND_FLAGS + i * LONG_SIZE_IN_BYTES, arr[i]) + buf.extend(b) + + @staticmethod + def decode(msg): + b = msg.next_frame().buf + n = len(b) // LONG_SIZE_IN_BYTES + result = [] + for i in range(n): + result.append(FixSizedTypesCodec.decode_long(b, i * LONG_SIZE_IN_BYTES)) + return result + + +class MapCodec(object): + @staticmethod + def encode(buf, m, key_encoder, value_encoder, is_final=False): + buf.extend(BEGIN_FRAME_BUF) + for key, value in six.iteritems(m): + key_encoder(buf, key) + value_encoder(buf, value) + if is_final: + buf.extend(END_FINAL_FRAME_BUF) + else: + buf.extend(END_FRAME_BUF) + + @staticmethod + def encode_nullable(buf, m, key_encoder, value_encoder, is_final=False): + if m is None: + if is_final: + buf.extend(NULL_FINAL_FRAME_BUF) + else: + buf.extend(NULL_FRAME_BUF) + else: + MapCodec.encode(buf, m, 
key_encoder, value_encoder, is_final) + + @staticmethod + def decode(msg, key_decoder, value_decoder): + result = dict() + msg.next_frame() + while not CodecUtil.next_frame_is_data_structure_end_frame(msg): + key = key_decoder(msg) + value = value_decoder(msg) + result[key] = value + + msg.next_frame() + return result + + @staticmethod + def decode_nullable(msg, key_decoder, value_decoder): + if CodecUtil.next_frame_is_null_frame(msg): + return None + else: + return MapCodec.decode(msg, key_decoder, value_decoder) + + +class StringCodec(object): + @staticmethod + def encode(buf, value, is_final=False): + value_bytes = value.encode("utf-8") + header = bytearray(SIZE_OF_FRAME_LENGTH_AND_FLAGS) + LE_INT.pack_into(header, 0, SIZE_OF_FRAME_LENGTH_AND_FLAGS + len(value_bytes)) + if is_final: + LE_UINT16.pack_into(header, INT_SIZE_IN_BYTES, _IS_FINAL_FLAG) + buf.extend(header) + buf.extend(value_bytes) + + @staticmethod + def decode(msg): + return msg.next_frame().buf.decode("utf-8") diff --git a/hazelcast/protocol/client_message.py b/hazelcast/protocol/client_message.py index 19e87e4c92..42ce4a19b5 100644 --- a/hazelcast/protocol/client_message.py +++ b/hazelcast/protocol/client_message.py @@ -1,264 +1,223 @@ -""" -Client Message is the carrier framed data as defined below. -Any request parameter, response or event data will be carried in the payload. - -0 1 2 3 -0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -|R| Frame Length | -+-------------+---------------+---------------------------------+ -| Version |B|E| Flags | Type | -+-------------+---------------+---------------------------------+ -| | -+ CorrelationId + -| | -+---------------------------------------------------------------+ -| PartitionId | -+-----------------------------+---------------------------------+ -| Data Offset | | -+-----------------------------+ | -| Message Payload Data ... -| ... 
- - -""" -import binascii -import struct -import socket import errno +import socket -from hazelcast.serialization.data import * - -# constants -VERSION = 0 -BEGIN_FLAG = 0x80 -END_FLAG = 0x40 -BEGIN_END_FLAG = BEGIN_FLAG | END_FLAG -LISTENER_FLAG = 0x01 - -PAYLOAD_OFFSET = 18 -SIZE_OFFSET = 0 - -FRAME_LENGTH_FIELD_OFFSET = 0 -VERSION_FIELD_OFFSET = FRAME_LENGTH_FIELD_OFFSET + INT_SIZE_IN_BYTES -FLAGS_FIELD_OFFSET = VERSION_FIELD_OFFSET + BYTE_SIZE_IN_BYTES -TYPE_FIELD_OFFSET = FLAGS_FIELD_OFFSET + BYTE_SIZE_IN_BYTES -CORRELATION_ID_FIELD_OFFSET = TYPE_FIELD_OFFSET + SHORT_SIZE_IN_BYTES -PARTITION_ID_FIELD_OFFSET = CORRELATION_ID_FIELD_OFFSET + LONG_SIZE_IN_BYTES -DATA_OFFSET_FIELD_OFFSET = PARTITION_ID_FIELD_OFFSET + INT_SIZE_IN_BYTES -HEADER_SIZE = DATA_OFFSET_FIELD_OFFSET + SHORT_SIZE_IN_BYTES - - -class ClientMessage(object): - def __init__(self, buff=None, payload_size=0): - if buff: - self.buffer = buff - self._read_index = 0 - else: - self.buffer = bytearray(HEADER_SIZE + payload_size) - self.set_data_offset(HEADER_SIZE) - self._write_index = 0 - self._retryable = False +from hazelcast.serialization.bits import * + +SIZE_OF_FRAME_LENGTH_AND_FLAGS = INT_SIZE_IN_BYTES + SHORT_SIZE_IN_BYTES + +_MESSAGE_TYPE_OFFSET = 0 +_CORRELATION_ID_OFFSET = _MESSAGE_TYPE_OFFSET + INT_SIZE_IN_BYTES +_RESPONSE_BACKUP_ACKS_OFFSET = _CORRELATION_ID_OFFSET + LONG_SIZE_IN_BYTES +_PARTITION_ID_OFFSET = _CORRELATION_ID_OFFSET + LONG_SIZE_IN_BYTES +_FRAGMENTATION_ID_OFFSET = 0 + +_OUTBOUND_MESSAGE_MESSAGE_TYPE_OFFSET = _MESSAGE_TYPE_OFFSET + SIZE_OF_FRAME_LENGTH_AND_FLAGS +_OUTBOUND_MESSAGE_CORRELATION_ID_OFFSET = _CORRELATION_ID_OFFSET + SIZE_OF_FRAME_LENGTH_AND_FLAGS +_OUTBOUND_MESSAGE_PARTITION_ID_OFFSET = _PARTITION_ID_OFFSET + SIZE_OF_FRAME_LENGTH_AND_FLAGS + +REQUEST_HEADER_SIZE = _OUTBOUND_MESSAGE_PARTITION_ID_OFFSET + INT_SIZE_IN_BYTES +RESPONSE_HEADER_SIZE = _RESPONSE_BACKUP_ACKS_OFFSET + BYTE_SIZE_IN_BYTES +EVENT_HEADER_SIZE = _PARTITION_ID_OFFSET + INT_SIZE_IN_BYTES + +_DEFAULT_FLAGS = 0 +_BEGIN_FRAGMENT_FLAG = 1 << 15 +_END_FRAGMENT_FLAG = 1 << 14 +_UNFRAGMENTED_MESSAGE_FLAGS = _BEGIN_FRAGMENT_FLAG | _END_FRAGMENT_FLAG +_IS_FINAL_FLAG = 1 << 13 +_BEGIN_DATA_STRUCTURE_FLAG = 1 << 12 +_END_DATA_STRUCTURE_FLAG = 1 << 11 +_IS_NULL_FLAG = 1 << 10 +_IS_EVENT_FLAG = 1 << 9 + + +# For codecs +def create_initial_buffer(size, message_type, is_final=False): + size += SIZE_OF_FRAME_LENGTH_AND_FLAGS + buf = bytearray(size) + LE_INT.pack_into(buf, 0, size) + flags = _UNFRAGMENTED_MESSAGE_FLAGS + if is_final: + flags |= _IS_FINAL_FLAG + LE_UINT16.pack_into(buf, INT_SIZE_IN_BYTES, flags) + LE_INT.pack_into(buf, _OUTBOUND_MESSAGE_MESSAGE_TYPE_OFFSET, message_type) + LE_INT.pack_into(buf, _OUTBOUND_MESSAGE_PARTITION_ID_OFFSET, -1) + return buf + + +# For custom codecs +def create_initial_buffer_custom(size, is_begin_frame=False): + size += SIZE_OF_FRAME_LENGTH_AND_FLAGS + if is_begin_frame: + # Needed for custom codecs that does not have initial frame at first + # but requires later due to new fix sized parameters + buf = bytearray(size) + LE_INT.pack_into(buf, 0, size) + LE_UINT16.pack_into(buf, INT_SIZE_IN_BYTES, _BEGIN_DATA_STRUCTURE_FLAG) + return buf + else: + # also add BEGIN_FRAME_BUF + buf = bytearray(SIZE_OF_FRAME_LENGTH_AND_FLAGS + size) + buf[:SIZE_OF_FRAME_LENGTH_AND_FLAGS] = BEGIN_FRAME_BUF + LE_INT.pack_into(buf, SIZE_OF_FRAME_LENGTH_AND_FLAGS, size) + # no need to encode flags since buf is initialized with zeros + return buf + + +class OutboundMessage(object): + __slots__ = ("buf", "retryable") + + 
def __init__(self, buf, retryable): + self.buf = buf + self.retryable = retryable + + def set_correlation_id(self, correlation_id): + LE_LONG.pack_into(self.buf, _OUTBOUND_MESSAGE_CORRELATION_ID_OFFSET, correlation_id) - # HEADER ACCESSORS def get_correlation_id(self): - return struct.unpack_from(FMT_LE_LONG, self.buffer, CORRELATION_ID_FIELD_OFFSET)[0] - - def set_correlation_id(self, val): - struct.pack_into(FMT_LE_LONG, self.buffer, CORRELATION_ID_FIELD_OFFSET, val) - return self - - def get_partition_id(self): - return struct.unpack_from(FMT_LE_INT, self.buffer, PARTITION_ID_FIELD_OFFSET)[0] - - def set_partition_id(self, val): - struct.pack_into(FMT_LE_INT, self.buffer, PARTITION_ID_FIELD_OFFSET, val) - return self - - def get_message_type(self): - return struct.unpack_from(FMT_LE_UINT16, self.buffer, TYPE_FIELD_OFFSET)[0] + return LE_LONG.unpack_from(self.buf, _OUTBOUND_MESSAGE_CORRELATION_ID_OFFSET)[0] - def set_message_type(self, val): - struct.pack_into(FMT_LE_UINT16, self.buffer, TYPE_FIELD_OFFSET, val) - return self + def set_partition_id(self, partition_id): + LE_INT.pack_into(self.buf, _OUTBOUND_MESSAGE_PARTITION_ID_OFFSET, partition_id) - def get_flags(self): - return struct.unpack_from(FMT_LE_UINT8, self.buffer, FLAGS_FIELD_OFFSET)[0] + def copy(self): + return OutboundMessage(bytearray(self.buf), self.retryable) - def set_flags(self, val): - struct.pack_into(FMT_LE_UINT8, self.buffer, FLAGS_FIELD_OFFSET, val) - return self - - def has_flags(self, flags): - return self.get_flags() & flags - - def get_frame_length(self): - return struct.unpack_from(FMT_LE_INT, self.buffer, FRAME_LENGTH_FIELD_OFFSET)[0] + def __repr__(self): + message_type = LE_INT.unpack_from(self.buf, _OUTBOUND_MESSAGE_MESSAGE_TYPE_OFFSET)[0] + correlation_id = self.get_correlation_id() + return "OutboundMessage(message_type=%s, correlation_id=%s, retryable=%s)" \ + % (message_type, correlation_id, self.retryable) - def set_frame_length(self, val): - struct.pack_into(FMT_LE_INT, self.buffer, FRAME_LENGTH_FIELD_OFFSET, val) - return self - def get_data_offset(self): - return struct.unpack_from(FMT_LE_UINT16, self.buffer, DATA_OFFSET_FIELD_OFFSET)[0] +class Frame(object): + __slots__ = ("buf", "flags", "next") - def set_data_offset(self, val): - struct.pack_into(FMT_LE_UINT16, self.buffer, DATA_OFFSET_FIELD_OFFSET, val) - return self + def __init__(self, buf, flags): + self.buf = buf + self.flags = flags + self.next = None - def _write_offset(self): - return self.get_data_offset() + self._write_index + def copy(self): + frame = Frame(self.buf, self.flags) + return frame - def _read_offset(self): - return self.get_data_offset() + self._read_index + def is_begin_frame(self): + return self._is_flag_set(_BEGIN_DATA_STRUCTURE_FLAG) - # PAYLOAD - def append_byte(self, val): - struct.pack_into(FMT_LE_UINT8, self.buffer, self._write_offset(), val) - self._write_index += BYTE_SIZE_IN_BYTES - return self + def is_end_frame(self): + return self._is_flag_set(_END_DATA_STRUCTURE_FLAG) - def append_bool(self, val): - return self.append_byte(1 if val else 0) + def is_null_frame(self): + return self._is_flag_set(_IS_NULL_FLAG) - def append_int(self, val): - struct.pack_into(FMT_LE_INT, self.buffer, self._write_offset(), val) - self._write_index += INT_SIZE_IN_BYTES - return self + def is_final_frame(self): + return self._is_flag_set(_IS_FINAL_FLAG) - def append_long(self, val): - struct.pack_into(FMT_LE_LONG, self.buffer, self._write_offset(), val) - self._write_index += LONG_SIZE_IN_BYTES - return self + def 
has_event_flag(self): + return self._is_flag_set(_IS_EVENT_FLAG) - def append_str(self, val): - self.append_byte_array(val.encode("utf-8")) - return self + def has_unfragmented_message_flags(self): + return self._is_flag_set(_UNFRAGMENTED_MESSAGE_FLAGS) - def append_data(self, val): - self.append_byte_array(val.to_bytes()) - return self + def has_begin_fragment_flag(self): + return self._is_flag_set(_BEGIN_FRAGMENT_FLAG) - def append_byte_array(self, arr): - length = len(arr) - # length - self.append_int(length) - # copy content - self.buffer[self._write_offset(): self._write_offset() + length] = arr[:] - self._write_index += length + def has_end_fragment_flag(self): + return self._is_flag_set(_END_FRAGMENT_FLAG) - def append_tuple(self, entry_tuple): - self.append_data(entry_tuple[0]).append_data(entry_tuple[1]) - return self + def _is_flag_set(self, flag_mask): + i = self.flags & flag_mask + return i == flag_mask - # PAYLOAD READ - def _read_from_buff(self, fmt, size): - val = struct.unpack_from(fmt, self.buffer, self._read_offset()) - self._read_index += size - return val[0] - def read_byte(self): - return self._read_from_buff(FMT_LE_UINT8, BYTE_SIZE_IN_BYTES) +class InboundMessage(object): + __slots__ = ("start_frame", "end_frame", "_next_frame") - def read_bool(self): - return True if self.read_byte() else False + def __init__(self, start_frame): + self.start_frame = start_frame + self.end_frame = start_frame + self._next_frame = start_frame - def read_int(self): - return self._read_from_buff(FMT_LE_INT, INT_SIZE_IN_BYTES) + def next_frame(self): + result = self._next_frame + if self._next_frame is not None: + self._next_frame = self._next_frame.next + return result - def read_long(self): - return self._read_from_buff(FMT_LE_LONG, LONG_SIZE_IN_BYTES) + def has_next_frame(self): + return self._next_frame is not None - def read_str(self): - return self.read_byte_array().decode("utf-8") + def peek_next_frame(self): + return self._next_frame - def read_data(self): - return Data(self.read_byte_array()) + def add_frame(self, frame): + frame.next = None + # For inbound messages, we always had the start_frame and end_frame set + self.end_frame.next = frame + self.end_frame = frame - def read_byte_array(self): - length = self.read_int() - result = bytearray(self.buffer[self._read_offset(): self._read_offset() + length]) - self._read_index += length - return result + def get_message_type(self): + return LE_INT.unpack_from(self.start_frame.buf, _MESSAGE_TYPE_OFFSET)[0] - # helpers + def get_correlation_id(self): + return LE_LONG.unpack_from(self.start_frame.buf, _CORRELATION_ID_OFFSET)[0] - def is_retryable(self): - return self._retryable + def get_fragmentation_id(self): + return LE_LONG.unpack_from(self.start_frame.buf, _FRAGMENTATION_ID_OFFSET)[0] - def set_retryable(self, val): - self._retryable = val - return self + def merge(self, fragment): + # should be called after calling drop_fragmentation_frame() on fragment + self.end_frame.next = fragment.start_frame + self.end_frame = fragment.end_frame - def is_complete(self): - try: - return (self._read_offset() >= HEADER_SIZE) and (self._read_offset() == self.get_frame_length()) - except AttributeError: - return False + def drop_fragmentation_frame(self): + self.start_frame = self.start_frame.next + self._next_frame = self.start_frame - def is_flag_set(self, flag): - i = self.get_flags() & flag - return i == flag - def add_flag(self, flags): - self.set_flags(self.get_flags() | flags) - return self +NULL_FRAME_BUF = 
bytearray(SIZE_OF_FRAME_LENGTH_AND_FLAGS) +LE_INT.pack_into(NULL_FRAME_BUF, 0, SIZE_OF_FRAME_LENGTH_AND_FLAGS) +LE_UINT16.pack_into(NULL_FRAME_BUF, INT_SIZE_IN_BYTES, _IS_NULL_FLAG) - def update_frame_length(self): - self.set_frame_length(self._write_offset()) - return self +# Has IS_NULL and IS_FINAL flags +NULL_FINAL_FRAME_BUF = bytearray(SIZE_OF_FRAME_LENGTH_AND_FLAGS) +LE_INT.pack_into(NULL_FINAL_FRAME_BUF, 0, SIZE_OF_FRAME_LENGTH_AND_FLAGS) +LE_UINT16.pack_into(NULL_FINAL_FRAME_BUF, INT_SIZE_IN_BYTES, _IS_NULL_FLAG | _IS_FINAL_FLAG) - def accumulate(self, client_message): - start = client_message.get_data_offset() - end = client_message.get_frame_length() - self.buffer += client_message.buffer[start:end] - self.set_frame_length(len(self.buffer)) +BEGIN_FRAME_BUF = bytearray(SIZE_OF_FRAME_LENGTH_AND_FLAGS) +LE_INT.pack_into(BEGIN_FRAME_BUF, 0, SIZE_OF_FRAME_LENGTH_AND_FLAGS) +LE_UINT16.pack_into(BEGIN_FRAME_BUF, INT_SIZE_IN_BYTES, _BEGIN_DATA_STRUCTURE_FLAG) - def clone(self): - client_message = ClientMessage(bytearray(self.buffer)) - client_message.set_retryable(self._retryable) - return client_message +END_FRAME_BUF = bytearray(SIZE_OF_FRAME_LENGTH_AND_FLAGS) +LE_INT.pack_into(END_FRAME_BUF, 0, SIZE_OF_FRAME_LENGTH_AND_FLAGS) +LE_UINT16.pack_into(END_FRAME_BUF, INT_SIZE_IN_BYTES, _END_DATA_STRUCTURE_FLAG) - def __repr__(self): - return binascii.hexlify(self.buffer) - - def __str__(self): - return "ClientMessage:{{" \ - "length={}, " \ - "correlationId={}, " \ - "messageType={}, " \ - "partitionId={}, " \ - "isComplete={}, " \ - "isRetryable={}, " \ - "isEvent={}, " \ - "writeOffset={}}}".format(self.get_frame_length(), - self.get_correlation_id(), - self.get_message_type(), - self.get_partition_id(), - self.is_complete(), - self.is_retryable(), - self.is_flag_set(LISTENER_FLAG), - self.get_data_offset()) +# Has END_DATA_STRUCTURE and IS_FINAL flags +END_FINAL_FRAME_BUF = bytearray(SIZE_OF_FRAME_LENGTH_AND_FLAGS) +LE_INT.pack_into(END_FINAL_FRAME_BUF, 0, SIZE_OF_FRAME_LENGTH_AND_FLAGS) +LE_UINT16.pack_into(END_FINAL_FRAME_BUF, INT_SIZE_IN_BYTES, _END_DATA_STRUCTURE_FLAG | _IS_FINAL_FLAG) class ClientMessageBuilder(object): def __init__(self, message_callback): - self._incomplete_messages = dict() + self._fragmented_messages = dict() self._message_callback = message_callback def on_message(self, client_message): - if client_message.is_flag_set(BEGIN_END_FLAG): - # handle message + if client_message.start_frame.has_unfragmented_message_flags(): self._message_callback(client_message) - elif client_message.is_flag_set(BEGIN_FLAG): - self._incomplete_messages[client_message.get_correlation_id()] = client_message else: - try: - message = self._incomplete_messages[client_message.get_correlation_id()] - except KeyError: - raise socket.error(errno.EIO, "A message without the begin part is received.") - message.accumulate(client_message) - if client_message.is_flag_set(END_FLAG): - message.add_flag(BEGIN_END_FLAG) - self._message_callback(message) - del self._incomplete_messages[client_message.get_correlation_id()] + fragmentation_frame = client_message.start_frame + fragmentation_id = client_message.get_fragmentation_id() + client_message.drop_fragmentation_frame() + if fragmentation_frame.has_begin_fragment_flag(): + self._fragmented_messages[fragmentation_id] = client_message + else: + existing_message = self._fragmented_messages.get(fragmentation_id, None) + if not existing_message: + raise socket.error(errno.EIO, "A message without the begin part is received.") + + 
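+                # A middle or end fragment: append its frames to the message buffered
+                # under the same fragmentation id, and hand the merged message to the
+                # callback once the end-fragment flag is seen.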
existing_message.merge(client_message) + if fragmentation_frame.has_end_fragment_flag(): + self._message_callback(existing_message) + del self._fragmented_messages[fragmentation_id] diff --git a/hazelcast/protocol/codec/atomic_long_add_and_get_codec.py b/hazelcast/protocol/codec/atomic_long_add_and_get_codec.py deleted file mode 100644 index 6d817a7767..0000000000 --- a/hazelcast/protocol/codec/atomic_long_add_and_get_codec.py +++ /dev/null @@ -1,33 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_long_message_type import * - -REQUEST_TYPE = ATOMICLONG_ADDANDGET -RESPONSE_TYPE = 103 -RETRYABLE = False - - -def calculate_size(name, delta): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += LONG_SIZE_IN_BYTES - return data_size - - -def encode_request(name, delta): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, delta)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_long(delta) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_long() - return parameters diff --git a/hazelcast/protocol/codec/atomic_long_alter_and_get_codec.py b/hazelcast/protocol/codec/atomic_long_alter_and_get_codec.py deleted file mode 100644 index 0cfe16a4bc..0000000000 --- a/hazelcast/protocol/codec/atomic_long_alter_and_get_codec.py +++ /dev/null @@ -1,33 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_long_message_type import * - -REQUEST_TYPE = ATOMICLONG_ALTERANDGET -RESPONSE_TYPE = 103 -RETRYABLE = False - - -def calculate_size(name, function): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(function) - return data_size - - -def encode_request(name, function): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, function)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(function) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_long() - return parameters diff --git a/hazelcast/protocol/codec/atomic_long_alter_codec.py b/hazelcast/protocol/codec/atomic_long_alter_codec.py deleted file mode 100644 index 4bd5d572b7..0000000000 --- a/hazelcast/protocol/codec/atomic_long_alter_codec.py +++ /dev/null @@ -1,29 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_long_message_type import * - -REQUEST_TYPE = ATOMICLONG_ALTER -RESPONSE_TYPE = 100 -RETRYABLE = False - - -def calculate_size(name, function): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(function) - return data_size - 
- -def encode_request(name, function): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, function)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(function) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode diff --git a/hazelcast/protocol/codec/atomic_long_apply_codec.py b/hazelcast/protocol/codec/atomic_long_apply_codec.py deleted file mode 100644 index b04059a6c6..0000000000 --- a/hazelcast/protocol/codec/atomic_long_apply_codec.py +++ /dev/null @@ -1,34 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_long_message_type import * - -REQUEST_TYPE = ATOMICLONG_APPLY -RESPONSE_TYPE = 105 -RETRYABLE = False - - -def calculate_size(name, function): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(function) - return data_size - - -def encode_request(name, function): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, function)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(function) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters diff --git a/hazelcast/protocol/codec/atomic_long_compare_and_set_codec.py b/hazelcast/protocol/codec/atomic_long_compare_and_set_codec.py deleted file mode 100644 index 20bb6b2e51..0000000000 --- a/hazelcast/protocol/codec/atomic_long_compare_and_set_codec.py +++ /dev/null @@ -1,35 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_long_message_type import * - -REQUEST_TYPE = ATOMICLONG_COMPAREANDSET -RESPONSE_TYPE = 101 -RETRYABLE = False - - -def calculate_size(name, expected, updated): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - return data_size - - -def encode_request(name, expected, updated): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, expected, updated)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_long(expected) - client_message.append_long(updated) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters diff --git a/hazelcast/protocol/codec/atomic_long_decrement_and_get_codec.py b/hazelcast/protocol/codec/atomic_long_decrement_and_get_codec.py deleted file mode 100644 index 40752e16b4..0000000000 --- a/hazelcast/protocol/codec/atomic_long_decrement_and_get_codec.py +++ 
/dev/null @@ -1,31 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_long_message_type import * - -REQUEST_TYPE = ATOMICLONG_DECREMENTANDGET -RESPONSE_TYPE = 103 -RETRYABLE = False - - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size - - -def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_long() - return parameters diff --git a/hazelcast/protocol/codec/atomic_long_get_and_add_codec.py b/hazelcast/protocol/codec/atomic_long_get_and_add_codec.py deleted file mode 100644 index a50145a98d..0000000000 --- a/hazelcast/protocol/codec/atomic_long_get_and_add_codec.py +++ /dev/null @@ -1,33 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_long_message_type import * - -REQUEST_TYPE = ATOMICLONG_GETANDADD -RESPONSE_TYPE = 103 -RETRYABLE = False - - -def calculate_size(name, delta): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += LONG_SIZE_IN_BYTES - return data_size - - -def encode_request(name, delta): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, delta)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_long(delta) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_long() - return parameters diff --git a/hazelcast/protocol/codec/atomic_long_get_and_alter_codec.py b/hazelcast/protocol/codec/atomic_long_get_and_alter_codec.py deleted file mode 100644 index 9e29c3ad40..0000000000 --- a/hazelcast/protocol/codec/atomic_long_get_and_alter_codec.py +++ /dev/null @@ -1,33 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_long_message_type import * - -REQUEST_TYPE = ATOMICLONG_GETANDALTER -RESPONSE_TYPE = 103 -RETRYABLE = False - - -def calculate_size(name, function): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(function) - return data_size - - -def encode_request(name, function): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, function)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(function) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = 
dict(response=None) - parameters['response'] = client_message.read_long() - return parameters diff --git a/hazelcast/protocol/codec/atomic_long_get_and_increment_codec.py b/hazelcast/protocol/codec/atomic_long_get_and_increment_codec.py deleted file mode 100644 index a7640b60fd..0000000000 --- a/hazelcast/protocol/codec/atomic_long_get_and_increment_codec.py +++ /dev/null @@ -1,31 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_long_message_type import * - -REQUEST_TYPE = ATOMICLONG_GETANDINCREMENT -RESPONSE_TYPE = 103 -RETRYABLE = False - - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size - - -def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_long() - return parameters diff --git a/hazelcast/protocol/codec/atomic_long_get_and_set_codec.py b/hazelcast/protocol/codec/atomic_long_get_and_set_codec.py deleted file mode 100644 index c07c2fae0b..0000000000 --- a/hazelcast/protocol/codec/atomic_long_get_and_set_codec.py +++ /dev/null @@ -1,33 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_long_message_type import * - -REQUEST_TYPE = ATOMICLONG_GETANDSET -RESPONSE_TYPE = 103 -RETRYABLE = False - - -def calculate_size(name, new_value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += LONG_SIZE_IN_BYTES - return data_size - - -def encode_request(name, new_value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, new_value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_long(new_value) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_long() - return parameters diff --git a/hazelcast/protocol/codec/atomic_long_get_codec.py b/hazelcast/protocol/codec/atomic_long_get_codec.py deleted file mode 100644 index ea17a326df..0000000000 --- a/hazelcast/protocol/codec/atomic_long_get_codec.py +++ /dev/null @@ -1,31 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_long_message_type import * - -REQUEST_TYPE = ATOMICLONG_GET -RESPONSE_TYPE = 103 -RETRYABLE = False - - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size - - -def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - 
client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_long() - return parameters diff --git a/hazelcast/protocol/codec/atomic_long_increment_and_get_codec.py b/hazelcast/protocol/codec/atomic_long_increment_and_get_codec.py deleted file mode 100644 index 55d3ad18b3..0000000000 --- a/hazelcast/protocol/codec/atomic_long_increment_and_get_codec.py +++ /dev/null @@ -1,31 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_long_message_type import * - -REQUEST_TYPE = ATOMICLONG_INCREMENTANDGET -RESPONSE_TYPE = 103 -RETRYABLE = False - - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size - - -def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_long() - return parameters diff --git a/hazelcast/protocol/codec/atomic_long_message_type.py b/hazelcast/protocol/codec/atomic_long_message_type.py deleted file mode 100644 index b076165feb..0000000000 --- a/hazelcast/protocol/codec/atomic_long_message_type.py +++ /dev/null @@ -1,14 +0,0 @@ - -ATOMICLONG_APPLY = 0x0a01 -ATOMICLONG_ALTER = 0x0a02 -ATOMICLONG_ALTERANDGET = 0x0a03 -ATOMICLONG_GETANDALTER = 0x0a04 -ATOMICLONG_ADDANDGET = 0x0a05 -ATOMICLONG_COMPAREANDSET = 0x0a06 -ATOMICLONG_DECREMENTANDGET = 0x0a07 -ATOMICLONG_GET = 0x0a08 -ATOMICLONG_GETANDADD = 0x0a09 -ATOMICLONG_GETANDSET = 0x0a0a -ATOMICLONG_INCREMENTANDGET = 0x0a0b -ATOMICLONG_GETANDINCREMENT = 0x0a0c -ATOMICLONG_SET = 0x0a0d diff --git a/hazelcast/protocol/codec/atomic_long_set_codec.py b/hazelcast/protocol/codec/atomic_long_set_codec.py deleted file mode 100644 index 2b3dbc3664..0000000000 --- a/hazelcast/protocol/codec/atomic_long_set_codec.py +++ /dev/null @@ -1,29 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_long_message_type import * - -REQUEST_TYPE = ATOMICLONG_SET -RESPONSE_TYPE = 100 -RETRYABLE = False - - -def calculate_size(name, new_value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += LONG_SIZE_IN_BYTES - return data_size - - -def encode_request(name, new_value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, new_value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_long(new_value) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode diff --git a/hazelcast/protocol/codec/atomic_reference_alter_and_get_codec.py b/hazelcast/protocol/codec/atomic_reference_alter_and_get_codec.py 
deleted file mode 100644 index 46cc58182d..0000000000 --- a/hazelcast/protocol/codec/atomic_reference_alter_and_get_codec.py +++ /dev/null @@ -1,34 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_reference_message_type import * - -REQUEST_TYPE = ATOMICREFERENCE_ALTERANDGET -RESPONSE_TYPE = 105 -RETRYABLE = False - - -def calculate_size(name, function): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(function) - return data_size - - -def encode_request(name, function): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, function)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(function) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters diff --git a/hazelcast/protocol/codec/atomic_reference_alter_codec.py b/hazelcast/protocol/codec/atomic_reference_alter_codec.py deleted file mode 100644 index 98c8103286..0000000000 --- a/hazelcast/protocol/codec/atomic_reference_alter_codec.py +++ /dev/null @@ -1,29 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_reference_message_type import * - -REQUEST_TYPE = ATOMICREFERENCE_ALTER -RESPONSE_TYPE = 100 -RETRYABLE = False - - -def calculate_size(name, function): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(function) - return data_size - - -def encode_request(name, function): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, function)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(function) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode diff --git a/hazelcast/protocol/codec/atomic_reference_apply_codec.py b/hazelcast/protocol/codec/atomic_reference_apply_codec.py deleted file mode 100644 index 3e855cb122..0000000000 --- a/hazelcast/protocol/codec/atomic_reference_apply_codec.py +++ /dev/null @@ -1,34 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_reference_message_type import * - -REQUEST_TYPE = ATOMICREFERENCE_APPLY -RESPONSE_TYPE = 105 -RETRYABLE = False - - -def calculate_size(name, function): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(function) - return data_size - - -def encode_request(name, function): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, function)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(function) - 
client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters diff --git a/hazelcast/protocol/codec/atomic_reference_clear_codec.py b/hazelcast/protocol/codec/atomic_reference_clear_codec.py deleted file mode 100644 index 873457799d..0000000000 --- a/hazelcast/protocol/codec/atomic_reference_clear_codec.py +++ /dev/null @@ -1,27 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_reference_message_type import * - -REQUEST_TYPE = ATOMICREFERENCE_CLEAR -RESPONSE_TYPE = 100 -RETRYABLE = False - - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size - - -def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode diff --git a/hazelcast/protocol/codec/atomic_reference_compare_and_set_codec.py b/hazelcast/protocol/codec/atomic_reference_compare_and_set_codec.py deleted file mode 100644 index df7425bdea..0000000000 --- a/hazelcast/protocol/codec/atomic_reference_compare_and_set_codec.py +++ /dev/null @@ -1,43 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_reference_message_type import * - -REQUEST_TYPE = ATOMICREFERENCE_COMPAREANDSET -RESPONSE_TYPE = 101 -RETRYABLE = False - - -def calculate_size(name, expected, updated): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += BOOLEAN_SIZE_IN_BYTES - if expected is not None: - data_size += calculate_size_data(expected) - data_size += BOOLEAN_SIZE_IN_BYTES - if updated is not None: - data_size += calculate_size_data(updated) - return data_size - - -def encode_request(name, expected, updated): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, expected, updated)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_bool(expected is None) - if expected is not None: - client_message.append_data(expected) - client_message.append_bool(updated is None) - if updated is not None: - client_message.append_data(updated) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters diff --git a/hazelcast/protocol/codec/atomic_reference_contains_codec.py b/hazelcast/protocol/codec/atomic_reference_contains_codec.py deleted file mode 100644 index 819e75ca3a..0000000000 --- a/hazelcast/protocol/codec/atomic_reference_contains_codec.py +++ /dev/null @@ -1,37 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message 
import ClientMessage -from hazelcast.protocol.codec.atomic_reference_message_type import * - -REQUEST_TYPE = ATOMICREFERENCE_CONTAINS -RESPONSE_TYPE = 101 -RETRYABLE = True - - -def calculate_size(name, expected): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += BOOLEAN_SIZE_IN_BYTES - if expected is not None: - data_size += calculate_size_data(expected) - return data_size - - -def encode_request(name, expected): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, expected)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_bool(expected is None) - if expected is not None: - client_message.append_data(expected) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters diff --git a/hazelcast/protocol/codec/atomic_reference_get_and_alter_codec.py b/hazelcast/protocol/codec/atomic_reference_get_and_alter_codec.py deleted file mode 100644 index a6fbc453ee..0000000000 --- a/hazelcast/protocol/codec/atomic_reference_get_and_alter_codec.py +++ /dev/null @@ -1,34 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_reference_message_type import * - -REQUEST_TYPE = ATOMICREFERENCE_GETANDALTER -RESPONSE_TYPE = 105 -RETRYABLE = False - - -def calculate_size(name, function): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(function) - return data_size - - -def encode_request(name, function): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, function)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(function) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters diff --git a/hazelcast/protocol/codec/atomic_reference_get_and_set_codec.py b/hazelcast/protocol/codec/atomic_reference_get_and_set_codec.py deleted file mode 100644 index 9305ce5ae0..0000000000 --- a/hazelcast/protocol/codec/atomic_reference_get_and_set_codec.py +++ /dev/null @@ -1,38 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_reference_message_type import * - -REQUEST_TYPE = ATOMICREFERENCE_GETANDSET -RESPONSE_TYPE = 105 -RETRYABLE = False - - -def calculate_size(name, new_value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += BOOLEAN_SIZE_IN_BYTES - if new_value is not None: - data_size += calculate_size_data(new_value) - return data_size - - -def encode_request(name, new_value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, new_value)) - 
client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_bool(new_value is None) - if new_value is not None: - client_message.append_data(new_value) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters diff --git a/hazelcast/protocol/codec/atomic_reference_get_codec.py b/hazelcast/protocol/codec/atomic_reference_get_codec.py deleted file mode 100644 index 19cf580f51..0000000000 --- a/hazelcast/protocol/codec/atomic_reference_get_codec.py +++ /dev/null @@ -1,35 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_reference_message_type import * - -REQUEST_TYPE = ATOMICREFERENCE_GET -RESPONSE_TYPE = 105 -RETRYABLE = True - - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size - - -def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters - - - diff --git a/hazelcast/protocol/codec/atomic_reference_is_null_codec.py b/hazelcast/protocol/codec/atomic_reference_is_null_codec.py deleted file mode 100644 index b43583e904..0000000000 --- a/hazelcast/protocol/codec/atomic_reference_is_null_codec.py +++ /dev/null @@ -1,31 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_reference_message_type import * - -REQUEST_TYPE = ATOMICREFERENCE_ISNULL -RESPONSE_TYPE = 101 -RETRYABLE = True - - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size - - -def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters diff --git a/hazelcast/protocol/codec/atomic_reference_message_type.py b/hazelcast/protocol/codec/atomic_reference_message_type.py deleted file mode 100644 index 65a8c70a8e..0000000000 --- a/hazelcast/protocol/codec/atomic_reference_message_type.py +++ /dev/null @@ -1,13 +0,0 @@ - -ATOMICREFERENCE_APPLY = 0x0b01 -ATOMICREFERENCE_ALTER = 0x0b02 -ATOMICREFERENCE_ALTERANDGET = 0x0b03 -ATOMICREFERENCE_GETANDALTER = 0x0b04 -ATOMICREFERENCE_CONTAINS = 0x0b05 -ATOMICREFERENCE_COMPAREANDSET = 
0x0b06 -ATOMICREFERENCE_GET = 0x0b08 -ATOMICREFERENCE_SET = 0x0b09 -ATOMICREFERENCE_CLEAR = 0x0b0a -ATOMICREFERENCE_GETANDSET = 0x0b0b -ATOMICREFERENCE_SETANDGET = 0x0b0c -ATOMICREFERENCE_ISNULL = 0x0b0d diff --git a/hazelcast/protocol/codec/atomic_reference_set_and_get_codec.py b/hazelcast/protocol/codec/atomic_reference_set_and_get_codec.py deleted file mode 100644 index d06f786852..0000000000 --- a/hazelcast/protocol/codec/atomic_reference_set_and_get_codec.py +++ /dev/null @@ -1,38 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_reference_message_type import * - -REQUEST_TYPE = ATOMICREFERENCE_SETANDGET -RESPONSE_TYPE = 105 -RETRYABLE = False - - -def calculate_size(name, new_value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += BOOLEAN_SIZE_IN_BYTES - if new_value is not None: - data_size += calculate_size_data(new_value) - return data_size - - -def encode_request(name, new_value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, new_value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_bool(new_value is None) - if new_value is not None: - client_message.append_data(new_value) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters diff --git a/hazelcast/protocol/codec/atomic_reference_set_codec.py b/hazelcast/protocol/codec/atomic_reference_set_codec.py deleted file mode 100644 index 6163adcd55..0000000000 --- a/hazelcast/protocol/codec/atomic_reference_set_codec.py +++ /dev/null @@ -1,33 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.atomic_reference_message_type import * - -REQUEST_TYPE = ATOMICREFERENCE_SET -RESPONSE_TYPE = 100 -RETRYABLE = False - - -def calculate_size(name, new_value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += BOOLEAN_SIZE_IN_BYTES - if new_value is not None: - data_size += calculate_size_data(new_value) - return data_size - - -def encode_request(name, new_value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, new_value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_bool(new_value is None) - if new_value is not None: - client_message.append_data(new_value) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode diff --git a/hazelcast/protocol/codec/client_add_cluster_view_listener_codec.py b/hazelcast/protocol/codec/client_add_cluster_view_listener_codec.py new file mode 100644 index 0000000000..e4306ab338 --- /dev/null +++ b/hazelcast/protocol/codec/client_add_cluster_view_listener_codec.py @@ -0,0 +1,39 @@ +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, 
REQUEST_HEADER_SIZE, create_initial_buffer, EVENT_HEADER_SIZE +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.codec.custom.member_info_codec import MemberInfoCodec +from hazelcast.protocol.builtin import EntryListUUIDListIntegerCodec + +# hex: 0x000300 +_REQUEST_MESSAGE_TYPE = 768 +# hex: 0x000301 +_RESPONSE_MESSAGE_TYPE = 769 +# hex: 0x000302 +_EVENT_MEMBERS_VIEW_MESSAGE_TYPE = 770 +# hex: 0x000303 +_EVENT_PARTITIONS_VIEW_MESSAGE_TYPE = 771 + +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_EVENT_MEMBERS_VIEW_VERSION_OFFSET = EVENT_HEADER_SIZE +_EVENT_PARTITIONS_VIEW_VERSION_OFFSET = EVENT_HEADER_SIZE + + +def encode_request(): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE, True) + return OutboundMessage(buf, False) + + +def handle(msg, handle_members_view_event=None, handle_partitions_view_event=None): + message_type = msg.get_message_type() + if message_type == _EVENT_MEMBERS_VIEW_MESSAGE_TYPE and handle_members_view_event is not None: + initial_frame = msg.next_frame() + version = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_MEMBERS_VIEW_VERSION_OFFSET) + member_infos = ListMultiFrameCodec.decode(msg, MemberInfoCodec.decode) + handle_members_view_event(version, member_infos) + return + if message_type == _EVENT_PARTITIONS_VIEW_MESSAGE_TYPE and handle_partitions_view_event is not None: + initial_frame = msg.next_frame() + version = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_PARTITIONS_VIEW_VERSION_OFFSET) + partitions = EntryListUUIDListIntegerCodec.decode(msg) + handle_partitions_view_event(version, partitions) + return diff --git a/hazelcast/protocol/codec/client_add_distributed_object_listener_codec.py b/hazelcast/protocol/codec/client_add_distributed_object_listener_codec.py index 64d32c07ad..50e22ee776 100644 --- a/hazelcast/protocol/codec/client_add_distributed_object_listener_codec.py +++ b/hazelcast/protocol/codec/client_add_distributed_object_listener_codec.py @@ -1,42 +1,39 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.client_message_type import * -from hazelcast.protocol.event_response_const import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE, EVENT_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = CLIENT_ADDDISTRIBUTEDOBJECTLISTENER -RESPONSE_TYPE = 104 -RETRYABLE = False +# hex: 0x000900 +_REQUEST_MESSAGE_TYPE = 2304 +# hex: 0x000901 +_RESPONSE_MESSAGE_TYPE = 2305 +# hex: 0x000902 +_EVENT_DISTRIBUTED_OBJECT_MESSAGE_TYPE = 2306 - -def calculate_size(local_only): - """ Calculates the request payload size""" - data_size = 0 - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size +_REQUEST_LOCAL_ONLY_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_LOCAL_ONLY_OFFSET + BOOLEAN_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE +_EVENT_DISTRIBUTED_OBJECT_SOURCE_OFFSET = EVENT_HEADER_SIZE def encode_request(local_only): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(local_only)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_bool(local_only) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ 
Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_str() - return parameters - - -def handle(client_message, handle_event_distributed_object=None, to_object=None): - """ Event handler """ - message_type = client_message.get_message_type() - if message_type == EVENT_DISTRIBUTEDOBJECT and handle_event_distributed_object is not None: - name = client_message.read_str() - service_name = client_message.read_str() - event_type = client_message.read_str() - handle_event_distributed_object(name=name, service_name=service_name, event_type=event_type) + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE, True) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_LOCAL_ONLY_OFFSET, local_only) + return OutboundMessage(buf, False) + + +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) + + +def handle(msg, handle_distributed_object_event=None): + message_type = msg.get_message_type() + if message_type == _EVENT_DISTRIBUTED_OBJECT_MESSAGE_TYPE and handle_distributed_object_event is not None: + initial_frame = msg.next_frame() + source = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _EVENT_DISTRIBUTED_OBJECT_SOURCE_OFFSET) + name = StringCodec.decode(msg) + service_name = StringCodec.decode(msg) + event_type = StringCodec.decode(msg) + handle_distributed_object_event(name, service_name, event_type, source) + return diff --git a/hazelcast/protocol/codec/client_add_membership_listener_codec.py b/hazelcast/protocol/codec/client_add_membership_listener_codec.py deleted file mode 100644 index 68db9845bf..0000000000 --- a/hazelcast/protocol/codec/client_add_membership_listener_codec.py +++ /dev/null @@ -1,58 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.custom_codec import * -from hazelcast.protocol.codec.client_message_type import * -from hazelcast.protocol.event_response_const import * -from hazelcast.six.moves import range - -REQUEST_TYPE = CLIENT_ADDMEMBERSHIPLISTENER -RESPONSE_TYPE = 104 -RETRYABLE = False - - -def calculate_size(local_only): - """ Calculates the request payload size""" - data_size = 0 - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size - - -def encode_request(local_only): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(local_only)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_bool(local_only) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_str() - return parameters - - -def handle(client_message, handle_event_member=None, handle_event_member_list=None, handle_event_member_attribute_change=None, to_object=None): - """ Event handler """ - message_type = client_message.get_message_type() - if message_type == EVENT_MEMBER and handle_event_member is not None: - member = MemberCodec.decode(client_message, to_object) - event_type = client_message.read_int() - handle_event_member(member=member, event_type=event_type) - if message_type == EVENT_MEMBERLIST and handle_event_member_list is not None: - members_size = client_message.read_int() - members = [] - for _ in range(0, members_size): - 
members_item = MemberCodec.decode(client_message, to_object) - members.append(members_item) - handle_event_member_list(members=members) - if message_type == EVENT_MEMBERATTRIBUTECHANGE and handle_event_member_attribute_change is not None: - uuid = client_message.read_str() - key = client_message.read_str() - operation_type = client_message.read_int() - value = None - if not client_message.read_bool(): - value = client_message.read_str() - handle_event_member_attribute_change(uuid=uuid, key=key, operation_type=operation_type, value=value) diff --git a/hazelcast/protocol/codec/client_add_partition_lost_listener_codec.py b/hazelcast/protocol/codec/client_add_partition_lost_listener_codec.py index 373d052c5c..0458a08d45 100644 --- a/hazelcast/protocol/codec/client_add_partition_lost_listener_codec.py +++ b/hazelcast/protocol/codec/client_add_partition_lost_listener_codec.py @@ -1,45 +1,39 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.custom_codec import * -from hazelcast.protocol.codec.client_message_type import * -from hazelcast.protocol.event_response_const import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE, EVENT_HEADER_SIZE -REQUEST_TYPE = CLIENT_ADDPARTITIONLOSTLISTENER -RESPONSE_TYPE = 104 -RETRYABLE = False +# hex: 0x000600 +_REQUEST_MESSAGE_TYPE = 1536 +# hex: 0x000601 +_RESPONSE_MESSAGE_TYPE = 1537 +# hex: 0x000602 +_EVENT_PARTITION_LOST_MESSAGE_TYPE = 1538 - -def calculate_size(local_only): - """ Calculates the request payload size""" - data_size = 0 - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size +_REQUEST_LOCAL_ONLY_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_LOCAL_ONLY_OFFSET + BOOLEAN_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE +_EVENT_PARTITION_LOST_PARTITION_ID_OFFSET = EVENT_HEADER_SIZE +_EVENT_PARTITION_LOST_LOST_BACKUP_COUNT_OFFSET = _EVENT_PARTITION_LOST_PARTITION_ID_OFFSET + INT_SIZE_IN_BYTES +_EVENT_PARTITION_LOST_SOURCE_OFFSET = _EVENT_PARTITION_LOST_LOST_BACKUP_COUNT_OFFSET + INT_SIZE_IN_BYTES def encode_request(local_only): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(local_only)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_bool(local_only) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_str() - return parameters - - -def handle(client_message, handle_event_partition_lost=None, to_object=None): - """ Event handler """ - message_type = client_message.get_message_type() - if message_type == EVENT_PARTITIONLOST and handle_event_partition_lost is not None: - partition_id = client_message.read_int() - lost_backup_count = client_message.read_int() - source = None - if not client_message.read_bool(): - source = AddressCodec.decode(client_message, to_object) - handle_event_partition_lost(partition_id=partition_id, lost_backup_count=lost_backup_count, source=source) + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE, True) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_LOCAL_ONLY_OFFSET, local_only) + return OutboundMessage(buf, False) + + 
+def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) + + +def handle(msg, handle_partition_lost_event=None): + message_type = msg.get_message_type() + if message_type == _EVENT_PARTITION_LOST_MESSAGE_TYPE and handle_partition_lost_event is not None: + initial_frame = msg.next_frame() + partition_id = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_PARTITION_LOST_PARTITION_ID_OFFSET) + lost_backup_count = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_PARTITION_LOST_LOST_BACKUP_COUNT_OFFSET) + source = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _EVENT_PARTITION_LOST_SOURCE_OFFSET) + handle_partition_lost_event(partition_id, lost_backup_count, source) + return diff --git a/hazelcast/protocol/codec/client_authentication_codec.py b/hazelcast/protocol/codec/client_authentication_codec.py index 46b202de84..e5a03ee477 100644 --- a/hazelcast/protocol/codec/client_authentication_codec.py +++ b/hazelcast/protocol/codec/client_authentication_codec.py @@ -1,73 +1,50 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.custom_codec import * -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.client_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import CodecUtil +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.codec.custom.address_codec import AddressCodec -REQUEST_TYPE = CLIENT_AUTHENTICATION -RESPONSE_TYPE = 107 -RETRYABLE = True +# hex: 0x000100 +_REQUEST_MESSAGE_TYPE = 256 +# hex: 0x000101 +_RESPONSE_MESSAGE_TYPE = 257 +_REQUEST_UUID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_SERIALIZATION_VERSION_OFFSET = _REQUEST_UUID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_SERIALIZATION_VERSION_OFFSET + BYTE_SIZE_IN_BYTES +_RESPONSE_STATUS_OFFSET = RESPONSE_HEADER_SIZE +_RESPONSE_MEMBER_UUID_OFFSET = _RESPONSE_STATUS_OFFSET + BYTE_SIZE_IN_BYTES +_RESPONSE_SERIALIZATION_VERSION_OFFSET = _RESPONSE_MEMBER_UUID_OFFSET + UUID_SIZE_IN_BYTES +_RESPONSE_PARTITION_COUNT_OFFSET = _RESPONSE_SERIALIZATION_VERSION_OFFSET + BYTE_SIZE_IN_BYTES +_RESPONSE_CLUSTER_ID_OFFSET = _RESPONSE_PARTITION_COUNT_OFFSET + INT_SIZE_IN_BYTES +_RESPONSE_FAILOVER_SUPPORTED_OFFSET = _RESPONSE_CLUSTER_ID_OFFSET + UUID_SIZE_IN_BYTES -def calculate_size(username, password, uuid, owner_uuid, is_owner_connection, client_type, serialization_version, client_hazelcast_version): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(username) - data_size += calculate_size_str(password) - data_size += BOOLEAN_SIZE_IN_BYTES - if uuid is not None: - data_size += calculate_size_str(uuid) - data_size += BOOLEAN_SIZE_IN_BYTES - if owner_uuid is not None: - data_size += calculate_size_str(owner_uuid) - data_size += BOOLEAN_SIZE_IN_BYTES - data_size += calculate_size_str(client_type) - data_size += BYTE_SIZE_IN_BYTES - data_size += calculate_size_str(client_hazelcast_version) - return data_size +def encode_request(cluster_name, username, password, uuid, client_type, serialization_version, client_hazelcast_version, client_name, labels): + buf = 
create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_UUID_OFFSET, uuid) + FixSizedTypesCodec.encode_byte(buf, _REQUEST_SERIALIZATION_VERSION_OFFSET, serialization_version) + StringCodec.encode(buf, cluster_name) + CodecUtil.encode_nullable(buf, username, StringCodec.encode) + CodecUtil.encode_nullable(buf, password, StringCodec.encode) + StringCodec.encode(buf, client_type) + StringCodec.encode(buf, client_hazelcast_version) + StringCodec.encode(buf, client_name) + ListMultiFrameCodec.encode(buf, labels, StringCodec.encode, True) + return OutboundMessage(buf, True) -def encode_request(username, password, uuid, owner_uuid, is_owner_connection, client_type, serialization_version, client_hazelcast_version): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(username, password, uuid, owner_uuid, is_owner_connection, client_type, serialization_version, client_hazelcast_version)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(username) - client_message.append_str(password) - client_message.append_bool(uuid is None) - if uuid is not None: - client_message.append_str(uuid) - client_message.append_bool(owner_uuid is None) - if owner_uuid is not None: - client_message.append_str(owner_uuid) - client_message.append_bool(is_owner_connection) - client_message.append_str(client_type) - client_message.append_byte(serialization_version) - client_message.append_str(client_hazelcast_version) - client_message.update_frame_length() - return client_message - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(status=None, address=None, uuid=None, owner_uuid=None, serialization_version=None, server_hazelcast_version=None, client_unregistered_members=None) - parameters['status'] = client_message.read_byte() - if not client_message.read_bool(): - parameters['address'] = AddressCodec.decode(client_message, to_object) - if not client_message.read_bool(): - parameters['uuid'] = client_message.read_str() - if not client_message.read_bool(): - parameters['owner_uuid'] = client_message.read_str() - parameters['serialization_version'] = client_message.read_byte() - if client_message.is_complete(): - return parameters - parameters['server_hazelcast_version'] = client_message.read_str() - if not client_message.read_bool(): - client_unregistered_members_size = client_message.read_int() - client_unregistered_members = [] - for _ in range(0, client_unregistered_members_size): - client_unregistered_members_item = MemberCodec.decode(client_message, to_object) - client_unregistered_members.append(client_unregistered_members_item) - parameters['client_unregistered_members'] = ImmutableLazyDataList(client_unregistered_members, to_object) - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + response = dict() + response["status"] = FixSizedTypesCodec.decode_byte(initial_frame.buf, _RESPONSE_STATUS_OFFSET) + response["member_uuid"] = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_MEMBER_UUID_OFFSET) + response["serialization_version"] = FixSizedTypesCodec.decode_byte(initial_frame.buf, _RESPONSE_SERIALIZATION_VERSION_OFFSET) + response["partition_count"] = FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_PARTITION_COUNT_OFFSET) + response["cluster_id"] = FixSizedTypesCodec.decode_uuid(initial_frame.buf, 
_RESPONSE_CLUSTER_ID_OFFSET) + response["failover_supported"] = FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_FAILOVER_SUPPORTED_OFFSET) + response["address"] = CodecUtil.decode_nullable(msg, AddressCodec.decode) + response["server_hazelcast_version"] = StringCodec.decode(msg) + return response diff --git a/hazelcast/protocol/codec/client_authentication_custom_codec.py b/hazelcast/protocol/codec/client_authentication_custom_codec.py index 8fc08ec952..0332a1cb41 100644 --- a/hazelcast/protocol/codec/client_authentication_custom_codec.py +++ b/hazelcast/protocol/codec/client_authentication_custom_codec.py @@ -1,71 +1,50 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.custom_codec import * -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.client_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ByteArrayCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.codec.custom.address_codec import AddressCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = CLIENT_AUTHENTICATIONCUSTOM -RESPONSE_TYPE = 107 -RETRYABLE = True +# hex: 0x000200 +_REQUEST_MESSAGE_TYPE = 512 +# hex: 0x000201 +_RESPONSE_MESSAGE_TYPE = 513 +_REQUEST_UUID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_SERIALIZATION_VERSION_OFFSET = _REQUEST_UUID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_SERIALIZATION_VERSION_OFFSET + BYTE_SIZE_IN_BYTES +_RESPONSE_STATUS_OFFSET = RESPONSE_HEADER_SIZE +_RESPONSE_MEMBER_UUID_OFFSET = _RESPONSE_STATUS_OFFSET + BYTE_SIZE_IN_BYTES +_RESPONSE_SERIALIZATION_VERSION_OFFSET = _RESPONSE_MEMBER_UUID_OFFSET + UUID_SIZE_IN_BYTES +_RESPONSE_PARTITION_COUNT_OFFSET = _RESPONSE_SERIALIZATION_VERSION_OFFSET + BYTE_SIZE_IN_BYTES +_RESPONSE_CLUSTER_ID_OFFSET = _RESPONSE_PARTITION_COUNT_OFFSET + INT_SIZE_IN_BYTES +_RESPONSE_FAILOVER_SUPPORTED_OFFSET = _RESPONSE_CLUSTER_ID_OFFSET + UUID_SIZE_IN_BYTES -def calculate_size(credentials, uuid, owner_uuid, is_owner_connection, client_type, serialization_version, client_hazelcast_version): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_data(credentials) - data_size += BOOLEAN_SIZE_IN_BYTES - if uuid is not None: - data_size += calculate_size_str(uuid) - data_size += BOOLEAN_SIZE_IN_BYTES - if owner_uuid is not None: - data_size += calculate_size_str(owner_uuid) - data_size += BOOLEAN_SIZE_IN_BYTES - data_size += calculate_size_str(client_type) - data_size += BYTE_SIZE_IN_BYTES - data_size += calculate_size_str(client_hazelcast_version) - return data_size +def encode_request(cluster_name, credentials, uuid, client_type, serialization_version, client_hazelcast_version, client_name, labels): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_UUID_OFFSET, uuid) + FixSizedTypesCodec.encode_byte(buf, _REQUEST_SERIALIZATION_VERSION_OFFSET, serialization_version) + StringCodec.encode(buf, cluster_name) + ByteArrayCodec.encode(buf, credentials) + StringCodec.encode(buf, client_type) + StringCodec.encode(buf, client_hazelcast_version) + StringCodec.encode(buf, client_name) + 
ListMultiFrameCodec.encode(buf, labels, StringCodec.encode, True) + return OutboundMessage(buf, True) -def encode_request(credentials, uuid, owner_uuid, is_owner_connection, client_type, serialization_version, client_hazelcast_version): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(credentials, uuid, owner_uuid, is_owner_connection, client_type, serialization_version, client_hazelcast_version)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_data(credentials) - client_message.append_bool(uuid is None) - if uuid is not None: - client_message.append_str(uuid) - client_message.append_bool(owner_uuid is None) - if owner_uuid is not None: - client_message.append_str(owner_uuid) - client_message.append_bool(is_owner_connection) - client_message.append_str(client_type) - client_message.append_byte(serialization_version) - client_message.append_str(client_hazelcast_version) - client_message.update_frame_length() - return client_message - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(status=None, address=None, uuid=None, owner_uuid=None, serialization_version=None, server_hazelcast_version=None, client_unregistered_members=None) - parameters['status'] = client_message.read_byte() - if not client_message.read_bool(): - parameters['address'] = AddressCodec.decode(client_message, to_object) - if not client_message.read_bool(): - parameters['uuid'] = client_message.read_str() - if not client_message.read_bool(): - parameters['owner_uuid'] = client_message.read_str() - parameters['serialization_version'] = client_message.read_byte() - if client_message.is_complete(): - return parameters - parameters['server_hazelcast_version'] = client_message.read_str() - if not client_message.read_bool(): - client_unregistered_members_size = client_message.read_int() - client_unregistered_members = [] - for _ in range(0, client_unregistered_members_size): - client_unregistered_members_item = MemberCodec.decode(client_message, to_object) - client_unregistered_members.append(client_unregistered_members_item) - parameters['client_unregistered_members'] = ImmutableLazyDataList(client_unregistered_members, to_object) - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + response = dict() + response["status"] = FixSizedTypesCodec.decode_byte(initial_frame.buf, _RESPONSE_STATUS_OFFSET) + response["member_uuid"] = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_MEMBER_UUID_OFFSET) + response["serialization_version"] = FixSizedTypesCodec.decode_byte(initial_frame.buf, _RESPONSE_SERIALIZATION_VERSION_OFFSET) + response["partition_count"] = FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_PARTITION_COUNT_OFFSET) + response["cluster_id"] = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_CLUSTER_ID_OFFSET) + response["failover_supported"] = FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_FAILOVER_SUPPORTED_OFFSET) + response["address"] = CodecUtil.decode_nullable(msg, AddressCodec.decode) + response["server_hazelcast_version"] = StringCodec.decode(msg) + return response diff --git a/hazelcast/protocol/codec/client_create_proxies_codec.py b/hazelcast/protocol/codec/client_create_proxies_codec.py new file mode 100644 index 0000000000..159c9a29bc --- /dev/null +++ b/hazelcast/protocol/codec/client_create_proxies_codec.py @@ -0,0 +1,16 @@ +from 
hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import EntryListCodec +from hazelcast.protocol.builtin import StringCodec + +# hex: 0x000E00 +_REQUEST_MESSAGE_TYPE = 3584 +# hex: 0x000E01 +_RESPONSE_MESSAGE_TYPE = 3585 + +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE + + +def encode_request(proxies): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + EntryListCodec.encode(buf, proxies, StringCodec.encode, StringCodec.encode, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/client_create_proxy_codec.py b/hazelcast/protocol/codec/client_create_proxy_codec.py index 0634802cd6..c42a50cfbc 100644 --- a/hazelcast/protocol/codec/client_create_proxy_codec.py +++ b/hazelcast/protocol/codec/client_create_proxy_codec.py @@ -1,32 +1,16 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.custom_codec import * -from hazelcast.protocol.codec.client_message_type import * +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = CLIENT_CREATEPROXY -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x000400 +_REQUEST_MESSAGE_TYPE = 1024 +# hex: 0x000401 +_RESPONSE_MESSAGE_TYPE = 1025 +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE -def calculate_size(name, service_name, target): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(service_name) - data_size += calculate_size_address(target) - return data_size - -def encode_request(name, service_name, target): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, service_name, target)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(service_name) - AddressCodec.encode(client_message, target) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode +def encode_request(name, service_name): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + StringCodec.encode(buf, service_name, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/client_destroy_proxy_codec.py b/hazelcast/protocol/codec/client_destroy_proxy_codec.py index e047ba77e9..0411a6d9b1 100644 --- a/hazelcast/protocol/codec/client_destroy_proxy_codec.py +++ b/hazelcast/protocol/codec/client_destroy_proxy_codec.py @@ -1,29 +1,16 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.client_message_type import * +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = CLIENT_DESTROYPROXY -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x000500 +_REQUEST_MESSAGE_TYPE = 1280 +# hex: 0x000501 +_RESPONSE_MESSAGE_TYPE = 1281 - -def calculate_size(name, service_name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(service_name) - return data_size 
+_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name, service_name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, service_name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(service_name) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + StringCodec.encode(buf, service_name, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/client_get_distributed_objects_codec.py b/hazelcast/protocol/codec/client_get_distributed_objects_codec.py index 42b4e8a7b5..53372171c9 100644 --- a/hazelcast/protocol/codec/client_get_distributed_objects_codec.py +++ b/hazelcast/protocol/codec/client_get_distributed_objects_codec.py @@ -1,36 +1,20 @@ -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.custom_codec import * -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.client_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.codec.custom.distributed_object_info_codec import DistributedObjectInfoCodec -REQUEST_TYPE = CLIENT_GETDISTRIBUTEDOBJECTS -RESPONSE_TYPE = 110 -RETRYABLE = False +# hex: 0x000800 +_REQUEST_MESSAGE_TYPE = 2048 +# hex: 0x000801 +_RESPONSE_MESSAGE_TYPE = 2049 - -def calculate_size(): - """ Calculates the request payload size""" - data_size = 0 - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size()) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = DistributedObjectInfoCodec.decode(client_message) - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DistributedObjectInfoCodec.decode) diff --git a/hazelcast/protocol/codec/client_get_partitions_codec.py b/hazelcast/protocol/codec/client_get_partitions_codec.py deleted file mode 100644 index 0d8077ff38..0000000000 --- a/hazelcast/protocol/codec/client_get_partitions_codec.py +++ /dev/null @@ -1,40 +0,0 @@ -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.custom_codec import * -from hazelcast.protocol.codec.client_message_type import * -from hazelcast.six.moves import range - -REQUEST_TYPE = CLIENT_GETPARTITIONS -RESPONSE_TYPE = 108 -RETRYABLE = False - - -def calculate_size(): - """ Calculates the request payload size""" - data_size = 0 - return data_size - - 
-def encode_request(): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size()) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(partitions=None) - partitions_size = client_message.read_int() - partitions = {} - for _ in range(0, partitions_size): - partitions_key = AddressCodec.decode(client_message, to_object) - partitions_val_size = client_message.read_int() - partitions_val = [] - for _ in range(0, partitions_val_size): - partitions_val_item = client_message.read_int() - partitions_val.append(partitions_val_item) - partitions[partitions_key] = partitions_val - parameters['partitions'] = partitions - return parameters diff --git a/hazelcast/protocol/codec/client_local_backup_listener_codec.py b/hazelcast/protocol/codec/client_local_backup_listener_codec.py new file mode 100644 index 0000000000..1b0abe66cb --- /dev/null +++ b/hazelcast/protocol/codec/client_local_backup_listener_codec.py @@ -0,0 +1,32 @@ +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE, EVENT_HEADER_SIZE + +# hex: 0x000F00 +_REQUEST_MESSAGE_TYPE = 3840 +# hex: 0x000F01 +_RESPONSE_MESSAGE_TYPE = 3841 +# hex: 0x000F02 +_EVENT_BACKUP_MESSAGE_TYPE = 3842 + +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE +_EVENT_BACKUP_SOURCE_INVOCATION_CORRELATION_ID_OFFSET = EVENT_HEADER_SIZE + + +def encode_request(): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) + + +def handle(msg, handle_backup_event=None): + message_type = msg.get_message_type() + if message_type == _EVENT_BACKUP_MESSAGE_TYPE and handle_backup_event is not None: + initial_frame = msg.next_frame() + source_invocation_correlation_id = FixSizedTypesCodec.decode_long(initial_frame.buf, _EVENT_BACKUP_SOURCE_INVOCATION_CORRELATION_ID_OFFSET) + handle_backup_event(source_invocation_correlation_id) + return diff --git a/hazelcast/protocol/codec/client_message_type.py b/hazelcast/protocol/codec/client_message_type.py deleted file mode 100644 index c8a56a16a0..0000000000 --- a/hazelcast/protocol/codec/client_message_type.py +++ /dev/null @@ -1,15 +0,0 @@ - -CLIENT_AUTHENTICATION = 0x0002 -CLIENT_AUTHENTICATIONCUSTOM = 0x0003 -CLIENT_ADDMEMBERSHIPLISTENER = 0x0004 -CLIENT_CREATEPROXY = 0x0005 -CLIENT_DESTROYPROXY = 0x0006 -CLIENT_GETPARTITIONS = 0x0008 -CLIENT_REMOVEALLLISTENERS = 0x0009 -CLIENT_ADDPARTITIONLOSTLISTENER = 0x000a -CLIENT_REMOVEPARTITIONLOSTLISTENER = 0x000b -CLIENT_GETDISTRIBUTEDOBJECTS = 0x000c -CLIENT_ADDDISTRIBUTEDOBJECTLISTENER = 0x000d -CLIENT_REMOVEDISTRIBUTEDOBJECTLISTENER = 0x000e -CLIENT_PING = 0x000f -CLIENT_STATISTICS = 0x0010 diff --git a/hazelcast/protocol/codec/client_ping_codec.py b/hazelcast/protocol/codec/client_ping_codec.py index fcf58c8edc..ec596d4e58 100644 --- a/hazelcast/protocol/codec/client_ping_codec.py +++ b/hazelcast/protocol/codec/client_ping_codec.py @@ -1,24 +1,13 @@ -from hazelcast.protocol.client_message import ClientMessage -from 
hazelcast.protocol.codec.client_message_type import * +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer -REQUEST_TYPE = CLIENT_PING -RESPONSE_TYPE = 100 -RETRYABLE = True +# hex: 0x000B00 +_REQUEST_MESSAGE_TYPE = 2816 +# hex: 0x000B01 +_RESPONSE_MESSAGE_TYPE = 2817 - -def calculate_size(): - """ Calculates the request payload size""" - data_size = 0 - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size()) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE, True) + return OutboundMessage(buf, True) diff --git a/hazelcast/protocol/codec/client_remove_all_listeners_codec.py b/hazelcast/protocol/codec/client_remove_all_listeners_codec.py deleted file mode 100644 index 0841a3f970..0000000000 --- a/hazelcast/protocol/codec/client_remove_all_listeners_codec.py +++ /dev/null @@ -1,24 +0,0 @@ -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.client_message_type import * - -REQUEST_TYPE = CLIENT_REMOVEALLLISTENERS -RESPONSE_TYPE = 100 -RETRYABLE = False - - -def calculate_size(): - """ Calculates the request payload size""" - data_size = 0 - return data_size - - -def encode_request(): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size()) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode diff --git a/hazelcast/protocol/codec/client_remove_distributed_object_listener_codec.py b/hazelcast/protocol/codec/client_remove_distributed_object_listener_codec.py index c78e6c391d..ec60c3cb30 100644 --- a/hazelcast/protocol/codec/client_remove_distributed_object_listener_codec.py +++ b/hazelcast/protocol/codec/client_remove_distributed_object_listener_codec.py @@ -1,31 +1,23 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.client_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE -REQUEST_TYPE = CLIENT_REMOVEDISTRIBUTEDOBJECTLISTENER -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x000A00 +_REQUEST_MESSAGE_TYPE = 2560 +# hex: 0x000A01 +_RESPONSE_MESSAGE_TYPE = 2561 - -def calculate_size(registration_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(registration_id) - return data_size +_REQUEST_REGISTRATION_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_REGISTRATION_ID_OFFSET + UUID_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(registration_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(registration_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(registration_id) - 
client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE, True) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_REGISTRATION_ID_OFFSET, registration_id) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/client_remove_partition_lost_listener_codec.py b/hazelcast/protocol/codec/client_remove_partition_lost_listener_codec.py index aa012649b3..4a783d24d4 100644 --- a/hazelcast/protocol/codec/client_remove_partition_lost_listener_codec.py +++ b/hazelcast/protocol/codec/client_remove_partition_lost_listener_codec.py @@ -1,31 +1,23 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.client_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE -REQUEST_TYPE = CLIENT_REMOVEPARTITIONLOSTLISTENER -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x000700 +_REQUEST_MESSAGE_TYPE = 1792 +# hex: 0x000701 +_RESPONSE_MESSAGE_TYPE = 1793 - -def calculate_size(registration_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(registration_id) - return data_size +_REQUEST_REGISTRATION_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_REGISTRATION_ID_OFFSET + UUID_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(registration_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(registration_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(registration_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE, True) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_REGISTRATION_ID_OFFSET, registration_id) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/client_statistics_codec.py b/hazelcast/protocol/codec/client_statistics_codec.py index bcbb3ff0f2..0f39fc4a5e 100644 --- a/hazelcast/protocol/codec/client_statistics_codec.py +++ b/hazelcast/protocol/codec/client_statistics_codec.py @@ -1,27 +1,21 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.client_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ByteArrayCodec 
-REQUEST_TYPE = CLIENT_STATISTICS -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x000C00 +_REQUEST_MESSAGE_TYPE = 3072 +# hex: 0x000C01 +_RESPONSE_MESSAGE_TYPE = 3073 +_REQUEST_TIMESTAMP_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_TIMESTAMP_OFFSET + LONG_SIZE_IN_BYTES -def calculate_size(stats): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(stats) - return data_size - -def encode_request(stats): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(stats)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(stats) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode +def encode_request(timestamp, client_attributes, metrics_blob): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TIMESTAMP_OFFSET, timestamp) + StringCodec.encode(buf, client_attributes) + ByteArrayCodec.encode(buf, metrics_blob, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/client_trigger_partition_assignment_codec.py b/hazelcast/protocol/codec/client_trigger_partition_assignment_codec.py new file mode 100644 index 0000000000..629ad67f67 --- /dev/null +++ b/hazelcast/protocol/codec/client_trigger_partition_assignment_codec.py @@ -0,0 +1,13 @@ +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer + +# hex: 0x001000 +_REQUEST_MESSAGE_TYPE = 4096 +# hex: 0x001001 +_RESPONSE_MESSAGE_TYPE = 4097 + +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE + + +def encode_request(): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE, True) + return OutboundMessage(buf, True) diff --git a/hazelcast/protocol/codec/count_down_latch_await_codec.py b/hazelcast/protocol/codec/count_down_latch_await_codec.py deleted file mode 100644 index 338dc2561f..0000000000 --- a/hazelcast/protocol/codec/count_down_latch_await_codec.py +++ /dev/null @@ -1,33 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.count_down_latch_message_type import * - -REQUEST_TYPE = COUNTDOWNLATCH_AWAIT -RESPONSE_TYPE = 101 -RETRYABLE = False - - -def calculate_size(name, timeout): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += LONG_SIZE_IN_BYTES - return data_size - - -def encode_request(name, timeout): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, timeout)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_long(timeout) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters diff --git a/hazelcast/protocol/codec/count_down_latch_count_down_codec.py b/hazelcast/protocol/codec/count_down_latch_count_down_codec.py deleted file mode 100644 index f8cd1c852c..0000000000 --- a/hazelcast/protocol/codec/count_down_latch_count_down_codec.py +++ /dev/null @@ -1,27 +0,0 @@ 
-from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.count_down_latch_message_type import * - -REQUEST_TYPE = COUNTDOWNLATCH_COUNTDOWN -RESPONSE_TYPE = 100 -RETRYABLE = False - - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size - - -def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode diff --git a/hazelcast/protocol/codec/count_down_latch_get_count_codec.py b/hazelcast/protocol/codec/count_down_latch_get_count_codec.py deleted file mode 100644 index 1a9ed04311..0000000000 --- a/hazelcast/protocol/codec/count_down_latch_get_count_codec.py +++ /dev/null @@ -1,31 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.count_down_latch_message_type import * - -REQUEST_TYPE = COUNTDOWNLATCH_GETCOUNT -RESPONSE_TYPE = 102 -RETRYABLE = True - - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size - - -def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_int() - return parameters diff --git a/hazelcast/protocol/codec/count_down_latch_message_type.py b/hazelcast/protocol/codec/count_down_latch_message_type.py deleted file mode 100644 index 6460efd1f8..0000000000 --- a/hazelcast/protocol/codec/count_down_latch_message_type.py +++ /dev/null @@ -1,5 +0,0 @@ - -COUNTDOWNLATCH_AWAIT = 0x0c01 -COUNTDOWNLATCH_COUNTDOWN = 0x0c02 -COUNTDOWNLATCH_GETCOUNT = 0x0c03 -COUNTDOWNLATCH_TRYSETCOUNT = 0x0c04 diff --git a/hazelcast/protocol/codec/count_down_latch_try_set_count_codec.py b/hazelcast/protocol/codec/count_down_latch_try_set_count_codec.py deleted file mode 100644 index 4bcc2902f6..0000000000 --- a/hazelcast/protocol/codec/count_down_latch_try_set_count_codec.py +++ /dev/null @@ -1,33 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.count_down_latch_message_type import * - -REQUEST_TYPE = COUNTDOWNLATCH_TRYSETCOUNT -RESPONSE_TYPE = 101 -RETRYABLE = False - - -def calculate_size(name, count): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - return data_size - - -def encode_request(name, count): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, count)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(count) - 
client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters diff --git a/benchmarks/__init__.py b/hazelcast/protocol/codec/custom/__init__.py similarity index 100% rename from benchmarks/__init__.py rename to hazelcast/protocol/codec/custom/__init__.py diff --git a/hazelcast/protocol/codec/custom/address_codec.py b/hazelcast/protocol/codec/custom/address_codec.py new file mode 100644 index 0000000000..b7e2710bf1 --- /dev/null +++ b/hazelcast/protocol/codec/custom/address_codec.py @@ -0,0 +1,31 @@ +from hazelcast.protocol.builtin import FixSizedTypesCodec, CodecUtil +from hazelcast.serialization.bits import * +from hazelcast.protocol.client_message import END_FRAME_BUF, END_FINAL_FRAME_BUF, SIZE_OF_FRAME_LENGTH_AND_FLAGS, create_initial_buffer_custom +from hazelcast.core import Address +from hazelcast.protocol.builtin import StringCodec + +_PORT_ENCODE_OFFSET = 2 * SIZE_OF_FRAME_LENGTH_AND_FLAGS +_PORT_DECODE_OFFSET = 0 +_INITIAL_FRAME_SIZE = _PORT_ENCODE_OFFSET + INT_SIZE_IN_BYTES - 2 * SIZE_OF_FRAME_LENGTH_AND_FLAGS + + +class AddressCodec(object): + @staticmethod + def encode(buf, address, is_final=False): + initial_frame_buf = create_initial_buffer_custom(_INITIAL_FRAME_SIZE) + FixSizedTypesCodec.encode_int(initial_frame_buf, _PORT_ENCODE_OFFSET, address.port) + buf.extend(initial_frame_buf) + StringCodec.encode(buf, address.host) + if is_final: + buf.extend(END_FINAL_FRAME_BUF) + else: + buf.extend(END_FRAME_BUF) + + @staticmethod + def decode(msg): + msg.next_frame() + initial_frame = msg.next_frame() + port = FixSizedTypesCodec.decode_int(initial_frame.buf, _PORT_DECODE_OFFSET) + host = StringCodec.decode(msg) + CodecUtil.fast_forward_to_end_frame(msg) + return Address(host, port) diff --git a/hazelcast/protocol/codec/custom/bitmap_index_options_codec.py b/hazelcast/protocol/codec/custom/bitmap_index_options_codec.py new file mode 100644 index 0000000000..d356553d6b --- /dev/null +++ b/hazelcast/protocol/codec/custom/bitmap_index_options_codec.py @@ -0,0 +1,31 @@ +from hazelcast.protocol.builtin import FixSizedTypesCodec, CodecUtil +from hazelcast.serialization.bits import * +from hazelcast.protocol.client_message import END_FRAME_BUF, END_FINAL_FRAME_BUF, SIZE_OF_FRAME_LENGTH_AND_FLAGS, create_initial_buffer_custom +from hazelcast.config import BitmapIndexOptions +from hazelcast.protocol.builtin import StringCodec + +_UNIQUE_KEY_TRANSFORMATION_ENCODE_OFFSET = 2 * SIZE_OF_FRAME_LENGTH_AND_FLAGS +_UNIQUE_KEY_TRANSFORMATION_DECODE_OFFSET = 0 +_INITIAL_FRAME_SIZE = _UNIQUE_KEY_TRANSFORMATION_ENCODE_OFFSET + INT_SIZE_IN_BYTES - 2 * SIZE_OF_FRAME_LENGTH_AND_FLAGS + + +class BitmapIndexOptionsCodec(object): + @staticmethod + def encode(buf, bitmap_index_options, is_final=False): + initial_frame_buf = create_initial_buffer_custom(_INITIAL_FRAME_SIZE) + FixSizedTypesCodec.encode_int(initial_frame_buf, _UNIQUE_KEY_TRANSFORMATION_ENCODE_OFFSET, bitmap_index_options.unique_key_transformation) + buf.extend(initial_frame_buf) + StringCodec.encode(buf, bitmap_index_options.unique_key) + if is_final: + buf.extend(END_FINAL_FRAME_BUF) + else: + buf.extend(END_FRAME_BUF) + + @staticmethod + def decode(msg): + msg.next_frame() + initial_frame = msg.next_frame() + unique_key_transformation = FixSizedTypesCodec.decode_int(initial_frame.buf, _UNIQUE_KEY_TRANSFORMATION_DECODE_OFFSET) + unique_key 
= StringCodec.decode(msg) + CodecUtil.fast_forward_to_end_frame(msg) + return BitmapIndexOptions(unique_key, unique_key_transformation) diff --git a/hazelcast/protocol/codec/custom/distributed_object_info_codec.py b/hazelcast/protocol/codec/custom/distributed_object_info_codec.py new file mode 100644 index 0000000000..6f7a7e27f7 --- /dev/null +++ b/hazelcast/protocol/codec/custom/distributed_object_info_codec.py @@ -0,0 +1,24 @@ +from hazelcast.protocol.builtin import CodecUtil +from hazelcast.protocol.client_message import END_FRAME_BUF, END_FINAL_FRAME_BUF, BEGIN_FRAME_BUF +from hazelcast.protocol.builtin import StringCodec +from hazelcast.core import DistributedObjectInfo + + +class DistributedObjectInfoCodec(object): + @staticmethod + def encode(buf, distributed_object_info, is_final=False): + buf.extend(BEGIN_FRAME_BUF) + StringCodec.encode(buf, distributed_object_info.service_name) + StringCodec.encode(buf, distributed_object_info.name) + if is_final: + buf.extend(END_FINAL_FRAME_BUF) + else: + buf.extend(END_FRAME_BUF) + + @staticmethod + def decode(msg): + msg.next_frame() + service_name = StringCodec.decode(msg) + name = StringCodec.decode(msg) + CodecUtil.fast_forward_to_end_frame(msg) + return DistributedObjectInfo(service_name, name) diff --git a/hazelcast/protocol/codec/custom/endpoint_qualifier_codec.py b/hazelcast/protocol/codec/custom/endpoint_qualifier_codec.py new file mode 100644 index 0000000000..591385da9d --- /dev/null +++ b/hazelcast/protocol/codec/custom/endpoint_qualifier_codec.py @@ -0,0 +1,31 @@ +from hazelcast.protocol.builtin import FixSizedTypesCodec, CodecUtil +from hazelcast.serialization.bits import * +from hazelcast.protocol.client_message import END_FRAME_BUF, END_FINAL_FRAME_BUF, SIZE_OF_FRAME_LENGTH_AND_FLAGS, create_initial_buffer_custom +from hazelcast.protocol import EndpointQualifier +from hazelcast.protocol.builtin import StringCodec + +_TYPE_ENCODE_OFFSET = 2 * SIZE_OF_FRAME_LENGTH_AND_FLAGS +_TYPE_DECODE_OFFSET = 0 +_INITIAL_FRAME_SIZE = _TYPE_ENCODE_OFFSET + INT_SIZE_IN_BYTES - 2 * SIZE_OF_FRAME_LENGTH_AND_FLAGS + + +class EndpointQualifierCodec(object): + @staticmethod + def encode(buf, endpoint_qualifier, is_final=False): + initial_frame_buf = create_initial_buffer_custom(_INITIAL_FRAME_SIZE) + FixSizedTypesCodec.encode_int(initial_frame_buf, _TYPE_ENCODE_OFFSET, endpoint_qualifier.type) + buf.extend(initial_frame_buf) + CodecUtil.encode_nullable(buf, endpoint_qualifier.identifier, StringCodec.encode) + if is_final: + buf.extend(END_FINAL_FRAME_BUF) + else: + buf.extend(END_FRAME_BUF) + + @staticmethod + def decode(msg): + msg.next_frame() + initial_frame = msg.next_frame() + type = FixSizedTypesCodec.decode_int(initial_frame.buf, _TYPE_DECODE_OFFSET) + identifier = CodecUtil.decode_nullable(msg, StringCodec.decode) + CodecUtil.fast_forward_to_end_frame(msg) + return EndpointQualifier(type, identifier) diff --git a/hazelcast/protocol/codec/custom/error_holder_codec.py b/hazelcast/protocol/codec/custom/error_holder_codec.py new file mode 100644 index 0000000000..c3aeefd410 --- /dev/null +++ b/hazelcast/protocol/codec/custom/error_holder_codec.py @@ -0,0 +1,37 @@ +from hazelcast.protocol.builtin import FixSizedTypesCodec, CodecUtil +from hazelcast.serialization.bits import * +from hazelcast.protocol.client_message import END_FRAME_BUF, END_FINAL_FRAME_BUF, SIZE_OF_FRAME_LENGTH_AND_FLAGS, create_initial_buffer_custom +from hazelcast.protocol import ErrorHolder +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin 
import ListMultiFrameCodec
+from hazelcast.protocol.codec.custom.stack_trace_element_codec import StackTraceElementCodec
+
+_ERROR_CODE_ENCODE_OFFSET = 2 * SIZE_OF_FRAME_LENGTH_AND_FLAGS
+_ERROR_CODE_DECODE_OFFSET = 0
+_INITIAL_FRAME_SIZE = _ERROR_CODE_ENCODE_OFFSET + INT_SIZE_IN_BYTES - 2 * SIZE_OF_FRAME_LENGTH_AND_FLAGS
+
+
+class ErrorHolderCodec(object):
+    @staticmethod
+    def encode(buf, error_holder, is_final=False):
+        initial_frame_buf = create_initial_buffer_custom(_INITIAL_FRAME_SIZE)
+        FixSizedTypesCodec.encode_int(initial_frame_buf, _ERROR_CODE_ENCODE_OFFSET, error_holder.error_code)
+        buf.extend(initial_frame_buf)
+        StringCodec.encode(buf, error_holder.class_name)
+        CodecUtil.encode_nullable(buf, error_holder.message, StringCodec.encode)
+        ListMultiFrameCodec.encode(buf, error_holder.stack_trace_elements, StackTraceElementCodec.encode)
+        if is_final:
+            buf.extend(END_FINAL_FRAME_BUF)
+        else:
+            buf.extend(END_FRAME_BUF)
+
+    @staticmethod
+    def decode(msg):
+        msg.next_frame()
+        initial_frame = msg.next_frame()
+        error_code = FixSizedTypesCodec.decode_int(initial_frame.buf, _ERROR_CODE_DECODE_OFFSET)
+        class_name = StringCodec.decode(msg)
+        message = CodecUtil.decode_nullable(msg, StringCodec.decode)
+        stack_trace_elements = ListMultiFrameCodec.decode(msg, StackTraceElementCodec.decode)
+        CodecUtil.fast_forward_to_end_frame(msg)
+        return ErrorHolder(error_code, class_name, message, stack_trace_elements)
diff --git a/hazelcast/protocol/codec/custom/index_config_codec.py b/hazelcast/protocol/codec/custom/index_config_codec.py
new file mode 100644
index 0000000000..fe4d9cdff4
--- /dev/null
+++ b/hazelcast/protocol/codec/custom/index_config_codec.py
@@ -0,0 +1,37 @@
+from hazelcast.protocol.builtin import FixSizedTypesCodec, CodecUtil
+from hazelcast.serialization.bits import *
+from hazelcast.protocol.client_message import END_FRAME_BUF, END_FINAL_FRAME_BUF, SIZE_OF_FRAME_LENGTH_AND_FLAGS, create_initial_buffer_custom
+from hazelcast.config import IndexConfig
+from hazelcast.protocol.builtin import StringCodec
+from hazelcast.protocol.builtin import ListMultiFrameCodec
+from hazelcast.protocol.codec.custom.bitmap_index_options_codec import BitmapIndexOptionsCodec
+
+_TYPE_ENCODE_OFFSET = 2 * SIZE_OF_FRAME_LENGTH_AND_FLAGS
+_TYPE_DECODE_OFFSET = 0
+_INITIAL_FRAME_SIZE = _TYPE_ENCODE_OFFSET + INT_SIZE_IN_BYTES - 2 * SIZE_OF_FRAME_LENGTH_AND_FLAGS
+
+
+class IndexConfigCodec(object):
+    @staticmethod
+    def encode(buf, index_config, is_final=False):
+        initial_frame_buf = create_initial_buffer_custom(_INITIAL_FRAME_SIZE)
+        FixSizedTypesCodec.encode_int(initial_frame_buf, _TYPE_ENCODE_OFFSET, index_config.type)
+        buf.extend(initial_frame_buf)
+        CodecUtil.encode_nullable(buf, index_config.name, StringCodec.encode)
+        ListMultiFrameCodec.encode(buf, index_config.attributes, StringCodec.encode)
+        CodecUtil.encode_nullable(buf, index_config.bitmap_index_options, BitmapIndexOptionsCodec.encode)
+        if is_final:
+            buf.extend(END_FINAL_FRAME_BUF)
+        else:
+            buf.extend(END_FRAME_BUF)
+
+    @staticmethod
+    def decode(msg):
+        msg.next_frame()
+        initial_frame = msg.next_frame()
+        type = FixSizedTypesCodec.decode_int(initial_frame.buf, _TYPE_DECODE_OFFSET)
+        name = CodecUtil.decode_nullable(msg, StringCodec.decode)
+        attributes = ListMultiFrameCodec.decode(msg, StringCodec.decode)
+        bitmap_index_options = CodecUtil.decode_nullable(msg, BitmapIndexOptionsCodec.decode)
+        CodecUtil.fast_forward_to_end_frame(msg)
+        return IndexConfig(name, type, attributes, bitmap_index_options)
diff --git a/hazelcast/protocol/codec/custom/member_info_codec.py b/hazelcast/protocol/codec/custom/member_info_codec.py
new file mode 100644
index 0000000000..3777ef5ead
--- /dev/null
+++ b/hazelcast/protocol/codec/custom/member_info_codec.py
@@ -0,0 +1,49 @@
+from hazelcast.protocol.builtin import FixSizedTypesCodec, CodecUtil
+from hazelcast.serialization.bits import *
+from hazelcast.protocol.client_message import END_FRAME_BUF, END_FINAL_FRAME_BUF, SIZE_OF_FRAME_LENGTH_AND_FLAGS, create_initial_buffer_custom
+from hazelcast.core import MemberInfo
+from hazelcast.protocol.codec.custom.address_codec import AddressCodec
+from hazelcast.protocol.builtin import MapCodec
+from hazelcast.protocol.builtin import StringCodec
+from hazelcast.protocol.codec.custom.member_version_codec import MemberVersionCodec
+from hazelcast.protocol.codec.custom.endpoint_qualifier_codec import EndpointQualifierCodec
+
+_UUID_ENCODE_OFFSET = 2 * SIZE_OF_FRAME_LENGTH_AND_FLAGS
+_UUID_DECODE_OFFSET = 0
+_LITE_MEMBER_ENCODE_OFFSET = _UUID_ENCODE_OFFSET + UUID_SIZE_IN_BYTES
+_LITE_MEMBER_DECODE_OFFSET = _UUID_DECODE_OFFSET + UUID_SIZE_IN_BYTES
+_INITIAL_FRAME_SIZE = _LITE_MEMBER_ENCODE_OFFSET + BOOLEAN_SIZE_IN_BYTES - 2 * SIZE_OF_FRAME_LENGTH_AND_FLAGS
+
+
+class MemberInfoCodec(object):
+    @staticmethod
+    def encode(buf, member_info, is_final=False):
+        initial_frame_buf = create_initial_buffer_custom(_INITIAL_FRAME_SIZE)
+        FixSizedTypesCodec.encode_uuid(initial_frame_buf, _UUID_ENCODE_OFFSET, member_info.uuid)
+        FixSizedTypesCodec.encode_boolean(initial_frame_buf, _LITE_MEMBER_ENCODE_OFFSET, member_info.lite_member)
+        buf.extend(initial_frame_buf)
+        AddressCodec.encode(buf, member_info.address)
+        MapCodec.encode(buf, member_info.attributes, StringCodec.encode, StringCodec.encode)
+        MemberVersionCodec.encode(buf, member_info.version)
+        MapCodec.encode(buf, member_info.address_map, EndpointQualifierCodec.encode, AddressCodec.encode)
+        if is_final:
+            buf.extend(END_FINAL_FRAME_BUF)
+        else:
+            buf.extend(END_FRAME_BUF)
+
+    @staticmethod
+    def decode(msg):
+        msg.next_frame()
+        initial_frame = msg.next_frame()
+        uuid = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _UUID_DECODE_OFFSET)
+        lite_member = FixSizedTypesCodec.decode_boolean(initial_frame.buf, _LITE_MEMBER_DECODE_OFFSET)
+        address = AddressCodec.decode(msg)
+        attributes = MapCodec.decode(msg, StringCodec.decode, StringCodec.decode)
+        version = MemberVersionCodec.decode(msg)
+        is_address_map_exists = False
+        address_map = None
+        if not msg.peek_next_frame().is_end_frame():
+            address_map = MapCodec.decode(msg, EndpointQualifierCodec.decode, AddressCodec.decode)
+            is_address_map_exists = True
+        CodecUtil.fast_forward_to_end_frame(msg)
+        return MemberInfo(address, uuid, attributes, lite_member, version, is_address_map_exists, address_map)
diff --git a/hazelcast/protocol/codec/custom/member_version_codec.py b/hazelcast/protocol/codec/custom/member_version_codec.py
new file mode 100644
index 0000000000..6f45ca6f22
--- /dev/null
+++ b/hazelcast/protocol/codec/custom/member_version_codec.py
@@ -0,0 +1,36 @@
+from hazelcast.protocol.builtin import FixSizedTypesCodec, CodecUtil
+from hazelcast.serialization.bits import *
+from hazelcast.protocol.client_message import END_FRAME_BUF, END_FINAL_FRAME_BUF, SIZE_OF_FRAME_LENGTH_AND_FLAGS, create_initial_buffer_custom
+from hazelcast.core import MemberVersion
+
+_MAJOR_ENCODE_OFFSET = 2 * SIZE_OF_FRAME_LENGTH_AND_FLAGS
+_MAJOR_DECODE_OFFSET = 0
+_MINOR_ENCODE_OFFSET = _MAJOR_ENCODE_OFFSET + BYTE_SIZE_IN_BYTES
+_MINOR_DECODE_OFFSET = _MAJOR_DECODE_OFFSET + BYTE_SIZE_IN_BYTES
+_PATCH_ENCODE_OFFSET = _MINOR_ENCODE_OFFSET + BYTE_SIZE_IN_BYTES
+_PATCH_DECODE_OFFSET = _MINOR_DECODE_OFFSET + BYTE_SIZE_IN_BYTES
+_INITIAL_FRAME_SIZE = _PATCH_ENCODE_OFFSET + BYTE_SIZE_IN_BYTES - 2 * SIZE_OF_FRAME_LENGTH_AND_FLAGS
+
+
+class MemberVersionCodec(object):
+    @staticmethod
+    def encode(buf, member_version, is_final=False):
+        initial_frame_buf = create_initial_buffer_custom(_INITIAL_FRAME_SIZE)
+        FixSizedTypesCodec.encode_byte(initial_frame_buf, _MAJOR_ENCODE_OFFSET, member_version.major)
+        FixSizedTypesCodec.encode_byte(initial_frame_buf, _MINOR_ENCODE_OFFSET, member_version.minor)
+        FixSizedTypesCodec.encode_byte(initial_frame_buf, _PATCH_ENCODE_OFFSET, member_version.patch)
+        buf.extend(initial_frame_buf)
+        if is_final:
+            buf.extend(END_FINAL_FRAME_BUF)
+        else:
+            buf.extend(END_FRAME_BUF)
+
+    @staticmethod
+    def decode(msg):
+        msg.next_frame()
+        initial_frame = msg.next_frame()
+        major = FixSizedTypesCodec.decode_byte(initial_frame.buf, _MAJOR_DECODE_OFFSET)
+        minor = FixSizedTypesCodec.decode_byte(initial_frame.buf, _MINOR_DECODE_OFFSET)
+        patch = FixSizedTypesCodec.decode_byte(initial_frame.buf, _PATCH_DECODE_OFFSET)
+        CodecUtil.fast_forward_to_end_frame(msg)
+        return MemberVersion(major, minor, patch)
diff --git a/hazelcast/protocol/codec/custom/simple_entry_view_codec.py b/hazelcast/protocol/codec/custom/simple_entry_view_codec.py
new file mode 100644
index 0000000000..8174330941
--- /dev/null
+++ b/hazelcast/protocol/codec/custom/simple_entry_view_codec.py
@@ -0,0 +1,69 @@
+from hazelcast.protocol.builtin import FixSizedTypesCodec, CodecUtil
+from hazelcast.serialization.bits import *
+from hazelcast.protocol.client_message import END_FRAME_BUF, END_FINAL_FRAME_BUF, SIZE_OF_FRAME_LENGTH_AND_FLAGS, create_initial_buffer_custom
+from hazelcast.core import SimpleEntryView
+from hazelcast.protocol.builtin import DataCodec
+
+_COST_ENCODE_OFFSET = 2 * SIZE_OF_FRAME_LENGTH_AND_FLAGS
+_COST_DECODE_OFFSET = 0
+_CREATION_TIME_ENCODE_OFFSET = _COST_ENCODE_OFFSET + LONG_SIZE_IN_BYTES
+_CREATION_TIME_DECODE_OFFSET = _COST_DECODE_OFFSET + LONG_SIZE_IN_BYTES
+_EXPIRATION_TIME_ENCODE_OFFSET = _CREATION_TIME_ENCODE_OFFSET + LONG_SIZE_IN_BYTES
+_EXPIRATION_TIME_DECODE_OFFSET = _CREATION_TIME_DECODE_OFFSET + LONG_SIZE_IN_BYTES
+_HITS_ENCODE_OFFSET = _EXPIRATION_TIME_ENCODE_OFFSET + LONG_SIZE_IN_BYTES
+_HITS_DECODE_OFFSET = _EXPIRATION_TIME_DECODE_OFFSET + LONG_SIZE_IN_BYTES
+_LAST_ACCESS_TIME_ENCODE_OFFSET = _HITS_ENCODE_OFFSET + LONG_SIZE_IN_BYTES
+_LAST_ACCESS_TIME_DECODE_OFFSET = _HITS_DECODE_OFFSET + LONG_SIZE_IN_BYTES
+_LAST_STORED_TIME_ENCODE_OFFSET = _LAST_ACCESS_TIME_ENCODE_OFFSET + LONG_SIZE_IN_BYTES
+_LAST_STORED_TIME_DECODE_OFFSET = _LAST_ACCESS_TIME_DECODE_OFFSET + LONG_SIZE_IN_BYTES
+_LAST_UPDATE_TIME_ENCODE_OFFSET = _LAST_STORED_TIME_ENCODE_OFFSET + LONG_SIZE_IN_BYTES
+_LAST_UPDATE_TIME_DECODE_OFFSET = _LAST_STORED_TIME_DECODE_OFFSET + LONG_SIZE_IN_BYTES
+_VERSION_ENCODE_OFFSET = _LAST_UPDATE_TIME_ENCODE_OFFSET + LONG_SIZE_IN_BYTES
+_VERSION_DECODE_OFFSET = _LAST_UPDATE_TIME_DECODE_OFFSET + LONG_SIZE_IN_BYTES
+_TTL_ENCODE_OFFSET = _VERSION_ENCODE_OFFSET + LONG_SIZE_IN_BYTES
+_TTL_DECODE_OFFSET = _VERSION_DECODE_OFFSET + LONG_SIZE_IN_BYTES
+_MAX_IDLE_ENCODE_OFFSET = _TTL_ENCODE_OFFSET + LONG_SIZE_IN_BYTES
+_MAX_IDLE_DECODE_OFFSET = _TTL_DECODE_OFFSET + LONG_SIZE_IN_BYTES
+_INITIAL_FRAME_SIZE = _MAX_IDLE_ENCODE_OFFSET + LONG_SIZE_IN_BYTES - 2 * SIZE_OF_FRAME_LENGTH_AND_FLAGS
+
+
+class SimpleEntryViewCodec(object):
+    @staticmethod
+    def encode(buf, simple_entry_view, is_final=False):
+        initial_frame_buf = create_initial_buffer_custom(_INITIAL_FRAME_SIZE)
+        FixSizedTypesCodec.encode_long(initial_frame_buf, _COST_ENCODE_OFFSET, simple_entry_view.cost)
+        FixSizedTypesCodec.encode_long(initial_frame_buf, _CREATION_TIME_ENCODE_OFFSET, simple_entry_view.creation_time)
+        FixSizedTypesCodec.encode_long(initial_frame_buf, _EXPIRATION_TIME_ENCODE_OFFSET, simple_entry_view.expiration_time)
+        FixSizedTypesCodec.encode_long(initial_frame_buf, _HITS_ENCODE_OFFSET, simple_entry_view.hits)
+        FixSizedTypesCodec.encode_long(initial_frame_buf, _LAST_ACCESS_TIME_ENCODE_OFFSET, simple_entry_view.last_access_time)
+        FixSizedTypesCodec.encode_long(initial_frame_buf, _LAST_STORED_TIME_ENCODE_OFFSET, simple_entry_view.last_stored_time)
+        FixSizedTypesCodec.encode_long(initial_frame_buf, _LAST_UPDATE_TIME_ENCODE_OFFSET, simple_entry_view.last_update_time)
+        FixSizedTypesCodec.encode_long(initial_frame_buf, _VERSION_ENCODE_OFFSET, simple_entry_view.version)
+        FixSizedTypesCodec.encode_long(initial_frame_buf, _TTL_ENCODE_OFFSET, simple_entry_view.ttl)
+        FixSizedTypesCodec.encode_long(initial_frame_buf, _MAX_IDLE_ENCODE_OFFSET, simple_entry_view.max_idle)
+        buf.extend(initial_frame_buf)
+        DataCodec.encode(buf, simple_entry_view.key)
+        DataCodec.encode(buf, simple_entry_view.value)
+        if is_final:
+            buf.extend(END_FINAL_FRAME_BUF)
+        else:
+            buf.extend(END_FRAME_BUF)
+
+    @staticmethod
+    def decode(msg):
+        msg.next_frame()
+        initial_frame = msg.next_frame()
+        cost = FixSizedTypesCodec.decode_long(initial_frame.buf, _COST_DECODE_OFFSET)
+        creation_time = FixSizedTypesCodec.decode_long(initial_frame.buf, _CREATION_TIME_DECODE_OFFSET)
+        expiration_time = FixSizedTypesCodec.decode_long(initial_frame.buf, _EXPIRATION_TIME_DECODE_OFFSET)
+        hits = FixSizedTypesCodec.decode_long(initial_frame.buf, _HITS_DECODE_OFFSET)
+        last_access_time = FixSizedTypesCodec.decode_long(initial_frame.buf, _LAST_ACCESS_TIME_DECODE_OFFSET)
+        last_stored_time = FixSizedTypesCodec.decode_long(initial_frame.buf, _LAST_STORED_TIME_DECODE_OFFSET)
+        last_update_time = FixSizedTypesCodec.decode_long(initial_frame.buf, _LAST_UPDATE_TIME_DECODE_OFFSET)
+        version = FixSizedTypesCodec.decode_long(initial_frame.buf, _VERSION_DECODE_OFFSET)
+        ttl = FixSizedTypesCodec.decode_long(initial_frame.buf, _TTL_DECODE_OFFSET)
+        max_idle = FixSizedTypesCodec.decode_long(initial_frame.buf, _MAX_IDLE_DECODE_OFFSET)
+        key = DataCodec.decode(msg)
+        value = DataCodec.decode(msg)
+        CodecUtil.fast_forward_to_end_frame(msg)
+        return SimpleEntryView(key, value, cost, creation_time, expiration_time, hits, last_access_time, last_stored_time, last_update_time, version, ttl, max_idle)
diff --git a/hazelcast/protocol/codec/custom/stack_trace_element_codec.py b/hazelcast/protocol/codec/custom/stack_trace_element_codec.py
new file mode 100644
index 0000000000..fd72863637
--- /dev/null
+++ b/hazelcast/protocol/codec/custom/stack_trace_element_codec.py
@@ -0,0 +1,35 @@
+from hazelcast.protocol.builtin import FixSizedTypesCodec, CodecUtil
+from hazelcast.serialization.bits import *
+from hazelcast.protocol.client_message import END_FRAME_BUF, END_FINAL_FRAME_BUF, SIZE_OF_FRAME_LENGTH_AND_FLAGS, create_initial_buffer_custom
+from hazelcast.protocol import StackTraceElement
+from hazelcast.protocol.builtin import StringCodec
+
+_LINE_NUMBER_ENCODE_OFFSET = 2 * SIZE_OF_FRAME_LENGTH_AND_FLAGS
+_LINE_NUMBER_DECODE_OFFSET = 0
+_INITIAL_FRAME_SIZE = _LINE_NUMBER_ENCODE_OFFSET + INT_SIZE_IN_BYTES - 2 * SIZE_OF_FRAME_LENGTH_AND_FLAGS
+
+
+class StackTraceElementCodec(object):
+    @staticmethod
+    def encode(buf, stack_trace_element, is_final=False):
+        initial_frame_buf = create_initial_buffer_custom(_INITIAL_FRAME_SIZE)
+        FixSizedTypesCodec.encode_int(initial_frame_buf, _LINE_NUMBER_ENCODE_OFFSET, stack_trace_element.line_number)
+        buf.extend(initial_frame_buf)
+        StringCodec.encode(buf, stack_trace_element.class_name)
+        StringCodec.encode(buf, stack_trace_element.method_name)
+        CodecUtil.encode_nullable(buf, stack_trace_element.file_name, StringCodec.encode)
+        if is_final:
+            buf.extend(END_FINAL_FRAME_BUF)
+        else:
+            buf.extend(END_FRAME_BUF)
+
+    @staticmethod
+    def decode(msg):
+        msg.next_frame()
+        initial_frame = msg.next_frame()
+        line_number = FixSizedTypesCodec.decode_int(initial_frame.buf, _LINE_NUMBER_DECODE_OFFSET)
+        class_name = StringCodec.decode(msg)
+        method_name = StringCodec.decode(msg)
+        file_name = CodecUtil.decode_nullable(msg, StringCodec.decode)
+        CodecUtil.fast_forward_to_end_frame(msg)
+        return StackTraceElement(class_name, method_name, file_name, line_number)
diff --git a/hazelcast/protocol/codec/executor_service_cancel_on_address_codec.py b/hazelcast/protocol/codec/executor_service_cancel_on_address_codec.py
deleted file mode 100644
index a8c32383d6..0000000000
--- a/hazelcast/protocol/codec/executor_service_cancel_on_address_codec.py
+++ /dev/null
@@ -1,36 +0,0 @@
-from hazelcast.serialization.bits import *
-from hazelcast.protocol.client_message import ClientMessage
-from hazelcast.protocol.custom_codec import *
-from hazelcast.protocol.codec.executor_service_message_type import *
-
-REQUEST_TYPE = EXECUTORSERVICE_CANCELONADDRESS
-RESPONSE_TYPE = 101
-RETRYABLE = False
-
-
-def calculate_size(uuid, address, interrupt):
-    """ Calculates the request payload size"""
-    data_size = 0
-    data_size += calculate_size_str(uuid)
-    data_size += calculate_size_address(address)
-    data_size += BOOLEAN_SIZE_IN_BYTES
-    return data_size
-
-
-def encode_request(uuid, address, interrupt):
-    """ Encode request into client_message"""
-    client_message = ClientMessage(payload_size=calculate_size(uuid, address, interrupt))
-    client_message.set_message_type(REQUEST_TYPE)
-    client_message.set_retryable(RETRYABLE)
-    client_message.append_str(uuid)
-    AddressCodec.encode(client_message, address)
-    client_message.append_bool(interrupt)
-    client_message.update_frame_length()
-    return client_message
-
-
-def decode_response(client_message, to_object=None):
-    """ Decode response from client message"""
-    parameters = dict(response=None)
-    parameters['response'] = client_message.read_bool()
-    return parameters
diff --git a/hazelcast/protocol/codec/executor_service_cancel_on_member_codec.py b/hazelcast/protocol/codec/executor_service_cancel_on_member_codec.py
new file mode 100644
index 0000000000..b51bc15cfc
--- /dev/null
+++ b/hazelcast/protocol/codec/executor_service_cancel_on_member_codec.py
@@ -0,0 +1,27 @@
+from hazelcast.serialization.bits import *
+from hazelcast.protocol.builtin import FixSizedTypesCodec
+from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE
+
+# hex: 0x080400
+_REQUEST_MESSAGE_TYPE = 525312
+# hex: 0x080401
+_RESPONSE_MESSAGE_TYPE = 525313
+
+_REQUEST_UUID_OFFSET = REQUEST_HEADER_SIZE
+_REQUEST_MEMBER_UUID_OFFSET = _REQUEST_UUID_OFFSET + UUID_SIZE_IN_BYTES
+_REQUEST_INTERRUPT_OFFSET = _REQUEST_MEMBER_UUID_OFFSET + UUID_SIZE_IN_BYTES
+_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_INTERRUPT_OFFSET +
BOOLEAN_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE + + +def encode_request(uuid, member_uuid, interrupt): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE, True) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_UUID_OFFSET, uuid) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_MEMBER_UUID_OFFSET, member_uuid) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_INTERRUPT_OFFSET, interrupt) + return OutboundMessage(buf, False) + + +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/executor_service_cancel_on_partition_codec.py b/hazelcast/protocol/codec/executor_service_cancel_on_partition_codec.py index 568203a37e..db8c43daa6 100644 --- a/hazelcast/protocol/codec/executor_service_cancel_on_partition_codec.py +++ b/hazelcast/protocol/codec/executor_service_cancel_on_partition_codec.py @@ -1,35 +1,25 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.executor_service_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE -REQUEST_TYPE = EXECUTORSERVICE_CANCELONPARTITION -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x080300 +_REQUEST_MESSAGE_TYPE = 525056 +# hex: 0x080301 +_RESPONSE_MESSAGE_TYPE = 525057 +_REQUEST_UUID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INTERRUPT_OFFSET = _REQUEST_UUID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_INTERRUPT_OFFSET + BOOLEAN_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE -def calculate_size(uuid, partition_id, interrupt): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(uuid) - data_size += INT_SIZE_IN_BYTES - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size +def encode_request(uuid, interrupt): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE, True) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_UUID_OFFSET, uuid) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_INTERRUPT_OFFSET, interrupt) + return OutboundMessage(buf, False) -def encode_request(uuid, partition_id, interrupt): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(uuid, partition_id, interrupt)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(uuid) - client_message.append_int(partition_id) - client_message.append_bool(interrupt) - client_message.update_frame_length() - return client_message - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/executor_service_is_shutdown_codec.py b/hazelcast/protocol/codec/executor_service_is_shutdown_codec.py index edfd7c7b4a..49220ede8c 100644 --- a/hazelcast/protocol/codec/executor_service_is_shutdown_codec.py +++ b/hazelcast/protocol/codec/executor_service_is_shutdown_codec.py @@ -1,31 +1,22 @@ -from hazelcast.serialization.bits import * -from 
hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.executor_service_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = EXECUTORSERVICE_ISSHUTDOWN -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x080200 +_REQUEST_MESSAGE_TYPE = 524800 +# hex: 0x080201 +_RESPONSE_MESSAGE_TYPE = 524801 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/executor_service_message_type.py b/hazelcast/protocol/codec/executor_service_message_type.py deleted file mode 100644 index 8f7bef7e67..0000000000 --- a/hazelcast/protocol/codec/executor_service_message_type.py +++ /dev/null @@ -1,7 +0,0 @@ - -EXECUTORSERVICE_SHUTDOWN = 0x0901 -EXECUTORSERVICE_ISSHUTDOWN = 0x0902 -EXECUTORSERVICE_CANCELONPARTITION = 0x0903 -EXECUTORSERVICE_CANCELONADDRESS = 0x0904 -EXECUTORSERVICE_SUBMITTOPARTITION = 0x0905 -EXECUTORSERVICE_SUBMITTOADDRESS = 0x0906 diff --git a/hazelcast/protocol/codec/executor_service_shutdown_codec.py b/hazelcast/protocol/codec/executor_service_shutdown_codec.py index f26cc720ca..1e3b4c7ca9 100644 --- a/hazelcast/protocol/codec/executor_service_shutdown_codec.py +++ b/hazelcast/protocol/codec/executor_service_shutdown_codec.py @@ -1,27 +1,15 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.executor_service_message_type import * +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = EXECUTORSERVICE_SHUTDOWN -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x080100 +_REQUEST_MESSAGE_TYPE = 524544 +# hex: 0x080101 +_RESPONSE_MESSAGE_TYPE = 524545 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -# Empty 
decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/executor_service_submit_to_address_codec.py b/hazelcast/protocol/codec/executor_service_submit_to_address_codec.py deleted file mode 100644 index 9e195d7333..0000000000 --- a/hazelcast/protocol/codec/executor_service_submit_to_address_codec.py +++ /dev/null @@ -1,39 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.custom_codec import * -from hazelcast.protocol.codec.executor_service_message_type import * - -REQUEST_TYPE = EXECUTORSERVICE_SUBMITTOADDRESS -RESPONSE_TYPE = 105 -RETRYABLE = False - - -def calculate_size(name, uuid, callable, address): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(uuid) - data_size += calculate_size_data(callable) - data_size += calculate_size_address(address) - return data_size - - -def encode_request(name, uuid, callable, address): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, uuid, callable, address)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(uuid) - client_message.append_data(callable) - AddressCodec.encode(client_message, address) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters diff --git a/hazelcast/protocol/codec/executor_service_submit_to_member_codec.py b/hazelcast/protocol/codec/executor_service_submit_to_member_codec.py new file mode 100644 index 0000000000..4064bbf119 --- /dev/null +++ b/hazelcast/protocol/codec/executor_service_submit_to_member_codec.py @@ -0,0 +1,29 @@ +from hazelcast.serialization.bits import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil + +# hex: 0x080600 +_REQUEST_MESSAGE_TYPE = 525824 +# hex: 0x080601 +_RESPONSE_MESSAGE_TYPE = 525825 + +_REQUEST_UUID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_MEMBER_UUID_OFFSET = _REQUEST_UUID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_MEMBER_UUID_OFFSET + UUID_SIZE_IN_BYTES + + +def encode_request(name, uuid, callable, member_uuid): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_UUID_OFFSET, uuid) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_MEMBER_UUID_OFFSET, member_uuid) + StringCodec.encode(buf, name) + DataCodec.encode(buf, callable, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/executor_service_submit_to_partition_codec.py b/hazelcast/protocol/codec/executor_service_submit_to_partition_codec.py 
index 520ea69b2c..6fa1475c6d 100644 --- a/hazelcast/protocol/codec/executor_service_submit_to_partition_codec.py +++ b/hazelcast/protocol/codec/executor_service_submit_to_partition_codec.py @@ -1,38 +1,27 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.executor_service_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = EXECUTORSERVICE_SUBMITTOPARTITION -RESPONSE_TYPE = 105 -RETRYABLE = False +# hex: 0x080500 +_REQUEST_MESSAGE_TYPE = 525568 +# hex: 0x080501 +_RESPONSE_MESSAGE_TYPE = 525569 +_REQUEST_UUID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_UUID_OFFSET + UUID_SIZE_IN_BYTES -def calculate_size(name, uuid, callable, partition_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(uuid) - data_size += calculate_size_data(callable) - data_size += INT_SIZE_IN_BYTES - return data_size +def encode_request(name, uuid, callable): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_UUID_OFFSET, uuid) + StringCodec.encode(buf, name) + DataCodec.encode(buf, callable, True) + return OutboundMessage(buf, False) -def encode_request(name, uuid, callable, partition_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, uuid, callable, partition_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(uuid) - client_message.append_data(callable) - client_message.append_int(partition_id) - client_message.update_frame_length() - return client_message - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/flake_id_generator_message_type.py b/hazelcast/protocol/codec/flake_id_generator_message_type.py deleted file mode 100644 index c3f4ff2ee0..0000000000 --- a/hazelcast/protocol/codec/flake_id_generator_message_type.py +++ /dev/null @@ -1 +0,0 @@ -FLAKEIDGENERATOR_NEWIDBATCH = 0x1f01 diff --git a/hazelcast/protocol/codec/flake_id_generator_new_id_batch_codec.py b/hazelcast/protocol/codec/flake_id_generator_new_id_batch_codec.py index 844a274651..bfef30ecce 100644 --- a/hazelcast/protocol/codec/flake_id_generator_new_id_batch_codec.py +++ b/hazelcast/protocol/codec/flake_id_generator_new_id_batch_codec.py @@ -1,38 +1,31 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.flake_id_generator_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec 
-REQUEST_TYPE = FLAKEIDGENERATOR_NEWIDBATCH -RESPONSE_TYPE = 126 -RETRYABLE = True +# hex: 0x1C0100 +_REQUEST_MESSAGE_TYPE = 1835264 +# hex: 0x1C0101 +_RESPONSE_MESSAGE_TYPE = 1835265 - -def calculate_size(name, batch_size): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - return data_size +_REQUEST_BATCH_SIZE_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_BATCH_SIZE_OFFSET + INT_SIZE_IN_BYTES +_RESPONSE_BASE_OFFSET = RESPONSE_HEADER_SIZE +_RESPONSE_INCREMENT_OFFSET = _RESPONSE_BASE_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_BATCH_SIZE_OFFSET = _RESPONSE_INCREMENT_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, batch_size): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, batch_size)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(batch_size) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(base=None, increment=None, batch_size=None) - parameters['base'] = client_message.read_long() - parameters['increment'] = client_message.read_long() - parameters['batch_size'] = client_message.read_int() - return parameters - - - + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_int(buf, _REQUEST_BATCH_SIZE_OFFSET, batch_size) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) + + +def decode_response(msg): + initial_frame = msg.next_frame() + response = dict() + response["base"] = FixSizedTypesCodec.decode_long(initial_frame.buf, _RESPONSE_BASE_OFFSET) + response["increment"] = FixSizedTypesCodec.decode_long(initial_frame.buf, _RESPONSE_INCREMENT_OFFSET) + response["batch_size"] = FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_BATCH_SIZE_OFFSET) + return response diff --git a/hazelcast/protocol/codec/list_add_all_codec.py b/hazelcast/protocol/codec/list_add_all_codec.py index cc206fde9d..3d4615f2d8 100644 --- a/hazelcast/protocol/codec/list_add_all_codec.py +++ b/hazelcast/protocol/codec/list_add_all_codec.py @@ -1,37 +1,25 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.list_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = LIST_ADDALL -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x050600 +_REQUEST_MESSAGE_TYPE = 329216 +# hex: 0x050601 +_RESPONSE_MESSAGE_TYPE = 329217 - -def calculate_size(name, value_list): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - for value_list_item in value_list: - data_size += calculate_size_data(value_list_item) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, value_list): - """ Encode request into client_message""" - client_message = 
ClientMessage(payload_size=calculate_size(name, value_list)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(len(value_list)) - for value_list_item in value_list: - client_message.append_data(value_list_item) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + ListMultiFrameCodec.encode(buf, value_list, DataCodec.encode, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/list_add_all_with_index_codec.py b/hazelcast/protocol/codec/list_add_all_with_index_codec.py index 2d58a17012..50de235b13 100644 --- a/hazelcast/protocol/codec/list_add_all_with_index_codec.py +++ b/hazelcast/protocol/codec/list_add_all_with_index_codec.py @@ -1,39 +1,28 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.list_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = LIST_ADDALLWITHINDEX -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x050E00 +_REQUEST_MESSAGE_TYPE = 331264 +# hex: 0x050E01 +_RESPONSE_MESSAGE_TYPE = 331265 - -def calculate_size(name, index, value_list): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - data_size += INT_SIZE_IN_BYTES - for value_list_item in value_list: - data_size += calculate_size_data(value_list_item) - return data_size +_REQUEST_INDEX_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_INDEX_OFFSET + INT_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, index, value_list): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, index, value_list)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(index) - client_message.append_int(len(value_list)) - for value_list_item in value_list: - client_message.append_data(value_list_item) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_int(buf, _REQUEST_INDEX_OFFSET, index) + StringCodec.encode(buf, name) + ListMultiFrameCodec.encode(buf, value_list, DataCodec.encode, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return 
FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/list_add_codec.py b/hazelcast/protocol/codec/list_add_codec.py index 9e4a311c95..00fcd85e95 100644 --- a/hazelcast/protocol/codec/list_add_codec.py +++ b/hazelcast/protocol/codec/list_add_codec.py @@ -1,33 +1,24 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.list_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = LIST_ADD -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x050400 +_REQUEST_MESSAGE_TYPE = 328704 +# hex: 0x050401 +_RESPONSE_MESSAGE_TYPE = 328705 - -def calculate_size(name, value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(value) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(value) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/list_add_listener_codec.py b/hazelcast/protocol/codec/list_add_listener_codec.py index a20dfd212a..7d4e4f489a 100644 --- a/hazelcast/protocol/codec/list_add_listener_codec.py +++ b/hazelcast/protocol/codec/list_add_listener_codec.py @@ -1,48 +1,44 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.list_message_type import * -from hazelcast.protocol.event_response_const import * - -REQUEST_TYPE = LIST_ADDLISTENER -RESPONSE_TYPE = 104 -RETRYABLE = False - - -def calculate_size(name, include_value, local_only): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += BOOLEAN_SIZE_IN_BYTES - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE, EVENT_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil + +# hex: 0x050B00 +_REQUEST_MESSAGE_TYPE = 330496 +# hex: 0x050B01 +_RESPONSE_MESSAGE_TYPE = 330497 +# hex: 0x050B02 +_EVENT_ITEM_MESSAGE_TYPE = 330498 + +_REQUEST_INCLUDE_VALUE_OFFSET = 
REQUEST_HEADER_SIZE +_REQUEST_LOCAL_ONLY_OFFSET = _REQUEST_INCLUDE_VALUE_OFFSET + BOOLEAN_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_LOCAL_ONLY_OFFSET + BOOLEAN_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE +_EVENT_ITEM_UUID_OFFSET = EVENT_HEADER_SIZE +_EVENT_ITEM_EVENT_TYPE_OFFSET = _EVENT_ITEM_UUID_OFFSET + UUID_SIZE_IN_BYTES def encode_request(name, include_value, local_only): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, include_value, local_only)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_bool(include_value) - client_message.append_bool(local_only) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_str() - return parameters - - -def handle(client_message, handle_event_item=None, to_object=None): - """ Event handler """ - message_type = client_message.get_message_type() - if message_type == EVENT_ITEM and handle_event_item is not None: - item = None - if not client_message.read_bool(): - item = client_message.read_data() - uuid = client_message.read_str() - event_type = client_message.read_int() - handle_event_item(item=item, uuid=uuid, event_type=event_type) + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_INCLUDE_VALUE_OFFSET, include_value) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_LOCAL_ONLY_OFFSET, local_only) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) + + +def handle(msg, handle_item_event=None): + message_type = msg.get_message_type() + if message_type == _EVENT_ITEM_MESSAGE_TYPE and handle_item_event is not None: + initial_frame = msg.next_frame() + uuid = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _EVENT_ITEM_UUID_OFFSET) + event_type = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ITEM_EVENT_TYPE_OFFSET) + item = CodecUtil.decode_nullable(msg, DataCodec.decode) + handle_item_event(item, uuid, event_type) + return diff --git a/hazelcast/protocol/codec/list_add_with_index_codec.py b/hazelcast/protocol/codec/list_add_with_index_codec.py index 3e86b5502b..93a5bfd48b 100644 --- a/hazelcast/protocol/codec/list_add_with_index_codec.py +++ b/hazelcast/protocol/codec/list_add_with_index_codec.py @@ -1,31 +1,21 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.list_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = LIST_ADDWITHINDEX -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x051100 +_REQUEST_MESSAGE_TYPE = 332032 +# hex: 0x051101 +_RESPONSE_MESSAGE_TYPE = 332033 - -def calculate_size(name, index, value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - data_size += 
calculate_size_data(value) - return data_size +_REQUEST_INDEX_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_INDEX_OFFSET + INT_SIZE_IN_BYTES def encode_request(name, index, value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, index, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(index) - client_message.append_data(value) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_int(buf, _REQUEST_INDEX_OFFSET, index) + StringCodec.encode(buf, name) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/list_clear_codec.py b/hazelcast/protocol/codec/list_clear_codec.py index fdad14e60e..a3d585ef44 100644 --- a/hazelcast/protocol/codec/list_clear_codec.py +++ b/hazelcast/protocol/codec/list_clear_codec.py @@ -1,27 +1,15 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.list_message_type import * +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = LIST_CLEAR -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x050900 +_REQUEST_MESSAGE_TYPE = 329984 +# hex: 0x050901 +_RESPONSE_MESSAGE_TYPE = 329985 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/list_compare_and_remove_all_codec.py b/hazelcast/protocol/codec/list_compare_and_remove_all_codec.py index 46b640c83c..4192a7b4e6 100644 --- a/hazelcast/protocol/codec/list_compare_and_remove_all_codec.py +++ b/hazelcast/protocol/codec/list_compare_and_remove_all_codec.py @@ -1,37 +1,25 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.list_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = LIST_COMPAREANDREMOVEALL -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x050700 +_REQUEST_MESSAGE_TYPE = 329472 +# hex: 0x050701 +_RESPONSE_MESSAGE_TYPE = 329473 - -def calculate_size(name, values): - """ Calculates the request payload size""" - 
data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - for values_item in values: - data_size += calculate_size_data(values_item) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, values): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, values)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(len(values)) - for values_item in values: - client_message.append_data(values_item) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + ListMultiFrameCodec.encode(buf, values, DataCodec.encode, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/list_compare_and_retain_all_codec.py b/hazelcast/protocol/codec/list_compare_and_retain_all_codec.py index 3ca9d05280..197ce06e3c 100644 --- a/hazelcast/protocol/codec/list_compare_and_retain_all_codec.py +++ b/hazelcast/protocol/codec/list_compare_and_retain_all_codec.py @@ -1,37 +1,25 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.list_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = LIST_COMPAREANDRETAINALL -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x050800 +_REQUEST_MESSAGE_TYPE = 329728 +# hex: 0x050801 +_RESPONSE_MESSAGE_TYPE = 329729 - -def calculate_size(name, values): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - for values_item in values: - data_size += calculate_size_data(values_item) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, values): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, values)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(len(values)) - for values_item in values: - client_message.append_data(values_item) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + ListMultiFrameCodec.encode(buf, values, DataCodec.encode, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = 
client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/list_contains_all_codec.py b/hazelcast/protocol/codec/list_contains_all_codec.py index ac9f0b49f9..8cb5745b40 100644 --- a/hazelcast/protocol/codec/list_contains_all_codec.py +++ b/hazelcast/protocol/codec/list_contains_all_codec.py @@ -1,37 +1,25 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.list_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = LIST_CONTAINSALL -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x050300 +_REQUEST_MESSAGE_TYPE = 328448 +# hex: 0x050301 +_RESPONSE_MESSAGE_TYPE = 328449 - -def calculate_size(name, values): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - for values_item in values: - data_size += calculate_size_data(values_item) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, values): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, values)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(len(values)) - for values_item in values: - client_message.append_data(values_item) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + ListMultiFrameCodec.encode(buf, values, DataCodec.encode, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/list_contains_codec.py b/hazelcast/protocol/codec/list_contains_codec.py index d9e915ea3e..703e120c48 100644 --- a/hazelcast/protocol/codec/list_contains_codec.py +++ b/hazelcast/protocol/codec/list_contains_codec.py @@ -1,33 +1,24 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.list_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = LIST_CONTAINS -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x050200 +_REQUEST_MESSAGE_TYPE = 328192 +# hex: 0x050201 +_RESPONSE_MESSAGE_TYPE = 328193 - -def calculate_size(name, value): - """ Calculates the request payload size""" - data_size 
= 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(value) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(value) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/list_get_all_codec.py b/hazelcast/protocol/codec/list_get_all_codec.py index 4ac65bda4b..b6da226ef2 100644 --- a/hazelcast/protocol/codec/list_get_all_codec.py +++ b/hazelcast/protocol/codec/list_get_all_codec.py @@ -1,38 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.list_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = LIST_GETALL -RESPONSE_TYPE = 106 -RETRYABLE = True +# hex: 0x050A00 +_REQUEST_MESSAGE_TYPE = 330240 +# hex: 0x050A01 +_RESPONSE_MESSAGE_TYPE = 330241 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/list_get_codec.py b/hazelcast/protocol/codec/list_get_codec.py index 3caf396393..a3977ccb09 100644 --- a/hazelcast/protocol/codec/list_get_codec.py +++ b/hazelcast/protocol/codec/list_get_codec.py @@ -1,34 +1,26 @@ from hazelcast.serialization.bits 
import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.list_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = LIST_GET -RESPONSE_TYPE = 105 -RETRYABLE = True +# hex: 0x050F00 +_REQUEST_MESSAGE_TYPE = 331520 +# hex: 0x050F01 +_RESPONSE_MESSAGE_TYPE = 331521 - -def calculate_size(name, index): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - return data_size +_REQUEST_INDEX_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_INDEX_OFFSET + INT_SIZE_IN_BYTES def encode_request(name, index): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, index)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(index) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_int(buf, _REQUEST_INDEX_OFFSET, index) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/list_index_of_codec.py b/hazelcast/protocol/codec/list_index_of_codec.py index bfc2167dcd..f1cf60eb17 100644 --- a/hazelcast/protocol/codec/list_index_of_codec.py +++ b/hazelcast/protocol/codec/list_index_of_codec.py @@ -1,33 +1,24 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.list_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = LIST_INDEXOF -RESPONSE_TYPE = 102 -RETRYABLE = True +# hex: 0x051400 +_REQUEST_MESSAGE_TYPE = 332800 +# hex: 0x051401 +_RESPONSE_MESSAGE_TYPE = 332801 - -def calculate_size(name, value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(value) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(value) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) 
+ DataCodec.encode(buf, value, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_int() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/list_is_empty_codec.py b/hazelcast/protocol/codec/list_is_empty_codec.py index 3c42ede40f..8c7e971604 100644 --- a/hazelcast/protocol/codec/list_is_empty_codec.py +++ b/hazelcast/protocol/codec/list_is_empty_codec.py @@ -1,31 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.list_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = LIST_ISEMPTY -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x050D00 +_REQUEST_MESSAGE_TYPE = 331008 +# hex: 0x050D01 +_RESPONSE_MESSAGE_TYPE = 331009 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/list_iterator_codec.py b/hazelcast/protocol/codec/list_iterator_codec.py index 11ca7a4db2..b3ad4c89ed 100644 --- a/hazelcast/protocol/codec/list_iterator_codec.py +++ b/hazelcast/protocol/codec/list_iterator_codec.py @@ -1,38 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.list_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = LIST_ITERATOR -RESPONSE_TYPE = 106 -RETRYABLE = True +# hex: 0x051600 +_REQUEST_MESSAGE_TYPE = 333312 +# hex: 0x051601 +_RESPONSE_MESSAGE_TYPE = 333313 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ 
Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/list_last_index_of_codec.py b/hazelcast/protocol/codec/list_last_index_of_codec.py index 322368d372..8e400ed091 100644 --- a/hazelcast/protocol/codec/list_last_index_of_codec.py +++ b/hazelcast/protocol/codec/list_last_index_of_codec.py @@ -1,33 +1,24 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.list_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = LIST_LASTINDEXOF -RESPONSE_TYPE = 102 -RETRYABLE = True +# hex: 0x051300 +_REQUEST_MESSAGE_TYPE = 332544 +# hex: 0x051301 +_RESPONSE_MESSAGE_TYPE = 332545 - -def calculate_size(name, value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(value) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(value) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_int() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/list_list_iterator_codec.py b/hazelcast/protocol/codec/list_list_iterator_codec.py index 37ef96701b..877afc25ae 100644 --- a/hazelcast/protocol/codec/list_list_iterator_codec.py +++ b/hazelcast/protocol/codec/list_list_iterator_codec.py @@ -1,40 +1,26 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from 
hazelcast.protocol.codec.list_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = LIST_LISTITERATOR -RESPONSE_TYPE = 106 -RETRYABLE = True +# hex: 0x051700 +_REQUEST_MESSAGE_TYPE = 333568 +# hex: 0x051701 +_RESPONSE_MESSAGE_TYPE = 333569 - -def calculate_size(name, index): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - return data_size +_REQUEST_INDEX_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_INDEX_OFFSET + INT_SIZE_IN_BYTES def encode_request(name, index): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, index)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(index) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_int(buf, _REQUEST_INDEX_OFFSET, index) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/list_message_type.py b/hazelcast/protocol/codec/list_message_type.py deleted file mode 100644 index 643c8f9c0a..0000000000 --- a/hazelcast/protocol/codec/list_message_type.py +++ /dev/null @@ -1,24 +0,0 @@ - -LIST_SIZE = 0x0501 -LIST_CONTAINS = 0x0502 -LIST_CONTAINSALL = 0x0503 -LIST_ADD = 0x0504 -LIST_REMOVE = 0x0505 -LIST_ADDALL = 0x0506 -LIST_COMPAREANDREMOVEALL = 0x0507 -LIST_COMPAREANDRETAINALL = 0x0508 -LIST_CLEAR = 0x0509 -LIST_GETALL = 0x050a -LIST_ADDLISTENER = 0x050b -LIST_REMOVELISTENER = 0x050c -LIST_ISEMPTY = 0x050d -LIST_ADDALLWITHINDEX = 0x050e -LIST_GET = 0x050f -LIST_SET = 0x0510 -LIST_ADDWITHINDEX = 0x0511 -LIST_REMOVEWITHINDEX = 0x0512 -LIST_LASTINDEXOF = 0x0513 -LIST_INDEXOF = 0x0514 -LIST_SUB = 0x0515 -LIST_ITERATOR = 0x0516 -LIST_LISTITERATOR = 0x0517 diff --git a/hazelcast/protocol/codec/list_remove_codec.py b/hazelcast/protocol/codec/list_remove_codec.py index b7cbcf197a..b141c55d25 100644 --- a/hazelcast/protocol/codec/list_remove_codec.py +++ b/hazelcast/protocol/codec/list_remove_codec.py @@ -1,33 +1,24 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.list_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = 
LIST_REMOVE -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x050500 +_REQUEST_MESSAGE_TYPE = 328960 +# hex: 0x050501 +_RESPONSE_MESSAGE_TYPE = 328961 - -def calculate_size(name, value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(value) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(value) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/list_remove_listener_codec.py b/hazelcast/protocol/codec/list_remove_listener_codec.py index ef7d2604cf..15273f453f 100644 --- a/hazelcast/protocol/codec/list_remove_listener_codec.py +++ b/hazelcast/protocol/codec/list_remove_listener_codec.py @@ -1,28 +1,25 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.list_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = LIST_REMOVELISTENER -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x050C00 +_REQUEST_MESSAGE_TYPE = 330752 +# hex: 0x050C01 +_RESPONSE_MESSAGE_TYPE = 330753 - -def calculate_size(name, registration_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(registration_id) - return data_size +_REQUEST_REGISTRATION_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_REGISTRATION_ID_OFFSET + UUID_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, registration_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, registration_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(registration_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_REGISTRATION_ID_OFFSET, registration_id) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) + -# Empty decode_response because response is not used to determine the return value. 
+def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/list_remove_with_index_codec.py b/hazelcast/protocol/codec/list_remove_with_index_codec.py index 08d5ad4061..5e664b535b 100644 --- a/hazelcast/protocol/codec/list_remove_with_index_codec.py +++ b/hazelcast/protocol/codec/list_remove_with_index_codec.py @@ -1,34 +1,26 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.list_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = LIST_REMOVEWITHINDEX -RESPONSE_TYPE = 105 -RETRYABLE = False +# hex: 0x051200 +_REQUEST_MESSAGE_TYPE = 332288 +# hex: 0x051201 +_RESPONSE_MESSAGE_TYPE = 332289 - -def calculate_size(name, index): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - return data_size +_REQUEST_INDEX_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_INDEX_OFFSET + INT_SIZE_IN_BYTES def encode_request(name, index): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, index)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(index) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_int(buf, _REQUEST_INDEX_OFFSET, index) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/list_set_codec.py b/hazelcast/protocol/codec/list_set_codec.py index df68427cb7..31cf007899 100644 --- a/hazelcast/protocol/codec/list_set_codec.py +++ b/hazelcast/protocol/codec/list_set_codec.py @@ -1,36 +1,27 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.list_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = LIST_SET -RESPONSE_TYPE = 105 -RETRYABLE = False +# hex: 0x051000 +_REQUEST_MESSAGE_TYPE = 331776 +# hex: 0x051001 +_RESPONSE_MESSAGE_TYPE = 331777 - -def calculate_size(name, index, value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - data_size += calculate_size_data(value) - return data_size +_REQUEST_INDEX_OFFSET = 
REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_INDEX_OFFSET + INT_SIZE_IN_BYTES def encode_request(name, index, value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, index, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(index) - client_message.append_data(value) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_int(buf, _REQUEST_INDEX_OFFSET, index) + StringCodec.encode(buf, name) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/list_size_codec.py b/hazelcast/protocol/codec/list_size_codec.py index 9606b4dd1e..5a4cfbfdf3 100644 --- a/hazelcast/protocol/codec/list_size_codec.py +++ b/hazelcast/protocol/codec/list_size_codec.py @@ -1,31 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.list_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = LIST_SIZE -RESPONSE_TYPE = 102 -RETRYABLE = True +# hex: 0x050100 +_REQUEST_MESSAGE_TYPE = 327936 +# hex: 0x050101 +_RESPONSE_MESSAGE_TYPE = 327937 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_int() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/list_sub_codec.py b/hazelcast/protocol/codec/list_sub_codec.py index 893034c829..a16957b1c3 100644 --- a/hazelcast/protocol/codec/list_sub_codec.py +++ b/hazelcast/protocol/codec/list_sub_codec.py @@ -1,42 +1,28 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.list_message_type import * -from hazelcast.six.moves import range +from 
hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = LIST_SUB -RESPONSE_TYPE = 106 -RETRYABLE = True +# hex: 0x051500 +_REQUEST_MESSAGE_TYPE = 333056 +# hex: 0x051501 +_RESPONSE_MESSAGE_TYPE = 333057 +_REQUEST_FROM_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_TO_OFFSET = _REQUEST_FROM_OFFSET + INT_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_TO_OFFSET + INT_SIZE_IN_BYTES -def calculate_size(name, from_, to): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - data_size += INT_SIZE_IN_BYTES - return data_size +def encode_request(name, _from, to): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_int(buf, _REQUEST_FROM_OFFSET, _from) + FixSizedTypesCodec.encode_int(buf, _REQUEST_TO_OFFSET, to) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def encode_request(name, from_, to): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, from_, to)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(from_) - client_message.append_int(to) - client_message.update_frame_length() - return client_message - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/lock_force_unlock_codec.py b/hazelcast/protocol/codec/lock_force_unlock_codec.py deleted file mode 100644 index cb3592b75e..0000000000 --- a/hazelcast/protocol/codec/lock_force_unlock_codec.py +++ /dev/null @@ -1,29 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.lock_message_type import * - -REQUEST_TYPE = LOCK_FORCEUNLOCK -RESPONSE_TYPE = 100 -RETRYABLE = True - - -def calculate_size(name, reference_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += LONG_SIZE_IN_BYTES - return data_size - - -def encode_request(name, reference_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, reference_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_long(reference_id) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode diff --git a/hazelcast/protocol/codec/lock_get_lock_count_codec.py b/hazelcast/protocol/codec/lock_get_lock_count_codec.py deleted file mode 100644 index aa1eccc2af..0000000000 --- a/hazelcast/protocol/codec/lock_get_lock_count_codec.py +++ 
/dev/null @@ -1,31 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.lock_message_type import * - -REQUEST_TYPE = LOCK_GETLOCKCOUNT -RESPONSE_TYPE = 102 -RETRYABLE = True - - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size - - -def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_int() - return parameters diff --git a/hazelcast/protocol/codec/lock_get_remaining_lease_time_codec.py b/hazelcast/protocol/codec/lock_get_remaining_lease_time_codec.py deleted file mode 100644 index 2b58e4c2d4..0000000000 --- a/hazelcast/protocol/codec/lock_get_remaining_lease_time_codec.py +++ /dev/null @@ -1,31 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.lock_message_type import * - -REQUEST_TYPE = LOCK_GETREMAININGLEASETIME -RESPONSE_TYPE = 103 -RETRYABLE = True - - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size - - -def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_long() - return parameters diff --git a/hazelcast/protocol/codec/lock_is_locked_by_current_thread_codec.py b/hazelcast/protocol/codec/lock_is_locked_by_current_thread_codec.py deleted file mode 100644 index 835e766ad7..0000000000 --- a/hazelcast/protocol/codec/lock_is_locked_by_current_thread_codec.py +++ /dev/null @@ -1,33 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.lock_message_type import * - -REQUEST_TYPE = LOCK_ISLOCKEDBYCURRENTTHREAD -RESPONSE_TYPE = 101 -RETRYABLE = True - - -def calculate_size(name, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += LONG_SIZE_IN_BYTES - return data_size - - -def encode_request(name, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters 
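
Reviewer note: the `lock_*_codec.py` files removed in this patch belong to the legacy Lock proxy, which presumably is no longer part of the client protocol targeted here, so they are deleted rather than regenerated. The codecs that remain are all rewritten to the same frame-based shape: fixed-size request fields are written into the initial frame at offsets computed from `REQUEST_HEADER_SIZE`, variable-size fields are appended as further frames (the final one flagged with `True`), and a fixed-size response is read from the initial response frame at `RESPONSE_HEADER_SIZE`. The sketch below is illustrative only, not a codec from this patch: the codec name and message-type values are placeholders, while the imports and call signatures mirror the rewritten codecs in this diff.

```python
# Minimal sketch of the frame-based codec shape used throughout this patch.
# Hypothetical example: one fixed-size field (index), one variable-size field
# (name), and a boolean response. Message-type values are placeholders.
from hazelcast.serialization.bits import INT_SIZE_IN_BYTES
from hazelcast.protocol.builtin import FixSizedTypesCodec, StringCodec
from hazelcast.protocol.client_message import (
    OutboundMessage,
    REQUEST_HEADER_SIZE,
    RESPONSE_HEADER_SIZE,
    create_initial_buffer,
)

_REQUEST_MESSAGE_TYPE = 0x0F0F00   # placeholder, not a real protocol constant
_RESPONSE_MESSAGE_TYPE = 0x0F0F01  # placeholder, not a real protocol constant

# Fixed-size request fields sit in the initial frame at known offsets.
_REQUEST_INDEX_OFFSET = REQUEST_HEADER_SIZE
_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_INDEX_OFFSET + INT_SIZE_IN_BYTES
_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE


def encode_request(name, index):
    # Initial frame carries the message type and the fixed-size parameters.
    buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE)
    FixSizedTypesCodec.encode_int(buf, _REQUEST_INDEX_OFFSET, index)
    # Variable-size parameters follow as extra frames; True marks the last one,
    # matching the pattern of the generated codecs in this diff.
    StringCodec.encode(buf, name, True)
    # Second argument is the retryable flag (the old RETRYABLE constant).
    return OutboundMessage(buf, True)


def decode_response(msg):
    # The fixed-size response value is read from the initial frame,
    # right after the response header.
    initial_frame = msg.next_frame()
    return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET)
```
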
diff --git a/hazelcast/protocol/codec/lock_is_locked_codec.py b/hazelcast/protocol/codec/lock_is_locked_codec.py deleted file mode 100644 index 978cc5a307..0000000000 --- a/hazelcast/protocol/codec/lock_is_locked_codec.py +++ /dev/null @@ -1,31 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.lock_message_type import * - -REQUEST_TYPE = LOCK_ISLOCKED -RESPONSE_TYPE = 101 -RETRYABLE = True - - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size - - -def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters diff --git a/hazelcast/protocol/codec/lock_lock_codec.py b/hazelcast/protocol/codec/lock_lock_codec.py deleted file mode 100644 index 3d854fa73a..0000000000 --- a/hazelcast/protocol/codec/lock_lock_codec.py +++ /dev/null @@ -1,33 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.lock_message_type import * - -REQUEST_TYPE = LOCK_LOCK -RESPONSE_TYPE = 100 -RETRYABLE = True - - -def calculate_size(name, lease_time, thread_id, reference_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - return data_size - - -def encode_request(name, lease_time, thread_id, reference_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, lease_time, thread_id, reference_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_long(lease_time) - client_message.append_long(thread_id) - client_message.append_long(reference_id) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode diff --git a/hazelcast/protocol/codec/lock_message_type.py b/hazelcast/protocol/codec/lock_message_type.py deleted file mode 100644 index c25a544a5f..0000000000 --- a/hazelcast/protocol/codec/lock_message_type.py +++ /dev/null @@ -1,9 +0,0 @@ - -LOCK_ISLOCKED = 0x0701 -LOCK_ISLOCKEDBYCURRENTTHREAD = 0x0702 -LOCK_GETLOCKCOUNT = 0x0703 -LOCK_GETREMAININGLEASETIME = 0x0704 -LOCK_LOCK = 0x0705 -LOCK_UNLOCK = 0x0706 -LOCK_FORCEUNLOCK = 0x0707 -LOCK_TRYLOCK = 0x0708 diff --git a/hazelcast/protocol/codec/lock_try_lock_codec.py b/hazelcast/protocol/codec/lock_try_lock_codec.py deleted file mode 100644 index 7680a026ee..0000000000 --- a/hazelcast/protocol/codec/lock_try_lock_codec.py +++ /dev/null @@ -1,39 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.lock_message_type import * - -REQUEST_TYPE = LOCK_TRYLOCK -RESPONSE_TYPE = 101 -RETRYABLE = True - - -def calculate_size(name, thread_id, lease, 
timeout, reference_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - return data_size - - -def encode_request(name, thread_id, lease, timeout, reference_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, thread_id, lease, timeout, reference_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_long(thread_id) - client_message.append_long(lease) - client_message.append_long(timeout) - client_message.append_long(reference_id) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters diff --git a/hazelcast/protocol/codec/lock_unlock_codec.py b/hazelcast/protocol/codec/lock_unlock_codec.py deleted file mode 100644 index 97326c9907..0000000000 --- a/hazelcast/protocol/codec/lock_unlock_codec.py +++ /dev/null @@ -1,31 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.lock_message_type import * - -REQUEST_TYPE = LOCK_UNLOCK -RESPONSE_TYPE = 100 -RETRYABLE = True - - -def calculate_size(name, thread_id, reference_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - return data_size - - -def encode_request(name, thread_id, reference_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, thread_id, reference_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_long(thread_id) - client_message.append_long(reference_id) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode diff --git a/hazelcast/protocol/codec/map_add_entry_listener_codec.py b/hazelcast/protocol/codec/map_add_entry_listener_codec.py index 12aa8ab86b..d14035cafd 100644 --- a/hazelcast/protocol/codec/map_add_entry_listener_codec.py +++ b/hazelcast/protocol/codec/map_add_entry_listener_codec.py @@ -1,60 +1,51 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * -from hazelcast.protocol.event_response_const import * - -REQUEST_TYPE = MAP_ADDENTRYLISTENER -RESPONSE_TYPE = 104 -RETRYABLE = False - - -def calculate_size(name, include_value, listener_flags, local_only): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += BOOLEAN_SIZE_IN_BYTES - data_size += INT_SIZE_IN_BYTES - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE, EVENT_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from 
hazelcast.protocol.builtin import CodecUtil + +# hex: 0x011900 +_REQUEST_MESSAGE_TYPE = 71936 +# hex: 0x011901 +_RESPONSE_MESSAGE_TYPE = 71937 +# hex: 0x011902 +_EVENT_ENTRY_MESSAGE_TYPE = 71938 + +_REQUEST_INCLUDE_VALUE_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_LISTENER_FLAGS_OFFSET = _REQUEST_INCLUDE_VALUE_OFFSET + BOOLEAN_SIZE_IN_BYTES +_REQUEST_LOCAL_ONLY_OFFSET = _REQUEST_LISTENER_FLAGS_OFFSET + INT_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_LOCAL_ONLY_OFFSET + BOOLEAN_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE +_EVENT_ENTRY_EVENT_TYPE_OFFSET = EVENT_HEADER_SIZE +_EVENT_ENTRY_UUID_OFFSET = _EVENT_ENTRY_EVENT_TYPE_OFFSET + INT_SIZE_IN_BYTES +_EVENT_ENTRY_NUMBER_OF_AFFECTED_ENTRIES_OFFSET = _EVENT_ENTRY_UUID_OFFSET + UUID_SIZE_IN_BYTES def encode_request(name, include_value, listener_flags, local_only): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, include_value, listener_flags, local_only)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_bool(include_value) - client_message.append_int(listener_flags) - client_message.append_bool(local_only) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_str() - return parameters - - -def handle(client_message, handle_event_entry=None, to_object=None): - """ Event handler """ - message_type = client_message.get_message_type() - if message_type == EVENT_ENTRY and handle_event_entry is not None: - key = None - if not client_message.read_bool(): - key = client_message.read_data() - value = None - if not client_message.read_bool(): - value = client_message.read_data() - old_value = None - if not client_message.read_bool(): - old_value = client_message.read_data() - merging_value = None - if not client_message.read_bool(): - merging_value = client_message.read_data() - event_type = client_message.read_int() - uuid = client_message.read_str() - number_of_affected_entries = client_message.read_int() - handle_event_entry(key=key, value=value, old_value=old_value, merging_value=merging_value, event_type=event_type, uuid=uuid, number_of_affected_entries=number_of_affected_entries) + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_INCLUDE_VALUE_OFFSET, include_value) + FixSizedTypesCodec.encode_int(buf, _REQUEST_LISTENER_FLAGS_OFFSET, listener_flags) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_LOCAL_ONLY_OFFSET, local_only) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) + + +def handle(msg, handle_entry_event=None): + message_type = msg.get_message_type() + if message_type == _EVENT_ENTRY_MESSAGE_TYPE and handle_entry_event is not None: + initial_frame = msg.next_frame() + event_type = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ENTRY_EVENT_TYPE_OFFSET) + uuid = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _EVENT_ENTRY_UUID_OFFSET) + number_of_affected_entries = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ENTRY_NUMBER_OF_AFFECTED_ENTRIES_OFFSET) + key = CodecUtil.decode_nullable(msg, 
DataCodec.decode) + value = CodecUtil.decode_nullable(msg, DataCodec.decode) + old_value = CodecUtil.decode_nullable(msg, DataCodec.decode) + merging_value = CodecUtil.decode_nullable(msg, DataCodec.decode) + handle_entry_event(key, value, old_value, merging_value, event_type, uuid, number_of_affected_entries) + return diff --git a/hazelcast/protocol/codec/map_add_entry_listener_to_key_codec.py b/hazelcast/protocol/codec/map_add_entry_listener_to_key_codec.py index d918532b1e..91507bf516 100644 --- a/hazelcast/protocol/codec/map_add_entry_listener_to_key_codec.py +++ b/hazelcast/protocol/codec/map_add_entry_listener_to_key_codec.py @@ -1,62 +1,52 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * -from hazelcast.protocol.event_response_const import * - -REQUEST_TYPE = MAP_ADDENTRYLISTENERTOKEY -RESPONSE_TYPE = 104 -RETRYABLE = False - - -def calculate_size(name, key, include_value, listener_flags, local_only): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += BOOLEAN_SIZE_IN_BYTES - data_size += INT_SIZE_IN_BYTES - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE, EVENT_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil + +# hex: 0x011800 +_REQUEST_MESSAGE_TYPE = 71680 +# hex: 0x011801 +_RESPONSE_MESSAGE_TYPE = 71681 +# hex: 0x011802 +_EVENT_ENTRY_MESSAGE_TYPE = 71682 + +_REQUEST_INCLUDE_VALUE_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_LISTENER_FLAGS_OFFSET = _REQUEST_INCLUDE_VALUE_OFFSET + BOOLEAN_SIZE_IN_BYTES +_REQUEST_LOCAL_ONLY_OFFSET = _REQUEST_LISTENER_FLAGS_OFFSET + INT_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_LOCAL_ONLY_OFFSET + BOOLEAN_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE +_EVENT_ENTRY_EVENT_TYPE_OFFSET = EVENT_HEADER_SIZE +_EVENT_ENTRY_UUID_OFFSET = _EVENT_ENTRY_EVENT_TYPE_OFFSET + INT_SIZE_IN_BYTES +_EVENT_ENTRY_NUMBER_OF_AFFECTED_ENTRIES_OFFSET = _EVENT_ENTRY_UUID_OFFSET + UUID_SIZE_IN_BYTES def encode_request(name, key, include_value, listener_flags, local_only): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, include_value, listener_flags, local_only)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_bool(include_value) - client_message.append_int(listener_flags) - client_message.append_bool(local_only) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_str() - return parameters - - -def handle(client_message, handle_event_entry=None, to_object=None): - """ Event handler """ - message_type = client_message.get_message_type() - if message_type == EVENT_ENTRY and handle_event_entry is not None: - key = None - if not client_message.read_bool(): - key = client_message.read_data() - value = None - if not client_message.read_bool(): - value 
= client_message.read_data() - old_value = None - if not client_message.read_bool(): - old_value = client_message.read_data() - merging_value = None - if not client_message.read_bool(): - merging_value = client_message.read_data() - event_type = client_message.read_int() - uuid = client_message.read_str() - number_of_affected_entries = client_message.read_int() - handle_event_entry(key=key, value=value, old_value=old_value, merging_value=merging_value, event_type=event_type, uuid=uuid, number_of_affected_entries=number_of_affected_entries) + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_INCLUDE_VALUE_OFFSET, include_value) + FixSizedTypesCodec.encode_int(buf, _REQUEST_LISTENER_FLAGS_OFFSET, listener_flags) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_LOCAL_ONLY_OFFSET, local_only) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) + + +def handle(msg, handle_entry_event=None): + message_type = msg.get_message_type() + if message_type == _EVENT_ENTRY_MESSAGE_TYPE and handle_entry_event is not None: + initial_frame = msg.next_frame() + event_type = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ENTRY_EVENT_TYPE_OFFSET) + uuid = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _EVENT_ENTRY_UUID_OFFSET) + number_of_affected_entries = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ENTRY_NUMBER_OF_AFFECTED_ENTRIES_OFFSET) + key = CodecUtil.decode_nullable(msg, DataCodec.decode) + value = CodecUtil.decode_nullable(msg, DataCodec.decode) + old_value = CodecUtil.decode_nullable(msg, DataCodec.decode) + merging_value = CodecUtil.decode_nullable(msg, DataCodec.decode) + handle_entry_event(key, value, old_value, merging_value, event_type, uuid, number_of_affected_entries) + return diff --git a/hazelcast/protocol/codec/map_add_entry_listener_to_key_with_predicate_codec.py b/hazelcast/protocol/codec/map_add_entry_listener_to_key_with_predicate_codec.py index a63c77d8f4..9716eb69cc 100644 --- a/hazelcast/protocol/codec/map_add_entry_listener_to_key_with_predicate_codec.py +++ b/hazelcast/protocol/codec/map_add_entry_listener_to_key_with_predicate_codec.py @@ -1,64 +1,53 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * -from hazelcast.protocol.event_response_const import * - -REQUEST_TYPE = MAP_ADDENTRYLISTENERTOKEYWITHPREDICATE -RESPONSE_TYPE = 104 -RETRYABLE = False - - -def calculate_size(name, key, predicate, include_value, listener_flags, local_only): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += calculate_size_data(predicate) - data_size += BOOLEAN_SIZE_IN_BYTES - data_size += INT_SIZE_IN_BYTES - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE, EVENT_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil + +# hex: 0x011600 +_REQUEST_MESSAGE_TYPE = 71168 +# hex: 0x011601 
+_RESPONSE_MESSAGE_TYPE = 71169 +# hex: 0x011602 +_EVENT_ENTRY_MESSAGE_TYPE = 71170 + +_REQUEST_INCLUDE_VALUE_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_LISTENER_FLAGS_OFFSET = _REQUEST_INCLUDE_VALUE_OFFSET + BOOLEAN_SIZE_IN_BYTES +_REQUEST_LOCAL_ONLY_OFFSET = _REQUEST_LISTENER_FLAGS_OFFSET + INT_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_LOCAL_ONLY_OFFSET + BOOLEAN_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE +_EVENT_ENTRY_EVENT_TYPE_OFFSET = EVENT_HEADER_SIZE +_EVENT_ENTRY_UUID_OFFSET = _EVENT_ENTRY_EVENT_TYPE_OFFSET + INT_SIZE_IN_BYTES +_EVENT_ENTRY_NUMBER_OF_AFFECTED_ENTRIES_OFFSET = _EVENT_ENTRY_UUID_OFFSET + UUID_SIZE_IN_BYTES def encode_request(name, key, predicate, include_value, listener_flags, local_only): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, predicate, include_value, listener_flags, local_only)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_data(predicate) - client_message.append_bool(include_value) - client_message.append_int(listener_flags) - client_message.append_bool(local_only) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_str() - return parameters - - -def handle(client_message, handle_event_entry=None, to_object=None): - """ Event handler """ - message_type = client_message.get_message_type() - if message_type == EVENT_ENTRY and handle_event_entry is not None: - key = None - if not client_message.read_bool(): - key = client_message.read_data() - value = None - if not client_message.read_bool(): - value = client_message.read_data() - old_value = None - if not client_message.read_bool(): - old_value = client_message.read_data() - merging_value = None - if not client_message.read_bool(): - merging_value = client_message.read_data() - event_type = client_message.read_int() - uuid = client_message.read_str() - number_of_affected_entries = client_message.read_int() - handle_event_entry(key=key, value=value, old_value=old_value, merging_value=merging_value, event_type=event_type, uuid=uuid, number_of_affected_entries=number_of_affected_entries) + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_INCLUDE_VALUE_OFFSET, include_value) + FixSizedTypesCodec.encode_int(buf, _REQUEST_LISTENER_FLAGS_OFFSET, listener_flags) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_LOCAL_ONLY_OFFSET, local_only) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, predicate, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) + + +def handle(msg, handle_entry_event=None): + message_type = msg.get_message_type() + if message_type == _EVENT_ENTRY_MESSAGE_TYPE and handle_entry_event is not None: + initial_frame = msg.next_frame() + event_type = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ENTRY_EVENT_TYPE_OFFSET) + uuid = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _EVENT_ENTRY_UUID_OFFSET) + number_of_affected_entries = FixSizedTypesCodec.decode_int(initial_frame.buf, 
_EVENT_ENTRY_NUMBER_OF_AFFECTED_ENTRIES_OFFSET) + key = CodecUtil.decode_nullable(msg, DataCodec.decode) + value = CodecUtil.decode_nullable(msg, DataCodec.decode) + old_value = CodecUtil.decode_nullable(msg, DataCodec.decode) + merging_value = CodecUtil.decode_nullable(msg, DataCodec.decode) + handle_entry_event(key, value, old_value, merging_value, event_type, uuid, number_of_affected_entries) + return diff --git a/hazelcast/protocol/codec/map_add_entry_listener_with_predicate_codec.py b/hazelcast/protocol/codec/map_add_entry_listener_with_predicate_codec.py index 96dac33459..727ddaa3e8 100644 --- a/hazelcast/protocol/codec/map_add_entry_listener_with_predicate_codec.py +++ b/hazelcast/protocol/codec/map_add_entry_listener_with_predicate_codec.py @@ -1,62 +1,52 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * -from hazelcast.protocol.event_response_const import * - -REQUEST_TYPE = MAP_ADDENTRYLISTENERWITHPREDICATE -RESPONSE_TYPE = 104 -RETRYABLE = False - - -def calculate_size(name, predicate, include_value, listener_flags, local_only): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(predicate) - data_size += BOOLEAN_SIZE_IN_BYTES - data_size += INT_SIZE_IN_BYTES - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE, EVENT_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil + +# hex: 0x011700 +_REQUEST_MESSAGE_TYPE = 71424 +# hex: 0x011701 +_RESPONSE_MESSAGE_TYPE = 71425 +# hex: 0x011702 +_EVENT_ENTRY_MESSAGE_TYPE = 71426 + +_REQUEST_INCLUDE_VALUE_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_LISTENER_FLAGS_OFFSET = _REQUEST_INCLUDE_VALUE_OFFSET + BOOLEAN_SIZE_IN_BYTES +_REQUEST_LOCAL_ONLY_OFFSET = _REQUEST_LISTENER_FLAGS_OFFSET + INT_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_LOCAL_ONLY_OFFSET + BOOLEAN_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE +_EVENT_ENTRY_EVENT_TYPE_OFFSET = EVENT_HEADER_SIZE +_EVENT_ENTRY_UUID_OFFSET = _EVENT_ENTRY_EVENT_TYPE_OFFSET + INT_SIZE_IN_BYTES +_EVENT_ENTRY_NUMBER_OF_AFFECTED_ENTRIES_OFFSET = _EVENT_ENTRY_UUID_OFFSET + UUID_SIZE_IN_BYTES def encode_request(name, predicate, include_value, listener_flags, local_only): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, predicate, include_value, listener_flags, local_only)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(predicate) - client_message.append_bool(include_value) - client_message.append_int(listener_flags) - client_message.append_bool(local_only) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_str() - return parameters - - -def handle(client_message, handle_event_entry=None, to_object=None): - """ Event handler """ - message_type = client_message.get_message_type() - if message_type == EVENT_ENTRY and handle_event_entry 
is not None: - key = None - if not client_message.read_bool(): - key = client_message.read_data() - value = None - if not client_message.read_bool(): - value = client_message.read_data() - old_value = None - if not client_message.read_bool(): - old_value = client_message.read_data() - merging_value = None - if not client_message.read_bool(): - merging_value = client_message.read_data() - event_type = client_message.read_int() - uuid = client_message.read_str() - number_of_affected_entries = client_message.read_int() - handle_event_entry(key=key, value=value, old_value=old_value, merging_value=merging_value, event_type=event_type, uuid=uuid, number_of_affected_entries=number_of_affected_entries) + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_INCLUDE_VALUE_OFFSET, include_value) + FixSizedTypesCodec.encode_int(buf, _REQUEST_LISTENER_FLAGS_OFFSET, listener_flags) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_LOCAL_ONLY_OFFSET, local_only) + StringCodec.encode(buf, name) + DataCodec.encode(buf, predicate, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) + + +def handle(msg, handle_entry_event=None): + message_type = msg.get_message_type() + if message_type == _EVENT_ENTRY_MESSAGE_TYPE and handle_entry_event is not None: + initial_frame = msg.next_frame() + event_type = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ENTRY_EVENT_TYPE_OFFSET) + uuid = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _EVENT_ENTRY_UUID_OFFSET) + number_of_affected_entries = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ENTRY_NUMBER_OF_AFFECTED_ENTRIES_OFFSET) + key = CodecUtil.decode_nullable(msg, DataCodec.decode) + value = CodecUtil.decode_nullable(msg, DataCodec.decode) + old_value = CodecUtil.decode_nullable(msg, DataCodec.decode) + merging_value = CodecUtil.decode_nullable(msg, DataCodec.decode) + handle_entry_event(key, value, old_value, merging_value, event_type, uuid, number_of_affected_entries) + return diff --git a/hazelcast/protocol/codec/map_add_index_codec.py b/hazelcast/protocol/codec/map_add_index_codec.py index 00fd94dbcb..299df0cb2b 100644 --- a/hazelcast/protocol/codec/map_add_index_codec.py +++ b/hazelcast/protocol/codec/map_add_index_codec.py @@ -1,31 +1,17 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.codec.custom.index_config_codec import IndexConfigCodec -REQUEST_TYPE = MAP_ADDINDEX -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x012900 +_REQUEST_MESSAGE_TYPE = 76032 +# hex: 0x012901 +_RESPONSE_MESSAGE_TYPE = 76033 +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE -def calculate_size(name, attribute, ordered): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(attribute) - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size - -def encode_request(name, attribute, ordered): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, attribute, ordered)) - client_message.set_message_type(REQUEST_TYPE) - 
client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(attribute) - client_message.append_bool(ordered) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode +def encode_request(name, index_config): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + IndexConfigCodec.encode(buf, index_config, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/map_add_interceptor_codec.py b/hazelcast/protocol/codec/map_add_interceptor_codec.py index 3858b4b865..f996ec0939 100644 --- a/hazelcast/protocol/codec/map_add_interceptor_codec.py +++ b/hazelcast/protocol/codec/map_add_interceptor_codec.py @@ -1,33 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MAP_ADDINTERCEPTOR -RESPONSE_TYPE = 104 -RETRYABLE = False +# hex: 0x011400 +_REQUEST_MESSAGE_TYPE = 70656 +# hex: 0x011401 +_RESPONSE_MESSAGE_TYPE = 70657 - -def calculate_size(name, interceptor): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(interceptor) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name, interceptor): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, interceptor)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(interceptor) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, interceptor, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_str() - return parameters +def decode_response(msg): + msg.next_frame() + return StringCodec.decode(msg) diff --git a/hazelcast/protocol/codec/map_add_near_cache_entry_listener_codec.py b/hazelcast/protocol/codec/map_add_near_cache_entry_listener_codec.py deleted file mode 100644 index 2eec4b5543..0000000000 --- a/hazelcast/protocol/codec/map_add_near_cache_entry_listener_codec.py +++ /dev/null @@ -1,54 +0,0 @@ -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * -from hazelcast.protocol.event_response_const import * -from hazelcast.serialization.bits import * -from hazelcast.six.moves import range - -REQUEST_TYPE = MAP_ADDNEARCACHEENTRYLISTENER -RESPONSE_TYPE = 104 -RETRYABLE = False - - -def calculate_size(name, listener_flags, local_only): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size - - -def encode_request(name, listener_flags, local_only): - """ Encode request into client_message""" - client_message = 
ClientMessage(payload_size=calculate_size(name, listener_flags, local_only)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(listener_flags) - client_message.append_bool(local_only) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_str() - return parameters - - -def handle(client_message, handle_event_imap_invalidation=None, handle_event_imap_batch_invalidation=None, to_object=None): - """ Event handler """ - message_type = client_message.get_message_type() - if message_type == EVENT_IMAPINVALIDATION and handle_event_imap_invalidation is not None: - key = None - if not client_message.read_bool(): - key = client_message.read_data() - handle_event_imap_invalidation(key=key) - if message_type == EVENT_IMAPBATCHINVALIDATION and handle_event_imap_batch_invalidation is not None: - keys_size = client_message.read_int() - keys = [] - for _ in range(0, keys_size): - keys_item = client_message.read_data() - keys.append(keys_item) - handle_event_imap_batch_invalidation(keys=keys) diff --git a/hazelcast/protocol/codec/map_add_near_cache_invalidation_listener_codec.py b/hazelcast/protocol/codec/map_add_near_cache_invalidation_listener_codec.py new file mode 100644 index 0000000000..09c02ae5ed --- /dev/null +++ b/hazelcast/protocol/codec/map_add_near_cache_invalidation_listener_codec.py @@ -0,0 +1,59 @@ +from hazelcast.serialization.bits import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE, EVENT_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import ListUUIDCodec +from hazelcast.protocol.builtin import ListLongCodec + +# hex: 0x013F00 +_REQUEST_MESSAGE_TYPE = 81664 +# hex: 0x013F01 +_RESPONSE_MESSAGE_TYPE = 81665 +# hex: 0x013F02 +_EVENT_I_MAP_INVALIDATION_MESSAGE_TYPE = 81666 +# hex: 0x013F03 +_EVENT_I_MAP_BATCH_INVALIDATION_MESSAGE_TYPE = 81667 + +_REQUEST_LISTENER_FLAGS_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_LOCAL_ONLY_OFFSET = _REQUEST_LISTENER_FLAGS_OFFSET + INT_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_LOCAL_ONLY_OFFSET + BOOLEAN_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE +_EVENT_I_MAP_INVALIDATION_SOURCE_UUID_OFFSET = EVENT_HEADER_SIZE +_EVENT_I_MAP_INVALIDATION_PARTITION_UUID_OFFSET = _EVENT_I_MAP_INVALIDATION_SOURCE_UUID_OFFSET + UUID_SIZE_IN_BYTES +_EVENT_I_MAP_INVALIDATION_SEQUENCE_OFFSET = _EVENT_I_MAP_INVALIDATION_PARTITION_UUID_OFFSET + UUID_SIZE_IN_BYTES + + +def encode_request(name, listener_flags, local_only): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_int(buf, _REQUEST_LISTENER_FLAGS_OFFSET, listener_flags) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_LOCAL_ONLY_OFFSET, local_only) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) + + +def handle(msg, 
handle_i_map_invalidation_event=None, handle_i_map_batch_invalidation_event=None): + message_type = msg.get_message_type() + if message_type == _EVENT_I_MAP_INVALIDATION_MESSAGE_TYPE and handle_i_map_invalidation_event is not None: + initial_frame = msg.next_frame() + source_uuid = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _EVENT_I_MAP_INVALIDATION_SOURCE_UUID_OFFSET) + partition_uuid = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _EVENT_I_MAP_INVALIDATION_PARTITION_UUID_OFFSET) + sequence = FixSizedTypesCodec.decode_long(initial_frame.buf, _EVENT_I_MAP_INVALIDATION_SEQUENCE_OFFSET) + key = CodecUtil.decode_nullable(msg, DataCodec.decode) + handle_i_map_invalidation_event(key, source_uuid, partition_uuid, sequence) + return + if message_type == _EVENT_I_MAP_BATCH_INVALIDATION_MESSAGE_TYPE and handle_i_map_batch_invalidation_event is not None: + msg.next_frame() + keys = ListMultiFrameCodec.decode(msg, DataCodec.decode) + source_uuids = ListUUIDCodec.decode(msg) + partition_uuids = ListUUIDCodec.decode(msg) + sequences = ListLongCodec.decode(msg) + handle_i_map_batch_invalidation_event(keys, source_uuids, partition_uuids, sequences) + return diff --git a/hazelcast/protocol/codec/map_add_partition_lost_listener_codec.py b/hazelcast/protocol/codec/map_add_partition_lost_listener_codec.py index 6f2d3b46c3..d96f1f0b00 100644 --- a/hazelcast/protocol/codec/map_add_partition_lost_listener_codec.py +++ b/hazelcast/protocol/codec/map_add_partition_lost_listener_codec.py @@ -1,43 +1,39 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * -from hazelcast.protocol.event_response_const import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE, EVENT_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = MAP_ADDPARTITIONLOSTLISTENER -RESPONSE_TYPE = 104 -RETRYABLE = False +# hex: 0x011B00 +_REQUEST_MESSAGE_TYPE = 72448 +# hex: 0x011B01 +_RESPONSE_MESSAGE_TYPE = 72449 +# hex: 0x011B02 +_EVENT_MAP_PARTITION_LOST_MESSAGE_TYPE = 72450 - -def calculate_size(name, local_only): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size +_REQUEST_LOCAL_ONLY_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_LOCAL_ONLY_OFFSET + BOOLEAN_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE +_EVENT_MAP_PARTITION_LOST_PARTITION_ID_OFFSET = EVENT_HEADER_SIZE +_EVENT_MAP_PARTITION_LOST_UUID_OFFSET = _EVENT_MAP_PARTITION_LOST_PARTITION_ID_OFFSET + INT_SIZE_IN_BYTES def encode_request(name, local_only): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, local_only)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_bool(local_only) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_str() - return parameters - - -def handle(client_message, handle_event_map_partition_lost=None, to_object=None): - """ Event handler """ - message_type = 
client_message.get_message_type() - if message_type == EVENT_MAPPARTITIONLOST and handle_event_map_partition_lost is not None: - partition_id = client_message.read_int() - uuid = client_message.read_str() - handle_event_map_partition_lost(partition_id=partition_id, uuid=uuid) + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_LOCAL_ONLY_OFFSET, local_only) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) + + +def handle(msg, handle_map_partition_lost_event=None): + message_type = msg.get_message_type() + if message_type == _EVENT_MAP_PARTITION_LOST_MESSAGE_TYPE and handle_map_partition_lost_event is not None: + initial_frame = msg.next_frame() + partition_id = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_MAP_PARTITION_LOST_PARTITION_ID_OFFSET) + uuid = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _EVENT_MAP_PARTITION_LOST_UUID_OFFSET) + handle_map_partition_lost_event(partition_id, uuid) + return diff --git a/hazelcast/protocol/codec/map_aggregate_codec.py b/hazelcast/protocol/codec/map_aggregate_codec.py new file mode 100644 index 0000000000..52c6273204 --- /dev/null +++ b/hazelcast/protocol/codec/map_aggregate_codec.py @@ -0,0 +1,23 @@ +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil + +# hex: 0x013900 +_REQUEST_MESSAGE_TYPE = 80128 +# hex: 0x013901 +_RESPONSE_MESSAGE_TYPE = 80129 + +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE + + +def encode_request(name, aggregator): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, aggregator, True) + return OutboundMessage(buf, True) + + +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_aggregate_with_predicate_codec.py b/hazelcast/protocol/codec/map_aggregate_with_predicate_codec.py new file mode 100644 index 0000000000..1aff715f90 --- /dev/null +++ b/hazelcast/protocol/codec/map_aggregate_with_predicate_codec.py @@ -0,0 +1,24 @@ +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil + +# hex: 0x013A00 +_REQUEST_MESSAGE_TYPE = 80384 +# hex: 0x013A01 +_RESPONSE_MESSAGE_TYPE = 80385 + +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE + + +def encode_request(name, aggregator, predicate): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, aggregator) + DataCodec.encode(buf, predicate, True) + return OutboundMessage(buf, True) + + +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_clear_codec.py b/hazelcast/protocol/codec/map_clear_codec.py index 8a5b0819bc..f7fbca6c21 100644 --- a/hazelcast/protocol/codec/map_clear_codec.py +++ b/hazelcast/protocol/codec/map_clear_codec.py @@ -1,27 +1,15 @@ -from 
hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = MAP_CLEAR -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x012D00 +_REQUEST_MESSAGE_TYPE = 77056 +# hex: 0x012D01 +_RESPONSE_MESSAGE_TYPE = 77057 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/map_clear_near_cache_codec.py b/hazelcast/protocol/codec/map_clear_near_cache_codec.py deleted file mode 100644 index 54efb8fe3f..0000000000 --- a/hazelcast/protocol/codec/map_clear_near_cache_codec.py +++ /dev/null @@ -1,30 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.custom_codec import * -from hazelcast.protocol.codec.map_message_type import * - -REQUEST_TYPE = MAP_CLEARNEARCACHE -RESPONSE_TYPE = 100 -RETRYABLE = False - - -def calculate_size(name, target): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_address(target) - return data_size - - -def encode_request(name, target): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, target)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - AddressCodec.encode(client_message, target) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode diff --git a/hazelcast/protocol/codec/map_contains_key_codec.py b/hazelcast/protocol/codec/map_contains_key_codec.py index a22209a9a4..114afe1b63 100644 --- a/hazelcast/protocol/codec/map_contains_key_codec.py +++ b/hazelcast/protocol/codec/map_contains_key_codec.py @@ -1,35 +1,27 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MAP_CONTAINSKEY -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x010600 +_REQUEST_MESSAGE_TYPE = 67072 +# hex: 0x010601 +_RESPONSE_MESSAGE_TYPE = 67073 - -def calculate_size(name, key, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += 
calculate_size_data(key) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, key, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/map_contains_value_codec.py b/hazelcast/protocol/codec/map_contains_value_codec.py index 32c35dedc4..8f3053b5cc 100644 --- a/hazelcast/protocol/codec/map_contains_value_codec.py +++ b/hazelcast/protocol/codec/map_contains_value_codec.py @@ -1,33 +1,24 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MAP_CONTAINSVALUE -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x010700 +_REQUEST_MESSAGE_TYPE = 67328 +# hex: 0x010701 +_RESPONSE_MESSAGE_TYPE = 67329 - -def calculate_size(name, value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(value) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(value) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/map_delete_codec.py b/hazelcast/protocol/codec/map_delete_codec.py index 
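
Call sites only need the encode/decode pair; sending the OutboundMessage and handing the reply back is the invocation layer's job. Below is a hypothetical helper around the map.containsKey codec above; the function names are made up, key_data is assumed to already be serialized Data, and response_msg is assumed to be the reply message supplied by the client.

from hazelcast.protocol.codec import map_contains_key_codec

def build_contains_key_request(name, key_data, thread_id):
    # Returns an OutboundMessage; the codec marks this read-only operation retryable,
    # as the old RETRYABLE constant did.
    return map_contains_key_codec.encode_request(name, key_data, thread_id)

def read_contains_key_response(response_msg):
    # The codec reads a boolean straight out of the initial response frame.
    return map_contains_key_codec.decode_response(response_msg)
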
ef347e8a2e..78c7ec16e4 100644 --- a/hazelcast/protocol/codec/map_delete_codec.py +++ b/hazelcast/protocol/codec/map_delete_codec.py @@ -1,31 +1,21 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MAP_DELETE -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x010900 +_REQUEST_MESSAGE_TYPE = 67840 +# hex: 0x010901 +_RESPONSE_MESSAGE_TYPE = 67841 - -def calculate_size(name, key, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, key, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/map_entries_with_paging_predicate_codec.py b/hazelcast/protocol/codec/map_entries_with_paging_predicate_codec.py deleted file mode 100644 index 30efd6e3a4..0000000000 --- a/hazelcast/protocol/codec/map_entries_with_paging_predicate_codec.py +++ /dev/null @@ -1,40 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.map_message_type import * -from hazelcast.six.moves import range - -REQUEST_TYPE = MAP_ENTRIESWITHPAGINGPREDICATE -RESPONSE_TYPE = 117 -RETRYABLE = False - - -def calculate_size(name, predicate): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(predicate) - return data_size - - -def encode_request(name, predicate): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, predicate)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(predicate) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = (client_message.read_data(), client_message.read_data()) - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - 
return parameters diff --git a/hazelcast/protocol/codec/map_entries_with_predicate_codec.py b/hazelcast/protocol/codec/map_entries_with_predicate_codec.py index bb4ab59fce..5dd41fe868 100644 --- a/hazelcast/protocol/codec/map_entries_with_predicate_codec.py +++ b/hazelcast/protocol/codec/map_entries_with_predicate_codec.py @@ -1,43 +1,23 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import EntryListCodec -REQUEST_TYPE = MAP_ENTRIESWITHPREDICATE -RESPONSE_TYPE = 117 -RETRYABLE = False +# hex: 0x012800 +_REQUEST_MESSAGE_TYPE = 75776 +# hex: 0x012801 +_RESPONSE_MESSAGE_TYPE = 75777 - -def calculate_size(name, predicate): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(predicate) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name, predicate): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, predicate)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(predicate) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = (client_message.read_data(), client_message.read_data()) - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters - + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, predicate, True) + return OutboundMessage(buf, True) +def decode_response(msg): + msg.next_frame() + return EntryListCodec.decode(msg, DataCodec.decode, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_entry_set_codec.py b/hazelcast/protocol/codec/map_entry_set_codec.py index 5c917491e4..72e490e143 100644 --- a/hazelcast/protocol/codec/map_entry_set_codec.py +++ b/hazelcast/protocol/codec/map_entry_set_codec.py @@ -1,38 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import EntryListCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MAP_ENTRYSET -RESPONSE_TYPE = 117 -RETRYABLE = False +# hex: 0x012500 +_REQUEST_MESSAGE_TYPE = 75008 +# hex: 0x012501 +_RESPONSE_MESSAGE_TYPE = 75009 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE 
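
Where the old codecs wrapped results in ImmutableLazyDataList and deserialized through to_object, the new decode_response functions return the raw entry list and leave deserialization to the caller. A hypothetical post-processing step, assuming EntryListCodec.decode yields (key, value) Data pairs and that to_object is the client's Data-to-object deserializer:

from hazelcast.protocol.codec import map_entries_with_predicate_codec

def decode_entries(response_msg, to_object):
    # Both inputs are assumptions for illustration: the reply message and the
    # serialization service's deserializer.
    entries = map_entries_with_predicate_codec.decode_response(response_msg)
    return [(to_object(key_data), to_object(value_data)) for key_data, value_data in entries]
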
= REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = (client_message.read_data(), client_message.read_data()) - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return EntryListCodec.decode(msg, DataCodec.decode, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_event_journal_read_codec.py b/hazelcast/protocol/codec/map_event_journal_read_codec.py new file mode 100644 index 0000000000..55d3be0301 --- /dev/null +++ b/hazelcast/protocol/codec/map_event_journal_read_codec.py @@ -0,0 +1,41 @@ +from hazelcast.serialization.bits import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import LongArrayCodec + +# hex: 0x014200 +_REQUEST_MESSAGE_TYPE = 82432 +# hex: 0x014201 +_RESPONSE_MESSAGE_TYPE = 82433 + +_REQUEST_START_SEQUENCE_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_MIN_SIZE_OFFSET = _REQUEST_START_SEQUENCE_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_MAX_SIZE_OFFSET = _REQUEST_MIN_SIZE_OFFSET + INT_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_MAX_SIZE_OFFSET + INT_SIZE_IN_BYTES +_RESPONSE_READ_COUNT_OFFSET = RESPONSE_HEADER_SIZE +_RESPONSE_NEXT_SEQ_OFFSET = _RESPONSE_READ_COUNT_OFFSET + INT_SIZE_IN_BYTES + + +def encode_request(name, start_sequence, min_size, max_size, predicate, projection): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_START_SEQUENCE_OFFSET, start_sequence) + FixSizedTypesCodec.encode_int(buf, _REQUEST_MIN_SIZE_OFFSET, min_size) + FixSizedTypesCodec.encode_int(buf, _REQUEST_MAX_SIZE_OFFSET, max_size) + StringCodec.encode(buf, name) + CodecUtil.encode_nullable(buf, predicate, DataCodec.encode) + CodecUtil.encode_nullable(buf, projection, DataCodec.encode, True) + return OutboundMessage(buf, True) + + +def decode_response(msg): + initial_frame = msg.next_frame() + response = dict() + response["read_count"] = FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_READ_COUNT_OFFSET) + response["next_seq"] = FixSizedTypesCodec.decode_long(initial_frame.buf, _RESPONSE_NEXT_SEQ_OFFSET) + response["items"] = ListMultiFrameCodec.decode(msg, DataCodec.decode) + response["item_seqs"] = CodecUtil.decode_nullable(msg, LongArrayCodec.decode) + return response diff --git a/hazelcast/protocol/codec/map_event_journal_subscribe_codec.py b/hazelcast/protocol/codec/map_event_journal_subscribe_codec.py new file mode 
100644 index 0000000000..dc85d17e3a --- /dev/null +++ b/hazelcast/protocol/codec/map_event_journal_subscribe_codec.py @@ -0,0 +1,27 @@ +from hazelcast.serialization.bits import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec + +# hex: 0x014100 +_REQUEST_MESSAGE_TYPE = 82176 +# hex: 0x014101 +_RESPONSE_MESSAGE_TYPE = 82177 + +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_OLDEST_SEQUENCE_OFFSET = RESPONSE_HEADER_SIZE +_RESPONSE_NEWEST_SEQUENCE_OFFSET = _RESPONSE_OLDEST_SEQUENCE_OFFSET + LONG_SIZE_IN_BYTES + + +def encode_request(name): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) + + +def decode_response(msg): + initial_frame = msg.next_frame() + response = dict() + response["oldest_sequence"] = FixSizedTypesCodec.decode_long(initial_frame.buf, _RESPONSE_OLDEST_SEQUENCE_OFFSET) + response["newest_sequence"] = FixSizedTypesCodec.decode_long(initial_frame.buf, _RESPONSE_NEWEST_SEQUENCE_OFFSET) + return response diff --git a/hazelcast/protocol/codec/map_evict_all_codec.py b/hazelcast/protocol/codec/map_evict_all_codec.py index 9f9e28524b..4fae1d34bc 100644 --- a/hazelcast/protocol/codec/map_evict_all_codec.py +++ b/hazelcast/protocol/codec/map_evict_all_codec.py @@ -1,27 +1,15 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = MAP_EVICTALL -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x011F00 +_REQUEST_MESSAGE_TYPE = 73472 +# hex: 0x011F01 +_RESPONSE_MESSAGE_TYPE = 73473 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/map_evict_codec.py b/hazelcast/protocol/codec/map_evict_codec.py index 25ce6d6226..0e6f8a4af5 100644 --- a/hazelcast/protocol/codec/map_evict_codec.py +++ b/hazelcast/protocol/codec/map_evict_codec.py @@ -1,35 +1,27 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MAP_EVICT -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 
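
Responses with several parameters are now returned as plain dicts keyed by parameter name, as in the event-journal codecs above. A hypothetical consumer of the subscribe response (response_msg is assumed to be the reply message provided by the invocation layer):

from hazelcast.protocol.codec import map_event_journal_subscribe_codec

def read_journal_bounds(response_msg):
    # Keys match the generated decode_response above.
    response = map_event_journal_subscribe_codec.decode_response(response_msg)
    return response["oldest_sequence"], response["newest_sequence"]
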
0x011E00 +_REQUEST_MESSAGE_TYPE = 73216 +# hex: 0x011E01 +_RESPONSE_MESSAGE_TYPE = 73217 - -def calculate_size(name, key, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, key, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/map_execute_on_all_keys_codec.py b/hazelcast/protocol/codec/map_execute_on_all_keys_codec.py index 0b28cc1a7a..f2daf9e804 100644 --- a/hazelcast/protocol/codec/map_execute_on_all_keys_codec.py +++ b/hazelcast/protocol/codec/map_execute_on_all_keys_codec.py @@ -1,40 +1,23 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import EntryListCodec -REQUEST_TYPE = MAP_EXECUTEONALLKEYS -RESPONSE_TYPE = 117 -RETRYABLE = False +# hex: 0x013000 +_REQUEST_MESSAGE_TYPE = 77824 +# hex: 0x013001 +_RESPONSE_MESSAGE_TYPE = 77825 - -def calculate_size(name, entry_processor): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(entry_processor) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name, entry_processor): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, entry_processor)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(entry_processor) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, entry_processor, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = 
dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = (client_message.read_data(), client_message.read_data()) - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return EntryListCodec.decode(msg, DataCodec.decode, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_execute_on_key_codec.py b/hazelcast/protocol/codec/map_execute_on_key_codec.py index 1ff9432782..6697a14b0f 100644 --- a/hazelcast/protocol/codec/map_execute_on_key_codec.py +++ b/hazelcast/protocol/codec/map_execute_on_key_codec.py @@ -1,38 +1,28 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = MAP_EXECUTEONKEY -RESPONSE_TYPE = 105 -RETRYABLE = False +# hex: 0x012E00 +_REQUEST_MESSAGE_TYPE = 77312 +# hex: 0x012E01 +_RESPONSE_MESSAGE_TYPE = 77313 - -def calculate_size(name, entry_processor, key, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(entry_processor) - data_size += calculate_size_data(key) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, entry_processor, key, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, entry_processor, key, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(entry_processor) - client_message.append_data(key) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, entry_processor) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_execute_on_keys_codec.py b/hazelcast/protocol/codec/map_execute_on_keys_codec.py index e45695869b..e554558bb9 100644 --- a/hazelcast/protocol/codec/map_execute_on_keys_codec.py +++ b/hazelcast/protocol/codec/map_execute_on_keys_codec.py @@ -1,46 +1,25 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.map_message_type import * -from hazelcast.six.moves import range +from 
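
Nullable response values go through CodecUtil.decode_nullable, which presumably yields None when the member sent a null frame; as elsewhere in this patch, the codec hands back serialized Data rather than deserialized objects. A hypothetical unwrapping helper, with to_object again assumed to be the client's deserializer:

from hazelcast.protocol.codec import map_execute_on_key_codec

def decode_execute_on_key(response_msg, to_object):
    result_data = map_execute_on_key_codec.decode_response(response_msg)
    # None here corresponds to the entry processor returning null on the member side.
    return None if result_data is None else to_object(result_data)
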
hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import EntryListCodec -REQUEST_TYPE = MAP_EXECUTEONKEYS -RESPONSE_TYPE = 117 -RETRYABLE = False +# hex: 0x013200 +_REQUEST_MESSAGE_TYPE = 78336 +# hex: 0x013201 +_RESPONSE_MESSAGE_TYPE = 78337 - -def calculate_size(name, entry_processor, keys): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(entry_processor) - data_size += INT_SIZE_IN_BYTES - for keys_item in keys: - data_size += calculate_size_data(keys_item) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name, entry_processor, keys): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, entry_processor, keys)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(entry_processor) - client_message.append_int(len(keys)) - for keys_item in keys: - client_message.append_data(keys_item) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, entry_processor) + ListMultiFrameCodec.encode(buf, keys, DataCodec.encode, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = (client_message.read_data(), client_message.read_data()) - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return EntryListCodec.decode(msg, DataCodec.decode, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_execute_with_predicate_codec.py b/hazelcast/protocol/codec/map_execute_with_predicate_codec.py index 3c47419db9..4a54302b58 100644 --- a/hazelcast/protocol/codec/map_execute_with_predicate_codec.py +++ b/hazelcast/protocol/codec/map_execute_with_predicate_codec.py @@ -1,42 +1,24 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import EntryListCodec -REQUEST_TYPE = MAP_EXECUTEWITHPREDICATE -RESPONSE_TYPE = 117 -RETRYABLE = False +# hex: 0x013100 +_REQUEST_MESSAGE_TYPE = 78080 +# hex: 0x013101 +_RESPONSE_MESSAGE_TYPE = 78081 - -def calculate_size(name, entry_processor, predicate): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(entry_processor) - data_size += calculate_size_data(predicate) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def 
encode_request(name, entry_processor, predicate): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, entry_processor, predicate)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(entry_processor) - client_message.append_data(predicate) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, entry_processor) + DataCodec.encode(buf, predicate, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = (client_message.read_data(), client_message.read_data()) - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return EntryListCodec.decode(msg, DataCodec.decode, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_flush_codec.py b/hazelcast/protocol/codec/map_flush_codec.py index 482ac2110f..3ab3e8fc3e 100644 --- a/hazelcast/protocol/codec/map_flush_codec.py +++ b/hazelcast/protocol/codec/map_flush_codec.py @@ -1,27 +1,15 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = MAP_FLUSH -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x010A00 +_REQUEST_MESSAGE_TYPE = 68096 +# hex: 0x010A01 +_RESPONSE_MESSAGE_TYPE = 68097 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/map_force_unlock_codec.py b/hazelcast/protocol/codec/map_force_unlock_codec.py index 1f7030bcd3..9dd6410282 100644 --- a/hazelcast/protocol/codec/map_force_unlock_codec.py +++ b/hazelcast/protocol/codec/map_force_unlock_codec.py @@ -1,31 +1,21 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MAP_FORCEUNLOCK -RESPONSE_TYPE = 100 -RETRYABLE = True +# hex: 0x013300 
+_REQUEST_MESSAGE_TYPE = 78592 +# hex: 0x013301 +_RESPONSE_MESSAGE_TYPE = 78593 - -def calculate_size(name, key, reference_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_REFERENCE_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_REFERENCE_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, key, reference_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, reference_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_long(reference_id) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_REFERENCE_ID_OFFSET, reference_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, True) diff --git a/hazelcast/protocol/codec/map_get_all_codec.py b/hazelcast/protocol/codec/map_get_all_codec.py index 0bcee07e31..2758d7249d 100644 --- a/hazelcast/protocol/codec/map_get_all_codec.py +++ b/hazelcast/protocol/codec/map_get_all_codec.py @@ -1,44 +1,24 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import EntryListCodec -REQUEST_TYPE = MAP_GETALL -RESPONSE_TYPE = 117 -RETRYABLE = False +# hex: 0x012300 +_REQUEST_MESSAGE_TYPE = 74496 +# hex: 0x012301 +_RESPONSE_MESSAGE_TYPE = 74497 - -def calculate_size(name, keys): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - for keys_item in keys: - data_size += calculate_size_data(keys_item) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name, keys): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, keys)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(len(keys)) - for keys_item in keys: - client_message.append_data(keys_item) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + ListMultiFrameCodec.encode(buf, keys, DataCodec.encode, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = (client_message.read_data(), client_message.read_data()) - 
response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return EntryListCodec.decode(msg, DataCodec.decode, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_get_codec.py b/hazelcast/protocol/codec/map_get_codec.py index c4e311730b..37e3deae8d 100644 --- a/hazelcast/protocol/codec/map_get_codec.py +++ b/hazelcast/protocol/codec/map_get_codec.py @@ -1,36 +1,27 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = MAP_GET -RESPONSE_TYPE = 105 -RETRYABLE = True +# hex: 0x010200 +_REQUEST_MESSAGE_TYPE = 66048 +# hex: 0x010201 +_RESPONSE_MESSAGE_TYPE = 66049 - -def calculate_size(name, key, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, key, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_get_entry_view_codec.py b/hazelcast/protocol/codec/map_get_entry_view_codec.py index f78c641c6b..ef934460d9 100644 --- a/hazelcast/protocol/codec/map_get_entry_view_codec.py +++ b/hazelcast/protocol/codec/map_get_entry_view_codec.py @@ -1,37 +1,32 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.custom_codec import * -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.codec.custom.simple_entry_view_codec import SimpleEntryViewCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = MAP_GETENTRYVIEW -RESPONSE_TYPE = 111 -RETRYABLE = True +# 
hex: 0x011D00 +_REQUEST_MESSAGE_TYPE = 72960 +# hex: 0x011D01 +_RESPONSE_MESSAGE_TYPE = 72961 - -def calculate_size(name, key, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_MAX_IDLE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, key, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = EntryViewCodec.decode(client_message, to_object) - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + response = dict() + response["max_idle"] = FixSizedTypesCodec.decode_long(initial_frame.buf, _RESPONSE_MAX_IDLE_OFFSET) + response["response"] = CodecUtil.decode_nullable(msg, SimpleEntryViewCodec.decode) + return response diff --git a/hazelcast/protocol/codec/map_is_empty_codec.py b/hazelcast/protocol/codec/map_is_empty_codec.py index a8da2259ce..653003cd74 100644 --- a/hazelcast/protocol/codec/map_is_empty_codec.py +++ b/hazelcast/protocol/codec/map_is_empty_codec.py @@ -1,31 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = MAP_ISEMPTY -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x012B00 +_REQUEST_MESSAGE_TYPE = 76544 +# hex: 0x012B01 +_RESPONSE_MESSAGE_TYPE = 76545 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def 
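
The entry-view response now carries two values: the possibly absent SimpleEntryView and a max_idle figure decoded from the initial frame. A hypothetical reader of that dict (response_msg is an assumed reply message):

from hazelcast.protocol.codec import map_get_entry_view_codec

def read_entry_view(response_msg):
    # "response" may be None when no entry view exists for the requested key.
    decoded = map_get_entry_view_codec.decode_response(response_msg)
    return decoded["response"], decoded["max_idle"]
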
decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/map_is_locked_codec.py b/hazelcast/protocol/codec/map_is_locked_codec.py index f46a10c0a5..8e7321ad7b 100644 --- a/hazelcast/protocol/codec/map_is_locked_codec.py +++ b/hazelcast/protocol/codec/map_is_locked_codec.py @@ -1,33 +1,24 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MAP_ISLOCKED -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x011200 +_REQUEST_MESSAGE_TYPE = 70144 +# hex: 0x011201 +_RESPONSE_MESSAGE_TYPE = 70145 - -def calculate_size(name, key): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, key): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/map_key_set_codec.py b/hazelcast/protocol/codec/map_key_set_codec.py index 95a43c47f8..e94247fa92 100644 --- a/hazelcast/protocol/codec/map_key_set_codec.py +++ b/hazelcast/protocol/codec/map_key_set_codec.py @@ -1,38 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MAP_KEYSET -RESPONSE_TYPE = 106 -RETRYABLE = True +# hex: 0x012200 +_REQUEST_MESSAGE_TYPE = 74240 +# hex: 0x012201 +_RESPONSE_MESSAGE_TYPE = 74241 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - 
client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_key_set_with_paging_predicate_codec.py b/hazelcast/protocol/codec/map_key_set_with_paging_predicate_codec.py deleted file mode 100644 index 351ca7e62e..0000000000 --- a/hazelcast/protocol/codec/map_key_set_with_paging_predicate_codec.py +++ /dev/null @@ -1,40 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.map_message_type import * -from hazelcast.six.moves import range - -REQUEST_TYPE = MAP_KEYSETWITHPAGINGPREDICATE -RESPONSE_TYPE = 106 -RETRYABLE = True - - -def calculate_size(name, predicate): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(predicate) - return data_size - - -def encode_request(name, predicate): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, predicate)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(predicate) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters diff --git a/hazelcast/protocol/codec/map_key_set_with_predicate_codec.py b/hazelcast/protocol/codec/map_key_set_with_predicate_codec.py index cd16002f7f..5f69ea9025 100644 --- a/hazelcast/protocol/codec/map_key_set_with_predicate_codec.py +++ b/hazelcast/protocol/codec/map_key_set_with_predicate_codec.py @@ -1,40 +1,23 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec -REQUEST_TYPE = MAP_KEYSETWITHPREDICATE -RESPONSE_TYPE = 106 -RETRYABLE = True +# hex: 0x012600 +_REQUEST_MESSAGE_TYPE = 75264 +# hex: 0x012601 +_RESPONSE_MESSAGE_TYPE = 75265 - -def calculate_size(name, predicate): 
- """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(predicate) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name, predicate): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, predicate)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(predicate) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, predicate, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_load_all_codec.py b/hazelcast/protocol/codec/map_load_all_codec.py index cb6a9305f8..b0879d6781 100644 --- a/hazelcast/protocol/codec/map_load_all_codec.py +++ b/hazelcast/protocol/codec/map_load_all_codec.py @@ -1,29 +1,19 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = MAP_LOADALL -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x012000 +_REQUEST_MESSAGE_TYPE = 73728 +# hex: 0x012001 +_RESPONSE_MESSAGE_TYPE = 73729 - -def calculate_size(name, replace_existing_values): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size +_REQUEST_REPLACE_EXISTING_VALUES_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_REPLACE_EXISTING_VALUES_OFFSET + BOOLEAN_SIZE_IN_BYTES def encode_request(name, replace_existing_values): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, replace_existing_values)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_bool(replace_existing_values) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_REPLACE_EXISTING_VALUES_OFFSET, replace_existing_values) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/map_load_given_keys_codec.py b/hazelcast/protocol/codec/map_load_given_keys_codec.py index 9b9d918c17..6d658ba877 100644 --- a/hazelcast/protocol/codec/map_load_given_keys_codec.py +++ b/hazelcast/protocol/codec/map_load_given_keys_codec.py @@ -1,35 +1,22 
@@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MAP_LOADGIVENKEYS -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x012100 +_REQUEST_MESSAGE_TYPE = 73984 +# hex: 0x012101 +_RESPONSE_MESSAGE_TYPE = 73985 - -def calculate_size(name, keys, replace_existing_values): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - for keys_item in keys: - data_size += calculate_size_data(keys_item) - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size +_REQUEST_REPLACE_EXISTING_VALUES_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_REPLACE_EXISTING_VALUES_OFFSET + BOOLEAN_SIZE_IN_BYTES def encode_request(name, keys, replace_existing_values): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, keys, replace_existing_values)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(len(keys)) - for keys_item in keys: - client_message.append_data(keys_item) - client_message.append_bool(replace_existing_values) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_REPLACE_EXISTING_VALUES_OFFSET, replace_existing_values) + StringCodec.encode(buf, name) + ListMultiFrameCodec.encode(buf, keys, DataCodec.encode, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/map_lock_codec.py b/hazelcast/protocol/codec/map_lock_codec.py index 1f0fa528c6..85583b0d6c 100644 --- a/hazelcast/protocol/codec/map_lock_codec.py +++ b/hazelcast/protocol/codec/map_lock_codec.py @@ -1,35 +1,25 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MAP_LOCK -RESPONSE_TYPE = 100 -RETRYABLE = True +# hex: 0x011000 +_REQUEST_MESSAGE_TYPE = 69632 +# hex: 0x011001 +_RESPONSE_MESSAGE_TYPE = 69633 - -def calculate_size(name, key, thread_id, ttl, reference_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_TTL_OFFSET = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_REFERENCE_ID_OFFSET = _REQUEST_TTL_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_REFERENCE_ID_OFFSET + LONG_SIZE_IN_BYTES def 
encode_request(name, key, thread_id, ttl, reference_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, thread_id, ttl, reference_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_long(thread_id) - client_message.append_long(ttl) - client_message.append_long(reference_id) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TTL_OFFSET, ttl) + FixSizedTypesCodec.encode_long(buf, _REQUEST_REFERENCE_ID_OFFSET, reference_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, True) diff --git a/hazelcast/protocol/codec/map_message_type.py b/hazelcast/protocol/codec/map_message_type.py deleted file mode 100644 index 4bfef44322..0000000000 --- a/hazelcast/protocol/codec/map_message_type.py +++ /dev/null @@ -1,58 +0,0 @@ - -MAP_PUT = 0x0101 -MAP_GET = 0x0102 -MAP_REMOVE = 0x0103 -MAP_REPLACE = 0x0104 -MAP_REPLACEIFSAME = 0x0105 -MAP_CONTAINSKEY = 0x0109 -MAP_CONTAINSVALUE = 0x010a -MAP_REMOVEIFSAME = 0x010b -MAP_DELETE = 0x010c -MAP_FLUSH = 0x010d -MAP_TRYREMOVE = 0x010e -MAP_TRYPUT = 0x010f -MAP_PUTTRANSIENT = 0x0110 -MAP_PUTIFABSENT = 0x0111 -MAP_SET = 0x0112 -MAP_LOCK = 0x0113 -MAP_TRYLOCK = 0x0114 -MAP_ISLOCKED = 0x0115 -MAP_UNLOCK = 0x0116 -MAP_ADDINTERCEPTOR = 0x0117 -MAP_REMOVEINTERCEPTOR = 0x0118 -MAP_ADDENTRYLISTENERTOKEYWITHPREDICATE = 0x0119 -MAP_ADDENTRYLISTENERWITHPREDICATE = 0x011a -MAP_ADDENTRYLISTENERTOKEY = 0x011b -MAP_ADDENTRYLISTENER = 0x011c -MAP_ADDNEARCACHEENTRYLISTENER = 0x011d -MAP_REMOVEENTRYLISTENER = 0x011e -MAP_ADDPARTITIONLOSTLISTENER = 0x011f -MAP_REMOVEPARTITIONLOSTLISTENER = 0x0120 -MAP_GETENTRYVIEW = 0x0121 -MAP_EVICT = 0x0122 -MAP_EVICTALL = 0x0123 -MAP_LOADALL = 0x0124 -MAP_LOADGIVENKEYS = 0x0125 -MAP_KEYSET = 0x0126 -MAP_GETALL = 0x0127 -MAP_VALUES = 0x0128 -MAP_ENTRYSET = 0x0129 -MAP_KEYSETWITHPREDICATE = 0x012a -MAP_VALUESWITHPREDICATE = 0x012b -MAP_ENTRIESWITHPREDICATE = 0x012c -MAP_ADDINDEX = 0x012d -MAP_SIZE = 0x012e -MAP_ISEMPTY = 0x012f -MAP_PUTALL = 0x0130 -MAP_CLEAR = 0x0131 -MAP_EXECUTEONKEY = 0x0132 -MAP_SUBMITTOKEY = 0x0133 -MAP_EXECUTEONALLKEYS = 0x0134 -MAP_EXECUTEWITHPREDICATE = 0x0135 -MAP_EXECUTEONKEYS = 0x0136 -MAP_FORCEUNLOCK = 0x0137 -MAP_KEYSETWITHPAGINGPREDICATE = 0x0138 -MAP_VALUESWITHPAGINGPREDICATE = 0x0139 -MAP_ENTRIESWITHPAGINGPREDICATE = 0x013a -MAP_CLEARNEARCACHE = 0x013b -MAP_SETTTL = 0x0149 diff --git a/hazelcast/protocol/codec/map_project_codec.py b/hazelcast/protocol/codec/map_project_codec.py new file mode 100644 index 0000000000..1c5cfd4433 --- /dev/null +++ b/hazelcast/protocol/codec/map_project_codec.py @@ -0,0 +1,23 @@ +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec + +# hex: 0x013B00 +_REQUEST_MESSAGE_TYPE = 80640 +# hex: 0x013B01 +_RESPONSE_MESSAGE_TYPE = 80641 + +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE + + +def encode_request(name, projection): + buf = 
create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, projection, True) + return OutboundMessage(buf, True) + + +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode_contains_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_project_with_predicate_codec.py b/hazelcast/protocol/codec/map_project_with_predicate_codec.py new file mode 100644 index 0000000000..9ba5200014 --- /dev/null +++ b/hazelcast/protocol/codec/map_project_with_predicate_codec.py @@ -0,0 +1,24 @@ +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec + +# hex: 0x013C00 +_REQUEST_MESSAGE_TYPE = 80896 +# hex: 0x013C01 +_RESPONSE_MESSAGE_TYPE = 80897 + +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE + + +def encode_request(name, projection, predicate): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, projection) + DataCodec.encode(buf, predicate, True) + return OutboundMessage(buf, True) + + +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode_contains_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_put_all_codec.py b/hazelcast/protocol/codec/map_put_all_codec.py index 1ff710f7f8..9dfc8ae95d 100644 --- a/hazelcast/protocol/codec/map_put_all_codec.py +++ b/hazelcast/protocol/codec/map_put_all_codec.py @@ -1,36 +1,22 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * -from hazelcast import six +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import EntryListCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MAP_PUTALL -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x012C00 +_REQUEST_MESSAGE_TYPE = 76800 +# hex: 0x012C01 +_RESPONSE_MESSAGE_TYPE = 76801 +_REQUEST_TRIGGER_MAP_LOADER_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_TRIGGER_MAP_LOADER_OFFSET + BOOLEAN_SIZE_IN_BYTES -def calculate_size(name, entries): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - for key, val in six.iteritems(entries): - data_size += calculate_size_data(key) - data_size += calculate_size_data(val) - return data_size - -def encode_request(name, entries): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, entries)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(len(entries)) - for key, value in six.iteritems(entries): - client_message.append_data(key) - client_message.append_data(value) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode +def encode_request(name, entries, trigger_map_loader): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + 
FixSizedTypesCodec.encode_boolean(buf, _REQUEST_TRIGGER_MAP_LOADER_OFFSET, trigger_map_loader) + StringCodec.encode(buf, name) + EntryListCodec.encode(buf, entries, DataCodec.encode, DataCodec.encode, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/map_put_codec.py b/hazelcast/protocol/codec/map_put_codec.py index 38e1faa8a7..fa6919ea55 100644 --- a/hazelcast/protocol/codec/map_put_codec.py +++ b/hazelcast/protocol/codec/map_put_codec.py @@ -1,40 +1,30 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = MAP_PUT -RESPONSE_TYPE = 105 -RETRYABLE = False +# hex: 0x010100 +_REQUEST_MESSAGE_TYPE = 65792 +# hex: 0x010101 +_RESPONSE_MESSAGE_TYPE = 65793 - -def calculate_size(name, key, value, thread_id, ttl): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += calculate_size_data(value) - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_TTL_OFFSET = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_TTL_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, key, value, thread_id, ttl): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, value, thread_id, ttl)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_data(value) - client_message.append_long(thread_id) - client_message.append_long(ttl) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TTL_OFFSET, ttl) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_put_if_absent_codec.py b/hazelcast/protocol/codec/map_put_if_absent_codec.py index 257953fd9f..6d9cd16d37 100644 --- a/hazelcast/protocol/codec/map_put_if_absent_codec.py +++ b/hazelcast/protocol/codec/map_put_if_absent_codec.py @@ -1,40 +1,30 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import 
StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = MAP_PUTIFABSENT -RESPONSE_TYPE = 105 -RETRYABLE = False +# hex: 0x010E00 +_REQUEST_MESSAGE_TYPE = 69120 +# hex: 0x010E01 +_RESPONSE_MESSAGE_TYPE = 69121 - -def calculate_size(name, key, value, thread_id, ttl): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += calculate_size_data(value) - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_TTL_OFFSET = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_TTL_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, key, value, thread_id, ttl): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, value, thread_id, ttl)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_data(value) - client_message.append_long(thread_id) - client_message.append_long(ttl) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TTL_OFFSET, ttl) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_put_if_absent_with_max_idle_codec.py b/hazelcast/protocol/codec/map_put_if_absent_with_max_idle_codec.py new file mode 100644 index 0000000000..871e188525 --- /dev/null +++ b/hazelcast/protocol/codec/map_put_if_absent_with_max_idle_codec.py @@ -0,0 +1,32 @@ +from hazelcast.serialization.bits import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil + +# hex: 0x014600 +_REQUEST_MESSAGE_TYPE = 83456 +# hex: 0x014601 +_RESPONSE_MESSAGE_TYPE = 83457 + +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_TTL_OFFSET = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_MAX_IDLE_OFFSET = _REQUEST_TTL_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_MAX_IDLE_OFFSET + LONG_SIZE_IN_BYTES + + +def encode_request(name, key, value, thread_id, ttl, max_idle): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TTL_OFFSET, ttl) + FixSizedTypesCodec.encode_long(buf, _REQUEST_MAX_IDLE_OFFSET, max_idle) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, value, True) + return 
OutboundMessage(buf, False) + + +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_put_transient_codec.py b/hazelcast/protocol/codec/map_put_transient_codec.py index b55608bc31..035bad3db0 100644 --- a/hazelcast/protocol/codec/map_put_transient_codec.py +++ b/hazelcast/protocol/codec/map_put_transient_codec.py @@ -1,35 +1,24 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MAP_PUTTRANSIENT -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x010D00 +_REQUEST_MESSAGE_TYPE = 68864 +# hex: 0x010D01 +_RESPONSE_MESSAGE_TYPE = 68865 - -def calculate_size(name, key, value, thread_id, ttl): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += calculate_size_data(value) - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_TTL_OFFSET = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_TTL_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, key, value, thread_id, ttl): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, value, thread_id, ttl)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_data(value) - client_message.append_long(thread_id) - client_message.append_long(ttl) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TTL_OFFSET, ttl) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/map_put_transient_with_max_idle_codec.py b/hazelcast/protocol/codec/map_put_transient_with_max_idle_codec.py new file mode 100644 index 0000000000..f0a7dce75b --- /dev/null +++ b/hazelcast/protocol/codec/map_put_transient_with_max_idle_codec.py @@ -0,0 +1,32 @@ +from hazelcast.serialization.bits import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil + +# hex: 0x014500 +_REQUEST_MESSAGE_TYPE = 83200 +# hex: 0x014501 +_RESPONSE_MESSAGE_TYPE = 83201 + +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_TTL_OFFSET = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_MAX_IDLE_OFFSET = _REQUEST_TTL_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_MAX_IDLE_OFFSET + LONG_SIZE_IN_BYTES + + +def 
encode_request(name, key, value, thread_id, ttl, max_idle): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TTL_OFFSET, ttl) + FixSizedTypesCodec.encode_long(buf, _REQUEST_MAX_IDLE_OFFSET, max_idle) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_put_with_max_idle_codec.py b/hazelcast/protocol/codec/map_put_with_max_idle_codec.py new file mode 100644 index 0000000000..83196ffafe --- /dev/null +++ b/hazelcast/protocol/codec/map_put_with_max_idle_codec.py @@ -0,0 +1,32 @@ +from hazelcast.serialization.bits import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil + +# hex: 0x014400 +_REQUEST_MESSAGE_TYPE = 82944 +# hex: 0x014401 +_RESPONSE_MESSAGE_TYPE = 82945 + +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_TTL_OFFSET = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_MAX_IDLE_OFFSET = _REQUEST_TTL_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_MAX_IDLE_OFFSET + LONG_SIZE_IN_BYTES + + +def encode_request(name, key, value, thread_id, ttl, max_idle): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TTL_OFFSET, ttl) + FixSizedTypesCodec.encode_long(buf, _REQUEST_MAX_IDLE_OFFSET, max_idle) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_remove_all_codec.py b/hazelcast/protocol/codec/map_remove_all_codec.py new file mode 100644 index 0000000000..2df01d2b07 --- /dev/null +++ b/hazelcast/protocol/codec/map_remove_all_codec.py @@ -0,0 +1,17 @@ +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec + +# hex: 0x013E00 +_REQUEST_MESSAGE_TYPE = 81408 +# hex: 0x013E01 +_RESPONSE_MESSAGE_TYPE = 81409 + +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE + + +def encode_request(name, predicate): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, predicate, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/map_remove_codec.py b/hazelcast/protocol/codec/map_remove_codec.py index 61850d484d..2001d5fa87 100644 --- a/hazelcast/protocol/codec/map_remove_codec.py +++ b/hazelcast/protocol/codec/map_remove_codec.py @@ -1,36 +1,27 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from 
hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = MAP_REMOVE -RESPONSE_TYPE = 105 -RETRYABLE = False +# hex: 0x010300 +_REQUEST_MESSAGE_TYPE = 66304 +# hex: 0x010301 +_RESPONSE_MESSAGE_TYPE = 66305 - -def calculate_size(name, key, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, key, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_remove_entry_listener_codec.py b/hazelcast/protocol/codec/map_remove_entry_listener_codec.py index ac79243d5d..d81b1ead95 100644 --- a/hazelcast/protocol/codec/map_remove_entry_listener_codec.py +++ b/hazelcast/protocol/codec/map_remove_entry_listener_codec.py @@ -1,28 +1,25 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = MAP_REMOVEENTRYLISTENER -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x011A00 +_REQUEST_MESSAGE_TYPE = 72192 +# hex: 0x011A01 +_RESPONSE_MESSAGE_TYPE = 72193 - -def calculate_size(name, registration_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(registration_id) - return data_size +_REQUEST_REGISTRATION_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_REGISTRATION_ID_OFFSET + UUID_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, registration_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, registration_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(registration_id) - client_message.update_frame_length() - return client_message + buf = 
create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_REGISTRATION_ID_OFFSET, registration_id) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) + -# Empty decode_response because response is not used to determine the return value. +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/map_remove_if_same_codec.py b/hazelcast/protocol/codec/map_remove_if_same_codec.py index acd055c534..8bace723da 100644 --- a/hazelcast/protocol/codec/map_remove_if_same_codec.py +++ b/hazelcast/protocol/codec/map_remove_if_same_codec.py @@ -1,37 +1,28 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MAP_REMOVEIFSAME -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x010800 +_REQUEST_MESSAGE_TYPE = 67584 +# hex: 0x010801 +_RESPONSE_MESSAGE_TYPE = 67585 - -def calculate_size(name, key, value, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += calculate_size_data(value) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, key, value, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, value, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_data(value) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/map_remove_interceptor_codec.py b/hazelcast/protocol/codec/map_remove_interceptor_codec.py index 2c937c89de..93d14e1c16 100644 --- a/hazelcast/protocol/codec/map_remove_interceptor_codec.py +++ b/hazelcast/protocol/codec/map_remove_interceptor_codec.py @@ -1,33 +1,23 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from 
hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = MAP_REMOVEINTERCEPTOR -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x011500 +_REQUEST_MESSAGE_TYPE = 70912 +# hex: 0x011501 +_RESPONSE_MESSAGE_TYPE = 70913 - -def calculate_size(name, id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(id) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + StringCodec.encode(buf, id, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/map_remove_partition_lost_listener_codec.py b/hazelcast/protocol/codec/map_remove_partition_lost_listener_codec.py index 77cdb2000c..e089b1dd63 100644 --- a/hazelcast/protocol/codec/map_remove_partition_lost_listener_codec.py +++ b/hazelcast/protocol/codec/map_remove_partition_lost_listener_codec.py @@ -1,33 +1,25 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = MAP_REMOVEPARTITIONLOSTLISTENER -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x011C00 +_REQUEST_MESSAGE_TYPE = 72704 +# hex: 0x011C01 +_RESPONSE_MESSAGE_TYPE = 72705 - -def calculate_size(name, registration_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(registration_id) - return data_size +_REQUEST_REGISTRATION_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_REGISTRATION_ID_OFFSET + UUID_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, registration_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, registration_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(registration_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_REGISTRATION_ID_OFFSET, registration_id) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def 
decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/map_replace_codec.py b/hazelcast/protocol/codec/map_replace_codec.py index 29f4145fd9..910857cf55 100644 --- a/hazelcast/protocol/codec/map_replace_codec.py +++ b/hazelcast/protocol/codec/map_replace_codec.py @@ -1,38 +1,28 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = MAP_REPLACE -RESPONSE_TYPE = 105 -RETRYABLE = False +# hex: 0x010400 +_REQUEST_MESSAGE_TYPE = 66560 +# hex: 0x010401 +_RESPONSE_MESSAGE_TYPE = 66561 - -def calculate_size(name, key, value, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += calculate_size_data(value) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, key, value, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, value, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_data(value) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_replace_if_same_codec.py b/hazelcast/protocol/codec/map_replace_if_same_codec.py index e2cf7f9a54..8f7f295121 100644 --- a/hazelcast/protocol/codec/map_replace_if_same_codec.py +++ b/hazelcast/protocol/codec/map_replace_if_same_codec.py @@ -1,39 +1,29 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import 
DataCodec -REQUEST_TYPE = MAP_REPLACEIFSAME -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x010500 +_REQUEST_MESSAGE_TYPE = 66816 +# hex: 0x010501 +_RESPONSE_MESSAGE_TYPE = 66817 - -def calculate_size(name, key, test_value, value, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += calculate_size_data(test_value) - data_size += calculate_size_data(value) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, key, test_value, value, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, test_value, value, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_data(test_value) - client_message.append_data(value) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, test_value) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/map_set_codec.py b/hazelcast/protocol/codec/map_set_codec.py index ffb14dfee9..a7baffea49 100644 --- a/hazelcast/protocol/codec/map_set_codec.py +++ b/hazelcast/protocol/codec/map_set_codec.py @@ -1,35 +1,24 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MAP_SET -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x010F00 +_REQUEST_MESSAGE_TYPE = 69376 +# hex: 0x010F01 +_RESPONSE_MESSAGE_TYPE = 69377 - -def calculate_size(name, key, value, thread_id, ttl): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += calculate_size_data(value) - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_TTL_OFFSET = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_TTL_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, key, value, thread_id, ttl): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, value, thread_id, ttl)) - client_message.set_message_type(REQUEST_TYPE) - 
client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_data(value) - client_message.append_long(thread_id) - client_message.append_long(ttl) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TTL_OFFSET, ttl) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/map_set_ttl_codec.py b/hazelcast/protocol/codec/map_set_ttl_codec.py index e3691c7616..ea27b26ce9 100644 --- a/hazelcast/protocol/codec/map_set_ttl_codec.py +++ b/hazelcast/protocol/codec/map_set_ttl_codec.py @@ -1,40 +1,27 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.custom_codec import * -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MAP_SETTTL -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x014300 +_REQUEST_MESSAGE_TYPE = 82688 +# hex: 0x014301 +_RESPONSE_MESSAGE_TYPE = 82689 - -def calculate_size(name, key, ttl): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_TTL_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_TTL_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, key, ttl): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, ttl)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_long(ttl) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters - + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TTL_OFFSET, ttl) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, False) +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/map_set_with_max_idle_codec.py b/hazelcast/protocol/codec/map_set_with_max_idle_codec.py new file mode 100644 index 0000000000..2afae0097a --- /dev/null +++ b/hazelcast/protocol/codec/map_set_with_max_idle_codec.py @@ -0,0 +1,32 @@ +from hazelcast.serialization.bits import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import 
OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil + +# hex: 0x014700 +_REQUEST_MESSAGE_TYPE = 83712 +# hex: 0x014701 +_RESPONSE_MESSAGE_TYPE = 83713 + +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_TTL_OFFSET = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_MAX_IDLE_OFFSET = _REQUEST_TTL_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_MAX_IDLE_OFFSET + LONG_SIZE_IN_BYTES + + +def encode_request(name, key, value, thread_id, ttl, max_idle): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TTL_OFFSET, ttl) + FixSizedTypesCodec.encode_long(buf, _REQUEST_MAX_IDLE_OFFSET, max_idle) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_size_codec.py b/hazelcast/protocol/codec/map_size_codec.py index 446ebf4419..b781a5afd1 100644 --- a/hazelcast/protocol/codec/map_size_codec.py +++ b/hazelcast/protocol/codec/map_size_codec.py @@ -1,31 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = MAP_SIZE -RESPONSE_TYPE = 102 -RETRYABLE = True +# hex: 0x012A00 +_REQUEST_MESSAGE_TYPE = 76288 +# hex: 0x012A01 +_RESPONSE_MESSAGE_TYPE = 76289 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_int() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/map_submit_to_key_codec.py b/hazelcast/protocol/codec/map_submit_to_key_codec.py new file mode 100644 index 0000000000..660d412968 --- /dev/null +++ b/hazelcast/protocol/codec/map_submit_to_key_codec.py @@ -0,0 +1,28 @@ +from hazelcast.serialization.bits import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer 
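# Illustrative sketch, not part of the generated codecs: it spells out the offset
# arithmetic that the fixed-size request fields in the hunks above rely on. Each
# long field is laid out immediately after the previous one in the initial frame,
# and the frame size is the end of the last field. LONG_SIZE_IN_BYTES is 8; the
# value assumed here for REQUEST_HEADER_SIZE is a placeholder for illustration
# only -- in the client it is imported from hazelcast.protocol.client_message.
LONG_SIZE_IN_BYTES = 8
REQUEST_HEADER_SIZE = 22  # assumed placeholder, not the authoritative constant

thread_id_offset = REQUEST_HEADER_SIZE
ttl_offset = thread_id_offset + LONG_SIZE_IN_BYTES
max_idle_offset = ttl_offset + LONG_SIZE_IN_BYTES
initial_frame_size = max_idle_offset + LONG_SIZE_IN_BYTES

# With the assumed header size this yields offsets 22, 30 and 38, and an initial
# frame of 46 bytes; the generated codecs build the same chain with their
# module-level _REQUEST_*_OFFSET constants.
print(thread_id_offset, ttl_offset, max_idle_offset, initial_frame_size)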
+from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil + +# hex: 0x012F00 +_REQUEST_MESSAGE_TYPE = 77568 +# hex: 0x012F01 +_RESPONSE_MESSAGE_TYPE = 77569 + +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES + + +def encode_request(name, entry_processor, key, thread_id): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, entry_processor) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_try_lock_codec.py b/hazelcast/protocol/codec/map_try_lock_codec.py index 42cd4d5576..c8684830b6 100644 --- a/hazelcast/protocol/codec/map_try_lock_codec.py +++ b/hazelcast/protocol/codec/map_try_lock_codec.py @@ -1,41 +1,33 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MAP_TRYLOCK -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x011100 +_REQUEST_MESSAGE_TYPE = 69888 +# hex: 0x011101 +_RESPONSE_MESSAGE_TYPE = 69889 - -def calculate_size(name, key, thread_id, lease, timeout, reference_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_LEASE_OFFSET = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_TIMEOUT_OFFSET = _REQUEST_LEASE_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_REFERENCE_ID_OFFSET = _REQUEST_TIMEOUT_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_REFERENCE_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, key, thread_id, lease, timeout, reference_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, thread_id, lease, timeout, reference_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_long(thread_id) - client_message.append_long(lease) - client_message.append_long(timeout) - client_message.append_long(reference_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_LEASE_OFFSET, lease) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TIMEOUT_OFFSET, timeout) + FixSizedTypesCodec.encode_long(buf, _REQUEST_REFERENCE_ID_OFFSET, reference_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, 
key, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/map_try_put_codec.py b/hazelcast/protocol/codec/map_try_put_codec.py index 6c9bea7fbd..0ff8680321 100644 --- a/hazelcast/protocol/codec/map_try_put_codec.py +++ b/hazelcast/protocol/codec/map_try_put_codec.py @@ -1,39 +1,30 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MAP_TRYPUT -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x010C00 +_REQUEST_MESSAGE_TYPE = 68608 +# hex: 0x010C01 +_RESPONSE_MESSAGE_TYPE = 68609 - -def calculate_size(name, key, value, thread_id, timeout): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += calculate_size_data(value) - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_TIMEOUT_OFFSET = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_TIMEOUT_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, key, value, thread_id, timeout): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, value, thread_id, timeout)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_data(value) - client_message.append_long(thread_id) - client_message.append_long(timeout) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TIMEOUT_OFFSET, timeout) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/map_try_remove_codec.py b/hazelcast/protocol/codec/map_try_remove_codec.py index d1139d895f..8e456dedc1 100644 --- a/hazelcast/protocol/codec/map_try_remove_codec.py +++ b/hazelcast/protocol/codec/map_try_remove_codec.py @@ -1,37 +1,29 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from 
hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MAP_TRYREMOVE -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x010B00 +_REQUEST_MESSAGE_TYPE = 68352 +# hex: 0x010B01 +_RESPONSE_MESSAGE_TYPE = 68353 - -def calculate_size(name, key, thread_id, timeout): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_TIMEOUT_OFFSET = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_TIMEOUT_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, key, thread_id, timeout): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, thread_id, timeout)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_long(thread_id) - client_message.append_long(timeout) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TIMEOUT_OFFSET, timeout) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/map_unlock_codec.py b/hazelcast/protocol/codec/map_unlock_codec.py index 3623be14b7..3b3568e8db 100644 --- a/hazelcast/protocol/codec/map_unlock_codec.py +++ b/hazelcast/protocol/codec/map_unlock_codec.py @@ -1,33 +1,23 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MAP_UNLOCK -RESPONSE_TYPE = 100 -RETRYABLE = True +# hex: 0x011300 +_REQUEST_MESSAGE_TYPE = 70400 +# hex: 0x011301 +_RESPONSE_MESSAGE_TYPE = 70401 - -def calculate_size(name, key, thread_id, reference_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_REFERENCE_ID_OFFSET = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_REFERENCE_ID_OFFSET + 
LONG_SIZE_IN_BYTES def encode_request(name, key, thread_id, reference_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, thread_id, reference_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_long(thread_id) - client_message.append_long(reference_id) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_REFERENCE_ID_OFFSET, reference_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, True) diff --git a/hazelcast/protocol/codec/map_values_codec.py b/hazelcast/protocol/codec/map_values_codec.py index 38fc5090dd..a6548d12d7 100644 --- a/hazelcast/protocol/codec/map_values_codec.py +++ b/hazelcast/protocol/codec/map_values_codec.py @@ -1,38 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MAP_VALUES -RESPONSE_TYPE = 106 -RETRYABLE = True +# hex: 0x012400 +_REQUEST_MESSAGE_TYPE = 74752 +# hex: 0x012401 +_RESPONSE_MESSAGE_TYPE = 74753 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/map_values_with_paging_predicate_codec.py b/hazelcast/protocol/codec/map_values_with_paging_predicate_codec.py deleted file mode 100644 index 6ccfedab00..0000000000 --- a/hazelcast/protocol/codec/map_values_with_paging_predicate_codec.py +++ /dev/null @@ -1,40 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from 
hazelcast.protocol.codec.map_message_type import * -from hazelcast.six.moves import range - -REQUEST_TYPE = MAP_VALUESWITHPAGINGPREDICATE -RESPONSE_TYPE = 117 -RETRYABLE = True - - -def calculate_size(name, predicate): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(predicate) - return data_size - - -def encode_request(name, predicate): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, predicate)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(predicate) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = (client_message.read_data(), client_message.read_data()) - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters diff --git a/hazelcast/protocol/codec/map_values_with_predicate_codec.py b/hazelcast/protocol/codec/map_values_with_predicate_codec.py index 314679c7e8..b16dc1cc0d 100644 --- a/hazelcast/protocol/codec/map_values_with_predicate_codec.py +++ b/hazelcast/protocol/codec/map_values_with_predicate_codec.py @@ -1,40 +1,23 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec -REQUEST_TYPE = MAP_VALUESWITHPREDICATE -RESPONSE_TYPE = 106 -RETRYABLE = True +# hex: 0x012700 +_REQUEST_MESSAGE_TYPE = 75520 +# hex: 0x012701 +_RESPONSE_MESSAGE_TYPE = 75521 - -def calculate_size(name, predicate): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(predicate) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name, predicate): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, predicate)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(predicate) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, predicate, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, 
DataCodec.decode) diff --git a/hazelcast/protocol/codec/multi_map_add_entry_listener_codec.py b/hazelcast/protocol/codec/multi_map_add_entry_listener_codec.py index d3ee4dbb1c..87a5f53e9e 100644 --- a/hazelcast/protocol/codec/multi_map_add_entry_listener_codec.py +++ b/hazelcast/protocol/codec/multi_map_add_entry_listener_codec.py @@ -1,58 +1,49 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.multi_map_message_type import * -from hazelcast.protocol.event_response_const import * - -REQUEST_TYPE = MULTIMAP_ADDENTRYLISTENER -RESPONSE_TYPE = 104 -RETRYABLE = False - - -def calculate_size(name, include_value, local_only): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += BOOLEAN_SIZE_IN_BYTES - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE, EVENT_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil + +# hex: 0x020E00 +_REQUEST_MESSAGE_TYPE = 134656 +# hex: 0x020E01 +_RESPONSE_MESSAGE_TYPE = 134657 +# hex: 0x020E02 +_EVENT_ENTRY_MESSAGE_TYPE = 134658 + +_REQUEST_INCLUDE_VALUE_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_LOCAL_ONLY_OFFSET = _REQUEST_INCLUDE_VALUE_OFFSET + BOOLEAN_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_LOCAL_ONLY_OFFSET + BOOLEAN_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE +_EVENT_ENTRY_EVENT_TYPE_OFFSET = EVENT_HEADER_SIZE +_EVENT_ENTRY_UUID_OFFSET = _EVENT_ENTRY_EVENT_TYPE_OFFSET + INT_SIZE_IN_BYTES +_EVENT_ENTRY_NUMBER_OF_AFFECTED_ENTRIES_OFFSET = _EVENT_ENTRY_UUID_OFFSET + UUID_SIZE_IN_BYTES def encode_request(name, include_value, local_only): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, include_value, local_only)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_bool(include_value) - client_message.append_bool(local_only) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_str() - return parameters - - -def handle(client_message, handle_event_entry=None, to_object=None): - """ Event handler """ - message_type = client_message.get_message_type() - if message_type == EVENT_ENTRY and handle_event_entry is not None: - key = None - if not client_message.read_bool(): - key = client_message.read_data() - value = None - if not client_message.read_bool(): - value = client_message.read_data() - old_value = None - if not client_message.read_bool(): - old_value = client_message.read_data() - merging_value = None - if not client_message.read_bool(): - merging_value = client_message.read_data() - event_type = client_message.read_int() - uuid = client_message.read_str() - number_of_affected_entries = client_message.read_int() - handle_event_entry(key=key, value=value, old_value=old_value, merging_value=merging_value, event_type=event_type, uuid=uuid, number_of_affected_entries=number_of_affected_entries) + buf = 
create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_INCLUDE_VALUE_OFFSET, include_value) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_LOCAL_ONLY_OFFSET, local_only) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) + + +def handle(msg, handle_entry_event=None): + message_type = msg.get_message_type() + if message_type == _EVENT_ENTRY_MESSAGE_TYPE and handle_entry_event is not None: + initial_frame = msg.next_frame() + event_type = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ENTRY_EVENT_TYPE_OFFSET) + uuid = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _EVENT_ENTRY_UUID_OFFSET) + number_of_affected_entries = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ENTRY_NUMBER_OF_AFFECTED_ENTRIES_OFFSET) + key = CodecUtil.decode_nullable(msg, DataCodec.decode) + value = CodecUtil.decode_nullable(msg, DataCodec.decode) + old_value = CodecUtil.decode_nullable(msg, DataCodec.decode) + merging_value = CodecUtil.decode_nullable(msg, DataCodec.decode) + handle_entry_event(key, value, old_value, merging_value, event_type, uuid, number_of_affected_entries) + return diff --git a/hazelcast/protocol/codec/multi_map_add_entry_listener_to_key_codec.py b/hazelcast/protocol/codec/multi_map_add_entry_listener_to_key_codec.py index c73173a801..66eaa4243d 100644 --- a/hazelcast/protocol/codec/multi_map_add_entry_listener_to_key_codec.py +++ b/hazelcast/protocol/codec/multi_map_add_entry_listener_to_key_codec.py @@ -1,61 +1,50 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.multi_map_message_type import * -from hazelcast.protocol.event_response_const import * - -REQUEST_TYPE = MULTIMAP_ADDENTRYLISTENERTOKEY -RESPONSE_TYPE = 104 -RETRYABLE = False - - -def calculate_size(name, key, include_value, local_only): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += BOOLEAN_SIZE_IN_BYTES - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE, EVENT_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil + +# hex: 0x020D00 +_REQUEST_MESSAGE_TYPE = 134400 +# hex: 0x020D01 +_RESPONSE_MESSAGE_TYPE = 134401 +# hex: 0x020D02 +_EVENT_ENTRY_MESSAGE_TYPE = 134402 + +_REQUEST_INCLUDE_VALUE_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_LOCAL_ONLY_OFFSET = _REQUEST_INCLUDE_VALUE_OFFSET + BOOLEAN_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_LOCAL_ONLY_OFFSET + BOOLEAN_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE +_EVENT_ENTRY_EVENT_TYPE_OFFSET = EVENT_HEADER_SIZE +_EVENT_ENTRY_UUID_OFFSET = _EVENT_ENTRY_EVENT_TYPE_OFFSET + INT_SIZE_IN_BYTES +_EVENT_ENTRY_NUMBER_OF_AFFECTED_ENTRIES_OFFSET = _EVENT_ENTRY_UUID_OFFSET + UUID_SIZE_IN_BYTES def encode_request(name, key, include_value, local_only): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, include_value, local_only)) - 
client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_bool(include_value) - client_message.append_bool(local_only) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_str() - return parameters - - -def handle(client_message, handle_event_entry=None, to_object=None): - """ Event handler """ - message_type = client_message.get_message_type() - if message_type == EVENT_ENTRY and handle_event_entry is not None: - key = None - if not client_message.read_bool(): - key = client_message.read_data() - value = None - if not client_message.read_bool(): - value = client_message.read_data() - old_value = None - if not client_message.read_bool(): - old_value = client_message.read_data() - merging_value = None - if not client_message.read_bool(): - merging_value = client_message.read_data() - event_type = client_message.read_int() - uuid = client_message.read_str() - number_of_affected_entries = client_message.read_int() - handle_event_entry(key=key, value=value, old_value=old_value, merging_value=merging_value, event_type=event_type, uuid=uuid, number_of_affected_entries=number_of_affected_entries) - + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_INCLUDE_VALUE_OFFSET, include_value) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_LOCAL_ONLY_OFFSET, local_only) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) + + +def handle(msg, handle_entry_event=None): + message_type = msg.get_message_type() + if message_type == _EVENT_ENTRY_MESSAGE_TYPE and handle_entry_event is not None: + initial_frame = msg.next_frame() + event_type = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ENTRY_EVENT_TYPE_OFFSET) + uuid = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _EVENT_ENTRY_UUID_OFFSET) + number_of_affected_entries = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ENTRY_NUMBER_OF_AFFECTED_ENTRIES_OFFSET) + key = CodecUtil.decode_nullable(msg, DataCodec.decode) + value = CodecUtil.decode_nullable(msg, DataCodec.decode) + old_value = CodecUtil.decode_nullable(msg, DataCodec.decode) + merging_value = CodecUtil.decode_nullable(msg, DataCodec.decode) + handle_entry_event(key, value, old_value, merging_value, event_type, uuid, number_of_affected_entries) + return diff --git a/hazelcast/protocol/codec/multi_map_clear_codec.py b/hazelcast/protocol/codec/multi_map_clear_codec.py index c544ae42a1..6fe42e26db 100644 --- a/hazelcast/protocol/codec/multi_map_clear_codec.py +++ b/hazelcast/protocol/codec/multi_map_clear_codec.py @@ -1,27 +1,15 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.multi_map_message_type import * +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = MULTIMAP_CLEAR -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x020B00 +_REQUEST_MESSAGE_TYPE = 133888 +# hex: 0x020B01 
+_RESPONSE_MESSAGE_TYPE = 133889 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/multi_map_contains_entry_codec.py b/hazelcast/protocol/codec/multi_map_contains_entry_codec.py index 5145501440..934d5355a4 100644 --- a/hazelcast/protocol/codec/multi_map_contains_entry_codec.py +++ b/hazelcast/protocol/codec/multi_map_contains_entry_codec.py @@ -1,37 +1,28 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.multi_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MULTIMAP_CONTAINSENTRY -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x020900 +_REQUEST_MESSAGE_TYPE = 133376 +# hex: 0x020901 +_RESPONSE_MESSAGE_TYPE = 133377 - -def calculate_size(name, key, value, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += calculate_size_data(value) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, key, value, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, value, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_data(value) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/multi_map_contains_key_codec.py b/hazelcast/protocol/codec/multi_map_contains_key_codec.py index 20d1fc853a..f01ecf66da 100644 --- 
a/hazelcast/protocol/codec/multi_map_contains_key_codec.py +++ b/hazelcast/protocol/codec/multi_map_contains_key_codec.py @@ -1,35 +1,27 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.multi_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MULTIMAP_CONTAINSKEY -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x020700 +_REQUEST_MESSAGE_TYPE = 132864 +# hex: 0x020701 +_RESPONSE_MESSAGE_TYPE = 132865 - -def calculate_size(name, key, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, key, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/multi_map_contains_value_codec.py b/hazelcast/protocol/codec/multi_map_contains_value_codec.py index b52cc23c77..5cd15d3a62 100644 --- a/hazelcast/protocol/codec/multi_map_contains_value_codec.py +++ b/hazelcast/protocol/codec/multi_map_contains_value_codec.py @@ -1,33 +1,24 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.multi_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MULTIMAP_CONTAINSVALUE -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x020800 +_REQUEST_MESSAGE_TYPE = 133120 +# hex: 0x020801 +_RESPONSE_MESSAGE_TYPE = 133121 - -def calculate_size(name, value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(value) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, value): - """ Encode request into 
client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(value) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/multi_map_delete_codec.py b/hazelcast/protocol/codec/multi_map_delete_codec.py new file mode 100644 index 0000000000..8a6bd44cb5 --- /dev/null +++ b/hazelcast/protocol/codec/multi_map_delete_codec.py @@ -0,0 +1,21 @@ +from hazelcast.serialization.bits import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec + +# hex: 0x021600 +_REQUEST_MESSAGE_TYPE = 136704 +# hex: 0x021601 +_RESPONSE_MESSAGE_TYPE = 136705 + +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES + + +def encode_request(name, key, thread_id): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/multi_map_entry_set_codec.py b/hazelcast/protocol/codec/multi_map_entry_set_codec.py index 3ff5abf29c..26f974a1b4 100644 --- a/hazelcast/protocol/codec/multi_map_entry_set_codec.py +++ b/hazelcast/protocol/codec/multi_map_entry_set_codec.py @@ -1,38 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.multi_map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import EntryListCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MULTIMAP_ENTRYSET -RESPONSE_TYPE = 117 -RETRYABLE = True +# hex: 0x020600 +_REQUEST_MESSAGE_TYPE = 132608 +# hex: 0x020601 +_RESPONSE_MESSAGE_TYPE = 132609 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, 
_REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = (client_message.read_data(), client_message.read_data()) - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return EntryListCodec.decode(msg, DataCodec.decode, DataCodec.decode) diff --git a/hazelcast/protocol/codec/multi_map_force_unlock_codec.py b/hazelcast/protocol/codec/multi_map_force_unlock_codec.py index 098d6dc77f..baae79651b 100644 --- a/hazelcast/protocol/codec/multi_map_force_unlock_codec.py +++ b/hazelcast/protocol/codec/multi_map_force_unlock_codec.py @@ -1,31 +1,21 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.multi_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MULTIMAP_FORCEUNLOCK -RESPONSE_TYPE = 100 -RETRYABLE = True +# hex: 0x021400 +_REQUEST_MESSAGE_TYPE = 136192 +# hex: 0x021401 +_RESPONSE_MESSAGE_TYPE = 136193 - -def calculate_size(name, key, reference_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_REFERENCE_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_REFERENCE_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, key, reference_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, reference_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_long(reference_id) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_REFERENCE_ID_OFFSET, reference_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, True) diff --git a/hazelcast/protocol/codec/multi_map_get_codec.py b/hazelcast/protocol/codec/multi_map_get_codec.py index e85e286777..8267f55c02 100644 --- a/hazelcast/protocol/codec/multi_map_get_codec.py +++ b/hazelcast/protocol/codec/multi_map_get_codec.py @@ -1,42 +1,27 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.multi_map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from 
hazelcast.protocol.builtin import ListMultiFrameCodec -REQUEST_TYPE = MULTIMAP_GET -RESPONSE_TYPE = 106 -RETRYABLE = True +# hex: 0x020200 +_REQUEST_MESSAGE_TYPE = 131584 +# hex: 0x020201 +_RESPONSE_MESSAGE_TYPE = 131585 - -def calculate_size(name, key, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, key, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/multi_map_is_locked_codec.py b/hazelcast/protocol/codec/multi_map_is_locked_codec.py index ca38673576..e535373e2f 100644 --- a/hazelcast/protocol/codec/multi_map_is_locked_codec.py +++ b/hazelcast/protocol/codec/multi_map_is_locked_codec.py @@ -1,33 +1,24 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.multi_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MULTIMAP_ISLOCKED -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x021200 +_REQUEST_MESSAGE_TYPE = 135680 +# hex: 0x021201 +_RESPONSE_MESSAGE_TYPE = 135681 - -def calculate_size(name, key): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, key): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, 
True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/multi_map_key_set_codec.py b/hazelcast/protocol/codec/multi_map_key_set_codec.py index 828114a889..35e9c89c76 100644 --- a/hazelcast/protocol/codec/multi_map_key_set_codec.py +++ b/hazelcast/protocol/codec/multi_map_key_set_codec.py @@ -1,38 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.multi_map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MULTIMAP_KEYSET -RESPONSE_TYPE = 106 -RETRYABLE = True +# hex: 0x020400 +_REQUEST_MESSAGE_TYPE = 132096 +# hex: 0x020401 +_RESPONSE_MESSAGE_TYPE = 132097 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/multi_map_lock_codec.py b/hazelcast/protocol/codec/multi_map_lock_codec.py index 02b4e1feb0..09e80e3c08 100644 --- a/hazelcast/protocol/codec/multi_map_lock_codec.py +++ b/hazelcast/protocol/codec/multi_map_lock_codec.py @@ -1,35 +1,25 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.multi_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MULTIMAP_LOCK -RESPONSE_TYPE = 100 -RETRYABLE = True +# hex: 0x021000 +_REQUEST_MESSAGE_TYPE = 135168 +# hex: 0x021001 +_RESPONSE_MESSAGE_TYPE = 135169 - -def calculate_size(name, key, thread_id, ttl, reference_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += 
calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_TTL_OFFSET = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_REFERENCE_ID_OFFSET = _REQUEST_TTL_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_REFERENCE_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, key, thread_id, ttl, reference_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, thread_id, ttl, reference_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_long(thread_id) - client_message.append_long(ttl) - client_message.append_long(reference_id) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TTL_OFFSET, ttl) + FixSizedTypesCodec.encode_long(buf, _REQUEST_REFERENCE_ID_OFFSET, reference_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, True) diff --git a/hazelcast/protocol/codec/multi_map_message_type.py b/hazelcast/protocol/codec/multi_map_message_type.py deleted file mode 100644 index f73e00081d..0000000000 --- a/hazelcast/protocol/codec/multi_map_message_type.py +++ /dev/null @@ -1,22 +0,0 @@ - -MULTIMAP_PUT = 0x0201 -MULTIMAP_GET = 0x0202 -MULTIMAP_REMOVE = 0x0203 -MULTIMAP_KEYSET = 0x0204 -MULTIMAP_VALUES = 0x0205 -MULTIMAP_ENTRYSET = 0x0206 -MULTIMAP_CONTAINSKEY = 0x0207 -MULTIMAP_CONTAINSVALUE = 0x0208 -MULTIMAP_CONTAINSENTRY = 0x0209 -MULTIMAP_SIZE = 0x020a -MULTIMAP_CLEAR = 0x020b -MULTIMAP_VALUECOUNT = 0x020c -MULTIMAP_ADDENTRYLISTENERTOKEY = 0x020d -MULTIMAP_ADDENTRYLISTENER = 0x020e -MULTIMAP_REMOVEENTRYLISTENER = 0x020f -MULTIMAP_LOCK = 0x0210 -MULTIMAP_TRYLOCK = 0x0211 -MULTIMAP_ISLOCKED = 0x0212 -MULTIMAP_UNLOCK = 0x0213 -MULTIMAP_FORCEUNLOCK = 0x0214 -MULTIMAP_REMOVEENTRY = 0x0215 diff --git a/hazelcast/protocol/codec/multi_map_put_codec.py b/hazelcast/protocol/codec/multi_map_put_codec.py index 7b3acd7dfd..493c83072d 100644 --- a/hazelcast/protocol/codec/multi_map_put_codec.py +++ b/hazelcast/protocol/codec/multi_map_put_codec.py @@ -1,37 +1,28 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.multi_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MULTIMAP_PUT -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x020100 +_REQUEST_MESSAGE_TYPE = 131328 +# hex: 0x020101 +_RESPONSE_MESSAGE_TYPE = 131329 - -def calculate_size(name, key, value, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += calculate_size_data(value) - data_size += LONG_SIZE_IN_BYTES - return 
data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, key, value, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, value, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_data(value) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/multi_map_remove_codec.py b/hazelcast/protocol/codec/multi_map_remove_codec.py index 030c378d2c..29c56a071e 100644 --- a/hazelcast/protocol/codec/multi_map_remove_codec.py +++ b/hazelcast/protocol/codec/multi_map_remove_codec.py @@ -1,45 +1,27 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.multi_map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec -REQUEST_TYPE = MULTIMAP_REMOVE -RESPONSE_TYPE = 106 -RETRYABLE = False +# hex: 0x020300 +_REQUEST_MESSAGE_TYPE = 131840 +# hex: 0x020301 +_RESPONSE_MESSAGE_TYPE = 131841 - -def calculate_size(name, key, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, key, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return 
parameters - + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, False) +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/multi_map_remove_entry_codec.py b/hazelcast/protocol/codec/multi_map_remove_entry_codec.py index 51b03e91b3..f286076eed 100644 --- a/hazelcast/protocol/codec/multi_map_remove_entry_codec.py +++ b/hazelcast/protocol/codec/multi_map_remove_entry_codec.py @@ -1,37 +1,28 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.multi_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MULTIMAP_REMOVEENTRY -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x021500 +_REQUEST_MESSAGE_TYPE = 136448 +# hex: 0x021501 +_RESPONSE_MESSAGE_TYPE = 136449 - -def calculate_size(name, key, value, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += calculate_size_data(value) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, key, value, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, value, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_data(value) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/multi_map_remove_entry_listener_codec.py b/hazelcast/protocol/codec/multi_map_remove_entry_listener_codec.py index b14745960b..ec47a2e7a1 100644 --- a/hazelcast/protocol/codec/multi_map_remove_entry_listener_codec.py +++ b/hazelcast/protocol/codec/multi_map_remove_entry_listener_codec.py @@ -1,28 +1,25 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.multi_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, 
REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE
+from hazelcast.protocol.builtin import StringCodec

-REQUEST_TYPE = MULTIMAP_REMOVEENTRYLISTENER
-RESPONSE_TYPE = 101
-RETRYABLE = True
+# hex: 0x020F00
+_REQUEST_MESSAGE_TYPE = 134912
+# hex: 0x020F01
+_RESPONSE_MESSAGE_TYPE = 134913

-
-def calculate_size(name, registration_id):
-    """ Calculates the request payload size"""
-    data_size = 0
-    data_size += calculate_size_str(name)
-    data_size += calculate_size_str(registration_id)
-    return data_size
+_REQUEST_REGISTRATION_ID_OFFSET = REQUEST_HEADER_SIZE
+_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_REGISTRATION_ID_OFFSET + UUID_SIZE_IN_BYTES
+_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE


def encode_request(name, registration_id):
-    """ Encode request into client_message"""
-    client_message = ClientMessage(payload_size=calculate_size(name, registration_id))
-    client_message.set_message_type(REQUEST_TYPE)
-    client_message.set_retryable(RETRYABLE)
-    client_message.append_str(name)
-    client_message.append_str(registration_id)
-    client_message.update_frame_length()
-    return client_message
+    buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE)
+    FixSizedTypesCodec.encode_uuid(buf, _REQUEST_REGISTRATION_ID_OFFSET, registration_id)
+    StringCodec.encode(buf, name, True)
+    return OutboundMessage(buf, True)
+

-# Empty decode_response because response is not used to determine the return value.
+def decode_response(msg):
+    initial_frame = msg.next_frame()
+    return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET)
diff --git a/hazelcast/protocol/codec/multi_map_size_codec.py b/hazelcast/protocol/codec/multi_map_size_codec.py
index 5ea730de5d..ec108ee1d6 100644
--- a/hazelcast/protocol/codec/multi_map_size_codec.py
+++ b/hazelcast/protocol/codec/multi_map_size_codec.py
@@ -1,31 +1,22 @@
-from hazelcast.serialization.bits import *
-from hazelcast.protocol.client_message import ClientMessage
-from hazelcast.protocol.codec.multi_map_message_type import *
+from hazelcast.protocol.builtin import FixSizedTypesCodec
+from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE
+from hazelcast.protocol.builtin import StringCodec

-REQUEST_TYPE = MULTIMAP_SIZE
-RESPONSE_TYPE = 102
-RETRYABLE = True
+# hex: 0x020A00
+_REQUEST_MESSAGE_TYPE = 133632
+# hex: 0x020A01
+_RESPONSE_MESSAGE_TYPE = 133633

-
-def calculate_size(name):
-    """ Calculates the request payload size"""
-    data_size = 0
-    data_size += calculate_size_str(name)
-    return data_size
+_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE
+_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE


def encode_request(name):
-    """ Encode request into client_message"""
-    client_message = ClientMessage(payload_size=calculate_size(name))
-    client_message.set_message_type(REQUEST_TYPE)
-    client_message.set_retryable(RETRYABLE)
-    client_message.append_str(name)
-    client_message.update_frame_length()
-    return client_message
+    buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE)
+    StringCodec.encode(buf, name, True)
+    return OutboundMessage(buf, True)


-def decode_response(client_message, to_object=None):
-    """ Decode response from client message"""
-    parameters = dict(response=None)
-    parameters['response'] = client_message.read_int()
-    return parameters
+def decode_response(msg):
+    initial_frame = msg.next_frame()
+    return FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET)
diff --git a/hazelcast/protocol/codec/multi_map_try_lock_codec.py b/hazelcast/protocol/codec/multi_map_try_lock_codec.py
index 0c2d9c2bc0..f410cf121e 100644
--- a/hazelcast/protocol/codec/multi_map_try_lock_codec.py
+++ b/hazelcast/protocol/codec/multi_map_try_lock_codec.py
@@ -1,41 +1,33 @@
from hazelcast.serialization.bits import *
-from hazelcast.protocol.client_message import ClientMessage
-from hazelcast.protocol.codec.multi_map_message_type import *
+from hazelcast.protocol.builtin import FixSizedTypesCodec
+from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE
+from hazelcast.protocol.builtin import StringCodec
+from hazelcast.protocol.builtin import DataCodec

-REQUEST_TYPE = MULTIMAP_TRYLOCK
-RESPONSE_TYPE = 101
-RETRYABLE = True
+# hex: 0x021100
+_REQUEST_MESSAGE_TYPE = 135424
+# hex: 0x021101
+_RESPONSE_MESSAGE_TYPE = 135425

-
-def calculate_size(name, key, thread_id, lease, timeout, reference_id):
-    """ Calculates the request payload size"""
-    data_size = 0
-    data_size += calculate_size_str(name)
-    data_size += calculate_size_data(key)
-    data_size += LONG_SIZE_IN_BYTES
-    data_size += LONG_SIZE_IN_BYTES
-    data_size += LONG_SIZE_IN_BYTES
-    data_size += LONG_SIZE_IN_BYTES
-    return data_size
+_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE
+_REQUEST_LEASE_OFFSET = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES
+_REQUEST_TIMEOUT_OFFSET = _REQUEST_LEASE_OFFSET + LONG_SIZE_IN_BYTES
+_REQUEST_REFERENCE_ID_OFFSET = _REQUEST_TIMEOUT_OFFSET + LONG_SIZE_IN_BYTES
+_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_REFERENCE_ID_OFFSET + LONG_SIZE_IN_BYTES
+_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE


def encode_request(name, key, thread_id, lease, timeout, reference_id):
-    """ Encode request into client_message"""
-    client_message = ClientMessage(payload_size=calculate_size(name, key, thread_id, lease, timeout, reference_id))
-    client_message.set_message_type(REQUEST_TYPE)
-    client_message.set_retryable(RETRYABLE)
-    client_message.append_str(name)
-    client_message.append_data(key)
-    client_message.append_long(thread_id)
-    client_message.append_long(lease)
-    client_message.append_long(timeout)
-    client_message.append_long(reference_id)
-    client_message.update_frame_length()
-    return client_message
+    buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE)
+    FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id)
+    FixSizedTypesCodec.encode_long(buf, _REQUEST_LEASE_OFFSET, lease)
+    FixSizedTypesCodec.encode_long(buf, _REQUEST_TIMEOUT_OFFSET, timeout)
+    FixSizedTypesCodec.encode_long(buf, _REQUEST_REFERENCE_ID_OFFSET, reference_id)
+    StringCodec.encode(buf, name)
+    DataCodec.encode(buf, key, True)
+    return OutboundMessage(buf, True)


-def decode_response(client_message, to_object=None):
-    """ Decode response from client message"""
-    parameters = dict(response=None)
-    parameters['response'] = client_message.read_bool()
-    return parameters
+def decode_response(msg):
+    initial_frame = msg.next_frame()
+    return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET)
diff --git a/hazelcast/protocol/codec/multi_map_unlock_codec.py b/hazelcast/protocol/codec/multi_map_unlock_codec.py
index ee5ceff51c..5a14162d34 100644
--- a/hazelcast/protocol/codec/multi_map_unlock_codec.py
+++ b/hazelcast/protocol/codec/multi_map_unlock_codec.py
@@ -1,33 +1,23 @@
from hazelcast.serialization.bits import *
-from hazelcast.protocol.client_message import ClientMessage
-from hazelcast.protocol.codec.multi_map_message_type import *
+from hazelcast.protocol.builtin import FixSizedTypesCodec
+from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer
+from hazelcast.protocol.builtin import StringCodec
+from hazelcast.protocol.builtin import DataCodec

-REQUEST_TYPE = MULTIMAP_UNLOCK
-RESPONSE_TYPE = 100
-RETRYABLE = True
+# hex: 0x021300
+_REQUEST_MESSAGE_TYPE = 135936
+# hex: 0x021301
+_RESPONSE_MESSAGE_TYPE = 135937

-
-def calculate_size(name, key, thread_id, reference_id):
-    """ Calculates the request payload size"""
-    data_size = 0
-    data_size += calculate_size_str(name)
-    data_size += calculate_size_data(key)
-    data_size += LONG_SIZE_IN_BYTES
-    data_size += LONG_SIZE_IN_BYTES
-    return data_size
+_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE
+_REQUEST_REFERENCE_ID_OFFSET = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES
+_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_REFERENCE_ID_OFFSET + LONG_SIZE_IN_BYTES


def encode_request(name, key, thread_id, reference_id):
-    """ Encode request into client_message"""
-    client_message = ClientMessage(payload_size=calculate_size(name, key, thread_id, reference_id))
-    client_message.set_message_type(REQUEST_TYPE)
-    client_message.set_retryable(RETRYABLE)
-    client_message.append_str(name)
-    client_message.append_data(key)
-    client_message.append_long(thread_id)
-    client_message.append_long(reference_id)
-    client_message.update_frame_length()
-    return client_message
-
-
-# Empty decode_response(client_message), this message has no parameters to decode
+    buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE)
+    FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id)
+    FixSizedTypesCodec.encode_long(buf, _REQUEST_REFERENCE_ID_OFFSET, reference_id)
+    StringCodec.encode(buf, name)
+    DataCodec.encode(buf, key, True)
+    return OutboundMessage(buf, True)
diff --git a/hazelcast/protocol/codec/multi_map_value_count_codec.py b/hazelcast/protocol/codec/multi_map_value_count_codec.py
index eeebc3eba7..0a1a570c0e 100644
--- a/hazelcast/protocol/codec/multi_map_value_count_codec.py
+++ b/hazelcast/protocol/codec/multi_map_value_count_codec.py
@@ -1,35 +1,27 @@
from hazelcast.serialization.bits import *
-from hazelcast.protocol.client_message import ClientMessage
-from hazelcast.protocol.codec.multi_map_message_type import *
+from hazelcast.protocol.builtin import FixSizedTypesCodec
+from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE
+from hazelcast.protocol.builtin import StringCodec
+from hazelcast.protocol.builtin import DataCodec

-REQUEST_TYPE = MULTIMAP_VALUECOUNT
-RESPONSE_TYPE = 102
-RETRYABLE = True
+# hex: 0x020C00
+_REQUEST_MESSAGE_TYPE = 134144
+# hex: 0x020C01
+_RESPONSE_MESSAGE_TYPE = 134145

-
-def calculate_size(name, key, thread_id):
-    """ Calculates the request payload size"""
-    data_size = 0
-    data_size += calculate_size_str(name)
-    data_size += calculate_size_data(key)
-    data_size += LONG_SIZE_IN_BYTES
-    return data_size
+_REQUEST_THREAD_ID_OFFSET = REQUEST_HEADER_SIZE
+_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES
+_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE


def encode_request(name, key, thread_id):
-    """ Encode request into client_message"""
-    client_message = ClientMessage(payload_size=calculate_size(name, key, thread_id))
-    client_message.set_message_type(REQUEST_TYPE)
-    client_message.set_retryable(RETRYABLE)
-
client_message.append_str(name) - client_message.append_data(key) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_int() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/multi_map_values_codec.py b/hazelcast/protocol/codec/multi_map_values_codec.py index 010623b70a..e61a82211b 100644 --- a/hazelcast/protocol/codec/multi_map_values_codec.py +++ b/hazelcast/protocol/codec/multi_map_values_codec.py @@ -1,38 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.multi_map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = MULTIMAP_VALUES -RESPONSE_TYPE = 106 -RETRYABLE = True +# hex: 0x020500 +_REQUEST_MESSAGE_TYPE = 132352 +# hex: 0x020501 +_RESPONSE_MESSAGE_TYPE = 132353 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/pn_counter_add_codec.py b/hazelcast/protocol/codec/pn_counter_add_codec.py index eea9926101..f4c9ba7d5a 100644 --- a/hazelcast/protocol/codec/pn_counter_add_codec.py +++ b/hazelcast/protocol/codec/pn_counter_add_codec.py @@ -1,63 +1,36 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.custom_codec import * -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.pn_counter_message_type import * -from hazelcast.six.moves import range - -REQUEST_TYPE = 
PNCOUNTER_ADD -RESPONSE_TYPE = 127 -RETRYABLE = False - - -def calculate_size(name, delta, get_before_update, replica_timestamps, target_replica): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += LONG_SIZE_IN_BYTES - data_size += BOOLEAN_SIZE_IN_BYTES - data_size += INT_SIZE_IN_BYTES - for replica_timestamps_item in replica_timestamps: - key = replica_timestamps_item[0] - val = replica_timestamps_item[1] - data_size += calculate_size_str(key) - data_size += LONG_SIZE_IN_BYTES - - data_size += calculate_size_address(target_replica) - return data_size - - -def encode_request(name, delta, get_before_update, replica_timestamps, target_replica): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, delta, get_before_update, replica_timestamps, target_replica)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_long(delta) - client_message.append_bool(get_before_update) - client_message.append_int(len(replica_timestamps)) - for replica_timestamps_item in replica_timestamps: - key = replica_timestamps_item[0] - val = replica_timestamps_item[1] - client_message.append_str(key) - client_message.append_long(val) - - AddressCodec.encode(client_message, target_replica) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(value=None, replica_timestamps=None, replica_count=None) - parameters['value'] = client_message.read_long() - - replica_timestamps_size = client_message.read_int() - replica_timestamps = [] - for _ in range(0, replica_timestamps_size): - replica_timestamps_item = (client_message.read_str(), client_message.read_long()) - replica_timestamps.append(replica_timestamps_item) - - parameters['replica_timestamps'] = ImmutableLazyDataList(replica_timestamps, to_object) - parameters['replica_count'] = client_message.read_int() - return parameters +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import EntryListUUIDLongCodec + +# hex: 0x1D0200 +_REQUEST_MESSAGE_TYPE = 1901056 +# hex: 0x1D0201 +_RESPONSE_MESSAGE_TYPE = 1901057 + +_REQUEST_DELTA_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_GET_BEFORE_UPDATE_OFFSET = _REQUEST_DELTA_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_TARGET_REPLICA_UUID_OFFSET = _REQUEST_GET_BEFORE_UPDATE_OFFSET + BOOLEAN_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_TARGET_REPLICA_UUID_OFFSET + UUID_SIZE_IN_BYTES +_RESPONSE_VALUE_OFFSET = RESPONSE_HEADER_SIZE +_RESPONSE_REPLICA_COUNT_OFFSET = _RESPONSE_VALUE_OFFSET + LONG_SIZE_IN_BYTES + + +def encode_request(name, delta, get_before_update, replica_timestamps, target_replica_uuid): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_DELTA_OFFSET, delta) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_GET_BEFORE_UPDATE_OFFSET, get_before_update) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TARGET_REPLICA_UUID_OFFSET, target_replica_uuid) + StringCodec.encode(buf, name) + EntryListUUIDLongCodec.encode(buf, replica_timestamps, True) + return OutboundMessage(buf, False) + + +def 
decode_response(msg): + initial_frame = msg.next_frame() + response = dict() + response["value"] = FixSizedTypesCodec.decode_long(initial_frame.buf, _RESPONSE_VALUE_OFFSET) + response["replica_count"] = FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_REPLICA_COUNT_OFFSET) + response["replica_timestamps"] = EntryListUUIDLongCodec.decode(msg) + return response diff --git a/hazelcast/protocol/codec/pn_counter_get_codec.py b/hazelcast/protocol/codec/pn_counter_get_codec.py index c27adc32f5..149c8dc2c0 100644 --- a/hazelcast/protocol/codec/pn_counter_get_codec.py +++ b/hazelcast/protocol/codec/pn_counter_get_codec.py @@ -1,59 +1,32 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.custom_codec import * -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.pn_counter_message_type import * -from hazelcast.six.moves import range - -REQUEST_TYPE = PNCOUNTER_GET -RESPONSE_TYPE = 127 -RETRYABLE = True - - -def calculate_size(name, replica_timestamps, target_replica): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - for replica_timestamps_item in replica_timestamps: - key = replica_timestamps_item[0] - val = replica_timestamps_item[1] - data_size += calculate_size_str(key) - data_size += LONG_SIZE_IN_BYTES - - data_size += calculate_size_address(target_replica) - return data_size - - -def encode_request(name, replica_timestamps, target_replica): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, replica_timestamps, target_replica)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(len(replica_timestamps)) - for replica_timestamps_item in replica_timestamps: - key = replica_timestamps_item[0] - val = replica_timestamps_item[1] - client_message.append_str(key) - client_message.append_long(val) - - AddressCodec.encode(client_message, target_replica) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(value=None, replica_timestamps=None, replica_count=None) - parameters['value'] = client_message.read_long() - - replica_timestamps_size = client_message.read_int() - replica_timestamps = [] - for _ in range(0, replica_timestamps_size): - replica_timestamps_item = (client_message.read_str(), client_message.read_long()) - replica_timestamps.append(replica_timestamps_item) - - parameters['replica_timestamps'] = ImmutableLazyDataList(replica_timestamps, to_object) - parameters['replica_count'] = client_message.read_int() - return parameters +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import EntryListUUIDLongCodec + +# hex: 0x1D0100 +_REQUEST_MESSAGE_TYPE = 1900800 +# hex: 0x1D0101 +_RESPONSE_MESSAGE_TYPE = 1900801 + +_REQUEST_TARGET_REPLICA_UUID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_TARGET_REPLICA_UUID_OFFSET + UUID_SIZE_IN_BYTES +_RESPONSE_VALUE_OFFSET = RESPONSE_HEADER_SIZE +_RESPONSE_REPLICA_COUNT_OFFSET = _RESPONSE_VALUE_OFFSET + LONG_SIZE_IN_BYTES + + +def 
encode_request(name, replica_timestamps, target_replica_uuid): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TARGET_REPLICA_UUID_OFFSET, target_replica_uuid) + StringCodec.encode(buf, name) + EntryListUUIDLongCodec.encode(buf, replica_timestamps, True) + return OutboundMessage(buf, True) + + +def decode_response(msg): + initial_frame = msg.next_frame() + response = dict() + response["value"] = FixSizedTypesCodec.decode_long(initial_frame.buf, _RESPONSE_VALUE_OFFSET) + response["replica_count"] = FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_REPLICA_COUNT_OFFSET) + response["replica_timestamps"] = EntryListUUIDLongCodec.decode(msg) + return response diff --git a/hazelcast/protocol/codec/pn_counter_get_configured_replica_count_codec.py b/hazelcast/protocol/codec/pn_counter_get_configured_replica_count_codec.py index 717b0d48ec..504fcca5d8 100644 --- a/hazelcast/protocol/codec/pn_counter_get_configured_replica_count_codec.py +++ b/hazelcast/protocol/codec/pn_counter_get_configured_replica_count_codec.py @@ -1,31 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.pn_counter_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = PNCOUNTER_GETCONFIGUREDREPLICACOUNT -RESPONSE_TYPE = 102 -RETRYABLE = True +# hex: 0x1D0300 +_REQUEST_MESSAGE_TYPE = 1901312 +# hex: 0x1D0301 +_RESPONSE_MESSAGE_TYPE = 1901313 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_int() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/pn_counter_message_type.py b/hazelcast/protocol/codec/pn_counter_message_type.py deleted file mode 100644 index e8ef2f090e..0000000000 --- a/hazelcast/protocol/codec/pn_counter_message_type.py +++ /dev/null @@ -1,3 +0,0 @@ -PNCOUNTER_GET = 0x2001 -PNCOUNTER_ADD = 0x2002 -PNCOUNTER_GETCONFIGUREDREPLICACOUNT = 0x2003 diff --git a/hazelcast/protocol/codec/queue_add_all_codec.py b/hazelcast/protocol/codec/queue_add_all_codec.py index d6bba5168e..8a2b58ee0e 100644 --- a/hazelcast/protocol/codec/queue_add_all_codec.py +++ b/hazelcast/protocol/codec/queue_add_all_codec.py @@ -1,37 +1,25 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from 
hazelcast.protocol.codec.queue_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = QUEUE_ADDALL -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x031000 +_REQUEST_MESSAGE_TYPE = 200704 +# hex: 0x031001 +_RESPONSE_MESSAGE_TYPE = 200705 - -def calculate_size(name, data_list): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - for data_list_item in data_list: - data_size += calculate_size_data(data_list_item) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, data_list): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, data_list)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(len(data_list)) - for data_list_item in data_list: - client_message.append_data(data_list_item) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + ListMultiFrameCodec.encode(buf, data_list, DataCodec.encode, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/queue_add_listener_codec.py b/hazelcast/protocol/codec/queue_add_listener_codec.py index c85251946f..41f59af93b 100644 --- a/hazelcast/protocol/codec/queue_add_listener_codec.py +++ b/hazelcast/protocol/codec/queue_add_listener_codec.py @@ -1,48 +1,44 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.queue_message_type import * -from hazelcast.protocol.event_response_const import * - -REQUEST_TYPE = QUEUE_ADDLISTENER -RESPONSE_TYPE = 104 -RETRYABLE = False - - -def calculate_size(name, include_value, local_only): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += BOOLEAN_SIZE_IN_BYTES - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE, EVENT_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil + +# hex: 0x031100 +_REQUEST_MESSAGE_TYPE = 200960 +# hex: 0x031101 +_RESPONSE_MESSAGE_TYPE = 200961 +# hex: 0x031102 +_EVENT_ITEM_MESSAGE_TYPE = 200962 + +_REQUEST_INCLUDE_VALUE_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_LOCAL_ONLY_OFFSET = _REQUEST_INCLUDE_VALUE_OFFSET + BOOLEAN_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_LOCAL_ONLY_OFFSET + 
BOOLEAN_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE +_EVENT_ITEM_UUID_OFFSET = EVENT_HEADER_SIZE +_EVENT_ITEM_EVENT_TYPE_OFFSET = _EVENT_ITEM_UUID_OFFSET + UUID_SIZE_IN_BYTES def encode_request(name, include_value, local_only): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, include_value, local_only)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_bool(include_value) - client_message.append_bool(local_only) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_str() - return parameters - - -def handle(client_message, handle_event_item=None, to_object=None): - """ Event handler """ - message_type = client_message.get_message_type() - if message_type == EVENT_ITEM and handle_event_item is not None: - item = None - if not client_message.read_bool(): - item = client_message.read_data() - uuid = client_message.read_str() - event_type = client_message.read_int() - handle_event_item(item=item, uuid=uuid, event_type=event_type) + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_INCLUDE_VALUE_OFFSET, include_value) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_LOCAL_ONLY_OFFSET, local_only) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) + + +def handle(msg, handle_item_event=None): + message_type = msg.get_message_type() + if message_type == _EVENT_ITEM_MESSAGE_TYPE and handle_item_event is not None: + initial_frame = msg.next_frame() + uuid = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _EVENT_ITEM_UUID_OFFSET) + event_type = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ITEM_EVENT_TYPE_OFFSET) + item = CodecUtil.decode_nullable(msg, DataCodec.decode) + handle_item_event(item, uuid, event_type) + return diff --git a/hazelcast/protocol/codec/queue_clear_codec.py b/hazelcast/protocol/codec/queue_clear_codec.py index 53ad8b9162..f055b757b5 100644 --- a/hazelcast/protocol/codec/queue_clear_codec.py +++ b/hazelcast/protocol/codec/queue_clear_codec.py @@ -1,27 +1,15 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.queue_message_type import * +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = QUEUE_CLEAR -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x030F00 +_REQUEST_MESSAGE_TYPE = 200448 +# hex: 0x030F01 +_RESPONSE_MESSAGE_TYPE = 200449 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return 
client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/queue_compare_and_remove_all_codec.py b/hazelcast/protocol/codec/queue_compare_and_remove_all_codec.py index c39d911439..56fb5ff438 100644 --- a/hazelcast/protocol/codec/queue_compare_and_remove_all_codec.py +++ b/hazelcast/protocol/codec/queue_compare_and_remove_all_codec.py @@ -1,37 +1,25 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.queue_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = QUEUE_COMPAREANDREMOVEALL -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x030D00 +_REQUEST_MESSAGE_TYPE = 199936 +# hex: 0x030D01 +_RESPONSE_MESSAGE_TYPE = 199937 - -def calculate_size(name, data_list): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - for data_list_item in data_list: - data_size += calculate_size_data(data_list_item) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, data_list): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, data_list)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(len(data_list)) - for data_list_item in data_list: - client_message.append_data(data_list_item) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + ListMultiFrameCodec.encode(buf, data_list, DataCodec.encode, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/queue_compare_and_retain_all_codec.py b/hazelcast/protocol/codec/queue_compare_and_retain_all_codec.py index 424c7d9178..27a371827a 100644 --- a/hazelcast/protocol/codec/queue_compare_and_retain_all_codec.py +++ b/hazelcast/protocol/codec/queue_compare_and_retain_all_codec.py @@ -1,37 +1,25 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.queue_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import 
DataCodec -REQUEST_TYPE = QUEUE_COMPAREANDRETAINALL -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x030E00 +_REQUEST_MESSAGE_TYPE = 200192 +# hex: 0x030E01 +_RESPONSE_MESSAGE_TYPE = 200193 - -def calculate_size(name, data_list): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - for data_list_item in data_list: - data_size += calculate_size_data(data_list_item) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, data_list): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, data_list)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(len(data_list)) - for data_list_item in data_list: - client_message.append_data(data_list_item) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + ListMultiFrameCodec.encode(buf, data_list, DataCodec.encode, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/queue_contains_all_codec.py b/hazelcast/protocol/codec/queue_contains_all_codec.py index 2e6f37cbbc..07e24ef5bb 100644 --- a/hazelcast/protocol/codec/queue_contains_all_codec.py +++ b/hazelcast/protocol/codec/queue_contains_all_codec.py @@ -1,37 +1,25 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.queue_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = QUEUE_CONTAINSALL -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x030C00 +_REQUEST_MESSAGE_TYPE = 199680 +# hex: 0x030C01 +_RESPONSE_MESSAGE_TYPE = 199681 - -def calculate_size(name, data_list): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - for data_list_item in data_list: - data_size += calculate_size_data(data_list_item) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, data_list): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, data_list)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(len(data_list)) - for data_list_item in data_list: - client_message.append_data(data_list_item) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + 
StringCodec.encode(buf, name) + ListMultiFrameCodec.encode(buf, data_list, DataCodec.encode, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/queue_contains_codec.py b/hazelcast/protocol/codec/queue_contains_codec.py index ecc3f99ff2..49250f1838 100644 --- a/hazelcast/protocol/codec/queue_contains_codec.py +++ b/hazelcast/protocol/codec/queue_contains_codec.py @@ -1,33 +1,24 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.queue_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = QUEUE_CONTAINS -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x030B00 +_REQUEST_MESSAGE_TYPE = 199424 +# hex: 0x030B01 +_RESPONSE_MESSAGE_TYPE = 199425 - -def calculate_size(name, value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(value) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(value) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/queue_drain_to_codec.py b/hazelcast/protocol/codec/queue_drain_to_codec.py index 2b7fec85b7..98b6208045 100644 --- a/hazelcast/protocol/codec/queue_drain_to_codec.py +++ b/hazelcast/protocol/codec/queue_drain_to_codec.py @@ -1,38 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.queue_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = QUEUE_DRAINTO -RESPONSE_TYPE = 106 -RETRYABLE = False +# hex: 0x030900 +_REQUEST_MESSAGE_TYPE = 198912 +# hex: 0x030901 
+_RESPONSE_MESSAGE_TYPE = 198913 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/queue_drain_to_max_size_codec.py b/hazelcast/protocol/codec/queue_drain_to_max_size_codec.py index 7f3a283117..cbfd4831bb 100644 --- a/hazelcast/protocol/codec/queue_drain_to_max_size_codec.py +++ b/hazelcast/protocol/codec/queue_drain_to_max_size_codec.py @@ -1,40 +1,26 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.queue_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = QUEUE_DRAINTOMAXSIZE -RESPONSE_TYPE = 106 -RETRYABLE = False +# hex: 0x030A00 +_REQUEST_MESSAGE_TYPE = 199168 +# hex: 0x030A01 +_RESPONSE_MESSAGE_TYPE = 199169 - -def calculate_size(name, max_size): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - return data_size +_REQUEST_MAX_SIZE_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_MAX_SIZE_OFFSET + INT_SIZE_IN_BYTES def encode_request(name, max_size): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, max_size)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(max_size) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_int(buf, _REQUEST_MAX_SIZE_OFFSET, max_size) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = 
ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/queue_is_empty_codec.py b/hazelcast/protocol/codec/queue_is_empty_codec.py index f6be6be032..561c260af0 100644 --- a/hazelcast/protocol/codec/queue_is_empty_codec.py +++ b/hazelcast/protocol/codec/queue_is_empty_codec.py @@ -1,31 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.queue_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = QUEUE_ISEMPTY -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x031400 +_REQUEST_MESSAGE_TYPE = 201728 +# hex: 0x031401 +_RESPONSE_MESSAGE_TYPE = 201729 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/queue_iterator_codec.py b/hazelcast/protocol/codec/queue_iterator_codec.py index 47c02a07c7..908b58e6d6 100644 --- a/hazelcast/protocol/codec/queue_iterator_codec.py +++ b/hazelcast/protocol/codec/queue_iterator_codec.py @@ -1,38 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.queue_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = QUEUE_ITERATOR -RESPONSE_TYPE = 106 -RETRYABLE = False +# hex: 0x030800 +_REQUEST_MESSAGE_TYPE = 198656 +# hex: 0x030801 +_RESPONSE_MESSAGE_TYPE = 198657 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - 
client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/queue_message_type.py b/hazelcast/protocol/codec/queue_message_type.py deleted file mode 100644 index 4a67a1cabe..0000000000 --- a/hazelcast/protocol/codec/queue_message_type.py +++ /dev/null @@ -1,21 +0,0 @@ - -QUEUE_OFFER = 0x0301 -QUEUE_PUT = 0x0302 -QUEUE_SIZE = 0x0303 -QUEUE_REMOVE = 0x0304 -QUEUE_POLL = 0x0305 -QUEUE_TAKE = 0x0306 -QUEUE_PEEK = 0x0307 -QUEUE_ITERATOR = 0x0308 -QUEUE_DRAINTO = 0x0309 -QUEUE_DRAINTOMAXSIZE = 0x030a -QUEUE_CONTAINS = 0x030b -QUEUE_CONTAINSALL = 0x030c -QUEUE_COMPAREANDREMOVEALL = 0x030d -QUEUE_COMPAREANDRETAINALL = 0x030e -QUEUE_CLEAR = 0x030f -QUEUE_ADDALL = 0x0310 -QUEUE_ADDLISTENER = 0x0311 -QUEUE_REMOVELISTENER = 0x0312 -QUEUE_REMAININGCAPACITY = 0x0313 -QUEUE_ISEMPTY = 0x0314 diff --git a/hazelcast/protocol/codec/queue_offer_codec.py b/hazelcast/protocol/codec/queue_offer_codec.py index c731374d94..782800e4e0 100644 --- a/hazelcast/protocol/codec/queue_offer_codec.py +++ b/hazelcast/protocol/codec/queue_offer_codec.py @@ -1,35 +1,27 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.queue_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = QUEUE_OFFER -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x030100 +_REQUEST_MESSAGE_TYPE = 196864 +# hex: 0x030101 +_RESPONSE_MESSAGE_TYPE = 196865 - -def calculate_size(name, value, timeout_millis): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(value) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_TIMEOUT_MILLIS_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_TIMEOUT_MILLIS_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, value, timeout_millis): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, value, timeout_millis)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(value) - client_message.append_long(timeout_millis) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TIMEOUT_MILLIS_OFFSET, timeout_millis) + StringCodec.encode(buf, name) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def 
decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/queue_peek_codec.py b/hazelcast/protocol/codec/queue_peek_codec.py index 3002262439..b382cf0b43 100644 --- a/hazelcast/protocol/codec/queue_peek_codec.py +++ b/hazelcast/protocol/codec/queue_peek_codec.py @@ -1,32 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.queue_message_type import * +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = QUEUE_PEEK -RESPONSE_TYPE = 105 -RETRYABLE = False +# hex: 0x030700 +_REQUEST_MESSAGE_TYPE = 198400 +# hex: 0x030701 +_RESPONSE_MESSAGE_TYPE = 198401 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/queue_poll_codec.py b/hazelcast/protocol/codec/queue_poll_codec.py index 7bcb7474ac..72de092e3d 100644 --- a/hazelcast/protocol/codec/queue_poll_codec.py +++ b/hazelcast/protocol/codec/queue_poll_codec.py @@ -1,34 +1,26 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.queue_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = QUEUE_POLL -RESPONSE_TYPE = 105 -RETRYABLE = False +# hex: 0x030500 +_REQUEST_MESSAGE_TYPE = 197888 +# hex: 0x030501 +_RESPONSE_MESSAGE_TYPE = 197889 - -def calculate_size(name, timeout_millis): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_TIMEOUT_MILLIS_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_TIMEOUT_MILLIS_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, timeout_millis): - """ Encode request 
into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, timeout_millis)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_long(timeout_millis) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TIMEOUT_MILLIS_OFFSET, timeout_millis) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/queue_put_codec.py b/hazelcast/protocol/codec/queue_put_codec.py index be184a6a2c..1d16075041 100644 --- a/hazelcast/protocol/codec/queue_put_codec.py +++ b/hazelcast/protocol/codec/queue_put_codec.py @@ -1,29 +1,17 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.queue_message_type import * +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = QUEUE_PUT -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x030200 +_REQUEST_MESSAGE_TYPE = 197120 +# hex: 0x030201 +_RESPONSE_MESSAGE_TYPE = 197121 - -def calculate_size(name, value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(value) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name, value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(value) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/queue_remaining_capacity_codec.py b/hazelcast/protocol/codec/queue_remaining_capacity_codec.py index 16ba1604e6..20840bbe9c 100644 --- a/hazelcast/protocol/codec/queue_remaining_capacity_codec.py +++ b/hazelcast/protocol/codec/queue_remaining_capacity_codec.py @@ -1,31 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.queue_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = QUEUE_REMAININGCAPACITY -RESPONSE_TYPE = 102 -RETRYABLE = False +# hex: 0x031300 +_REQUEST_MESSAGE_TYPE = 201472 +# hex: 0x031301 +_RESPONSE_MESSAGE_TYPE = 201473 - -def 
calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_int() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/queue_remove_codec.py b/hazelcast/protocol/codec/queue_remove_codec.py index 3ffc99816f..a6356c7166 100644 --- a/hazelcast/protocol/codec/queue_remove_codec.py +++ b/hazelcast/protocol/codec/queue_remove_codec.py @@ -1,33 +1,24 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.queue_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = QUEUE_REMOVE -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x030400 +_REQUEST_MESSAGE_TYPE = 197632 +# hex: 0x030401 +_RESPONSE_MESSAGE_TYPE = 197633 - -def calculate_size(name, value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(value) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(value) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/queue_remove_listener_codec.py b/hazelcast/protocol/codec/queue_remove_listener_codec.py index d8f7589028..5fecf5d315 100644 --- a/hazelcast/protocol/codec/queue_remove_listener_codec.py +++ b/hazelcast/protocol/codec/queue_remove_listener_codec.py @@ -1,28 +1,25 @@ from hazelcast.serialization.bits import * -from 
hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.queue_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = QUEUE_REMOVELISTENER -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x031200 +_REQUEST_MESSAGE_TYPE = 201216 +# hex: 0x031201 +_RESPONSE_MESSAGE_TYPE = 201217 - -def calculate_size(name, registration_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(registration_id) - return data_size +_REQUEST_REGISTRATION_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_REGISTRATION_ID_OFFSET + UUID_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, registration_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, registration_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(registration_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_REGISTRATION_ID_OFFSET, registration_id) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) + -# Empty decode_response because response is not used to determine the return value. +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/queue_size_codec.py b/hazelcast/protocol/codec/queue_size_codec.py index 49b9afeb9d..747ff4a6cc 100644 --- a/hazelcast/protocol/codec/queue_size_codec.py +++ b/hazelcast/protocol/codec/queue_size_codec.py @@ -1,31 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.queue_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = QUEUE_SIZE -RESPONSE_TYPE = 102 -RETRYABLE = False +# hex: 0x030300 +_REQUEST_MESSAGE_TYPE = 197376 +# hex: 0x030301 +_RESPONSE_MESSAGE_TYPE = 197377 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = 
client_message.read_int() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/queue_take_codec.py b/hazelcast/protocol/codec/queue_take_codec.py index 992c7f6879..1cef947bb8 100644 --- a/hazelcast/protocol/codec/queue_take_codec.py +++ b/hazelcast/protocol/codec/queue_take_codec.py @@ -1,32 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.queue_message_type import * +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = QUEUE_TAKE -RESPONSE_TYPE = 105 -RETRYABLE = False +# hex: 0x030600 +_REQUEST_MESSAGE_TYPE = 198144 +# hex: 0x030601 +_RESPONSE_MESSAGE_TYPE = 198145 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/replicated_map_add_entry_listener_codec.py b/hazelcast/protocol/codec/replicated_map_add_entry_listener_codec.py index 9865bf0684..9243021f98 100644 --- a/hazelcast/protocol/codec/replicated_map_add_entry_listener_codec.py +++ b/hazelcast/protocol/codec/replicated_map_add_entry_listener_codec.py @@ -1,56 +1,47 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.replicated_map_message_type import * -from hazelcast.protocol.event_response_const import * - -REQUEST_TYPE = REPLICATEDMAP_ADDENTRYLISTENER -RESPONSE_TYPE = 104 -RETRYABLE = False - - -def calculate_size(name, local_only): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE, EVENT_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil + +# hex: 0x0D0D00 +_REQUEST_MESSAGE_TYPE = 855296 +# hex: 0x0D0D01 +_RESPONSE_MESSAGE_TYPE = 855297 +# hex: 0x0D0D02 +_EVENT_ENTRY_MESSAGE_TYPE = 855298 + +_REQUEST_LOCAL_ONLY_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = 
_REQUEST_LOCAL_ONLY_OFFSET + BOOLEAN_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE +_EVENT_ENTRY_EVENT_TYPE_OFFSET = EVENT_HEADER_SIZE +_EVENT_ENTRY_UUID_OFFSET = _EVENT_ENTRY_EVENT_TYPE_OFFSET + INT_SIZE_IN_BYTES +_EVENT_ENTRY_NUMBER_OF_AFFECTED_ENTRIES_OFFSET = _EVENT_ENTRY_UUID_OFFSET + UUID_SIZE_IN_BYTES def encode_request(name, local_only): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, local_only)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_bool(local_only) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_str() - return parameters - - -def handle(client_message, handle_event_entry=None, to_object=None): - """ Event handler """ - message_type = client_message.get_message_type() - if message_type == EVENT_ENTRY and handle_event_entry is not None: - key = None - if not client_message.read_bool(): - key = client_message.read_data() - value = None - if not client_message.read_bool(): - value = client_message.read_data() - old_value = None - if not client_message.read_bool(): - old_value = client_message.read_data() - merging_value = None - if not client_message.read_bool(): - merging_value = client_message.read_data() - event_type = client_message.read_int() - uuid = client_message.read_str() - number_of_affected_entries = client_message.read_int() - handle_event_entry(key=key, value=value, old_value=old_value, merging_value=merging_value, event_type=event_type, uuid=uuid, number_of_affected_entries=number_of_affected_entries) + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_LOCAL_ONLY_OFFSET, local_only) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) + + +def handle(msg, handle_entry_event=None): + message_type = msg.get_message_type() + if message_type == _EVENT_ENTRY_MESSAGE_TYPE and handle_entry_event is not None: + initial_frame = msg.next_frame() + event_type = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ENTRY_EVENT_TYPE_OFFSET) + uuid = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _EVENT_ENTRY_UUID_OFFSET) + number_of_affected_entries = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ENTRY_NUMBER_OF_AFFECTED_ENTRIES_OFFSET) + key = CodecUtil.decode_nullable(msg, DataCodec.decode) + value = CodecUtil.decode_nullable(msg, DataCodec.decode) + old_value = CodecUtil.decode_nullable(msg, DataCodec.decode) + merging_value = CodecUtil.decode_nullable(msg, DataCodec.decode) + handle_entry_event(key, value, old_value, merging_value, event_type, uuid, number_of_affected_entries) + return diff --git a/hazelcast/protocol/codec/replicated_map_add_entry_listener_to_key_codec.py b/hazelcast/protocol/codec/replicated_map_add_entry_listener_to_key_codec.py index 387f349a8d..1e1f47075a 100644 --- a/hazelcast/protocol/codec/replicated_map_add_entry_listener_to_key_codec.py +++ b/hazelcast/protocol/codec/replicated_map_add_entry_listener_to_key_codec.py @@ -1,58 +1,48 @@ from hazelcast.serialization.bits import * -from 
hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.replicated_map_message_type import * -from hazelcast.protocol.event_response_const import * - -REQUEST_TYPE = REPLICATEDMAP_ADDENTRYLISTENERTOKEY -RESPONSE_TYPE = 104 -RETRYABLE = False - - -def calculate_size(name, key, local_only): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE, EVENT_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil + +# hex: 0x0D0C00 +_REQUEST_MESSAGE_TYPE = 855040 +# hex: 0x0D0C01 +_RESPONSE_MESSAGE_TYPE = 855041 +# hex: 0x0D0C02 +_EVENT_ENTRY_MESSAGE_TYPE = 855042 + +_REQUEST_LOCAL_ONLY_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_LOCAL_ONLY_OFFSET + BOOLEAN_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE +_EVENT_ENTRY_EVENT_TYPE_OFFSET = EVENT_HEADER_SIZE +_EVENT_ENTRY_UUID_OFFSET = _EVENT_ENTRY_EVENT_TYPE_OFFSET + INT_SIZE_IN_BYTES +_EVENT_ENTRY_NUMBER_OF_AFFECTED_ENTRIES_OFFSET = _EVENT_ENTRY_UUID_OFFSET + UUID_SIZE_IN_BYTES def encode_request(name, key, local_only): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, local_only)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_bool(local_only) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_str() - return parameters - - -def handle(client_message, handle_event_entry=None, to_object=None): - """ Event handler """ - message_type = client_message.get_message_type() - if message_type == EVENT_ENTRY and handle_event_entry is not None: - key = None - if not client_message.read_bool(): - key = client_message.read_data() - value = None - if not client_message.read_bool(): - value = client_message.read_data() - old_value = None - if not client_message.read_bool(): - old_value = client_message.read_data() - merging_value = None - if not client_message.read_bool(): - merging_value = client_message.read_data() - event_type = client_message.read_int() - uuid = client_message.read_str() - number_of_affected_entries = client_message.read_int() - handle_event_entry(key=key, value=value, old_value=old_value, merging_value=merging_value, event_type=event_type, uuid=uuid, number_of_affected_entries=number_of_affected_entries) + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_LOCAL_ONLY_OFFSET, local_only) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) + + +def handle(msg, handle_entry_event=None): + message_type = msg.get_message_type() + if message_type == _EVENT_ENTRY_MESSAGE_TYPE and 
handle_entry_event is not None: + initial_frame = msg.next_frame() + event_type = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ENTRY_EVENT_TYPE_OFFSET) + uuid = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _EVENT_ENTRY_UUID_OFFSET) + number_of_affected_entries = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ENTRY_NUMBER_OF_AFFECTED_ENTRIES_OFFSET) + key = CodecUtil.decode_nullable(msg, DataCodec.decode) + value = CodecUtil.decode_nullable(msg, DataCodec.decode) + old_value = CodecUtil.decode_nullable(msg, DataCodec.decode) + merging_value = CodecUtil.decode_nullable(msg, DataCodec.decode) + handle_entry_event(key, value, old_value, merging_value, event_type, uuid, number_of_affected_entries) + return diff --git a/hazelcast/protocol/codec/replicated_map_add_entry_listener_to_key_with_predicate_codec.py b/hazelcast/protocol/codec/replicated_map_add_entry_listener_to_key_with_predicate_codec.py index 582734572b..5f804087b8 100644 --- a/hazelcast/protocol/codec/replicated_map_add_entry_listener_to_key_with_predicate_codec.py +++ b/hazelcast/protocol/codec/replicated_map_add_entry_listener_to_key_with_predicate_codec.py @@ -1,60 +1,49 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.replicated_map_message_type import * -from hazelcast.protocol.event_response_const import * - -REQUEST_TYPE = REPLICATEDMAP_ADDENTRYLISTENERTOKEYWITHPREDICATE -RESPONSE_TYPE = 104 -RETRYABLE = False - - -def calculate_size(name, key, predicate, local_only): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += calculate_size_data(predicate) - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE, EVENT_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil + +# hex: 0x0D0A00 +_REQUEST_MESSAGE_TYPE = 854528 +# hex: 0x0D0A01 +_RESPONSE_MESSAGE_TYPE = 854529 +# hex: 0x0D0A02 +_EVENT_ENTRY_MESSAGE_TYPE = 854530 + +_REQUEST_LOCAL_ONLY_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_LOCAL_ONLY_OFFSET + BOOLEAN_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE +_EVENT_ENTRY_EVENT_TYPE_OFFSET = EVENT_HEADER_SIZE +_EVENT_ENTRY_UUID_OFFSET = _EVENT_ENTRY_EVENT_TYPE_OFFSET + INT_SIZE_IN_BYTES +_EVENT_ENTRY_NUMBER_OF_AFFECTED_ENTRIES_OFFSET = _EVENT_ENTRY_UUID_OFFSET + UUID_SIZE_IN_BYTES def encode_request(name, key, predicate, local_only): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, predicate, local_only)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_data(predicate) - client_message.append_bool(local_only) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_str() - return parameters - - -def handle(client_message, handle_event_entry=None, to_object=None): - """ Event handler """ - 
message_type = client_message.get_message_type() - if message_type == EVENT_ENTRY and handle_event_entry is not None: - key = None - if not client_message.read_bool(): - key = client_message.read_data() - value = None - if not client_message.read_bool(): - value = client_message.read_data() - old_value = None - if not client_message.read_bool(): - old_value = client_message.read_data() - merging_value = None - if not client_message.read_bool(): - merging_value = client_message.read_data() - event_type = client_message.read_int() - uuid = client_message.read_str() - number_of_affected_entries = client_message.read_int() - handle_event_entry(key=key, value=value, old_value=old_value, merging_value=merging_value, event_type=event_type, uuid=uuid, number_of_affected_entries=number_of_affected_entries) + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_LOCAL_ONLY_OFFSET, local_only) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, predicate, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) + + +def handle(msg, handle_entry_event=None): + message_type = msg.get_message_type() + if message_type == _EVENT_ENTRY_MESSAGE_TYPE and handle_entry_event is not None: + initial_frame = msg.next_frame() + event_type = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ENTRY_EVENT_TYPE_OFFSET) + uuid = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _EVENT_ENTRY_UUID_OFFSET) + number_of_affected_entries = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ENTRY_NUMBER_OF_AFFECTED_ENTRIES_OFFSET) + key = CodecUtil.decode_nullable(msg, DataCodec.decode) + value = CodecUtil.decode_nullable(msg, DataCodec.decode) + old_value = CodecUtil.decode_nullable(msg, DataCodec.decode) + merging_value = CodecUtil.decode_nullable(msg, DataCodec.decode) + handle_entry_event(key, value, old_value, merging_value, event_type, uuid, number_of_affected_entries) + return diff --git a/hazelcast/protocol/codec/replicated_map_add_entry_listener_with_predicate_codec.py b/hazelcast/protocol/codec/replicated_map_add_entry_listener_with_predicate_codec.py index f6f4662805..5ca861ef74 100644 --- a/hazelcast/protocol/codec/replicated_map_add_entry_listener_with_predicate_codec.py +++ b/hazelcast/protocol/codec/replicated_map_add_entry_listener_with_predicate_codec.py @@ -1,58 +1,48 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.replicated_map_message_type import * -from hazelcast.protocol.event_response_const import * - -REQUEST_TYPE = REPLICATEDMAP_ADDENTRYLISTENERWITHPREDICATE -RESPONSE_TYPE = 104 -RETRYABLE = False - - -def calculate_size(name, predicate, local_only): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(predicate) - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE, EVENT_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil + +# hex: 0x0D0B00 +_REQUEST_MESSAGE_TYPE = 854784 +# hex: 
0x0D0B01 +_RESPONSE_MESSAGE_TYPE = 854785 +# hex: 0x0D0B02 +_EVENT_ENTRY_MESSAGE_TYPE = 854786 + +_REQUEST_LOCAL_ONLY_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_LOCAL_ONLY_OFFSET + BOOLEAN_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE +_EVENT_ENTRY_EVENT_TYPE_OFFSET = EVENT_HEADER_SIZE +_EVENT_ENTRY_UUID_OFFSET = _EVENT_ENTRY_EVENT_TYPE_OFFSET + INT_SIZE_IN_BYTES +_EVENT_ENTRY_NUMBER_OF_AFFECTED_ENTRIES_OFFSET = _EVENT_ENTRY_UUID_OFFSET + UUID_SIZE_IN_BYTES def encode_request(name, predicate, local_only): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, predicate, local_only)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(predicate) - client_message.append_bool(local_only) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_str() - return parameters - - -def handle(client_message, handle_event_entry=None, to_object=None): - """ Event handler """ - message_type = client_message.get_message_type() - if message_type == EVENT_ENTRY and handle_event_entry is not None: - key = None - if not client_message.read_bool(): - key = client_message.read_data() - value = None - if not client_message.read_bool(): - value = client_message.read_data() - old_value = None - if not client_message.read_bool(): - old_value = client_message.read_data() - merging_value = None - if not client_message.read_bool(): - merging_value = client_message.read_data() - event_type = client_message.read_int() - uuid = client_message.read_str() - number_of_affected_entries = client_message.read_int() - handle_event_entry(key=key, value=value, old_value=old_value, merging_value=merging_value, event_type=event_type, uuid=uuid, number_of_affected_entries=number_of_affected_entries) + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_LOCAL_ONLY_OFFSET, local_only) + StringCodec.encode(buf, name) + DataCodec.encode(buf, predicate, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) + + +def handle(msg, handle_entry_event=None): + message_type = msg.get_message_type() + if message_type == _EVENT_ENTRY_MESSAGE_TYPE and handle_entry_event is not None: + initial_frame = msg.next_frame() + event_type = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ENTRY_EVENT_TYPE_OFFSET) + uuid = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _EVENT_ENTRY_UUID_OFFSET) + number_of_affected_entries = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ENTRY_NUMBER_OF_AFFECTED_ENTRIES_OFFSET) + key = CodecUtil.decode_nullable(msg, DataCodec.decode) + value = CodecUtil.decode_nullable(msg, DataCodec.decode) + old_value = CodecUtil.decode_nullable(msg, DataCodec.decode) + merging_value = CodecUtil.decode_nullable(msg, DataCodec.decode) + handle_entry_event(key, value, old_value, merging_value, event_type, uuid, number_of_affected_entries) + return diff --git a/hazelcast/protocol/codec/replicated_map_add_near_cache_entry_listener_codec.py b/hazelcast/protocol/codec/replicated_map_add_near_cache_entry_listener_codec.py 
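# --- Illustrative aside (not part of the patch): a minimal sketch of the codec shape
# that the regenerated modules in this diff follow. The module purpose, the parameter
# names and the message-type id 0x7F0100 are hypothetical; the imports, the helper
# calls (create_initial_buffer, FixSizedTypesCodec, StringCodec, DataCodec, CodecUtil,
# OutboundMessage) and the offset arithmetic mirror what the real codecs above and
# below use.
from hazelcast.serialization.bits import LONG_SIZE_IN_BYTES
from hazelcast.protocol.client_message import (
    OutboundMessage,
    REQUEST_HEADER_SIZE,
    create_initial_buffer,
)
from hazelcast.protocol.builtin import FixSizedTypesCodec, StringCodec, DataCodec, CodecUtil

# Message-type constants pair a hex id with its decimal value, e.g. 0x7F0100 == 8323328.
# (Hypothetical id; the real codecs in this diff carry their own ids as hex comments.)
_REQUEST_MESSAGE_TYPE = 8323328   # hex: 0x7F0100
_RESPONSE_MESSAGE_TYPE = 8323329  # hex: 0x7F0101

# Fixed-size request fields sit in the initial frame right after the request header;
# each offset is the previous offset plus that field's size. Because these offsets are
# precomputed, the old per-request calculate_size() helpers are no longer needed.
_REQUEST_TTL_OFFSET = REQUEST_HEADER_SIZE
_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_TTL_OFFSET + LONG_SIZE_IN_BYTES


def encode_request(name, key, value, ttl):
    # Fixed-size fields are written into the initial frame at their offsets;
    # variable-size fields follow as separate frames, with the last encode call
    # passing True to mark the final frame.
    buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE)
    FixSizedTypesCodec.encode_long(buf, _REQUEST_TTL_OFFSET, ttl)
    StringCodec.encode(buf, name)
    DataCodec.encode(buf, key)
    DataCodec.encode(buf, value, True)
    # The second OutboundMessage argument states whether the request is retryable.
    return OutboundMessage(buf, False)


def decode_response(msg):
    # Skip the response initial frame, then decode a nullable Data payload.
    # Codecs whose response is a fixed-size value instead read it from the initial
    # frame at RESPONSE_HEADER_SIZE, as the boolean/int/long decoders in this diff do.
    msg.next_frame()
    return CodecUtil.decode_nullable(msg, DataCodec.decode)
# --- End of illustrative aside.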
index f63df89b92..85ac2df246 100644 --- a/hazelcast/protocol/codec/replicated_map_add_near_cache_entry_listener_codec.py +++ b/hazelcast/protocol/codec/replicated_map_add_near_cache_entry_listener_codec.py @@ -1,59 +1,49 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.replicated_map_message_type import * -from hazelcast.protocol.event_response_const import * - -REQUEST_TYPE = REPLICATEDMAP_ADDNEARCACHEENTRYLISTENER -RESPONSE_TYPE = 104 -RETRYABLE = False - - -def calculate_size(name, include_value, local_only): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += BOOLEAN_SIZE_IN_BYTES - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE, EVENT_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil + +# hex: 0x0D1200 +_REQUEST_MESSAGE_TYPE = 856576 +# hex: 0x0D1201 +_RESPONSE_MESSAGE_TYPE = 856577 +# hex: 0x0D1202 +_EVENT_ENTRY_MESSAGE_TYPE = 856578 + +_REQUEST_INCLUDE_VALUE_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_LOCAL_ONLY_OFFSET = _REQUEST_INCLUDE_VALUE_OFFSET + BOOLEAN_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_LOCAL_ONLY_OFFSET + BOOLEAN_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE +_EVENT_ENTRY_EVENT_TYPE_OFFSET = EVENT_HEADER_SIZE +_EVENT_ENTRY_UUID_OFFSET = _EVENT_ENTRY_EVENT_TYPE_OFFSET + INT_SIZE_IN_BYTES +_EVENT_ENTRY_NUMBER_OF_AFFECTED_ENTRIES_OFFSET = _EVENT_ENTRY_UUID_OFFSET + UUID_SIZE_IN_BYTES def encode_request(name, include_value, local_only): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, include_value, local_only)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_bool(include_value) - client_message.append_bool(local_only) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_str() - return parameters - - -def handle(client_message, handle_event_entry=None, to_object=None): - """ Event handler """ - message_type = client_message.get_message_type() - if message_type == EVENT_ENTRY and handle_event_entry is not None: - key = None - if not client_message.read_bool(): - key = client_message.read_data() - value = None - if not client_message.read_bool(): - value = client_message.read_data() - old_value = None - if not client_message.read_bool(): - old_value = client_message.read_data() - merging_value = None - if not client_message.read_bool(): - merging_value = client_message.read_data() - event_type = client_message.read_int() - uuid = client_message.read_str() - number_of_affected_entries = client_message.read_int() - handle_event_entry(key=key, value=value, old_value=old_value, merging_value=merging_value, event_type=event_type, uuid=uuid, number_of_affected_entries=number_of_affected_entries) - + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_INCLUDE_VALUE_OFFSET, 
include_value) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_LOCAL_ONLY_OFFSET, local_only) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) + + +def handle(msg, handle_entry_event=None): + message_type = msg.get_message_type() + if message_type == _EVENT_ENTRY_MESSAGE_TYPE and handle_entry_event is not None: + initial_frame = msg.next_frame() + event_type = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ENTRY_EVENT_TYPE_OFFSET) + uuid = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _EVENT_ENTRY_UUID_OFFSET) + number_of_affected_entries = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ENTRY_NUMBER_OF_AFFECTED_ENTRIES_OFFSET) + key = CodecUtil.decode_nullable(msg, DataCodec.decode) + value = CodecUtil.decode_nullable(msg, DataCodec.decode) + old_value = CodecUtil.decode_nullable(msg, DataCodec.decode) + merging_value = CodecUtil.decode_nullable(msg, DataCodec.decode) + handle_entry_event(key, value, old_value, merging_value, event_type, uuid, number_of_affected_entries) + return diff --git a/hazelcast/protocol/codec/replicated_map_clear_codec.py b/hazelcast/protocol/codec/replicated_map_clear_codec.py index 232cba87ad..1a43cecc72 100644 --- a/hazelcast/protocol/codec/replicated_map_clear_codec.py +++ b/hazelcast/protocol/codec/replicated_map_clear_codec.py @@ -1,27 +1,15 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.replicated_map_message_type import * +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = REPLICATEDMAP_CLEAR -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x0D0900 +_REQUEST_MESSAGE_TYPE = 854272 +# hex: 0x0D0901 +_RESPONSE_MESSAGE_TYPE = 854273 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/replicated_map_contains_key_codec.py b/hazelcast/protocol/codec/replicated_map_contains_key_codec.py index 038c45d035..51516cc044 100644 --- a/hazelcast/protocol/codec/replicated_map_contains_key_codec.py +++ b/hazelcast/protocol/codec/replicated_map_contains_key_codec.py @@ -1,33 +1,24 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.replicated_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin 
import DataCodec -REQUEST_TYPE = REPLICATEDMAP_CONTAINSKEY -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x0D0400 +_REQUEST_MESSAGE_TYPE = 852992 +# hex: 0x0D0401 +_RESPONSE_MESSAGE_TYPE = 852993 - -def calculate_size(name, key): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, key): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/replicated_map_contains_value_codec.py b/hazelcast/protocol/codec/replicated_map_contains_value_codec.py index 22bcf2c50c..7b1bf23021 100644 --- a/hazelcast/protocol/codec/replicated_map_contains_value_codec.py +++ b/hazelcast/protocol/codec/replicated_map_contains_value_codec.py @@ -1,33 +1,24 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.replicated_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = REPLICATEDMAP_CONTAINSVALUE -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x0D0500 +_REQUEST_MESSAGE_TYPE = 853248 +# hex: 0x0D0501 +_RESPONSE_MESSAGE_TYPE = 853249 - -def calculate_size(name, value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(value) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(value) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return 
FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/replicated_map_entry_set_codec.py b/hazelcast/protocol/codec/replicated_map_entry_set_codec.py index 1526666998..e66c0914f8 100644 --- a/hazelcast/protocol/codec/replicated_map_entry_set_codec.py +++ b/hazelcast/protocol/codec/replicated_map_entry_set_codec.py @@ -1,38 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.replicated_map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import EntryListCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = REPLICATEDMAP_ENTRYSET -RESPONSE_TYPE = 117 -RETRYABLE = True +# hex: 0x0D1100 +_REQUEST_MESSAGE_TYPE = 856320 +# hex: 0x0D1101 +_RESPONSE_MESSAGE_TYPE = 856321 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = (client_message.read_data(), client_message.read_data()) - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return EntryListCodec.decode(msg, DataCodec.decode, DataCodec.decode) diff --git a/hazelcast/protocol/codec/replicated_map_get_codec.py b/hazelcast/protocol/codec/replicated_map_get_codec.py index 1c0d5b4417..6529fe9997 100644 --- a/hazelcast/protocol/codec/replicated_map_get_codec.py +++ b/hazelcast/protocol/codec/replicated_map_get_codec.py @@ -1,34 +1,23 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.replicated_map_message_type import * +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = REPLICATEDMAP_GET -RESPONSE_TYPE = 105 -RETRYABLE = True +# hex: 0x0D0600 +_REQUEST_MESSAGE_TYPE = 853504 +# hex: 0x0D0601 +_RESPONSE_MESSAGE_TYPE = 853505 - -def calculate_size(name, key): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name, key): - """ Encode request into client_message""" - 
client_message = ClientMessage(payload_size=calculate_size(name, key)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/replicated_map_is_empty_codec.py b/hazelcast/protocol/codec/replicated_map_is_empty_codec.py index bf534b1d11..249e4487a7 100644 --- a/hazelcast/protocol/codec/replicated_map_is_empty_codec.py +++ b/hazelcast/protocol/codec/replicated_map_is_empty_codec.py @@ -1,31 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.replicated_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = REPLICATEDMAP_ISEMPTY -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x0D0300 +_REQUEST_MESSAGE_TYPE = 852736 +# hex: 0x0D0301 +_RESPONSE_MESSAGE_TYPE = 852737 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/replicated_map_key_set_codec.py b/hazelcast/protocol/codec/replicated_map_key_set_codec.py index cad5f96acb..192439648f 100644 --- a/hazelcast/protocol/codec/replicated_map_key_set_codec.py +++ b/hazelcast/protocol/codec/replicated_map_key_set_codec.py @@ -1,38 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.replicated_map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import 
StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = REPLICATEDMAP_KEYSET -RESPONSE_TYPE = 106 -RETRYABLE = True +# hex: 0x0D0F00 +_REQUEST_MESSAGE_TYPE = 855808 +# hex: 0x0D0F01 +_RESPONSE_MESSAGE_TYPE = 855809 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/replicated_map_message_type.py b/hazelcast/protocol/codec/replicated_map_message_type.py deleted file mode 100644 index 545b1783b8..0000000000 --- a/hazelcast/protocol/codec/replicated_map_message_type.py +++ /dev/null @@ -1,19 +0,0 @@ - -REPLICATEDMAP_PUT = 0x0e01 -REPLICATEDMAP_SIZE = 0x0e02 -REPLICATEDMAP_ISEMPTY = 0x0e03 -REPLICATEDMAP_CONTAINSKEY = 0x0e04 -REPLICATEDMAP_CONTAINSVALUE = 0x0e05 -REPLICATEDMAP_GET = 0x0e06 -REPLICATEDMAP_REMOVE = 0x0e07 -REPLICATEDMAP_PUTALL = 0x0e08 -REPLICATEDMAP_CLEAR = 0x0e09 -REPLICATEDMAP_ADDENTRYLISTENERTOKEYWITHPREDICATE = 0x0e0a -REPLICATEDMAP_ADDENTRYLISTENERWITHPREDICATE = 0x0e0b -REPLICATEDMAP_ADDENTRYLISTENERTOKEY = 0x0e0c -REPLICATEDMAP_ADDENTRYLISTENER = 0x0e0d -REPLICATEDMAP_REMOVEENTRYLISTENER = 0x0e0e -REPLICATEDMAP_KEYSET = 0x0e0f -REPLICATEDMAP_VALUES = 0x0e10 -REPLICATEDMAP_ENTRYSET = 0x0e11 -REPLICATEDMAP_ADDNEARCACHEENTRYLISTENER = 0x0e12 diff --git a/hazelcast/protocol/codec/replicated_map_put_all_codec.py b/hazelcast/protocol/codec/replicated_map_put_all_codec.py index a9509c53cb..e0f1741dd0 100644 --- a/hazelcast/protocol/codec/replicated_map_put_all_codec.py +++ b/hazelcast/protocol/codec/replicated_map_put_all_codec.py @@ -1,35 +1,18 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.replicated_map_message_type import * -from hazelcast import six +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import EntryListCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = REPLICATEDMAP_PUTALL -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x0D0800 +_REQUEST_MESSAGE_TYPE = 854016 +# hex: 0x0D0801 +_RESPONSE_MESSAGE_TYPE = 854017 - -def calculate_size(name, entries): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - 
for key, val in six.iteritems(entries): - data_size += calculate_size_data(key) - data_size += calculate_size_data(val) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name, entries): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, entries)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(len(entries)) - for entries_item in six.iteritems(entries): - client_message.append_tuple(entries_item) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + EntryListCodec.encode(buf, entries, DataCodec.encode, DataCodec.encode, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/replicated_map_put_codec.py b/hazelcast/protocol/codec/replicated_map_put_codec.py index e9d7e7bcf7..f0f0040ebe 100644 --- a/hazelcast/protocol/codec/replicated_map_put_codec.py +++ b/hazelcast/protocol/codec/replicated_map_put_codec.py @@ -1,38 +1,28 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.replicated_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = REPLICATEDMAP_PUT -RESPONSE_TYPE = 105 -RETRYABLE = False +# hex: 0x0D0100 +_REQUEST_MESSAGE_TYPE = 852224 +# hex: 0x0D0101 +_RESPONSE_MESSAGE_TYPE = 852225 - -def calculate_size(name, key, value, ttl): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - data_size += calculate_size_data(value) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_TTL_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_TTL_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, key, value, ttl): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key, value, ttl)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.append_data(value) - client_message.append_long(ttl) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TTL_OFFSET, ttl) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/replicated_map_remove_codec.py 
b/hazelcast/protocol/codec/replicated_map_remove_codec.py index 8d68ee6b97..d5289eb9a3 100644 --- a/hazelcast/protocol/codec/replicated_map_remove_codec.py +++ b/hazelcast/protocol/codec/replicated_map_remove_codec.py @@ -1,34 +1,23 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.replicated_map_message_type import * +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = REPLICATEDMAP_REMOVE -RESPONSE_TYPE = 105 -RETRYABLE = False +# hex: 0x0D0700 +_REQUEST_MESSAGE_TYPE = 853760 +# hex: 0x0D0701 +_RESPONSE_MESSAGE_TYPE = 853761 - -def calculate_size(name, key): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(key) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name, key): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, key)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(key) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/replicated_map_remove_entry_listener_codec.py b/hazelcast/protocol/codec/replicated_map_remove_entry_listener_codec.py index 71edc4c1d9..29197dcc1a 100644 --- a/hazelcast/protocol/codec/replicated_map_remove_entry_listener_codec.py +++ b/hazelcast/protocol/codec/replicated_map_remove_entry_listener_codec.py @@ -1,29 +1,25 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.replicated_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = REPLICATEDMAP_REMOVEENTRYLISTENER -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x0D0E00 +_REQUEST_MESSAGE_TYPE = 855552 +# hex: 0x0D0E01 +_RESPONSE_MESSAGE_TYPE = 855553 - -def calculate_size(name, registration_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(registration_id) - return data_size +_REQUEST_REGISTRATION_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_REGISTRATION_ID_OFFSET + UUID_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, registration_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, registration_id)) - 
client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(registration_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_REGISTRATION_ID_OFFSET, registration_id) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -# Empty decode_response because response is not used to determine the return value. +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/replicated_map_size_codec.py b/hazelcast/protocol/codec/replicated_map_size_codec.py index 08d8ae4a48..de73651a23 100644 --- a/hazelcast/protocol/codec/replicated_map_size_codec.py +++ b/hazelcast/protocol/codec/replicated_map_size_codec.py @@ -1,34 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.replicated_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = REPLICATEDMAP_SIZE -RESPONSE_TYPE = 102 -RETRYABLE = True +# hex: 0x0D0200 +_REQUEST_MESSAGE_TYPE = 852480 +# hex: 0x0D0201 +_RESPONSE_MESSAGE_TYPE = 852481 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_int() - return parameters - + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/replicated_map_values_codec.py b/hazelcast/protocol/codec/replicated_map_values_codec.py index ee06369c84..85f6d85ff0 100644 --- a/hazelcast/protocol/codec/replicated_map_values_codec.py +++ b/hazelcast/protocol/codec/replicated_map_values_codec.py @@ -1,41 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.replicated_map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = 
REPLICATEDMAP_VALUES -RESPONSE_TYPE = 106 -RETRYABLE = True +# hex: 0x0D1000 +_REQUEST_MESSAGE_TYPE = 856064 +# hex: 0x0D1001 +_RESPONSE_MESSAGE_TYPE = 856065 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters - + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/ringbuffer_add_all_codec.py b/hazelcast/protocol/codec/ringbuffer_add_all_codec.py index bc7b91f7ed..9982d2f43b 100644 --- a/hazelcast/protocol/codec/ringbuffer_add_all_codec.py +++ b/hazelcast/protocol/codec/ringbuffer_add_all_codec.py @@ -1,39 +1,28 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.ringbuffer_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = RINGBUFFER_ADDALL -RESPONSE_TYPE = 103 -RETRYABLE = False +# hex: 0x170800 +_REQUEST_MESSAGE_TYPE = 1509376 +# hex: 0x170801 +_RESPONSE_MESSAGE_TYPE = 1509377 - -def calculate_size(name, value_list, overflow_policy): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - for value_list_item in value_list: - data_size += calculate_size_data(value_list_item) - data_size += INT_SIZE_IN_BYTES - return data_size +_REQUEST_OVERFLOW_POLICY_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_OVERFLOW_POLICY_OFFSET + INT_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, value_list, overflow_policy): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, value_list, overflow_policy)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(len(value_list)) - for value_list_item in value_list: - client_message.append_data(value_list_item) - client_message.append_int(overflow_policy) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_int(buf, 
_REQUEST_OVERFLOW_POLICY_OFFSET, overflow_policy) + StringCodec.encode(buf, name) + ListMultiFrameCodec.encode(buf, value_list, DataCodec.encode, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_long() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_long(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/ringbuffer_add_codec.py b/hazelcast/protocol/codec/ringbuffer_add_codec.py index cfcfadc9f0..827ca3e245 100644 --- a/hazelcast/protocol/codec/ringbuffer_add_codec.py +++ b/hazelcast/protocol/codec/ringbuffer_add_codec.py @@ -1,35 +1,27 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.ringbuffer_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = RINGBUFFER_ADD -RESPONSE_TYPE = 103 -RETRYABLE = False +# hex: 0x170600 +_REQUEST_MESSAGE_TYPE = 1508864 +# hex: 0x170601 +_RESPONSE_MESSAGE_TYPE = 1508865 - -def calculate_size(name, overflow_policy, value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - data_size += calculate_size_data(value) - return data_size +_REQUEST_OVERFLOW_POLICY_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_OVERFLOW_POLICY_OFFSET + INT_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, overflow_policy, value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, overflow_policy, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(overflow_policy) - client_message.append_data(value) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_int(buf, _REQUEST_OVERFLOW_POLICY_OFFSET, overflow_policy) + StringCodec.encode(buf, name) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_long() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_long(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/ringbuffer_capacity_codec.py b/hazelcast/protocol/codec/ringbuffer_capacity_codec.py index 11d3cc1ddd..c6f66d8b01 100644 --- a/hazelcast/protocol/codec/ringbuffer_capacity_codec.py +++ b/hazelcast/protocol/codec/ringbuffer_capacity_codec.py @@ -1,31 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.ringbuffer_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import 
OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = RINGBUFFER_CAPACITY -RESPONSE_TYPE = 103 -RETRYABLE = True +# hex: 0x170400 +_REQUEST_MESSAGE_TYPE = 1508352 +# hex: 0x170401 +_RESPONSE_MESSAGE_TYPE = 1508353 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_long() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_long(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/ringbuffer_head_sequence_codec.py b/hazelcast/protocol/codec/ringbuffer_head_sequence_codec.py index bc5b599630..2d68f84f04 100644 --- a/hazelcast/protocol/codec/ringbuffer_head_sequence_codec.py +++ b/hazelcast/protocol/codec/ringbuffer_head_sequence_codec.py @@ -1,31 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.ringbuffer_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = RINGBUFFER_HEADSEQUENCE -RESPONSE_TYPE = 103 -RETRYABLE = True +# hex: 0x170300 +_REQUEST_MESSAGE_TYPE = 1508096 +# hex: 0x170301 +_RESPONSE_MESSAGE_TYPE = 1508097 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_long() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_long(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/ringbuffer_message_type.py b/hazelcast/protocol/codec/ringbuffer_message_type.py deleted file mode 100644 index 
137891a490..0000000000 --- a/hazelcast/protocol/codec/ringbuffer_message_type.py +++ /dev/null @@ -1,10 +0,0 @@ - -RINGBUFFER_SIZE = 0x1901 -RINGBUFFER_TAILSEQUENCE = 0x1902 -RINGBUFFER_HEADSEQUENCE = 0x1903 -RINGBUFFER_CAPACITY = 0x1904 -RINGBUFFER_REMAININGCAPACITY = 0x1905 -RINGBUFFER_ADD = 0x1906 -RINGBUFFER_READONE = 0x1908 -RINGBUFFER_ADDALL = 0x1909 -RINGBUFFER_READMANY = 0x190a diff --git a/hazelcast/protocol/codec/ringbuffer_read_many_codec.py b/hazelcast/protocol/codec/ringbuffer_read_many_codec.py index f194bb1095..5e9afce8d6 100644 --- a/hazelcast/protocol/codec/ringbuffer_read_many_codec.py +++ b/hazelcast/protocol/codec/ringbuffer_read_many_codec.py @@ -1,51 +1,40 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.ringbuffer_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import LongArrayCodec -REQUEST_TYPE = RINGBUFFER_READMANY -RESPONSE_TYPE = 115 -RETRYABLE = True +# hex: 0x170900 +_REQUEST_MESSAGE_TYPE = 1509632 +# hex: 0x170901 +_RESPONSE_MESSAGE_TYPE = 1509633 - -def calculate_size(name, start_sequence, min_count, max_count, filter): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += LONG_SIZE_IN_BYTES - data_size += INT_SIZE_IN_BYTES - data_size += INT_SIZE_IN_BYTES - data_size += BOOLEAN_SIZE_IN_BYTES - if filter is not None: - data_size += calculate_size_data(filter) - return data_size +_REQUEST_START_SEQUENCE_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_MIN_COUNT_OFFSET = _REQUEST_START_SEQUENCE_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_MAX_COUNT_OFFSET = _REQUEST_MIN_COUNT_OFFSET + INT_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_MAX_COUNT_OFFSET + INT_SIZE_IN_BYTES +_RESPONSE_READ_COUNT_OFFSET = RESPONSE_HEADER_SIZE +_RESPONSE_NEXT_SEQ_OFFSET = _RESPONSE_READ_COUNT_OFFSET + INT_SIZE_IN_BYTES def encode_request(name, start_sequence, min_count, max_count, filter): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, start_sequence, min_count, max_count, filter)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_long(start_sequence) - client_message.append_int(min_count) - client_message.append_int(max_count) - client_message.append_bool(filter is None) - if filter is not None: - client_message.append_data(filter) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_START_SEQUENCE_OFFSET, start_sequence) + FixSizedTypesCodec.encode_int(buf, _REQUEST_MIN_COUNT_OFFSET, min_count) + FixSizedTypesCodec.encode_int(buf, _REQUEST_MAX_COUNT_OFFSET, max_count) + StringCodec.encode(buf, name) + CodecUtil.encode_nullable(buf, filter, DataCodec.encode, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ 
Decode response from client message""" - parameters = dict(read_count=None, items=None) - parameters['read_count'] = client_message.read_int() - items_size = client_message.read_int() - items = [] - for _ in range(0, items_size): - items_item = client_message.read_data() - items.append(items_item) - parameters['items'] = ImmutableLazyDataList(items, to_object) - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + response = dict() + response["read_count"] = FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_READ_COUNT_OFFSET) + response["next_seq"] = FixSizedTypesCodec.decode_long(initial_frame.buf, _RESPONSE_NEXT_SEQ_OFFSET) + response["items"] = ListMultiFrameCodec.decode(msg, DataCodec.decode) + response["item_seqs"] = CodecUtil.decode_nullable(msg, LongArrayCodec.decode) + return response diff --git a/hazelcast/protocol/codec/ringbuffer_read_one_codec.py b/hazelcast/protocol/codec/ringbuffer_read_one_codec.py index a1050ef753..9480196294 100644 --- a/hazelcast/protocol/codec/ringbuffer_read_one_codec.py +++ b/hazelcast/protocol/codec/ringbuffer_read_one_codec.py @@ -1,34 +1,26 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.ringbuffer_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = RINGBUFFER_READONE -RESPONSE_TYPE = 105 -RETRYABLE = True +# hex: 0x170700 +_REQUEST_MESSAGE_TYPE = 1509120 +# hex: 0x170701 +_RESPONSE_MESSAGE_TYPE = 1509121 - -def calculate_size(name, sequence): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_SEQUENCE_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_SEQUENCE_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, sequence): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, sequence)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_long(sequence) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_long(buf, _REQUEST_SEQUENCE_OFFSET, sequence) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/ringbuffer_remaining_capacity_codec.py b/hazelcast/protocol/codec/ringbuffer_remaining_capacity_codec.py index 67d9426e55..26b1245789 100644 --- a/hazelcast/protocol/codec/ringbuffer_remaining_capacity_codec.py +++ b/hazelcast/protocol/codec/ringbuffer_remaining_capacity_codec.py @@ -1,31 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from 
hazelcast.protocol.codec.ringbuffer_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = RINGBUFFER_REMAININGCAPACITY -RESPONSE_TYPE = 103 -RETRYABLE = True +# hex: 0x170500 +_REQUEST_MESSAGE_TYPE = 1508608 +# hex: 0x170501 +_RESPONSE_MESSAGE_TYPE = 1508609 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_long() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_long(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/ringbuffer_size_codec.py b/hazelcast/protocol/codec/ringbuffer_size_codec.py index 29156fae5a..02924b7c85 100644 --- a/hazelcast/protocol/codec/ringbuffer_size_codec.py +++ b/hazelcast/protocol/codec/ringbuffer_size_codec.py @@ -1,31 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.ringbuffer_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = RINGBUFFER_SIZE -RESPONSE_TYPE = 103 -RETRYABLE = True +# hex: 0x170100 +_REQUEST_MESSAGE_TYPE = 1507584 +# hex: 0x170101 +_RESPONSE_MESSAGE_TYPE = 1507585 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_long() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_long(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git 
a/hazelcast/protocol/codec/ringbuffer_tail_sequence_codec.py b/hazelcast/protocol/codec/ringbuffer_tail_sequence_codec.py index e945d8876d..c575287018 100644 --- a/hazelcast/protocol/codec/ringbuffer_tail_sequence_codec.py +++ b/hazelcast/protocol/codec/ringbuffer_tail_sequence_codec.py @@ -1,31 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.ringbuffer_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = RINGBUFFER_TAILSEQUENCE -RESPONSE_TYPE = 103 -RETRYABLE = True +# hex: 0x170200 +_REQUEST_MESSAGE_TYPE = 1507840 +# hex: 0x170201 +_RESPONSE_MESSAGE_TYPE = 1507841 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_long() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_long(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/semaphore_acquire_codec.py b/hazelcast/protocol/codec/semaphore_acquire_codec.py deleted file mode 100644 index d4964b70d5..0000000000 --- a/hazelcast/protocol/codec/semaphore_acquire_codec.py +++ /dev/null @@ -1,29 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.semaphore_message_type import * - -REQUEST_TYPE = SEMAPHORE_ACQUIRE -RESPONSE_TYPE = 100 -RETRYABLE = False - - -def calculate_size(name, permits): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - return data_size - - -def encode_request(name, permits): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, permits)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(permits) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode diff --git a/hazelcast/protocol/codec/semaphore_available_permits_codec.py b/hazelcast/protocol/codec/semaphore_available_permits_codec.py deleted file mode 100644 index 554b359954..0000000000 --- a/hazelcast/protocol/codec/semaphore_available_permits_codec.py +++ /dev/null @@ -1,31 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage 
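The regenerated codecs in this diff drop the shared `*_message_type.py` constant modules and embed a `_REQUEST_MESSAGE_TYPE`/`_RESPONSE_MESSAGE_TYPE` pair per file, with the hex id kept as a comment (the semaphore codecs are removed outright rather than regenerated). Below is a minimal sketch, outside the client itself, of how those decimal constants relate to the hex ids; the `message_type` helper is hypothetical and only illustrates the apparent `0xSSMMII` layout (service byte, method byte, then 0/1/2 for request/response/event):

```python
# Illustration only (not part of the client): how the decimal message type
# constants in these codecs map to the hex ids in their comments.
# Apparent layout: 0xSSMMII -- SS = service id, MM = method id,
# II = 0x00 for the request, 0x01 for the response, 0x02 for an event.

def message_type(service_id, method_id, kind):
    # kind: 0 = request, 1 = response, 2 = event (hypothetical helper)
    return (service_id << 16) | (method_id << 8) | kind

# Ringbuffer.Add, from ringbuffer_add_codec.py (0x170600 / 0x170601)
assert message_type(0x17, 0x06, 0) == 1508864
assert message_type(0x17, 0x06, 1) == 1508865

# Set.AddListener item event, from set_add_listener_codec.py (0x060B02)
assert message_type(0x06, 0x0B, 2) == 396034
```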
-from hazelcast.protocol.codec.semaphore_message_type import * - -REQUEST_TYPE = SEMAPHORE_AVAILABLEPERMITS -RESPONSE_TYPE = 102 -RETRYABLE = True - - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size - - -def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_int() - return parameters diff --git a/hazelcast/protocol/codec/semaphore_drain_permits_codec.py b/hazelcast/protocol/codec/semaphore_drain_permits_codec.py deleted file mode 100644 index d5424c3144..0000000000 --- a/hazelcast/protocol/codec/semaphore_drain_permits_codec.py +++ /dev/null @@ -1,31 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.semaphore_message_type import * - -REQUEST_TYPE = SEMAPHORE_DRAINPERMITS -RESPONSE_TYPE = 102 -RETRYABLE = False - - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size - - -def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_int() - return parameters diff --git a/hazelcast/protocol/codec/semaphore_init_codec.py b/hazelcast/protocol/codec/semaphore_init_codec.py deleted file mode 100644 index 142a438690..0000000000 --- a/hazelcast/protocol/codec/semaphore_init_codec.py +++ /dev/null @@ -1,33 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.semaphore_message_type import * - -REQUEST_TYPE = SEMAPHORE_INIT -RESPONSE_TYPE = 101 -RETRYABLE = False - - -def calculate_size(name, permits): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - return data_size - - -def encode_request(name, permits): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, permits)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(permits) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters diff --git a/hazelcast/protocol/codec/semaphore_message_type.py b/hazelcast/protocol/codec/semaphore_message_type.py deleted file mode 100644 index 53fa72f7de..0000000000 --- 
a/hazelcast/protocol/codec/semaphore_message_type.py +++ /dev/null @@ -1,8 +0,0 @@ - -SEMAPHORE_INIT = 0x0d01 -SEMAPHORE_ACQUIRE = 0x0d02 -SEMAPHORE_AVAILABLEPERMITS = 0x0d03 -SEMAPHORE_DRAINPERMITS = 0x0d04 -SEMAPHORE_REDUCEPERMITS = 0x0d05 -SEMAPHORE_RELEASE = 0x0d06 -SEMAPHORE_TRYACQUIRE = 0x0d07 diff --git a/hazelcast/protocol/codec/semaphore_reduce_permits_codec.py b/hazelcast/protocol/codec/semaphore_reduce_permits_codec.py deleted file mode 100644 index 2e7a2f14c2..0000000000 --- a/hazelcast/protocol/codec/semaphore_reduce_permits_codec.py +++ /dev/null @@ -1,29 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.semaphore_message_type import * - -REQUEST_TYPE = SEMAPHORE_REDUCEPERMITS -RESPONSE_TYPE = 100 -RETRYABLE = False - - -def calculate_size(name, reduction): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - return data_size - - -def encode_request(name, reduction): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, reduction)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(reduction) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode diff --git a/hazelcast/protocol/codec/semaphore_release_codec.py b/hazelcast/protocol/codec/semaphore_release_codec.py deleted file mode 100644 index 8f010eadb7..0000000000 --- a/hazelcast/protocol/codec/semaphore_release_codec.py +++ /dev/null @@ -1,29 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.semaphore_message_type import * - -REQUEST_TYPE = SEMAPHORE_RELEASE -RESPONSE_TYPE = 100 -RETRYABLE = False - - -def calculate_size(name, permits): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - return data_size - - -def encode_request(name, permits): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, permits)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(permits) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode diff --git a/hazelcast/protocol/codec/semaphore_try_acquire_codec.py b/hazelcast/protocol/codec/semaphore_try_acquire_codec.py deleted file mode 100644 index 2f75086f00..0000000000 --- a/hazelcast/protocol/codec/semaphore_try_acquire_codec.py +++ /dev/null @@ -1,35 +0,0 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.semaphore_message_type import * - -REQUEST_TYPE = SEMAPHORE_TRYACQUIRE -RESPONSE_TYPE = 101 -RETRYABLE = False - - -def calculate_size(name, permits, timeout): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - return data_size - - -def encode_request(name, permits, timeout): - """ Encode request into client_message""" - client_message = 
ClientMessage(payload_size=calculate_size(name, permits, timeout)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(permits) - client_message.append_long(timeout) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters diff --git a/hazelcast/protocol/codec/set_add_all_codec.py b/hazelcast/protocol/codec/set_add_all_codec.py index 08a14a0e0e..9f24d02757 100644 --- a/hazelcast/protocol/codec/set_add_all_codec.py +++ b/hazelcast/protocol/codec/set_add_all_codec.py @@ -1,37 +1,25 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.set_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = SET_ADDALL -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x060600 +_REQUEST_MESSAGE_TYPE = 394752 +# hex: 0x060601 +_RESPONSE_MESSAGE_TYPE = 394753 - -def calculate_size(name, value_list): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - for value_list_item in value_list: - data_size += calculate_size_data(value_list_item) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, value_list): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, value_list)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(len(value_list)) - for value_list_item in value_list: - client_message.append_data(value_list_item) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + ListMultiFrameCodec.encode(buf, value_list, DataCodec.encode, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/set_add_codec.py b/hazelcast/protocol/codec/set_add_codec.py index 69a2df6c67..4c39c67e4b 100644 --- a/hazelcast/protocol/codec/set_add_codec.py +++ b/hazelcast/protocol/codec/set_add_codec.py @@ -1,33 +1,24 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.set_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from 
hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = SET_ADD -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x060400 +_REQUEST_MESSAGE_TYPE = 394240 +# hex: 0x060401 +_RESPONSE_MESSAGE_TYPE = 394241 - -def calculate_size(name, value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(value) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(value) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/set_add_listener_codec.py b/hazelcast/protocol/codec/set_add_listener_codec.py index 5a210e0b27..987558fdfa 100644 --- a/hazelcast/protocol/codec/set_add_listener_codec.py +++ b/hazelcast/protocol/codec/set_add_listener_codec.py @@ -1,48 +1,44 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.set_message_type import * -from hazelcast.protocol.event_response_const import * - -REQUEST_TYPE = SET_ADDLISTENER -RESPONSE_TYPE = 104 -RETRYABLE = False - - -def calculate_size(name, include_value, local_only): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += BOOLEAN_SIZE_IN_BYTES - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE, EVENT_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil + +# hex: 0x060B00 +_REQUEST_MESSAGE_TYPE = 396032 +# hex: 0x060B01 +_RESPONSE_MESSAGE_TYPE = 396033 +# hex: 0x060B02 +_EVENT_ITEM_MESSAGE_TYPE = 396034 + +_REQUEST_INCLUDE_VALUE_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_LOCAL_ONLY_OFFSET = _REQUEST_INCLUDE_VALUE_OFFSET + BOOLEAN_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_LOCAL_ONLY_OFFSET + BOOLEAN_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE +_EVENT_ITEM_UUID_OFFSET = EVENT_HEADER_SIZE +_EVENT_ITEM_EVENT_TYPE_OFFSET = _EVENT_ITEM_UUID_OFFSET + UUID_SIZE_IN_BYTES def encode_request(name, include_value, local_only): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, include_value, local_only)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - 
client_message.append_bool(include_value) - client_message.append_bool(local_only) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_str() - return parameters - - -def handle(client_message, handle_event_item=None, to_object=None): - """ Event handler """ - message_type = client_message.get_message_type() - if message_type == EVENT_ITEM and handle_event_item is not None: - item = None - if not client_message.read_bool(): - item = client_message.read_data() - uuid = client_message.read_str() - event_type = client_message.read_int() - handle_event_item(item=item, uuid=uuid, event_type=event_type) + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_INCLUDE_VALUE_OFFSET, include_value) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_LOCAL_ONLY_OFFSET, local_only) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) + + +def handle(msg, handle_item_event=None): + message_type = msg.get_message_type() + if message_type == _EVENT_ITEM_MESSAGE_TYPE and handle_item_event is not None: + initial_frame = msg.next_frame() + uuid = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _EVENT_ITEM_UUID_OFFSET) + event_type = FixSizedTypesCodec.decode_int(initial_frame.buf, _EVENT_ITEM_EVENT_TYPE_OFFSET) + item = CodecUtil.decode_nullable(msg, DataCodec.decode) + handle_item_event(item, uuid, event_type) + return diff --git a/hazelcast/protocol/codec/set_clear_codec.py b/hazelcast/protocol/codec/set_clear_codec.py index 0347731083..93c8ea4229 100644 --- a/hazelcast/protocol/codec/set_clear_codec.py +++ b/hazelcast/protocol/codec/set_clear_codec.py @@ -1,27 +1,15 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.set_message_type import * +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = SET_CLEAR -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x060900 +_REQUEST_MESSAGE_TYPE = 395520 +# hex: 0x060901 +_RESPONSE_MESSAGE_TYPE = 395521 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/set_compare_and_remove_all_codec.py b/hazelcast/protocol/codec/set_compare_and_remove_all_codec.py index d6974248b3..ae90977820 100644 --- a/hazelcast/protocol/codec/set_compare_and_remove_all_codec.py +++ 
b/hazelcast/protocol/codec/set_compare_and_remove_all_codec.py @@ -1,37 +1,25 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.set_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = SET_COMPAREANDREMOVEALL -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x060700 +_REQUEST_MESSAGE_TYPE = 395008 +# hex: 0x060701 +_RESPONSE_MESSAGE_TYPE = 395009 - -def calculate_size(name, values): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - for values_item in values: - data_size += calculate_size_data(values_item) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, values): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, values)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(len(values)) - for values_item in values: - client_message.append_data(values_item) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + ListMultiFrameCodec.encode(buf, values, DataCodec.encode, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/set_compare_and_retain_all_codec.py b/hazelcast/protocol/codec/set_compare_and_retain_all_codec.py index 6942f40107..b736fca717 100644 --- a/hazelcast/protocol/codec/set_compare_and_retain_all_codec.py +++ b/hazelcast/protocol/codec/set_compare_and_retain_all_codec.py @@ -1,37 +1,25 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.set_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = SET_COMPAREANDRETAINALL -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x060800 +_REQUEST_MESSAGE_TYPE = 395264 +# hex: 0x060801 +_RESPONSE_MESSAGE_TYPE = 395265 - -def calculate_size(name, values): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - for values_item in values: - data_size += calculate_size_data(values_item) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE 
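Across these set and ringbuffer codecs the same frame layout recurs: fixed-size parameters (booleans, ints, longs, UUIDs) are written into the initial frame at precomputed offsets, variable-size parameters (strings, `Data`, lists) follow as their own frames, and the trailing `True` on the last encode call appears to mark the message's final frame. A self-contained sketch of the offset arithmetic, mirroring `ringbuffer_read_many_codec.py`; the header size used here is only a stand-in, since the real constant comes from `hazelcast.protocol.client_message`:

```python
# Sketch of the offset bookkeeping used by these codecs, mirroring
# ringbuffer_read_many_codec.py. REQUEST_HEADER_SIZE below is an assumed
# placeholder; the codecs import the real value.
LONG_SIZE_IN_BYTES = 8
INT_SIZE_IN_BYTES = 4
REQUEST_HEADER_SIZE = 22  # assumed placeholder value for illustration

# Fixed-size request fields are packed back to back after the header.
start_sequence_offset = REQUEST_HEADER_SIZE
min_count_offset = start_sequence_offset + LONG_SIZE_IN_BYTES
max_count_offset = min_count_offset + INT_SIZE_IN_BYTES
initial_frame_size = max_count_offset + INT_SIZE_IN_BYTES

assert initial_frame_size - REQUEST_HEADER_SIZE == 16  # long + int + int
```

This is also why codecs whose only parameter is the structure name (`set_size`, `ringbuffer_capacity`, and similar) keep `_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE`: the name travels in a later frame, not in the initial one.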
def encode_request(name, values): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, values)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(len(values)) - for values_item in values: - client_message.append_data(values_item) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + ListMultiFrameCodec.encode(buf, values, DataCodec.encode, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/set_contains_all_codec.py b/hazelcast/protocol/codec/set_contains_all_codec.py index 7a143e10b2..40033dfe9b 100644 --- a/hazelcast/protocol/codec/set_contains_all_codec.py +++ b/hazelcast/protocol/codec/set_contains_all_codec.py @@ -1,37 +1,25 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.set_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = SET_CONTAINSALL -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x060300 +_REQUEST_MESSAGE_TYPE = 393984 +# hex: 0x060301 +_RESPONSE_MESSAGE_TYPE = 393985 - -def calculate_size(name, items): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += INT_SIZE_IN_BYTES - for items_item in items: - data_size += calculate_size_data(items_item) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, items): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, items)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_int(len(items)) - for items_item in items: - client_message.append_data(items_item) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + ListMultiFrameCodec.encode(buf, items, DataCodec.encode, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/set_contains_codec.py b/hazelcast/protocol/codec/set_contains_codec.py index a85cda7c6b..6d3a6bc19e 100644 --- 
a/hazelcast/protocol/codec/set_contains_codec.py +++ b/hazelcast/protocol/codec/set_contains_codec.py @@ -1,33 +1,24 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.set_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = SET_CONTAINS -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x060200 +_REQUEST_MESSAGE_TYPE = 393728 +# hex: 0x060201 +_RESPONSE_MESSAGE_TYPE = 393729 - -def calculate_size(name, value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(value) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(value) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/set_get_all_codec.py b/hazelcast/protocol/codec/set_get_all_codec.py index 81dab6c4d9..74c3d86947 100644 --- a/hazelcast/protocol/codec/set_get_all_codec.py +++ b/hazelcast/protocol/codec/set_get_all_codec.py @@ -1,38 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.set_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = SET_GETALL -RESPONSE_TYPE = 106 -RETRYABLE = False +# hex: 0x060A00 +_REQUEST_MESSAGE_TYPE = 395776 +# hex: 0x060A01 +_RESPONSE_MESSAGE_TYPE = 395777 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, 
True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/set_is_empty_codec.py b/hazelcast/protocol/codec/set_is_empty_codec.py index 2c9a52a81a..4fba7a2a71 100644 --- a/hazelcast/protocol/codec/set_is_empty_codec.py +++ b/hazelcast/protocol/codec/set_is_empty_codec.py @@ -1,31 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.set_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = SET_ISEMPTY -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x060D00 +_REQUEST_MESSAGE_TYPE = 396544 +# hex: 0x060D01 +_RESPONSE_MESSAGE_TYPE = 396545 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/set_message_type.py b/hazelcast/protocol/codec/set_message_type.py deleted file mode 100644 index a7fc23e8c8..0000000000 --- a/hazelcast/protocol/codec/set_message_type.py +++ /dev/null @@ -1,14 +0,0 @@ - -SET_SIZE = 0x0601 -SET_CONTAINS = 0x0602 -SET_CONTAINSALL = 0x0603 -SET_ADD = 0x0604 -SET_REMOVE = 0x0605 -SET_ADDALL = 0x0606 -SET_COMPAREANDREMOVEALL = 0x0607 -SET_COMPAREANDRETAINALL = 0x0608 -SET_CLEAR = 0x0609 -SET_GETALL = 0x060a -SET_ADDLISTENER = 0x060b -SET_REMOVELISTENER = 0x060c -SET_ISEMPTY = 0x060d diff --git a/hazelcast/protocol/codec/set_remove_codec.py b/hazelcast/protocol/codec/set_remove_codec.py index 798a858b7a..a88decce28 100644 --- a/hazelcast/protocol/codec/set_remove_codec.py +++ b/hazelcast/protocol/codec/set_remove_codec.py @@ -1,33 +1,24 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.set_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message 
import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = SET_REMOVE -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x060500 +_REQUEST_MESSAGE_TYPE = 394496 +# hex: 0x060501 +_RESPONSE_MESSAGE_TYPE = 394497 - -def calculate_size(name, value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(value) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(value) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/set_remove_listener_codec.py b/hazelcast/protocol/codec/set_remove_listener_codec.py index 7ba6b05eb4..fe0ae06816 100644 --- a/hazelcast/protocol/codec/set_remove_listener_codec.py +++ b/hazelcast/protocol/codec/set_remove_listener_codec.py @@ -1,29 +1,25 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.set_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = SET_REMOVELISTENER -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x060C00 +_REQUEST_MESSAGE_TYPE = 396288 +# hex: 0x060C01 +_RESPONSE_MESSAGE_TYPE = 396289 - -def calculate_size(name, registration_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(registration_id) - return data_size +_REQUEST_REGISTRATION_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_REGISTRATION_ID_OFFSET + UUID_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, registration_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, registration_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(registration_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_REGISTRATION_ID_OFFSET, registration_id) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -# Empty decode_response because response is not 
used to determine the return value. +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/set_size_codec.py b/hazelcast/protocol/codec/set_size_codec.py index ed766daa5d..d5a7fd1f76 100644 --- a/hazelcast/protocol/codec/set_size_codec.py +++ b/hazelcast/protocol/codec/set_size_codec.py @@ -1,31 +1,22 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.set_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = SET_SIZE -RESPONSE_TYPE = 102 -RETRYABLE = False +# hex: 0x060100 +_REQUEST_MESSAGE_TYPE = 393472 +# hex: 0x060101 +_RESPONSE_MESSAGE_TYPE = 393473 - -def calculate_size(name): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_int() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/topic_add_message_listener_codec.py b/hazelcast/protocol/codec/topic_add_message_listener_codec.py index 0048780232..f82124d491 100644 --- a/hazelcast/protocol/codec/topic_add_message_listener_codec.py +++ b/hazelcast/protocol/codec/topic_add_message_listener_codec.py @@ -1,44 +1,41 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.topic_message_type import * -from hazelcast.protocol.event_response_const import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE, EVENT_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = TOPIC_ADDMESSAGELISTENER -RESPONSE_TYPE = 104 -RETRYABLE = False +# hex: 0x040200 +_REQUEST_MESSAGE_TYPE = 262656 +# hex: 0x040201 +_RESPONSE_MESSAGE_TYPE = 262657 +# hex: 0x040202 +_EVENT_TOPIC_MESSAGE_TYPE = 262658 - -def calculate_size(name, local_only): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += BOOLEAN_SIZE_IN_BYTES - return data_size +_REQUEST_LOCAL_ONLY_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_LOCAL_ONLY_OFFSET + BOOLEAN_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = 
RESPONSE_HEADER_SIZE +_EVENT_TOPIC_PUBLISH_TIME_OFFSET = EVENT_HEADER_SIZE +_EVENT_TOPIC_UUID_OFFSET = _EVENT_TOPIC_PUBLISH_TIME_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, local_only): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, local_only)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_bool(local_only) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_str() - return parameters - - -def handle(client_message, handle_event_topic=None, to_object=None): - """ Event handler """ - message_type = client_message.get_message_type() - if message_type == EVENT_TOPIC and handle_event_topic is not None: - item = client_message.read_data() - publish_time = client_message.read_long() - uuid = client_message.read_str() - handle_event_topic(item=item, publish_time=publish_time, uuid=uuid) + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_boolean(buf, _REQUEST_LOCAL_ONLY_OFFSET, local_only) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) + + +def handle(msg, handle_topic_event=None): + message_type = msg.get_message_type() + if message_type == _EVENT_TOPIC_MESSAGE_TYPE and handle_topic_event is not None: + initial_frame = msg.next_frame() + publish_time = FixSizedTypesCodec.decode_long(initial_frame.buf, _EVENT_TOPIC_PUBLISH_TIME_OFFSET) + uuid = FixSizedTypesCodec.decode_uuid(initial_frame.buf, _EVENT_TOPIC_UUID_OFFSET) + item = DataCodec.decode(msg) + handle_topic_event(item, publish_time, uuid) + return diff --git a/hazelcast/protocol/codec/topic_message_type.py b/hazelcast/protocol/codec/topic_message_type.py deleted file mode 100644 index 3126e3fa7f..0000000000 --- a/hazelcast/protocol/codec/topic_message_type.py +++ /dev/null @@ -1,4 +0,0 @@ - -TOPIC_PUBLISH = 0x0401 -TOPIC_ADDMESSAGELISTENER = 0x0402 -TOPIC_REMOVEMESSAGELISTENER = 0x0403 diff --git a/hazelcast/protocol/codec/topic_publish_codec.py b/hazelcast/protocol/codec/topic_publish_codec.py index c123b10205..cf80610184 100644 --- a/hazelcast/protocol/codec/topic_publish_codec.py +++ b/hazelcast/protocol/codec/topic_publish_codec.py @@ -1,29 +1,17 @@ -from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.topic_message_type import * +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = TOPIC_PUBLISH -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x040100 +_REQUEST_MESSAGE_TYPE = 262400 +# hex: 0x040101 +_RESPONSE_MESSAGE_TYPE = 262401 - -def calculate_size(name, message): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_data(message) - return data_size +_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE def encode_request(name, message): - """ Encode request into client_message""" - client_message = 
ClientMessage(payload_size=calculate_size(name, message)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_data(message) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + StringCodec.encode(buf, name) + DataCodec.encode(buf, message, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/topic_remove_message_listener_codec.py b/hazelcast/protocol/codec/topic_remove_message_listener_codec.py index 77b1dc4c65..cdcacd6437 100644 --- a/hazelcast/protocol/codec/topic_remove_message_listener_codec.py +++ b/hazelcast/protocol/codec/topic_remove_message_listener_codec.py @@ -1,29 +1,25 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.topic_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = TOPIC_REMOVEMESSAGELISTENER -RESPONSE_TYPE = 101 -RETRYABLE = True +# hex: 0x040300 +_REQUEST_MESSAGE_TYPE = 262912 +# hex: 0x040301 +_RESPONSE_MESSAGE_TYPE = 262913 - -def calculate_size(name, registration_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(registration_id) - return data_size +_REQUEST_REGISTRATION_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_REGISTRATION_ID_OFFSET + UUID_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, registration_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, registration_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(registration_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_REGISTRATION_ID_OFFSET, registration_id) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, True) -# Empty decode_response because response is not used to determine the return value. 
+def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/transaction_commit_codec.py b/hazelcast/protocol/codec/transaction_commit_codec.py index a7c7de347a..6352c1402e 100644 --- a/hazelcast/protocol/codec/transaction_commit_codec.py +++ b/hazelcast/protocol/codec/transaction_commit_codec.py @@ -1,29 +1,19 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transaction_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer -REQUEST_TYPE = TRANSACTION_COMMIT -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x150100 +_REQUEST_MESSAGE_TYPE = 1376512 +# hex: 0x150101 +_RESPONSE_MESSAGE_TYPE = 1376513 - -def calculate_size(transaction_id, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(transaction_id) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_TRANSACTION_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TRANSACTION_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(transaction_id, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(transaction_id, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(transaction_id) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE, True) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TRANSACTION_ID_OFFSET, transaction_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/transaction_create_codec.py b/hazelcast/protocol/codec/transaction_create_codec.py index 3d24147f18..50346a489d 100644 --- a/hazelcast/protocol/codec/transaction_create_codec.py +++ b/hazelcast/protocol/codec/transaction_create_codec.py @@ -1,37 +1,29 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transaction_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE -REQUEST_TYPE = TRANSACTION_CREATE -RESPONSE_TYPE = 104 -RETRYABLE = False +# hex: 0x150200 +_REQUEST_MESSAGE_TYPE = 1376768 +# hex: 0x150201 +_RESPONSE_MESSAGE_TYPE = 1376769 - -def calculate_size(timeout, durability, transaction_type, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += LONG_SIZE_IN_BYTES - data_size += INT_SIZE_IN_BYTES - data_size += INT_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_TIMEOUT_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_DURABILITY_OFFSET = _REQUEST_TIMEOUT_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_TRANSACTION_TYPE_OFFSET = _REQUEST_DURABILITY_OFFSET + INT_SIZE_IN_BYTES +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TRANSACTION_TYPE_OFFSET + 
INT_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(timeout, durability, transaction_type, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(timeout, durability, transaction_type, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_long(timeout) - client_message.append_int(durability) - client_message.append_int(transaction_type) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE, True) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TIMEOUT_OFFSET, timeout) + FixSizedTypesCodec.encode_int(buf, _REQUEST_DURABILITY_OFFSET, durability) + FixSizedTypesCodec.encode_int(buf, _REQUEST_TRANSACTION_TYPE_OFFSET, transaction_type) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_str() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_uuid(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/transaction_message_type.py b/hazelcast/protocol/codec/transaction_message_type.py deleted file mode 100644 index bb1f7f7017..0000000000 --- a/hazelcast/protocol/codec/transaction_message_type.py +++ /dev/null @@ -1,4 +0,0 @@ - -TRANSACTION_COMMIT = 0x1701 -TRANSACTION_CREATE = 0x1702 -TRANSACTION_ROLLBACK = 0x1703 diff --git a/hazelcast/protocol/codec/transaction_rollback_codec.py b/hazelcast/protocol/codec/transaction_rollback_codec.py index 91be686a35..49bcf73b6e 100644 --- a/hazelcast/protocol/codec/transaction_rollback_codec.py +++ b/hazelcast/protocol/codec/transaction_rollback_codec.py @@ -1,29 +1,19 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transaction_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer -REQUEST_TYPE = TRANSACTION_ROLLBACK -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x150300 +_REQUEST_MESSAGE_TYPE = 1377024 +# hex: 0x150301 +_RESPONSE_MESSAGE_TYPE = 1377025 - -def calculate_size(transaction_id, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(transaction_id) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_TRANSACTION_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TRANSACTION_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(transaction_id, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(transaction_id, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(transaction_id) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message 
has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE, True) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TRANSACTION_ID_OFFSET, transaction_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/transactional_list_add_codec.py b/hazelcast/protocol/codec/transactional_list_add_codec.py index be27e77f3e..a12dbf134b 100644 --- a/hazelcast/protocol/codec/transactional_list_add_codec.py +++ b/hazelcast/protocol/codec/transactional_list_add_codec.py @@ -1,37 +1,29 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_list_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = TRANSACTIONALLIST_ADD -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x110100 +_REQUEST_MESSAGE_TYPE = 1114368 +# hex: 0x110101 +_RESPONSE_MESSAGE_TYPE = 1114369 - -def calculate_size(name, txn_id, thread_id, item): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(item) - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, txn_id, thread_id, item): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, item)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(item) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, item, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/transactional_list_message_type.py b/hazelcast/protocol/codec/transactional_list_message_type.py deleted file mode 100644 index 4d7d1b84ec..0000000000 --- a/hazelcast/protocol/codec/transactional_list_message_type.py +++ /dev/null @@ -1,4 +0,0 @@ - -TRANSACTIONALLIST_ADD = 0x1301 -TRANSACTIONALLIST_REMOVE = 0x1302 -TRANSACTIONALLIST_SIZE = 0x1303 diff --git a/hazelcast/protocol/codec/transactional_list_remove_codec.py b/hazelcast/protocol/codec/transactional_list_remove_codec.py index bb284b685a..243fb8eac8 100644 --- 
a/hazelcast/protocol/codec/transactional_list_remove_codec.py +++ b/hazelcast/protocol/codec/transactional_list_remove_codec.py @@ -1,37 +1,29 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_list_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = TRANSACTIONALLIST_REMOVE -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x110200 +_REQUEST_MESSAGE_TYPE = 1114624 +# hex: 0x110201 +_RESPONSE_MESSAGE_TYPE = 1114625 - -def calculate_size(name, txn_id, thread_id, item): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(item) - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, txn_id, thread_id, item): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, item)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(item) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, item, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/transactional_list_size_codec.py b/hazelcast/protocol/codec/transactional_list_size_codec.py index ed916cfe18..24178e3f27 100644 --- a/hazelcast/protocol/codec/transactional_list_size_codec.py +++ b/hazelcast/protocol/codec/transactional_list_size_codec.py @@ -1,35 +1,27 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_list_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = TRANSACTIONALLIST_SIZE -RESPONSE_TYPE = 102 -RETRYABLE = False +# hex: 0x110300 +_REQUEST_MESSAGE_TYPE = 1114880 +# hex: 0x110301 +_RESPONSE_MESSAGE_TYPE = 1114881 - -def calculate_size(name, txn_id, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += 
calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, txn_id, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_int() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/transactional_map_contains_key_codec.py b/hazelcast/protocol/codec/transactional_map_contains_key_codec.py index 1ce37835d4..ef5ee0026c 100644 --- a/hazelcast/protocol/codec/transactional_map_contains_key_codec.py +++ b/hazelcast/protocol/codec/transactional_map_contains_key_codec.py @@ -1,37 +1,29 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = TRANSACTIONALMAP_CONTAINSKEY -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x0E0100 +_REQUEST_MESSAGE_TYPE = 917760 +# hex: 0x0E0101 +_RESPONSE_MESSAGE_TYPE = 917761 - -def calculate_size(name, txn_id, thread_id, key): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(key) - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, txn_id, thread_id, key): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, key)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(key) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) 
+ FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/transactional_map_contains_value_codec.py b/hazelcast/protocol/codec/transactional_map_contains_value_codec.py new file mode 100644 index 0000000000..d1aa58bed6 --- /dev/null +++ b/hazelcast/protocol/codec/transactional_map_contains_value_codec.py @@ -0,0 +1,29 @@ +from hazelcast.serialization.bits import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec + +# hex: 0x0E1200 +_REQUEST_MESSAGE_TYPE = 922112 +# hex: 0x0E1201 +_RESPONSE_MESSAGE_TYPE = 922113 + +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE + + +def encode_request(name, txn_id, thread_id, value): + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/transactional_map_delete_codec.py b/hazelcast/protocol/codec/transactional_map_delete_codec.py index 88dcac798b..c5b44716fc 100644 --- a/hazelcast/protocol/codec/transactional_map_delete_codec.py +++ b/hazelcast/protocol/codec/transactional_map_delete_codec.py @@ -1,33 +1,23 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = TRANSACTIONALMAP_DELETE -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x0E0C00 +_REQUEST_MESSAGE_TYPE = 920576 +# hex: 0x0E0C01 +_RESPONSE_MESSAGE_TYPE = 920577 - -def calculate_size(name, txn_id, thread_id, key): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(key) - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + 
LONG_SIZE_IN_BYTES def encode_request(name, txn_id, thread_id, key): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, key)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(key) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/transactional_map_get_codec.py b/hazelcast/protocol/codec/transactional_map_get_codec.py index f79a109599..948d6a2c18 100644 --- a/hazelcast/protocol/codec/transactional_map_get_codec.py +++ b/hazelcast/protocol/codec/transactional_map_get_codec.py @@ -1,38 +1,29 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = TRANSACTIONALMAP_GET -RESPONSE_TYPE = 105 -RETRYABLE = False +# hex: 0x0E0200 +_REQUEST_MESSAGE_TYPE = 918016 +# hex: 0x0E0201 +_RESPONSE_MESSAGE_TYPE = 918017 - -def calculate_size(name, txn_id, thread_id, key): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(key) - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, txn_id, thread_id, key): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, key)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(key) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, 
DataCodec.decode) diff --git a/hazelcast/protocol/codec/transactional_map_get_for_update_codec.py b/hazelcast/protocol/codec/transactional_map_get_for_update_codec.py index 4be6adbe9e..7baa3fb460 100644 --- a/hazelcast/protocol/codec/transactional_map_get_for_update_codec.py +++ b/hazelcast/protocol/codec/transactional_map_get_for_update_codec.py @@ -1,38 +1,29 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = TRANSACTIONALMAP_GETFORUPDATE -RESPONSE_TYPE = 105 -RETRYABLE = False +# hex: 0x0E0300 +_REQUEST_MESSAGE_TYPE = 918272 +# hex: 0x0E0301 +_RESPONSE_MESSAGE_TYPE = 918273 - -def calculate_size(name, txn_id, thread_id, key): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(key) - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, txn_id, thread_id, key): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, key)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(key) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/transactional_map_is_empty_codec.py b/hazelcast/protocol/codec/transactional_map_is_empty_codec.py index e3ca251da1..b9698b76b8 100644 --- a/hazelcast/protocol/codec/transactional_map_is_empty_codec.py +++ b/hazelcast/protocol/codec/transactional_map_is_empty_codec.py @@ -1,35 +1,27 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = TRANSACTIONALMAP_ISEMPTY -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x0E0500 
+_REQUEST_MESSAGE_TYPE = 918784 +# hex: 0x0E0501 +_RESPONSE_MESSAGE_TYPE = 918785 - -def calculate_size(name, txn_id, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, txn_id, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/transactional_map_key_set_codec.py b/hazelcast/protocol/codec/transactional_map_key_set_codec.py index 5096702af7..9f6685ba60 100644 --- a/hazelcast/protocol/codec/transactional_map_key_set_codec.py +++ b/hazelcast/protocol/codec/transactional_map_key_set_codec.py @@ -1,42 +1,28 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.transactional_map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = TRANSACTIONALMAP_KEYSET -RESPONSE_TYPE = 106 -RETRYABLE = False +# hex: 0x0E0E00 +_REQUEST_MESSAGE_TYPE = 921088 +# hex: 0x0E0E01 +_RESPONSE_MESSAGE_TYPE = 921089 - -def calculate_size(name, txn_id, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, txn_id, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - 
client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/transactional_map_key_set_with_predicate_codec.py b/hazelcast/protocol/codec/transactional_map_key_set_with_predicate_codec.py index 7c970c3242..0c1b350216 100644 --- a/hazelcast/protocol/codec/transactional_map_key_set_with_predicate_codec.py +++ b/hazelcast/protocol/codec/transactional_map_key_set_with_predicate_codec.py @@ -1,44 +1,29 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.transactional_map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec -REQUEST_TYPE = TRANSACTIONALMAP_KEYSETWITHPREDICATE -RESPONSE_TYPE = 106 -RETRYABLE = False +# hex: 0x0E0F00 +_REQUEST_MESSAGE_TYPE = 921344 +# hex: 0x0E0F01 +_RESPONSE_MESSAGE_TYPE = 921345 - -def calculate_size(name, txn_id, thread_id, predicate): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(predicate) - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, txn_id, thread_id, predicate): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, predicate)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(predicate) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, predicate, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - 
response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/transactional_map_message_type.py b/hazelcast/protocol/codec/transactional_map_message_type.py deleted file mode 100644 index 3735a774f4..0000000000 --- a/hazelcast/protocol/codec/transactional_map_message_type.py +++ /dev/null @@ -1,18 +0,0 @@ - -TRANSACTIONALMAP_CONTAINSKEY = 0x1001 -TRANSACTIONALMAP_GET = 0x1002 -TRANSACTIONALMAP_GETFORUPDATE = 0x1003 -TRANSACTIONALMAP_SIZE = 0x1004 -TRANSACTIONALMAP_ISEMPTY = 0x1005 -TRANSACTIONALMAP_PUT = 0x1006 -TRANSACTIONALMAP_SET = 0x1007 -TRANSACTIONALMAP_PUTIFABSENT = 0x1008 -TRANSACTIONALMAP_REPLACE = 0x1009 -TRANSACTIONALMAP_REPLACEIFSAME = 0x100a -TRANSACTIONALMAP_REMOVE = 0x100b -TRANSACTIONALMAP_DELETE = 0x100c -TRANSACTIONALMAP_REMOVEIFSAME = 0x100d -TRANSACTIONALMAP_KEYSET = 0x100e -TRANSACTIONALMAP_KEYSETWITHPREDICATE = 0x100f -TRANSACTIONALMAP_VALUES = 0x1010 -TRANSACTIONALMAP_VALUESWITHPREDICATE = 0x1011 diff --git a/hazelcast/protocol/codec/transactional_map_put_codec.py b/hazelcast/protocol/codec/transactional_map_put_codec.py index f439b350fa..168d3226e8 100644 --- a/hazelcast/protocol/codec/transactional_map_put_codec.py +++ b/hazelcast/protocol/codec/transactional_map_put_codec.py @@ -1,45 +1,32 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = TRANSACTIONALMAP_PUT -RESPONSE_TYPE = 105 -RETRYABLE = False +# hex: 0x0E0600 +_REQUEST_MESSAGE_TYPE = 919040 +# hex: 0x0E0601 +_RESPONSE_MESSAGE_TYPE = 919041 - -def calculate_size(name, txn_id, thread_id, key, value, ttl): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(key) - data_size += calculate_size_data(value) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_TTL_OFFSET = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_TTL_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, txn_id, thread_id, key, value, ttl): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, key, value, ttl)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(key) - client_message.append_data(value) - client_message.append_long(ttl) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from 
client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters - - - + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TTL_OFFSET, ttl) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/transactional_map_put_if_absent_codec.py b/hazelcast/protocol/codec/transactional_map_put_if_absent_codec.py index f88da4ca79..8347608abc 100644 --- a/hazelcast/protocol/codec/transactional_map_put_if_absent_codec.py +++ b/hazelcast/protocol/codec/transactional_map_put_if_absent_codec.py @@ -1,43 +1,30 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = TRANSACTIONALMAP_PUTIFABSENT -RESPONSE_TYPE = 105 -RETRYABLE = False +# hex: 0x0E0800 +_REQUEST_MESSAGE_TYPE = 919552 +# hex: 0x0E0801 +_RESPONSE_MESSAGE_TYPE = 919553 - -def calculate_size(name, txn_id, thread_id, key, value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(key) - data_size += calculate_size_data(value) - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, txn_id, thread_id, key, value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, key, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(key) - client_message.append_data(value) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters - - - + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) + + +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git 
a/hazelcast/protocol/codec/transactional_map_remove_codec.py b/hazelcast/protocol/codec/transactional_map_remove_codec.py index 9d53d68f3f..8d113267f9 100644 --- a/hazelcast/protocol/codec/transactional_map_remove_codec.py +++ b/hazelcast/protocol/codec/transactional_map_remove_codec.py @@ -1,41 +1,29 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = TRANSACTIONALMAP_REMOVE -RESPONSE_TYPE = 105 -RETRYABLE = False +# hex: 0x0E0B00 +_REQUEST_MESSAGE_TYPE = 920320 +# hex: 0x0E0B01 +_RESPONSE_MESSAGE_TYPE = 920321 - -def calculate_size(name, txn_id, thread_id, key): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(key) - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, txn_id, thread_id, key): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, key)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(key) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters - + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, False) +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/transactional_map_remove_if_same_codec.py b/hazelcast/protocol/codec/transactional_map_remove_if_same_codec.py index 3a71a495a2..5815f2ef4a 100644 --- a/hazelcast/protocol/codec/transactional_map_remove_if_same_codec.py +++ b/hazelcast/protocol/codec/transactional_map_remove_if_same_codec.py @@ -1,39 +1,30 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = TRANSACTIONALMAP_REMOVEIFSAME -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 
0x0E0D00 +_REQUEST_MESSAGE_TYPE = 920832 +# hex: 0x0E0D01 +_RESPONSE_MESSAGE_TYPE = 920833 - -def calculate_size(name, txn_id, thread_id, key, value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(key) - data_size += calculate_size_data(value) - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, txn_id, thread_id, key, value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, key, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(key) - client_message.append_data(value) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/transactional_map_replace_codec.py b/hazelcast/protocol/codec/transactional_map_replace_codec.py index 44e60c6bb7..c72b508a7a 100644 --- a/hazelcast/protocol/codec/transactional_map_replace_codec.py +++ b/hazelcast/protocol/codec/transactional_map_replace_codec.py @@ -1,40 +1,30 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = TRANSACTIONALMAP_REPLACE -RESPONSE_TYPE = 105 -RETRYABLE = False +# hex: 0x0E0900 +_REQUEST_MESSAGE_TYPE = 919808 +# hex: 0x0E0901 +_RESPONSE_MESSAGE_TYPE = 919809 - -def calculate_size(name, txn_id, thread_id, key, value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(key) - data_size += calculate_size_data(value) - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, txn_id, thread_id, key, value): - """ Encode request into 
client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, key, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(key) - client_message.append_data(value) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/transactional_map_replace_if_same_codec.py b/hazelcast/protocol/codec/transactional_map_replace_if_same_codec.py index 9378bb1385..a528dc0e51 100644 --- a/hazelcast/protocol/codec/transactional_map_replace_if_same_codec.py +++ b/hazelcast/protocol/codec/transactional_map_replace_if_same_codec.py @@ -1,41 +1,31 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = TRANSACTIONALMAP_REPLACEIFSAME -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x0E0A00 +_REQUEST_MESSAGE_TYPE = 920064 +# hex: 0x0E0A01 +_RESPONSE_MESSAGE_TYPE = 920065 - -def calculate_size(name, txn_id, thread_id, key, old_value, new_value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(key) - data_size += calculate_size_data(old_value) - data_size += calculate_size_data(new_value) - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, txn_id, thread_id, key, old_value, new_value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, key, old_value, new_value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(key) - client_message.append_data(old_value) - client_message.append_data(new_value) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, 
_REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, old_value) + DataCodec.encode(buf, new_value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/transactional_map_set_codec.py b/hazelcast/protocol/codec/transactional_map_set_codec.py index ea59d13201..03c28abd42 100644 --- a/hazelcast/protocol/codec/transactional_map_set_codec.py +++ b/hazelcast/protocol/codec/transactional_map_set_codec.py @@ -1,35 +1,24 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = TRANSACTIONALMAP_SET -RESPONSE_TYPE = 100 -RETRYABLE = False +# hex: 0x0E0700 +_REQUEST_MESSAGE_TYPE = 919296 +# hex: 0x0E0701 +_RESPONSE_MESSAGE_TYPE = 919297 - -def calculate_size(name, txn_id, thread_id, key, value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(key) - data_size += calculate_size_data(value) - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, txn_id, thread_id, key, value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, key, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(key) - client_message.append_data(value) - client_message.update_frame_length() - return client_message - - -# Empty decode_response(client_message), this message has no parameters to decode + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) diff --git a/hazelcast/protocol/codec/transactional_map_size_codec.py b/hazelcast/protocol/codec/transactional_map_size_codec.py index af526c166a..2003a033d0 100644 --- a/hazelcast/protocol/codec/transactional_map_size_codec.py +++ b/hazelcast/protocol/codec/transactional_map_size_codec.py @@ -1,35 +1,27 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from 
hazelcast.protocol.codec.transactional_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = TRANSACTIONALMAP_SIZE -RESPONSE_TYPE = 102 -RETRYABLE = False +# hex: 0x0E0400 +_REQUEST_MESSAGE_TYPE = 918528 +# hex: 0x0E0401 +_RESPONSE_MESSAGE_TYPE = 918529 - -def calculate_size(name, txn_id, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, txn_id, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_int() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/transactional_map_values_codec.py b/hazelcast/protocol/codec/transactional_map_values_codec.py index 5f92b290de..719dce5530 100644 --- a/hazelcast/protocol/codec/transactional_map_values_codec.py +++ b/hazelcast/protocol/codec/transactional_map_values_codec.py @@ -1,42 +1,28 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.transactional_map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = TRANSACTIONALMAP_VALUES -RESPONSE_TYPE = 106 -RETRYABLE = False +# hex: 0x0E1000 +_REQUEST_MESSAGE_TYPE = 921600 +# hex: 0x0E1001 +_RESPONSE_MESSAGE_TYPE = 921601 - -def calculate_size(name, txn_id, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = 
_REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, txn_id, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/transactional_map_values_with_predicate_codec.py b/hazelcast/protocol/codec/transactional_map_values_with_predicate_codec.py index 9962ec67cf..ab1726580a 100644 --- a/hazelcast/protocol/codec/transactional_map_values_with_predicate_codec.py +++ b/hazelcast/protocol/codec/transactional_map_values_with_predicate_codec.py @@ -1,44 +1,29 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.transactional_map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec -REQUEST_TYPE = TRANSACTIONALMAP_VALUESWITHPREDICATE -RESPONSE_TYPE = 106 -RETRYABLE = False +# hex: 0x0E1100 +_REQUEST_MESSAGE_TYPE = 921856 +# hex: 0x0E1101 +_RESPONSE_MESSAGE_TYPE = 921857 - -def calculate_size(name, txn_id, thread_id, predicate): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(predicate) - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, txn_id, thread_id, predicate): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, predicate)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(predicate) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + 
FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, predicate, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/transactional_multi_map_get_codec.py b/hazelcast/protocol/codec/transactional_multi_map_get_codec.py index b14509c309..52aee889f3 100644 --- a/hazelcast/protocol/codec/transactional_multi_map_get_codec.py +++ b/hazelcast/protocol/codec/transactional_multi_map_get_codec.py @@ -1,47 +1,29 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.transactional_multi_map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec -REQUEST_TYPE = TRANSACTIONALMULTIMAP_GET -RESPONSE_TYPE = 106 -RETRYABLE = False +# hex: 0x0F0200 +_REQUEST_MESSAGE_TYPE = 983552 +# hex: 0x0F0201 +_RESPONSE_MESSAGE_TYPE = 983553 - -def calculate_size(name, txn_id, thread_id, key): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(key) - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, txn_id, thread_id, key): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, key)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(key) - client_message.update_frame_length() - return client_message - - -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters - + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return 
OutboundMessage(buf, False) +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/transactional_multi_map_message_type.py b/hazelcast/protocol/codec/transactional_multi_map_message_type.py deleted file mode 100644 index f6f24a6dd1..0000000000 --- a/hazelcast/protocol/codec/transactional_multi_map_message_type.py +++ /dev/null @@ -1,7 +0,0 @@ - -TRANSACTIONALMULTIMAP_PUT = 0x1101 -TRANSACTIONALMULTIMAP_GET = 0x1102 -TRANSACTIONALMULTIMAP_REMOVE = 0x1103 -TRANSACTIONALMULTIMAP_REMOVEENTRY = 0x1104 -TRANSACTIONALMULTIMAP_VALUECOUNT = 0x1105 -TRANSACTIONALMULTIMAP_SIZE = 0x1106 diff --git a/hazelcast/protocol/codec/transactional_multi_map_put_codec.py b/hazelcast/protocol/codec/transactional_multi_map_put_codec.py index a1eeaec5b8..6f9708ea77 100644 --- a/hazelcast/protocol/codec/transactional_multi_map_put_codec.py +++ b/hazelcast/protocol/codec/transactional_multi_map_put_codec.py @@ -1,39 +1,30 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_multi_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = TRANSACTIONALMULTIMAP_PUT -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x0F0100 +_REQUEST_MESSAGE_TYPE = 983296 +# hex: 0x0F0101 +_RESPONSE_MESSAGE_TYPE = 983297 - -def calculate_size(name, txn_id, thread_id, key, value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(key) - data_size += calculate_size_data(value) - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, txn_id, thread_id, key, value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, key, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(key) - client_message.append_data(value) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git 
a/hazelcast/protocol/codec/transactional_multi_map_remove_codec.py b/hazelcast/protocol/codec/transactional_multi_map_remove_codec.py index e59b621ca9..0391af6469 100644 --- a/hazelcast/protocol/codec/transactional_multi_map_remove_codec.py +++ b/hazelcast/protocol/codec/transactional_multi_map_remove_codec.py @@ -1,44 +1,29 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.util import ImmutableLazyDataList -from hazelcast.protocol.codec.transactional_multi_map_message_type import * -from hazelcast.six.moves import range +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import ListMultiFrameCodec -REQUEST_TYPE = TRANSACTIONALMULTIMAP_REMOVE -RESPONSE_TYPE = 106 -RETRYABLE = False +# hex: 0x0F0300 +_REQUEST_MESSAGE_TYPE = 983808 +# hex: 0x0F0301 +_RESPONSE_MESSAGE_TYPE = 983809 - -def calculate_size(name, txn_id, thread_id, key): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(key) - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, txn_id, thread_id, key): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, key)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(key) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - response_size = client_message.read_int() - response = [] - for _ in range(0, response_size): - response_item = client_message.read_data() - response.append(response_item) - parameters['response'] = ImmutableLazyDataList(response, to_object) - return parameters +def decode_response(msg): + msg.next_frame() + return ListMultiFrameCodec.decode(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/transactional_multi_map_remove_entry_codec.py b/hazelcast/protocol/codec/transactional_multi_map_remove_entry_codec.py index 50e08395ed..bfe777b503 100644 --- a/hazelcast/protocol/codec/transactional_multi_map_remove_entry_codec.py +++ b/hazelcast/protocol/codec/transactional_multi_map_remove_entry_codec.py @@ -1,39 +1,30 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_multi_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from 
hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = TRANSACTIONALMULTIMAP_REMOVEENTRY -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x0F0400 +_REQUEST_MESSAGE_TYPE = 984064 +# hex: 0x0F0401 +_RESPONSE_MESSAGE_TYPE = 984065 - -def calculate_size(name, txn_id, thread_id, key, value): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(key) - data_size += calculate_size_data(value) - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, txn_id, thread_id, key, value): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, key, value)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(key) - client_message.append_data(value) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key) + DataCodec.encode(buf, value, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/transactional_multi_map_size_codec.py b/hazelcast/protocol/codec/transactional_multi_map_size_codec.py index 04a57e0376..e0db226d88 100644 --- a/hazelcast/protocol/codec/transactional_multi_map_size_codec.py +++ b/hazelcast/protocol/codec/transactional_multi_map_size_codec.py @@ -1,35 +1,27 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_multi_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = TRANSACTIONALMULTIMAP_SIZE -RESPONSE_TYPE = 102 -RETRYABLE = False +# hex: 0x0F0600 +_REQUEST_MESSAGE_TYPE = 984576 +# hex: 0x0F0601 +_RESPONSE_MESSAGE_TYPE = 984577 - -def calculate_size(name, txn_id, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES 
+_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, txn_id, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_int() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/transactional_multi_map_value_count_codec.py b/hazelcast/protocol/codec/transactional_multi_map_value_count_codec.py index df0dec7547..be87494996 100644 --- a/hazelcast/protocol/codec/transactional_multi_map_value_count_codec.py +++ b/hazelcast/protocol/codec/transactional_multi_map_value_count_codec.py @@ -1,37 +1,29 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_multi_map_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = TRANSACTIONALMULTIMAP_VALUECOUNT -RESPONSE_TYPE = 102 -RETRYABLE = False +# hex: 0x0F0500 +_REQUEST_MESSAGE_TYPE = 984320 +# hex: 0x0F0501 +_RESPONSE_MESSAGE_TYPE = 984321 - -def calculate_size(name, txn_id, thread_id, key): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(key) - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, txn_id, thread_id, key): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, key)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(key) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, key, 
True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_int() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/transactional_queue_message_type.py b/hazelcast/protocol/codec/transactional_queue_message_type.py deleted file mode 100644 index b77f0197b3..0000000000 --- a/hazelcast/protocol/codec/transactional_queue_message_type.py +++ /dev/null @@ -1,6 +0,0 @@ - -TRANSACTIONALQUEUE_OFFER = 0x1401 -TRANSACTIONALQUEUE_TAKE = 0x1402 -TRANSACTIONALQUEUE_POLL = 0x1403 -TRANSACTIONALQUEUE_PEEK = 0x1404 -TRANSACTIONALQUEUE_SIZE = 0x1405 diff --git a/hazelcast/protocol/codec/transactional_queue_offer_codec.py b/hazelcast/protocol/codec/transactional_queue_offer_codec.py index b46eb5cab2..b21d04d02b 100644 --- a/hazelcast/protocol/codec/transactional_queue_offer_codec.py +++ b/hazelcast/protocol/codec/transactional_queue_offer_codec.py @@ -1,39 +1,31 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_queue_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = TRANSACTIONALQUEUE_OFFER -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x120100 +_REQUEST_MESSAGE_TYPE = 1179904 +# hex: 0x120101 +_RESPONSE_MESSAGE_TYPE = 1179905 - -def calculate_size(name, txn_id, thread_id, item, timeout): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(item) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_TIMEOUT_OFFSET = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_TIMEOUT_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, txn_id, thread_id, item, timeout): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, item, timeout)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(item) - client_message.append_long(timeout) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TIMEOUT_OFFSET, timeout) + StringCodec.encode(buf, name) + DataCodec.encode(buf, item, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = 
dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/transactional_queue_peek_codec.py b/hazelcast/protocol/codec/transactional_queue_peek_codec.py index aaad450f13..ca7f22a95c 100644 --- a/hazelcast/protocol/codec/transactional_queue_peek_codec.py +++ b/hazelcast/protocol/codec/transactional_queue_peek_codec.py @@ -1,38 +1,30 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_queue_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = TRANSACTIONALQUEUE_PEEK -RESPONSE_TYPE = 105 -RETRYABLE = False +# hex: 0x120400 +_REQUEST_MESSAGE_TYPE = 1180672 +# hex: 0x120401 +_RESPONSE_MESSAGE_TYPE = 1180673 - -def calculate_size(name, txn_id, thread_id, timeout): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_TIMEOUT_OFFSET = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_TIMEOUT_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, txn_id, thread_id, timeout): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, timeout)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_long(timeout) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TIMEOUT_OFFSET, timeout) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/transactional_queue_poll_codec.py b/hazelcast/protocol/codec/transactional_queue_poll_codec.py index f61076e8a1..924d9bc671 100644 --- a/hazelcast/protocol/codec/transactional_queue_poll_codec.py +++ b/hazelcast/protocol/codec/transactional_queue_poll_codec.py @@ -1,38 +1,30 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_queue_message_type import * +from hazelcast.protocol.builtin 
import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = TRANSACTIONALQUEUE_POLL -RESPONSE_TYPE = 105 -RETRYABLE = False +# hex: 0x120300 +_REQUEST_MESSAGE_TYPE = 1180416 +# hex: 0x120301 +_RESPONSE_MESSAGE_TYPE = 1180417 - -def calculate_size(name, txn_id, thread_id, timeout): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_TIMEOUT_OFFSET = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_TIMEOUT_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, txn_id, thread_id, timeout): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, timeout)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_long(timeout) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_TIMEOUT_OFFSET, timeout) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/transactional_queue_size_codec.py b/hazelcast/protocol/codec/transactional_queue_size_codec.py index 3750843764..20039dd1dd 100644 --- a/hazelcast/protocol/codec/transactional_queue_size_codec.py +++ b/hazelcast/protocol/codec/transactional_queue_size_codec.py @@ -1,35 +1,27 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_queue_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = TRANSACTIONALQUEUE_SIZE -RESPONSE_TYPE = 102 -RETRYABLE = False +# hex: 0x120500 +_REQUEST_MESSAGE_TYPE = 1180928 +# hex: 0x120501 +_RESPONSE_MESSAGE_TYPE = 1180929 - -def calculate_size(name, txn_id, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES 
+_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, txn_id, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_int() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/transactional_queue_take_codec.py b/hazelcast/protocol/codec/transactional_queue_take_codec.py index b33c9be134..0dfcfcbec7 100644 --- a/hazelcast/protocol/codec/transactional_queue_take_codec.py +++ b/hazelcast/protocol/codec/transactional_queue_take_codec.py @@ -1,36 +1,28 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_queue_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec +from hazelcast.protocol.builtin import CodecUtil -REQUEST_TYPE = TRANSACTIONALQUEUE_TAKE -RESPONSE_TYPE = 105 -RETRYABLE = False +# hex: 0x120200 +_REQUEST_MESSAGE_TYPE = 1180160 +# hex: 0x120201 +_RESPONSE_MESSAGE_TYPE = 1180161 - -def calculate_size(name, txn_id, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES def encode_request(name, txn_id, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - if not 
client_message.read_bool(): - parameters['response'] = to_object(client_message.read_data()) - return parameters +def decode_response(msg): + msg.next_frame() + return CodecUtil.decode_nullable(msg, DataCodec.decode) diff --git a/hazelcast/protocol/codec/transactional_set_add_codec.py b/hazelcast/protocol/codec/transactional_set_add_codec.py index e3fead381f..7680876274 100644 --- a/hazelcast/protocol/codec/transactional_set_add_codec.py +++ b/hazelcast/protocol/codec/transactional_set_add_codec.py @@ -1,37 +1,29 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_set_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = TRANSACTIONALSET_ADD -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x100100 +_REQUEST_MESSAGE_TYPE = 1048832 +# hex: 0x100101 +_RESPONSE_MESSAGE_TYPE = 1048833 - -def calculate_size(name, txn_id, thread_id, item): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(item) - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, txn_id, thread_id, item): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, item)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(item) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, item, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/transactional_set_message_type.py b/hazelcast/protocol/codec/transactional_set_message_type.py deleted file mode 100644 index d642335a69..0000000000 --- a/hazelcast/protocol/codec/transactional_set_message_type.py +++ /dev/null @@ -1,4 +0,0 @@ - -TRANSACTIONALSET_ADD = 0x1201 -TRANSACTIONALSET_REMOVE = 0x1202 -TRANSACTIONALSET_SIZE = 0x1203 diff --git a/hazelcast/protocol/codec/transactional_set_remove_codec.py b/hazelcast/protocol/codec/transactional_set_remove_codec.py index 2d43b948ef..a6b6a21460 100644 --- a/hazelcast/protocol/codec/transactional_set_remove_codec.py +++ 
b/hazelcast/protocol/codec/transactional_set_remove_codec.py @@ -1,37 +1,29 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_set_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec +from hazelcast.protocol.builtin import DataCodec -REQUEST_TYPE = TRANSACTIONALSET_REMOVE -RESPONSE_TYPE = 101 -RETRYABLE = False +# hex: 0x100200 +_REQUEST_MESSAGE_TYPE = 1049088 +# hex: 0x100201 +_RESPONSE_MESSAGE_TYPE = 1049089 - -def calculate_size(name, txn_id, thread_id, item): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += LONG_SIZE_IN_BYTES - data_size += calculate_size_data(item) - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, txn_id, thread_id, item): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id, item)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.append_data(item) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name) + DataCodec.encode(buf, item, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_bool() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_boolean(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/codec/transactional_set_size_codec.py b/hazelcast/protocol/codec/transactional_set_size_codec.py index 6d30db5466..f20e63e3d8 100644 --- a/hazelcast/protocol/codec/transactional_set_size_codec.py +++ b/hazelcast/protocol/codec/transactional_set_size_codec.py @@ -1,35 +1,27 @@ from hazelcast.serialization.bits import * -from hazelcast.protocol.client_message import ClientMessage -from hazelcast.protocol.codec.transactional_set_message_type import * +from hazelcast.protocol.builtin import FixSizedTypesCodec +from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer, RESPONSE_HEADER_SIZE +from hazelcast.protocol.builtin import StringCodec -REQUEST_TYPE = TRANSACTIONALSET_SIZE -RESPONSE_TYPE = 102 -RETRYABLE = False +# hex: 0x100300 +_REQUEST_MESSAGE_TYPE = 1049344 +# hex: 0x100301 +_RESPONSE_MESSAGE_TYPE = 1049345 - -def calculate_size(name, txn_id, thread_id): - """ Calculates the request payload size""" - data_size = 0 - data_size += calculate_size_str(name) - data_size += calculate_size_str(txn_id) - data_size += 
LONG_SIZE_IN_BYTES - return data_size +_REQUEST_TXN_ID_OFFSET = REQUEST_HEADER_SIZE +_REQUEST_THREAD_ID_OFFSET = _REQUEST_TXN_ID_OFFSET + UUID_SIZE_IN_BYTES +_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_THREAD_ID_OFFSET + LONG_SIZE_IN_BYTES +_RESPONSE_RESPONSE_OFFSET = RESPONSE_HEADER_SIZE def encode_request(name, txn_id, thread_id): - """ Encode request into client_message""" - client_message = ClientMessage(payload_size=calculate_size(name, txn_id, thread_id)) - client_message.set_message_type(REQUEST_TYPE) - client_message.set_retryable(RETRYABLE) - client_message.append_str(name) - client_message.append_str(txn_id) - client_message.append_long(thread_id) - client_message.update_frame_length() - return client_message + buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE) + FixSizedTypesCodec.encode_uuid(buf, _REQUEST_TXN_ID_OFFSET, txn_id) + FixSizedTypesCodec.encode_long(buf, _REQUEST_THREAD_ID_OFFSET, thread_id) + StringCodec.encode(buf, name, True) + return OutboundMessage(buf, False) -def decode_response(client_message, to_object=None): - """ Decode response from client message""" - parameters = dict(response=None) - parameters['response'] = client_message.read_int() - return parameters +def decode_response(msg): + initial_frame = msg.next_frame() + return FixSizedTypesCodec.decode_int(initial_frame.buf, _RESPONSE_RESPONSE_OFFSET) diff --git a/hazelcast/protocol/custom_codec.py b/hazelcast/protocol/custom_codec.py deleted file mode 100644 index 33ce5f7948..0000000000 --- a/hazelcast/protocol/custom_codec.py +++ /dev/null @@ -1,142 +0,0 @@ -""" -Hazelcast client protocol codecs -""" - -from collections import namedtuple -from hazelcast.core import Member, DistributedObjectInfo, EntryView, Address -from hazelcast.six.moves import range - -EXCEPTION_MESSAGE_TYPE = 109 - - -class MemberCodec(object): - @classmethod - def encode(cls, client_message, member): - AddressCodec.encode(client_message, member.address) - client_message.append_str(member.uuid) - client_message.append_bool(member.is_lite_member) - client_message.append_int(len(member.attributes)) - for key, value in member.attributes: - client_message.append_str(key) - client_message.append_str(value) - - @classmethod - def decode(cls, client_message, to_object=None): - address = AddressCodec.decode(client_message) - uuid = client_message.read_str() - lite_member = client_message.read_bool() - attribute_size = client_message.read_int() - attributes = {} - for i in range(0, attribute_size, 1): - key = client_message.read_str() - value = client_message.read_str() - attributes[key] = value - return Member(address, uuid, lite_member, attributes) - - -class AddressCodec(object): - @classmethod - def encode(cls, client_message, obj): - client_message.append_str(obj.host).append_int(obj.port) - - @classmethod - def decode(cls, client_message, to_object=None): - host = client_message.read_str() - port = client_message.read_int() - return Address(host, port) - - -class DistributedObjectInfoCodec(object): - @classmethod - def encode(cls, client_message, obj): - client_message.append_str(obj.service_name).append_str(obj.name) - - @classmethod - def decode(cls, client_message): - service_name = client_message.read_str() - name = client_message.read_str() - return DistributedObjectInfo(name, service_name) - - -class EntryViewCodec(object): - @classmethod - def encode(cls, client_message, entry_view): - client_message.append_data(entry_view.key) - client_message.append_data(entry_view.value) - 
client_message.append_long(entry_view.cost) - client_message.append_long(entry_view.cost) - client_message.append_long(entry_view.creationTime) - client_message.append_long(entry_view.expirationTime) - client_message.append_long(entry_view.hits) - client_message.append_long(entry_view.lastAccessTime) - client_message.append_long(entry_view.lastStoredTime) - client_message.append_long(entry_view.lastUpdateTime) - client_message.append_long(entry_view.version) - client_message.append_long(entry_view.evictionCriteriaNumber) - client_message.append_long(entry_view.ttl) - - @classmethod - def decode(cls, client_message, to_object): - entry_view = EntryView() - entry_view.key = to_object(client_message.read_data()) - entry_view.value = to_object(client_message.read_data()) - entry_view.cost = client_message.read_long() - entry_view.creation_time = client_message.read_long() - entry_view.expiration_time = client_message.read_long() - entry_view.hits = client_message.read_long() - entry_view.last_access_time = client_message.read_long() - entry_view.last_stored_time = client_message.read_long() - entry_view.last_update_time = client_message.read_long() - entry_view.version = client_message.read_long() - entry_view.eviction_criteria_number = client_message.read_long() - entry_view.ttl = client_message.read_long() - return entry_view - - -class QueryCacheEventDataCodec(object): - @classmethod - def encode(cls, client_message, obj): - pass - - @classmethod - def decode(cls, client_message, to_object=None): - pass - - -StackTraceElement = namedtuple('StackTraceElement', ['declaring_class', 'method_name', 'file_name', 'line_number']) - - -class ErrorCodec(object): - message = None - cause_class_name = None - - def __init__(self, client_message): - self.error_code = client_message.read_int() - self.class_name = client_message.read_str() - if not client_message.read_bool(): - self.message = client_message.read_str() - - self.stack_trace = [] - stack_trace_count = client_message.read_int() - for _ in range(stack_trace_count): - self.stack_trace.append(self.decode_stack_trace(client_message)) - - self.cause_error_code = client_message.read_int() - if not client_message.read_bool(): - self.cause_class_name = client_message.read_str() - - @staticmethod - def decode_stack_trace(client_message): - declaring_class = client_message.read_str() - method_name = client_message.read_str() - file_name = None - if not client_message.read_bool(): - file_name = client_message.read_str() - line_number = client_message.read_int() - return StackTraceElement(declaring_class=declaring_class, - method_name=method_name, file_name=file_name, line_number=line_number) - - def __repr__(self): - return 'ErrorCodec(error_code="%s", class_name="%s", message="%s", cause_error_code="%s", ' \ - 'cause_class_name="%s' % (self.error_code, self.class_name, self.message, self.cause_error_code, - self.cause_class_name) diff --git a/hazelcast/protocol/error_codes.py b/hazelcast/protocol/error_codes.py deleted file mode 100644 index 6f5ce0b032..0000000000 --- a/hazelcast/protocol/error_codes.py +++ /dev/null @@ -1,97 +0,0 @@ -""" -Each exception that are defined in client protocol have unique identifier which are error code. -All error codes defined in protocol are listed in this class. 
-""" - -# Message type indicating exception -EXCEPTION_MESSAGE_TYPE = 109 - -UNDEFINED = 0 -ARRAY_INDEX_OUT_OF_BOUNDS = 1 -ARRAY_STORE = 2 -AUTHENTICATION = 3 -CACHE = 4 -CACHE_LOADER = 5 -CACHE_NOT_EXISTS = 6 -CACHE_WRITER = 7 -CALLER_NOT_MEMBER = 8 -CANCELLATION = 9 -CLASS_CAST = 10 -CLASS_NOT_FOUND = 11 -CONCURRENT_MODIFICATION = 12 -CONFIG_MISMATCH = 13 -CONFIGURATION = 14 -DISTRIBUTED_OBJECT_DESTROYED = 15 -DUPLICATE_INSTANCE_NAME = 16 -EOF = 17 -ENTRY_PROCESSOR = 18 -EXECUTION = 19 -HAZELCAST = 20 -HAZELCAST_INSTANCE_NOT_ACTIVE = 21 -HAZELCAST_OVERLOAD = 22 -HAZELCAST_SERIALIZATION = 23 -IO = 24 -ILLEGAL_ARGUMENT = 25 -ILLEGAL_ACCESS_EXCEPTION = 26 -ILLEGAL_ACCESS_ERROR = 27 -ILLEGAL_MONITOR_STATE = 28 -ILLEGAL_STATE = 29 -ILLEGAL_THREAD_STATE = 30 -INDEX_OUT_OF_BOUNDS = 31 -INTERRUPTED = 32 -INVALID_ADDRESS = 33 -INVALID_CONFIGURATION = 34 -MEMBER_LEFT = 35 -NEGATIVE_ARRAY_SIZE = 36 -NO_SUCH_ELEMENT = 37 -NOT_SERIALIZABLE = 38 -NULL_POINTER = 39 -OPERATION_TIMEOUT = 40 -PARTITION_MIGRATING = 41 -QUERY = 42 -QUERY_RESULT_SIZE_EXCEEDED = 43 -QUORUM = 44 -REACHED_MAX_SIZE = 45 -REJECTED_EXECUTION = 46 -REMOTE_MAP_REDUCE = 47 -RESPONSE_ALREADY_SENT = 48 -RETRYABLE_HAZELCAST = 49 -RETRYABLE_IO = 50 -RUNTIME = 51 -SECURITY = 52 -SOCKET = 53 -STALE_SEQUENCE = 54 -TARGET_DISCONNECTED = 55 -TARGET_NOT_MEMBER = 56 -TIMEOUT = 57 -TOPIC_OVERLOAD = 58 -TOPOLOGY_CHANGED = 59 -TRANSACTION = 60 -TRANSACTION_NOT_ACTIVE = 61 -TRANSACTION_TIMED_OUT = 62 -URI_SYNTAX = 63 -UTF_DATA_FORMAT = 64 -UNSUPPORTED_OPERATION = 65 -WRONG_TARGET = 66 -XA = 67 -ACCESS_CONTROL = 68 -LOGIN = 69 -UNSUPPORTED_CALLBACK = 70 -NO_DATA_MEMBER = 71 -REPLICATED_MAP_CANT_BE_CREATED = 72 -MAX_MESSAGE_SIZE_EXCEEDED = 73 -WAN_REPLICATION_QUEUE_FULL = 74 -ASSERTION_ERROR = 75 -OUT_OF_MEMORY_ERROR = 76 -STACK_OVERFLOW_ERROR = 77 -NATIVE_OUT_OF_MEMORY_ERROR = 78 -SERVICE_NOT_FOUND = 79 -STALE_TASK_ID = 80 -DUPLICATE_TASK = 81 -STALE_TASK = 82 -LOCAL_MEMBER_RESET = 83 -INDETERMINATE_OPERATION_STATE = 84 -FLAKE_ID_NODE_ID_OUT_OF_RANGE_EXCEPTION = 85 -TARGET_NOT_REPLICA_EXCEPTION = 86 -MUTATION_DISALLOWED_EXCEPTION = 87 -CONSISTENCY_LOST_EXCEPTION = 88 diff --git a/hazelcast/protocol/event_response_const.py b/hazelcast/protocol/event_response_const.py deleted file mode 100644 index e1e2a0d242..0000000000 --- a/hazelcast/protocol/event_response_const.py +++ /dev/null @@ -1,23 +0,0 @@ -""" -Event Response Constants -""" - -EVENT_MEMBER = 200 -EVENT_MEMBERLIST = 201 -EVENT_MEMBERATTRIBUTECHANGE = 202 -EVENT_ENTRY = 203 -EVENT_ITEM = 204 -EVENT_TOPIC = 205 -EVENT_PARTITIONLOST = 206 -EVENT_DISTRIBUTEDOBJECT = 207 -EVENT_CACHEINVALIDATION = 208 -EVENT_MAPPARTITIONLOST = 209 -EVENT_CACHE = 210 -EVENT_CACHEBATCHINVALIDATION = 211 -# ENTERPRISE -EVENT_QUERYCACHESINGLE = 212 -EVENT_QUERYCACHEBATCH = 213 - -EVENT_CACHEPARTITIONLOST = 214 -EVENT_IMAPINVALIDATION = 215 -EVENT_IMAPBATCHINVALIDATION = 216 diff --git a/hazelcast/proxy/__init__.py b/hazelcast/proxy/__init__.py index 030cbaa245..8de49f104b 100644 --- a/hazelcast/proxy/__init__.py +++ b/hazelcast/proxy/__init__.py @@ -1,62 +1,41 @@ -from hazelcast.core import DistributedObjectEvent -from hazelcast.protocol.codec import client_create_proxy_codec, client_destroy_proxy_codec, \ - client_add_distributed_object_listener_codec, client_remove_distributed_object_listener_codec -from hazelcast.proxy.atomic_long import AtomicLong -from hazelcast.proxy.atomic_reference import AtomicReference -from hazelcast.proxy.count_down_latch import CountDownLatch +from hazelcast.invocation import Invocation 
+from hazelcast.protocol.codec import client_create_proxy_codec, client_destroy_proxy_codec from hazelcast.proxy.executor import Executor -from hazelcast.proxy.id_generator import IdGenerator from hazelcast.proxy.list import List -from hazelcast.proxy.lock import Lock from hazelcast.proxy.map import create_map_proxy from hazelcast.proxy.multi_map import MultiMap from hazelcast.proxy.queue import Queue from hazelcast.proxy.reliable_topic import ReliableTopic from hazelcast.proxy.replicated_map import ReplicatedMap from hazelcast.proxy.ringbuffer import Ringbuffer -from hazelcast.proxy.semaphore import Semaphore from hazelcast.proxy.set import Set from hazelcast.proxy.topic import Topic from hazelcast.proxy.pn_counter import PNCounter from hazelcast.proxy.flake_id_generator import FlakeIdGenerator from hazelcast.util import to_list -ATOMIC_LONG_SERVICE = "hz:impl:atomicLongService" -ATOMIC_REFERENCE_SERVICE = "hz:impl:atomicReferenceService" -COUNT_DOWN_LATCH_SERVICE = "hz:impl:countDownLatchService" -ID_GENERATOR_SERVICE = "hz:impl:idGeneratorService" EXECUTOR_SERVICE = "hz:impl:executorService" -LOCK_SERVICE = "hz:impl:lockService" LIST_SERVICE = "hz:impl:listService" MULTI_MAP_SERVICE = "hz:impl:multiMapService" MAP_SERVICE = "hz:impl:mapService" RELIABLE_TOPIC_SERVICE = "hz:impl:reliableTopicService" REPLICATED_MAP_SERVICE = "hz:impl:replicatedMapService" RINGBUFFER_SERVICE = "hz:impl:ringbufferService" -SEMAPHORE_SERVICE = "hz:impl:semaphoreService" SET_SERVICE = "hz:impl:setService" QUEUE_SERVICE = "hz:impl:queueService" TOPIC_SERVICE = "hz:impl:topicService" PN_COUNTER_SERVICE = "hz:impl:PNCounterService" FLAKE_ID_GENERATOR_SERVICE = "hz:impl:flakeIdGeneratorService" -ID_GENERATOR_ATOMIC_LONG_PREFIX = "hz:atomic:idGenerator:" - _proxy_init = { - ATOMIC_LONG_SERVICE: AtomicLong, - ATOMIC_REFERENCE_SERVICE: AtomicReference, - COUNT_DOWN_LATCH_SERVICE: CountDownLatch, - ID_GENERATOR_SERVICE: IdGenerator, EXECUTOR_SERVICE: Executor, LIST_SERVICE: List, - LOCK_SERVICE: Lock, MAP_SERVICE: create_map_proxy, MULTI_MAP_SERVICE: MultiMap, QUEUE_SERVICE: Queue, RELIABLE_TOPIC_SERVICE: ReliableTopic, REPLICATED_MAP_SERVICE: ReplicatedMap, RINGBUFFER_SERVICE: Ringbuffer, - SEMAPHORE_SERVICE: Semaphore, SET_SERVICE: Set, TOPIC_SERVICE: Topic, PN_COUNTER_SERVICE: PNCounter, @@ -65,64 +44,42 @@ class ProxyManager(object): - def __init__(self, client): - self._client = client + def __init__(self, context): + self._context = context self._proxies = {} - def get_or_create(self, service_name, name, create_on_remote=True, **kwargs): + def get_or_create(self, service_name, name, create_on_remote=True): ns = (service_name, name) if ns in self._proxies: return self._proxies[ns] - proxy = self.create_proxy(service_name, name, create_on_remote, **kwargs) + proxy = self._create_proxy(service_name, name, create_on_remote) self._proxies[ns] = proxy return proxy - def create_proxy(self, service_name, name, create_on_remote, **kwargs): + def _create_proxy(self, service_name, name, create_on_remote): if create_on_remote: - message = client_create_proxy_codec.encode_request(name=name, service_name=service_name, - target=self._find_next_proxy_address()) - self._client.invoker.invoke_on_random_target(message).result() + request = client_create_proxy_codec.encode_request(name, service_name) + invocation = Invocation(request) + invocation_service = self._context.invocation_service + invocation_service.invoke(invocation) + invocation.future.result() - return _proxy_init[service_name](client=self._client, 
service_name=service_name, name=name, **kwargs) + return _proxy_init[service_name](service_name, name, self._context) def destroy_proxy(self, service_name, name, destroy_on_remote=True): ns = (service_name, name) try: self._proxies.pop(ns) if destroy_on_remote: - message = client_destroy_proxy_codec.encode_request(name=name, service_name=service_name) - self._client.invoker.invoke_on_random_target(message).result() + request = client_destroy_proxy_codec.encode_request(name, service_name) + invocation = Invocation(request) + invocation_service = self._context.invocation_service + invocation_service.invoke(invocation) + invocation.future.result() return True except KeyError: return False def get_distributed_objects(self): return to_list(self._proxies.values()) - - def add_distributed_object_listener(self, listener_func): - is_smart = self._client.config.network_config.smart_routing - request = client_add_distributed_object_listener_codec.encode_request(is_smart) - - def handle_distributed_object_event(**kwargs): - event = DistributedObjectEvent(**kwargs) - listener_func(event) - - def event_handler(client_message): - return client_add_distributed_object_listener_codec.handle(client_message, handle_distributed_object_event) - - def decode_add_listener(response): - return client_add_distributed_object_listener_codec.decode_response(response)["response"] - - def encode_remove_listener(registration_id): - return client_remove_distributed_object_listener_codec.encode_request(registration_id) - - return self._client.listener.register_listener(request, decode_add_listener, - encode_remove_listener, event_handler) - - def remove_distributed_object_listener(self, registration_id): - return self._client.listener.deregister_listener(registration_id) - - def _find_next_proxy_address(self): - # TODO: filter out lite members - return self._client.load_balancer.next_address() diff --git a/hazelcast/proxy/atomic_long.py b/hazelcast/proxy/atomic_long.py deleted file mode 100644 index 90b6c0ac66..0000000000 --- a/hazelcast/proxy/atomic_long.py +++ /dev/null @@ -1,140 +0,0 @@ -from hazelcast.protocol.codec import atomic_long_add_and_get_codec, atomic_long_compare_and_set_codec, \ - atomic_long_decrement_and_get_codec, atomic_long_get_and_add_codec, atomic_long_get_and_increment_codec, \ - atomic_long_get_and_set_codec, atomic_long_get_codec, atomic_long_increment_and_get_codec, atomic_long_set_codec, \ - atomic_long_alter_and_get_codec, atomic_long_alter_codec, atomic_long_apply_codec, atomic_long_get_and_alter_codec -from hazelcast.proxy.base import PartitionSpecificProxy -from hazelcast.util import check_not_none - - -class AtomicLong(PartitionSpecificProxy): - """ - AtomicLong is a redundant and highly available distributed long value which can be updated atomically. - """ - def add_and_get(self, delta): - """ - Atomically adds the given delta value to the currently stored value. - - :param delta: (long), the value to add to the currently stored value. - :return: (long), the updated value. - """ - return self._encode_invoke(atomic_long_add_and_get_codec, delta=delta) - - def alter(self, function): - """ - Alters the currently stored value by applying a function on it. - - :param function: (Function), A stateful serializable object which represents the Function defined on - server side. - This object must have a serializable Function counter part registered on server side with the actual - ``org.hazelcast.core.IFunction`` implementation. 
- """ - check_not_none(function, "function can't be None") - return self._encode_invoke(atomic_long_alter_codec, function=self._to_data(function)) - - def alter_and_get(self, function): - """ - Alters the currently stored value by applying a function on it and gets the result. - - :param function: (Function), A stateful serializable object which represents the Function defined on - server side. - This object must have a serializable Function counter part registered on server side with the actual - ``org.hazelcast.core.IFunction`` implementation. - :return: (long), the new value. - """ - check_not_none(function, "function can't be None") - return self._encode_invoke(atomic_long_alter_and_get_codec, function=self._to_data(function)) - - def apply(self, function): - """ - Applies a function on the value, the actual stored value will not change. - - :param function: (Function), A stateful serializable object which represents the Function defined on - server side. - This object must have a serializable Function counter part registered on server side with the actual - ``org.hazelcast.core.IFunction`` implementation. - :return: (object), the result of the function application. - """ - check_not_none(function, "function can't be None") - return self._encode_invoke(atomic_long_apply_codec, function=self._to_data(function)) - - def compare_and_set(self, expected, updated): - """ - Atomically sets the value to the given updated value only if the current value == the expected value. - - :param expected: (long), the expected value. - :param updated: (long), the new value. - :return: (bool), ``true`` if successful; or ``false`` if the actual value was not equal to the expected value. - """ - return self._encode_invoke(atomic_long_compare_and_set_codec, expected=expected, - updated=updated) - - def decrement_and_get(self): - """ - Atomically decrements the current value by one. - - :return: (long), the updated value, the current value decremented by one. - """ - return self._encode_invoke(atomic_long_decrement_and_get_codec) - - def get(self): - """ - Gets the current value. - - :return: (long), gets the current value. - """ - return self._encode_invoke(atomic_long_get_codec) - - def get_and_add(self, delta): - """ - Atomically adds the given value to the current value. - - :param delta: (long), the value to add to the current value. - :return: (long), the old value before the addition. - """ - return self._encode_invoke(atomic_long_get_and_add_codec, delta=delta) - - def get_and_alter(self, function): - """ - Alters the currently stored value by applying a function on it on and gets the old value. - - :param function: (Function), A stateful serializable object which represents the Function defined on - server side. - This object must have a serializable Function counter part registered on server side with the actual - ``org.hazelcast.core.IFunction`` implementation. - :return: (long), the old value. - """ - check_not_none(function, "function can't be None") - return self._encode_invoke(atomic_long_get_and_alter_codec, function=self._to_data(function)) - - def get_and_set(self, new_value): - """ - Atomically sets the given value and returns the old value. - - :param new_value: (long), the new value. - :return: (long), the old value. - """ - return self._encode_invoke(atomic_long_get_and_set_codec, new_value=new_value) - - def increment_and_get(self): - """ - Atomically increments the current value by one. - - :return: (long), the updated value, the current value incremented by one. 
- """ - return self._encode_invoke(atomic_long_increment_and_get_codec) - - def get_and_increment(self): - """ - Atomically increments the current value by one. - - :return: (long), the old value. - """ - return self._encode_invoke(atomic_long_get_and_increment_codec) - - def set(self, new_value): - """ - Atomically sets the given value. - - :param new_value: (long), the new value. - """ - return self._encode_invoke(atomic_long_set_codec, new_value=new_value) diff --git a/hazelcast/proxy/atomic_reference.py b/hazelcast/proxy/atomic_reference.py deleted file mode 100644 index 3b95ea011c..0000000000 --- a/hazelcast/proxy/atomic_reference.py +++ /dev/null @@ -1,137 +0,0 @@ -from hazelcast.protocol.codec import atomic_reference_compare_and_set_codec, atomic_reference_clear_codec, \ - atomic_reference_contains_codec, atomic_reference_get_and_set_codec, atomic_reference_set_and_get_codec, \ - atomic_reference_get_codec, atomic_reference_is_null_codec, atomic_reference_set_codec, \ - atomic_reference_alter_and_get_codec, atomic_reference_alter_codec, atomic_reference_apply_codec, \ - atomic_reference_get_and_alter_codec -from hazelcast.proxy.base import PartitionSpecificProxy -from hazelcast.util import check_not_none - - -class AtomicReference(PartitionSpecificProxy): - """ - AtomicReference is a atomically updated reference to an object. - """ - def alter(self, function): - """ - Alters the currently stored reference by applying a function on it. - - :param function: (Function), A stateful serializable object which represents the Function defined on - server side. - This object must have a serializable Function counter part registered on server side with the actual - ``org.hazelcast.core.IFunction`` implementation. - """ - check_not_none(function, "function can't be None") - return self._encode_invoke(atomic_reference_alter_codec, function=self._to_data(function)) - - def apply(self, function): - """ - Applies a function on the value, the actual stored value will not change. - - :param function: (Function), A stateful serializable object which represents the Function defined on - server side. - This object must have a serializable Function counter part registered on server side with the actual - ``org.hazelcast.core.IFunction`` implementation. - :return: (object), the result of the function application. - """ - check_not_none(function, "function can't be None") - return self._encode_invoke(atomic_reference_apply_codec, function=self._to_data(function)) - - def alter_and_get(self, function): - """ - Alters the currently stored reference by applying a function on it and gets the result. - - :param function: (Function), A stateful serializable object which represents the Function defined on - server side. - This object must have a serializable Function counter part registered on server side with the actual - ``org.hazelcast.core.IFunction`` implementation. - :return: (object), the new value, the result of the applied function. - """ - check_not_none(function, "function can't be None") - return self._encode_invoke(atomic_reference_alter_and_get_codec, function=self._to_data(function)) - - def compare_and_set(self, expected, updated): - """ - Atomically sets the value to the given updated value only if the current value == the expected value. - - :param expected: (object), the expected value. - :param updated: (object), the new value. - :return: (bool), ``true`` if successful; or ``false`` if the actual value was not equal to the expected value. 
- """ - return self._encode_invoke(atomic_reference_compare_and_set_codec, - expected=self._to_data(expected), updated=self._to_data(updated)) - - def clear(self): - """ - Clears the current stored reference. - """ - return self._encode_invoke(atomic_reference_clear_codec) - - def contains(self, expected): - """ - Checks if the reference contains the value. - - :param expected: (object), the value to check (is allowed to be ``None``). - :return: (bool), ``true`` if the value is found, ``false`` otherwise. - """ - - return self._encode_invoke(atomic_reference_contains_codec, - expected=self._to_data(expected)) - - def get(self): - """ - Gets the current value. - - :return: (object), the current value. - """ - return self._encode_invoke(atomic_reference_get_codec) - - def get_and_alter(self, function): - """ - Alters the currently stored reference by applying a function on it on and gets the old value. - - :param function: (Function), A stateful serializable object which represents the Function defined on - server side. - This object must have a serializable Function counter part registered on server side with the actual - ``org.hazelcast.core.IFunction`` implementation. - :return: (object), the old value, the value before the function is applied. - """ - - check_not_none(function, "function can't be None") - return self._encode_invoke(atomic_reference_get_and_alter_codec, function=self._to_data(function)) - - def get_and_set(self, new_value): - """ - Gets the old value and sets the new value. - - :param new_value: (object), the new value. - :return: (object), the old value. - """ - return self._encode_invoke(atomic_reference_get_and_set_codec, - new_value=self._to_data(new_value)) - - def is_null(self): - """ - Checks if the stored reference is null. - - :return: (bool), ``true`` if null, ``false`` otherwise. - """ - return self._encode_invoke(atomic_reference_is_null_codec) - - def set(self, new_value): - """ - Atomically sets the given value. - - :param new_value: (object), the new value. - """ - return self._encode_invoke(atomic_reference_set_codec, - new_value=self._to_data(new_value)) - - def set_and_get(self, new_value): - """ - Sets and gets the value. - - :param new_value: (object), the new value. - :return: (object), the new value. - """ - return self._encode_invoke(atomic_reference_set_and_get_codec, - new_value=self._to_data(new_value)) diff --git a/hazelcast/proxy/base.py b/hazelcast/proxy/base.py index b521037fee..5c39cf962e 100644 --- a/hazelcast/proxy/base.py +++ b/hazelcast/proxy/base.py @@ -1,42 +1,36 @@ import logging from hazelcast.future import make_blocking +from hazelcast.invocation import Invocation from hazelcast.partition import string_partition_strategy -from hazelcast.util import enum, thread_id +from hazelcast.util import enum from hazelcast import six MAX_SIZE = float('inf') -def default_response_handler(future, codec, to_object): - response = future.result() - if response: - try: - codec.decode_response - except AttributeError: - return - decoded_response = codec.decode_response(response, to_object) - try: - return decoded_response['response'] - except AttributeError: - pass +def _no_op_response_handler(_): + return None class Proxy(object): """ Provides basic functionality for Hazelcast Proxies. 
""" - def __init__(self, client, service_name, name): + def __init__(self, service_name, name, context): self.service_name = service_name self.name = name - self.partition_key = string_partition_strategy(self.name) - self._client = client + self._context = context + self._invocation_service = context.invocation_service + self._partition_service = context.partition_service + serialization_service = context.serialization_service + self._to_object = serialization_service.to_object + self._to_data = serialization_service.to_data + listener_service = context.listener_service + self._register_listener = listener_service.register_listener + self._deregister_listener = listener_service.deregister_listener self.logger = logging.getLogger("HazelcastClient.%s(%s)" % (type(self).__name__, name)) - self._to_object = client.serialization_service.to_object - self._to_data = client.serialization_service.to_data - self._register_listener = client.listener.register_listener - self._deregister_listener = client.listener.deregister_listener - self._is_smart = client.listener.is_smart + self._is_smart = context.config.network.smart_routing def destroy(self): """ @@ -45,7 +39,7 @@ def destroy(self): :return: (bool), ``true`` if this proxy is deleted successfully, ``false`` otherwise. """ self._on_destroy() - return self._client.proxy.destroy_proxy(self.service_name, self.name) + return self._context.proxy_manager.destroy_proxy(self.service_name, self.name) def _on_destroy(self): pass @@ -53,23 +47,26 @@ def _on_destroy(self): def __repr__(self): return '%s(name="%s")' % (type(self).__name__, self.name) - def _encode_invoke(self, codec, response_handler=default_response_handler, **kwargs): - request = codec.encode_request(name=self.name, **kwargs) - return self._client.invoker.invoke_on_random_target(request).continue_with(response_handler, codec, self._to_object) + def _invoke(self, request, response_handler=_no_op_response_handler): + invocation = Invocation(request, response_handler=response_handler) + self._invocation_service.invoke(invocation) + return invocation.future - def _encode_invoke_on_target(self, codec, _address, response_handler=default_response_handler, **kwargs): - request = codec.encode_request(name=self.name, **kwargs) - return self._client.invoker.invoke_on_target(request, _address).continue_with(response_handler, codec, self._to_object) + def _invoke_on_target(self, request, uuid, response_handler=_no_op_response_handler): + invocation = Invocation(request, uuid=uuid, response_handler=response_handler) + self._invocation_service.invoke(invocation) + return invocation.future - def _encode_invoke_on_key(self, codec, key_data, invocation_timeout=None, **kwargs): - partition_id = self._client.partition_service.get_partition_id(key_data) - return self._encode_invoke_on_partition(codec, partition_id, invocation_timeout=invocation_timeout, **kwargs) + def _invoke_on_key(self, request, key_data, response_handler=_no_op_response_handler): + partition_id = self._partition_service.get_partition_id(key_data) + invocation = Invocation(request, partition_id=partition_id, response_handler=response_handler) + self._invocation_service.invoke(invocation) + return invocation.future - def _encode_invoke_on_partition(self, codec, _partition_id, response_handler=default_response_handler, - invocation_timeout=None, **kwargs): - request = codec.encode_request(name=self.name, **kwargs) - return self._client.invoker.invoke_on_partition(request, _partition_id, invocation_timeout).continue_with(response_handler, - 
codec, self._to_object) + def _invoke_on_partition(self, request, partition_id, response_handler=_no_op_response_handler): + invocation = Invocation(request, partition_id=partition_id, response_handler=response_handler) + self._invocation_service.invoke(invocation) + return invocation.future def blocking(self): """ @@ -83,37 +80,40 @@ class PartitionSpecificProxy(Proxy): """ Provides basic functionality for Partition Specific Proxies. """ - def __init__(self, client, service_name, name): - super(PartitionSpecificProxy, self).__init__(client, service_name, name) - self._partition_id = self._client.partition_service.get_partition_id(self.partition_key) + def __init__(self, service_name, name, context): + super(PartitionSpecificProxy, self).__init__(service_name, name, context) + partition_key = context.serialization_service.to_data(string_partition_strategy(self.name)) + self._partition_id = context.partition_service.get_partition_id(partition_key) - def _encode_invoke(self, codec, response_handler=default_response_handler, invocation_timeout=None, **kwargs): - return super(PartitionSpecificProxy, self)._encode_invoke_on_partition(codec, self._partition_id, - response_handler=response_handler, - invocation_timeout=invocation_timeout, **kwargs) + def _invoke(self, request, response_handler=_no_op_response_handler): + invocation = Invocation(request, partition_id=self._partition_id, response_handler=response_handler) + self._invocation_service.invoke(invocation) + return invocation.future class TransactionalProxy(object): """ Provides an interface for all transactional distributed objects. """ - def __init__(self, name, transaction): + def __init__(self, name, transaction, context): self.name = name self.transaction = transaction - self._to_object = transaction.client.serialization_service.to_object - self._to_data = transaction.client.serialization_service.to_data + self._invocation_service = context.invocation_service + serialization_service = context.serialization_service + self._to_object = serialization_service.to_object + self._to_data = serialization_service.to_data - def _encode_invoke(self, codec, response_handler=default_response_handler, **kwargs): - request = codec.encode_request(name=self.name, txn_id=self.transaction.id, thread_id=thread_id(), **kwargs) - return self.transaction.client.invoker.invoke_on_connection(request, self.transaction.connection).continue_with( - response_handler, codec, self._to_object) + def _invoke(self, request, response_handler=_no_op_response_handler): + invocation = Invocation(request, connection=self.transaction.connection, response_handler=response_handler) + self._invocation_service.invoke(invocation) + return invocation.future def __repr__(self): return '%s(name="%s")' % (type(self).__name__, self.name) ItemEventType = enum(added=1, removed=2) -EntryEventType = enum(added=1, removed=2, updated=4, evicted=8, evict_all=16, clear_all=32, merged=64, expired=128, +EntryEventType = enum(added=1, removed=2, updated=4, evicted=8, expired=16, evict_all=32, clear_all=64, merged=128, invalidation=256, loaded=512) @@ -138,8 +138,9 @@ class EntryEvent(object): """ Map Entry event. 
""" - def __init__(self, to_object, key, old_value, value, merging_value, event_type, uuid, + def __init__(self, to_object, key, value, old_value, merging_value, event_type, uuid, number_of_affected_entries): + self._to_object = to_object self._key_data = key self._value_data = value self._old_value_data = old_value @@ -147,7 +148,6 @@ def __init__(self, to_object, key, old_value, value, merging_value, event_type, self.event_type = event_type self.uuid = uuid self.number_of_affected_entries = number_of_affected_entries - self._to_object = to_object @property def key(self): @@ -170,10 +170,10 @@ def merging_value(self): return self._to_object(self._merging_value_data) def __repr__(self): - return "EntryEvent(key=%s, old_value=%s, value=%s, merging_value=%s, event_type=%s, uuid=%s, " \ + return "EntryEvent(key=%s, value=%s, old_value=%s, merging_value=%s, event_type=%s, uuid=%s, " \ "number_of_affected_entries=%s)" % ( - self.key, self.old_value, self.value, self.merging_value, self.event_type, self.uuid, - self.number_of_affected_entries) + self.key, self.value, self.old_value, self.merging_value, EntryEventType.reverse[self.event_type], + self.uuid, self.number_of_affected_entries) class TopicMessage(object): diff --git a/hazelcast/proxy/count_down_latch.py b/hazelcast/proxy/count_down_latch.py deleted file mode 100644 index 9ab8c3e9a8..0000000000 --- a/hazelcast/proxy/count_down_latch.py +++ /dev/null @@ -1,75 +0,0 @@ -from hazelcast.protocol.codec import \ - count_down_latch_await_codec, \ - count_down_latch_count_down_codec, \ - count_down_latch_get_count_codec, \ - count_down_latch_try_set_count_codec - -from hazelcast.proxy.base import PartitionSpecificProxy -from hazelcast.util import check_not_negative, to_millis -from hazelcast import six - - -class CountDownLatch(PartitionSpecificProxy): - """ - CountDownLatch is a backed-up, distributed, cluster-wide synchronization aid that allows one or more threads to wait until a - set of operations being performed in other threads completes - """ - if six.PY2: - six.exec_("""def await(self, timeout): - return self.await_latch(timeout) - """) - - def await_latch(self, timeout): - """ - Causes the current thread to wait until the latch has counted down to zero, or the specified waiting time - elapses. - - If the current count is zero then this method returns immediately with the value ``true``. - - If the current count is greater than zero, then the current thread becomes disabled for thread scheduling - purposes and lies dormant until one of following happens: - - * the count reaches zero due to invocations of the countDown() method, - * this CountDownLatch instance is destroyed, - * the countdown owner becomes disconnected, - * some other thread interrupts the current thread, or - * the specified waiting time elapses. - - If the count reaches zero, then the method returns with the value ``true``. - - :param timeout: (long), the maximum time in seconds to wait. - :return: (bool), ``true`` if the count reached zero, ``false`` if the waiting time elapsed before the count reached zero. - """ - return self._encode_invoke(count_down_latch_await_codec, timeout=to_millis(timeout)) - - def count_down(self): - """ - Decrements the count of the latch, releasing all waiting threads if the count reaches zero. - - If the current count is greater than zero, then it is decremented. If the new count is zero: - * All waiting threads are re-enabled for thread scheduling purposes, and - * Countdown owner is set to ``None``. 
- - If the current count equals zero, nothing happens. - """ - return self._encode_invoke(count_down_latch_count_down_codec) - - def get_count(self): - """ - Returns the current count - - :return: (int), the current count. - """ - return self._encode_invoke(count_down_latch_get_count_codec) - - def try_set_count(self, count): - """ - Sets the count to the given value if the current count is zero. If count is not zero, this method does nothing - and returns ``false``. - - :param count: (int), the number of times count_down() must be invoked before threads can pass through await(). - :return: (bool), ``true`` if the new count was set, ``false`` if the current count is not zero. - """ - check_not_negative(count, "count can't be negative") - return self._encode_invoke(count_down_latch_try_set_count_codec, count=count) - diff --git a/hazelcast/proxy/executor.py b/hazelcast/proxy/executor.py index 121f7531fd..6fabb5a51a 100644 --- a/hazelcast/proxy/executor.py +++ b/hazelcast/proxy/executor.py @@ -1,8 +1,8 @@ from uuid import uuid4 from hazelcast import future -from hazelcast.protocol.codec import executor_service_submit_to_address_codec, executor_service_shutdown_codec, \ - executor_service_is_shutdown_codec, executor_service_cancel_on_address_codec, \ - executor_service_cancel_on_partition_codec, executor_service_submit_to_partition_codec +from hazelcast.protocol.codec import executor_service_shutdown_codec, \ + executor_service_is_shutdown_codec, \ + executor_service_submit_to_partition_codec, executor_service_submit_to_member_codec from hazelcast.proxy.base import Proxy from hazelcast.util import check_not_none @@ -21,13 +21,18 @@ def execute_on_key_owner(self, key, task): :return: (:class:`~hazelcast.future.Future`), future representing pending completion of the task. """ check_not_none(key, "key can't be None") + check_not_none(task, "task can't be None") + + def handler(message): + return self._to_object(executor_service_submit_to_partition_codec.decode_response(message)) + key_data = self._to_data(key) - partition_id = self._client.partition_service.get_partition_id(key_data) + task_data = self._to_data(task) - uuid = self._get_uuid() - return self._encode_invoke_on_partition(executor_service_submit_to_partition_codec, partition_id, - uuid=uuid, callable=self._to_data(task), - partition_id=partition_id) + partition_id = self._context.partition_service.get_partition_id(key_data) + uuid = uuid4() + request = executor_service_submit_to_partition_codec.encode_request(self.name, uuid, task_data) + return self._invoke_on_partition(request, partition_id, handler) def execute_on_member(self, member, task): """ @@ -37,9 +42,10 @@ def execute_on_member(self, member, task): :param task: (Task), the task executed on the specified member. :return: (:class:`~hazelcast.future.Future`), Future representing pending completion of the task. 
""" - uuid = self._get_uuid() - address = member.address - return self._execute_on_member(address, uuid, self._to_data(task)) + check_not_none(task, "task can't be None") + task_data = self._to_data(task) + uuid = uuid4() + return self._execute_on_member(uuid, task_data, member.uuid) def execute_on_members(self, members, task): """ @@ -51,9 +57,9 @@ def execute_on_members(self, members, task): """ task_data = self._to_data(task) futures = [] - uuid = self._get_uuid() + uuid = uuid4() for member in members: - f = self._execute_on_member(member.address, uuid, task_data) + f = self._execute_on_member(uuid, task_data, member.uuid) futures.append(f) return future.combine_futures(*futures) @@ -64,7 +70,7 @@ def execute_on_all_members(self, task): :param task: (Task), the task executed on the all of the members. :return: (Map), :class:`~hazelcast.future.Future` tuples representing pending completion of the task on each member. """ - return self.execute_on_members(self._client.cluster.get_member_list(), task) + return self.execute_on_members(self._context.cluster_service.get_members(), task) def is_shutdown(self): """ @@ -72,18 +78,20 @@ def is_shutdown(self): :return: (bool), ``true`` if this executor has been shut down. """ - return self._encode_invoke(executor_service_is_shutdown_codec) + request = executor_service_is_shutdown_codec.encode_request(self.name) + return self._invoke(request, executor_service_is_shutdown_codec.decode_response) def shutdown(self): """ Initiates a shutdown process which works orderly. Tasks that were submitted before shutdown are executed but new task will not be accepted. """ - return self._encode_invoke(executor_service_shutdown_codec) + request = executor_service_shutdown_codec.encode_request(self.name) + return self._invoke(request) - def _execute_on_member(self, address, uuid, task_data): - return self._encode_invoke_on_target(executor_service_submit_to_address_codec, address, uuid=uuid, - callable=task_data, address=address) + def _execute_on_member(self, uuid, task_data, member_uuid): + def handler(message): + return self._to_object(executor_service_submit_to_member_codec.decode_response(message)) - def _get_uuid(self): - return str(uuid4()) + request = executor_service_submit_to_member_codec.encode_request(self.name, uuid, task_data, member_uuid) + return self._invoke_on_target(request, member_uuid, handler) diff --git a/hazelcast/proxy/flake_id_generator.py b/hazelcast/proxy/flake_id_generator.py index baaf5996c6..624589483b 100644 --- a/hazelcast/proxy/flake_id_generator.py +++ b/hazelcast/proxy/flake_id_generator.py @@ -38,10 +38,10 @@ class FlakeIdGenerator(Proxy): _BITS_NODE_ID = 16 _BITS_SEQUENCE = 6 - def __init__(self, client, service_name, name): - super(FlakeIdGenerator, self).__init__(client, service_name, name) + def __init__(self, service_name, name, context): + super(FlakeIdGenerator, self).__init__(service_name, name, context) - config = client.config.flake_id_generator_configs.get(name, None) + config = context.config.flake_id_generators.get(name, None) if config is None: config = FlakeIdGeneratorConfig() @@ -88,21 +88,13 @@ def init(self, id): return self.new_id().continue_with(lambda f: f.result() >= (id + reserve)) def _new_id_batch(self, batch_size): - future = self._encode_invoke(flake_id_generator_new_id_batch_codec, self._response_handler, - batch_size=batch_size) - - return future.continue_with(self._response_to_id_batch) - - def _response_handler(self, future, codec, to_object): - response = future.result() - if response: - return 
codec.decode_response(response, to_object) - - def _response_to_id_batch(self, future): - response = future.result() - if response: + def handler(message): + response = flake_id_generator_new_id_batch_codec.decode_response(message) return _IdBatch(response["base"], response["increment"], response["batch_size"]) + request = flake_id_generator_new_id_batch_codec.encode_request(self.name, batch_size) + return self._invoke(request, handler) + class _AutoBatcher(object): def __init__(self, batch_size, validity_in_millis, id_generator): diff --git a/hazelcast/proxy/id_generator.py b/hazelcast/proxy/id_generator.py deleted file mode 100644 index 29e3a6e196..0000000000 --- a/hazelcast/proxy/id_generator.py +++ /dev/null @@ -1,64 +0,0 @@ -import threading - -from hazelcast.proxy.base import Proxy -from hazelcast.util import AtomicInteger - -BLOCK_SIZE = 10000 - - -class IdGenerator(Proxy): - """ - The IdGenerator is responsible for creating unique ids (a long) in a cluster. In theory, an - AtomicLong.increment_and_get() could be used to provide the same functionality. The big difference is that the - increment_and_get() requires one or more remote calls for every invocation which cause a performance and - scalability bottleneck. The IdGenerator uses an AtomicLong under the hood, but instead of doing remote call for - every call to new_id(), it does it less frequently. It checks out a chunk, e.g. 1..1000 and as long as it has not - yet consumed all the ids in its chunk, then no remote call is done. IDs generated by different cluster members may - get out of order because each member will get its own chunk. It can be that member 1 has chunk 1..1000 and member - 2 has 1001..2000. Therefore, member 2 will automatically have ids that are out of order with the ids generated by - member 1. - """ - def __init__(self, client, service_name, name, atomic_long): - super(IdGenerator, self).__init__(client, service_name, name) - self._atomic_long = atomic_long - self._residue = BLOCK_SIZE - self._local = -1 - self._lock = threading.RLock() - - def _on_destroy(self): - self._atomic_long.destroy() - - def init(self, initial): - """ - Try to initialize this IdGenerator instance with the given id. The first generated id will be 1 greater than id. - - :param initial: (long), the given id. - :return: (bool), ``true`` if initialization succeeded, ``false`` if id is less than 0. - """ - if initial <= 0: - return False - step = initial // BLOCK_SIZE - with self._lock: - init = self._atomic_long.compare_and_set(0, step + 1).result() - if init: - self._local = step - self._residue = (initial % BLOCK_SIZE) + 1 - return init - - def new_id(self): - """ - Generates and returns a cluster-wide unique id. Generated ids are guaranteed to be unique for the entire cluster - as long as the cluster is live. If the cluster restarts, then id generation will start from 0. - - :return: (long), cluster-wide new unique id. 
- """ - with self._lock: - curr = self._residue - self._residue += 1 - if self._residue >= BLOCK_SIZE: - increment = self._atomic_long.get_and_increment().result() - self._local = increment - self._residue = 0 - return self.new_id() - return self._local * BLOCK_SIZE + curr - diff --git a/hazelcast/proxy/list.py b/hazelcast/proxy/list.py index 4027e39156..4ff65e4565 100644 --- a/hazelcast/proxy/list.py +++ b/hazelcast/proxy/list.py @@ -22,7 +22,7 @@ list_size_codec, \ list_sub_codec from hazelcast.proxy.base import PartitionSpecificProxy, ItemEvent, ItemEventType -from hazelcast.util import check_not_none +from hazelcast.util import check_not_none, ImmutableLazyDataList class List(PartitionSpecificProxy): @@ -41,7 +41,8 @@ def add(self, item): """ check_not_none(item, "Value can't be None") element_data = self._to_data(item) - return self._encode_invoke(list_add_codec, value=element_data) + request = list_add_codec.encode_request(self.name, element_data) + return self._invoke(request, list_add_codec.decode_response) def add_at(self, index, item): """ @@ -53,7 +54,9 @@ def add_at(self, index, item): """ check_not_none(item, "Value can't be None") element_data = self._to_data(item) - return self._encode_invoke(list_add_with_index_codec, index=index, value=element_data) + + request = list_add_with_index_codec.encode_request(self.name, index, element_data) + return self._invoke(request) def add_all(self, items): """ @@ -68,7 +71,9 @@ def add_all(self, items): for item in items: check_not_none(item, "Value can't be None") data_items.append(self._to_data(item)) - return self._encode_invoke(list_add_all_codec, value_list=data_items) + + request = list_add_all_codec.encode_request(self.name, data_items) + return self._invoke(request, list_add_all_codec.decode_response) def add_all_at(self, index, items): """ @@ -85,7 +90,9 @@ def add_all_at(self, index, items): for item in items: check_not_none(item, "Value can't be None") data_items.append(self._to_data(item)) - return self._encode_invoke(list_add_all_with_index_codec, index=index, value_list=data_items) + + request = list_add_all_with_index_codec.encode_request(self.name, index, data_items) + return self._invoke(request, list_add_all_with_index_codec.decode_response) def add_listener(self, include_value=False, item_added_func=None, item_removed_func=None): """ @@ -100,7 +107,7 @@ def add_listener(self, include_value=False, item_added_func=None, item_removed_f def handle_event_item(item, uuid, event_type): item = item if include_value else None - member = self._client.cluster.get_member_by_uuid(uuid) + member = self._context.cluster_service.get_member(uuid) item_event = ItemEvent(self.name, item, event_type, member, self._to_object) if event_type == ItemEventType.added: @@ -110,7 +117,7 @@ def handle_event_item(item, uuid, event_type): if item_removed_func: item_removed_func(item_event) - return self._register_listener(request, lambda r: list_add_listener_codec.decode_response(r)['response'], + return self._register_listener(request, lambda r: list_add_listener_codec.decode_response(r), lambda reg_id: list_remove_listener_codec.encode_request(self.name, reg_id), lambda m: list_add_listener_codec.handle(m, handle_event_item)) @@ -118,7 +125,8 @@ def clear(self): """ Clears the list. List will be empty with this call. 
""" - return self._encode_invoke(list_clear_codec) + request = list_clear_codec.encode_request(self.name) + return self._invoke(request) def contains(self, item): """ @@ -129,7 +137,9 @@ def contains(self, item): """ check_not_none(item, "Value can't be None") item_data = self._to_data(item) - return self._encode_invoke(list_contains_codec, value=item_data) + + request = list_contains_codec.encode_request(self.name, item_data) + return self._invoke(request, list_contains_codec.decode_response) def contains_all(self, items): """ @@ -143,7 +153,9 @@ def contains_all(self, items): for item in items: check_not_none(item, "item can't be None") data_items.append(self._to_data(item)) - return self._encode_invoke(list_contains_all_codec, values=data_items) + + request = list_contains_all_codec.encode_request(self.name, data_items) + return self._invoke(request, list_contains_all_codec.decode_response) def get(self, index): """ @@ -152,7 +164,11 @@ def get(self, index): :param index: (int), the specified index of the item to be returned. :return: (object), the item in the specified position in this list. """ - return self._encode_invoke(list_get_codec, index=index) + def handler(message): + return self._to_object(list_get_codec.decode_response(message)) + + request = list_get_codec.encode_request(self.name, index) + return self._invoke(request, handler) def get_all(self): """ @@ -160,7 +176,11 @@ def get_all(self): :return: (Sequence), list that includes all of the items in this list. """ - return self._encode_invoke(list_get_all_codec) + def handler(message): + return ImmutableLazyDataList(list_get_all_codec.decode_response(message), self._to_object) + + request = list_get_all_codec.encode_request(self.name) + return self._invoke(request, handler) def iterator(self): """ @@ -168,7 +188,11 @@ def iterator(self): :return: (Sequence), an iterator over the elements in this list in proper sequence. """ - return self._encode_invoke(list_iterator_codec) + def handler(message): + return ImmutableLazyDataList(list_iterator_codec.decode_response(message), self._to_object) + + request = list_iterator_codec.encode_request(self.name) + return self._invoke(request, handler) def index_of(self, item): """ @@ -180,7 +204,9 @@ def index_of(self, item): """ check_not_none(item, "Value can't be None") item_data = self._to_data(item) - return self._encode_invoke(list_index_of_codec, value=item_data) + + request = list_index_of_codec.encode_request(self.name, item_data) + return self._invoke(request, list_index_of_codec.decode_response) def is_empty(self): """ @@ -188,7 +214,9 @@ def is_empty(self): :return: (bool), ``true`` if this list contains no elements. """ - return self._encode_invoke(list_is_empty_codec) + + request = list_is_empty_codec.encode_request(self.name) + return self._invoke(request, list_is_empty_codec.decode_response) def last_index_of(self, item): """ @@ -200,7 +228,9 @@ def last_index_of(self, item): """ check_not_none(item, "Value can't be None") item_data = self._to_data(item) - return self._encode_invoke(list_last_index_of_codec, value=item_data) + + request = list_last_index_of_codec.encode_request(self.name, item_data) + return self._invoke(request, list_last_index_of_codec.decode_response) def list_iterator(self, index=0): """ @@ -209,7 +239,11 @@ def list_iterator(self, index=0): :param index: (int), index of first element to be returned from the list iterator (optional). :return: (Sequence), a list iterator of the elements in this list. 
""" - return self._encode_invoke(list_list_iterator_codec, index=index) + def handler(message): + return ImmutableLazyDataList(list_list_iterator_codec.decode_response(message), self._to_object) + + request = list_list_iterator_codec.encode_request(self.name, index) + return self._invoke(request, handler) def remove(self, item): """ @@ -220,7 +254,9 @@ def remove(self, item): """ check_not_none(item, "Value can't be None") item_data = self._to_data(item) - return self._encode_invoke(list_remove_codec, value=item_data) + + request = list_remove_codec.encode_request(self.name, item_data) + return self._invoke(request, list_remove_codec.decode_response) def remove_at(self, index): """ @@ -230,7 +266,11 @@ def remove_at(self, index): :param index: (int), index of the item to be removed. :return: (object), the item previously at the specified index. """ - return self._encode_invoke(list_remove_with_index_codec, index=index) + def handler(message): + return self._to_object(list_remove_with_index_codec.decode_response(message)) + + request = list_remove_with_index_codec.encode_request(self.name, index) + return self._invoke(request, handler) def remove_all(self, items): """ @@ -244,7 +284,9 @@ def remove_all(self, items): for item in items: check_not_none(item, "Value can't be None") data_items.append(self._to_data(item)) - return self._encode_invoke(list_compare_and_remove_all_codec, values=data_items) + + request = list_compare_and_remove_all_codec.encode_request(self.name, data_items) + return self._invoke(request, list_compare_and_remove_all_codec.decode_response) def remove_listener(self, registration_id): """ @@ -268,7 +310,9 @@ def retain_all(self, items): for item in items: check_not_none(item, "Value can't be None") data_items.append(self._to_data(item)) - return self._encode_invoke(list_compare_and_retain_all_codec, values=data_items) + + request = list_compare_and_retain_all_codec.encode_request(self.name, data_items) + return self._invoke(request, list_compare_and_retain_all_codec.decode_response) def size(self): """ @@ -276,7 +320,9 @@ def size(self): :return: (int), number of the elements in this list. """ - return self._encode_invoke(list_size_codec) + + request = list_size_codec.encode_request(self.name) + return self._invoke(request, list_size_codec.decode_response) def set_at(self, index, item): """ @@ -288,7 +334,12 @@ def set_at(self, index, item): """ check_not_none(item, "Value can't be None") element_data = self._to_data(item) - return self._encode_invoke(list_set_codec, index=index, value=element_data) + + def handler(message): + return self._to_object(list_set_codec.decode_response(message)) + + request = list_set_codec.encode_request(self.name, index, element_data) + return self._invoke(request, handler) def sub_list(self, from_index, to_index): """ @@ -300,4 +351,8 @@ def sub_list(self, from_index, to_index): :param to_index: (int), th end point(exclusive) of the sub_list. :return: (Sequence), a view of the specified range within this list. 
""" - return self._encode_invoke(list_sub_codec, from_=from_index, to=to_index) + def handler(message): + return ImmutableLazyDataList(list_sub_codec.decode_response(message), self._to_object) + + request = list_sub_codec.encode_request(self.name, from_index, to_index) + return self._invoke(request, handler) diff --git a/hazelcast/proxy/lock.py b/hazelcast/proxy/lock.py deleted file mode 100644 index a3204d4a76..0000000000 --- a/hazelcast/proxy/lock.py +++ /dev/null @@ -1,95 +0,0 @@ -from hazelcast.protocol.codec import lock_force_unlock_codec, lock_get_lock_count_codec, \ - lock_get_remaining_lease_time_codec, lock_is_locked_by_current_thread_codec, lock_is_locked_codec, lock_lock_codec, \ - lock_try_lock_codec, lock_unlock_codec -from hazelcast.proxy.base import PartitionSpecificProxy, MAX_SIZE -from hazelcast.util import thread_id, to_millis - - -class Lock(PartitionSpecificProxy): - """ - Distributed implementation of `asyncio.Lock `_ - """ - - def __init__(self, client, service_name, name): - super(Lock, self).__init__(client, service_name, name) - self.reference_id_generator = self._client.lock_reference_id_generator - - def force_unlock(self): - """ - Releases the lock regardless of the lock owner. It always successfully unlocks, never blocks, and returns - immediately. - """ - return self._encode_invoke(lock_force_unlock_codec, - reference_id=self.reference_id_generator.get_and_increment()) - - def get_lock_count(self): - """ - Returns re-entrant lock hold count, regardless of lock ownership. - - :return: (int), the lock hold count. - """ - return self._encode_invoke(lock_get_lock_count_codec) - - def get_remaining_lease_time(self): - """ - Returns remaining lease time in milliseconds. If the lock is not locked then -1 will be returned. - - :return: (long), remaining lease time in milliseconds. - """ - return self._encode_invoke(lock_get_remaining_lease_time_codec) - - def is_locked(self): - """ - Returns whether this lock is locked or not. - - :return: (bool), ``true`` if this lock is locked, ``false`` otherwise. - """ - return self._encode_invoke(lock_is_locked_codec) - - def is_locked_by_current_thread(self): - """ - Returns whether this lock is locked by current thread or not. - - :return: (bool), ``true`` if this lock is locked by current thread, ``false`` otherwise. - """ - return self._encode_invoke(lock_is_locked_by_current_thread_codec, - thread_id=thread_id()) - - def lock(self, lease_time=-1): - """ - Acquires the lock. If a lease time is specified, lock will be released after this lease time. - - If the lock is not available, the current thread becomes disabled for thread scheduling purposes and lies - dormant until the lock has been acquired. - - :param lease_time: (long), time to wait before releasing the lock (optional). - """ - return self._encode_invoke(lock_lock_codec, invocation_timeout=MAX_SIZE, lease_time=to_millis(lease_time), - thread_id=thread_id(), reference_id=self.reference_id_generator.get_and_increment()) - - def try_lock(self, timeout=0, lease_time=-1): - """ - Tries to acquire the lock. When the lock is not available, - - * If timeout is not provided, the current thread doesn't wait and returns ``false`` immediately. - * If a timeout is provided, the current thread becomes disabled for thread scheduling purposes and lies - dormant until one of the followings happens: - * the lock is acquired by the current thread, or - * the specified waiting time elapses. - - If lease time is provided, lock will be released after this time elapses. 
- - :param timeout: (long), maximum time in seconds to wait for the lock (optional). - :param lease_time: (long), time in seconds to wait before releasing the lock (optional). - :return: (bool), ``true`` if the lock was acquired and otherwise, ``false``. - """ - return self._encode_invoke(lock_try_lock_codec, invocation_timeout=MAX_SIZE, lease=to_millis(lease_time), - thread_id=thread_id(), timeout=to_millis(timeout), - reference_id=self.reference_id_generator.get_and_increment()) - - def unlock(self): - """ - Releases the lock. - """ - return self._encode_invoke(lock_unlock_codec, thread_id=thread_id(), - reference_id=self.reference_id_generator.get_and_increment()) diff --git a/hazelcast/proxy/map.py b/hazelcast/proxy/map.py index 14ddd457dc..47b3fc415b 100644 --- a/hazelcast/proxy/map.py +++ b/hazelcast/proxy/map.py @@ -1,21 +1,23 @@ import itertools +from hazelcast.config import _IndexUtil from hazelcast.future import combine_futures, ImmediateFuture -from hazelcast.near_cache import NearCache +from hazelcast.invocation import Invocation from hazelcast.protocol.codec import map_add_entry_listener_codec, map_add_entry_listener_to_key_codec, \ map_add_entry_listener_with_predicate_codec, map_add_entry_listener_to_key_with_predicate_codec, \ - map_add_index_codec, map_clear_codec, map_contains_key_codec, map_contains_value_codec, map_delete_codec, \ + map_clear_codec, map_contains_key_codec, map_contains_value_codec, map_delete_codec, \ map_entry_set_codec, map_entries_with_predicate_codec, map_evict_codec, map_evict_all_codec, map_flush_codec, \ map_force_unlock_codec, map_get_codec, map_get_all_codec, map_get_entry_view_codec, map_is_empty_codec, \ map_is_locked_codec, map_key_set_codec, map_key_set_with_predicate_codec, map_load_all_codec, \ map_load_given_keys_codec, map_lock_codec, map_put_codec, map_put_all_codec, map_put_if_absent_codec, \ map_put_transient_codec, map_size_codec, map_remove_codec, map_remove_if_same_codec, \ - map_remove_entry_listener_codec, map_replace_codec, map_replace_if_same_codec, map_set_codec, map_set_ttl_codec, \ - map_try_lock_codec, map_try_put_codec, map_try_remove_codec, map_unlock_codec, map_values_codec, \ - map_values_with_predicate_codec, map_add_interceptor_codec, map_execute_on_all_keys_codec, map_execute_on_key_codec, \ - map_execute_on_keys_codec, map_execute_with_predicate_codec, map_add_near_cache_entry_listener_codec + map_remove_entry_listener_codec, map_replace_codec, map_replace_if_same_codec, map_set_codec, map_try_lock_codec, \ + map_try_put_codec, map_try_remove_codec, map_unlock_codec, map_values_codec, map_values_with_predicate_codec, \ + map_add_interceptor_codec, map_execute_on_all_keys_codec, map_execute_on_key_codec, map_execute_on_keys_codec, \ + map_execute_with_predicate_codec, map_add_near_cache_invalidation_listener_codec, map_add_index_codec, \ + map_set_ttl_codec from hazelcast.proxy.base import Proxy, EntryEvent, EntryEventType, get_entry_listener_flags, MAX_SIZE -from hazelcast.util import check_not_none, thread_id, to_millis +from hazelcast.util import check_not_none, thread_id, to_millis, ImmutableLazyDataList from hazelcast import six @@ -53,9 +55,9 @@ class Map(Proxy): This class does not allow ``None`` to be used as a key or value. 
""" - def __init__(self, client, service_name, name): - super(Map, self).__init__(client, service_name, name) - self.reference_id_generator = self._client.lock_reference_id_generator + def __init__(self, service_name, name, context): + super(Map, self).__init__(service_name, name, context) + self._reference_id_generator = context.lock_reference_id_generator def add_entry_listener(self, include_value=False, key=None, predicate=None, added_func=None, removed_func=None, updated_func=None, evicted_func=None, evict_all_func=None, clear_all_func=None, @@ -85,23 +87,26 @@ def add_entry_listener(self, include_value=False, key=None, predicate=None, adde merged=merged_func, expired=expired_func, loaded=loaded_func) if key and predicate: + codec = map_add_entry_listener_to_key_with_predicate_codec key_data = self._to_data(key) predicate_data = self._to_data(predicate) - request = map_add_entry_listener_to_key_with_predicate_codec.encode_request( - self.name, key_data, predicate_data, include_value, flags, self._is_smart) + request = codec.encode_request(self.name, key_data, predicate_data, include_value, flags, self._is_smart) elif key and not predicate: + codec = map_add_entry_listener_to_key_codec key_data = self._to_data(key) - request = map_add_entry_listener_to_key_codec.encode_request( - self.name, key_data, include_value, flags, self._is_smart) + request = codec.encode_request(self.name, key_data, include_value, flags, self._is_smart) elif not key and predicate: + codec = map_add_entry_listener_with_predicate_codec predicate = self._to_data(predicate) - request = map_add_entry_listener_with_predicate_codec.encode_request( - self.name, predicate, include_value, flags, self._is_smart) + request = codec.encode_request(self.name, predicate, include_value, flags, self._is_smart) else: - request = map_add_entry_listener_codec.encode_request(self.name, include_value, flags, self._is_smart) + codec = map_add_entry_listener_codec + request = codec.encode_request(self.name, include_value, flags, self._is_smart) + + def handle_event_entry(key_, value, old_value, merging_value, event_type, uuid, number_of_affected_entries): + event = EntryEvent(self._to_object, key_, value, old_value, merging_value, + event_type, uuid, number_of_affected_entries) - def handle_event_entry(**_kwargs): - event = EntryEvent(self._to_object, **_kwargs) if event.event_type == EntryEventType.added: added_func(event) elif event.event_type == EntryEventType.removed: @@ -121,11 +126,11 @@ def handle_event_entry(**_kwargs): elif event.event_type == EntryEventType.loaded: loaded_func(event) - return self._register_listener(request, lambda r: map_add_entry_listener_codec.decode_response(r)['response'], + return self._register_listener(request, lambda r: codec.decode_response(r), lambda reg_id: map_remove_entry_listener_codec.encode_request(self.name, reg_id), - lambda m: map_add_entry_listener_codec.handle(m, handle_event_entry)) + lambda m: codec.handle(m, handle_event_entry)) - def add_index(self, attribute, ordered=False): + def add_index(self, index_config): """ Adds an index to this map for the specified entries so that queries can run faster. @@ -140,15 +145,27 @@ def add_index(self, attribute, ordered=False): >>> #methods If you query your values mostly based on age and active fields, you should consider indexing these. 
- >>> map = self.client.get_map("employees")
- >>> map.add_index("age" , true) #ordered, since we have ranged queries for this field
- >>> map.add_index("active", false) #not ordered, because boolean field cannot have range
+ >>> employees = self.client.get_map("employees")
+ >>> employees.add_index(IndexConfig("age")) # Sorted index for range queries
+ >>> employees.add_index(IndexConfig("active", INDEX_TYPE.HASH)) # Hash index for equality predicates
+
+ Index attribute should either have a getter method or be public.
+ You should also make sure to add the indexes before adding
+ entries to this map.
+
+ Indexing is executed in parallel on each partition by operation threads. The Map
+ is not blocked during this operation.
+ The time taken is proportional to the size of the Map and the number of Members.
+ Until the index finishes being created, any searches for the attribute will use a full Map scan,
+ thus avoiding using a partially built index and returning incorrect results.
- :param attribute: (str), index attribute of the value.
- :param ordered: (bool), for ordering the index or not (optional).
+ :param index_config: (:class:`~hazelcast.config.IndexConfig`), index config.
 """
- return self._encode_invoke(map_add_index_codec, attribute=attribute, ordered=ordered)
+ check_not_none(index_config, "Index config cannot be None")
+ validated = _IndexUtil.validate_and_normalize(self.name, index_config)
+ request = map_add_index_codec.encode_request(self.name, validated)
+ return self._invoke(request)
 def add_interceptor(self, interceptor):
 """
@@ -157,7 +174,10 @@ def add_interceptor(self, interceptor):
 :param interceptor: (object), interceptor for the map which includes user defined methods.
 :return: (str),id of registered interceptor.
 """
- return self._encode_invoke(map_add_interceptor_codec, interceptor=self._to_data(interceptor))
+ interceptor_data = self._to_data(interceptor)
+
+ request = map_add_interceptor_codec.encode_request(self.name, interceptor_data)
+ return self._invoke(request, map_add_interceptor_codec.decode_response)
 def clear(self):
 """
@@ -165,7 +185,8 @@ def clear(self):
 The MAP_CLEARED event is fired for any registered listeners.
 """
- return self._encode_invoke(map_clear_codec)
+ request = map_clear_codec.encode_request(self.name)
+ return self._invoke(request)
 def contains_key(self, key):
 """
@@ -190,7 +211,9 @@ def contains_value(self, value):
 """
 check_not_none(value, "value can't be None")
 value_data = self._to_data(value)
- return self._encode_invoke(map_contains_value_codec, value=value_data)
+
+ request = map_contains_value_codec.encode_request(self.name, value_data)
+ return self._invoke(request, map_contains_value_codec.decode_response)
 def delete(self, key):
 """
@@ -226,10 +249,18 @@ def entry_set(self, predicate=None):
 .. seealso:: :class:`~hazelcast.serialization.predicate.Predicate` for more info about predicates.
""" if predicate: + def handler(message): + return ImmutableLazyDataList(map_entries_with_predicate_codec.decode_response(message), self._to_object) + predicate_data = self._to_data(predicate) - return self._encode_invoke(map_entries_with_predicate_codec, predicate=predicate_data) + request = map_entries_with_predicate_codec.encode_request(self.name, predicate_data) else: - return self._encode_invoke(map_entry_set_codec) + def handler(message): + return ImmutableLazyDataList(map_entry_set_codec.decode_response(message), self._to_object) + + request = map_entry_set_codec.encode_request(self.name) + + return self._invoke(request, handler) def evict(self, key): """ @@ -251,7 +282,8 @@ def evict_all(self): The EVICT_ALL event is fired for any registered listeners. """ - return self._encode_invoke(map_evict_all_codec) + request = map_evict_all_codec.encode_request(self.name) + return self._invoke(request) def execute_on_entries(self, entry_processor, predicate=None): """ @@ -268,9 +300,20 @@ def execute_on_entries(self, entry_processor, predicate=None): .. seealso:: :class:`~hazelcast.serialization.predicate.Predicate` for more info about predicates. """ if predicate: - return self._encode_invoke(map_execute_with_predicate_codec, entry_processor=self._to_data(entry_processor), - predicate=self._to_data(predicate)) - return self._encode_invoke(map_execute_on_all_keys_codec, entry_processor=self._to_data(entry_processor)) + def handler(message): + return ImmutableLazyDataList(map_execute_with_predicate_codec.decode_response(message), self._to_object) + + entry_processor_data = self._to_data(entry_processor) + predicate_data = self._to_data(predicate) + request = map_execute_with_predicate_codec.encode_request(self.name, entry_processor_data, predicate_data) + else: + def handler(message): + return ImmutableLazyDataList(map_execute_on_all_keys_codec.decode_response(message), self._to_object) + + entry_processor_data = self._to_data(entry_processor) + request = map_execute_on_all_keys_codec.encode_request(self.name, entry_processor_data) + + return self._invoke(request, handler) def execute_on_key(self, key, entry_processor): """ @@ -308,14 +351,20 @@ def execute_on_keys(self, keys, entry_processor): if len(keys) == 0: return ImmediateFuture([]) - return self._encode_invoke(map_execute_on_keys_codec, entry_processor=self._to_data(entry_processor), - keys=key_list) + def handler(message): + return ImmutableLazyDataList(map_execute_on_keys_codec.decode_response(message), self._to_object) + + entry_processor_data = self._to_data(entry_processor) + request = map_execute_on_keys_codec.encode_request(self.name, entry_processor_data, key_list) + return self._invoke(request, handler) def flush(self): """ Flushes all the local dirty entries. 
""" - return self._encode_invoke(map_flush_codec) + + request = map_flush_codec.encode_request(self.name) + return self._invoke(request) def force_unlock(self, key): """ @@ -329,8 +378,10 @@ def force_unlock(self, key): """ check_not_none(key, "key can't be None") key_data = self._to_data(key) - return self._encode_invoke_on_key(map_force_unlock_codec, key_data, key=key_data, - reference_id=self.reference_id_generator.get_and_increment()) + + request = map_force_unlock_codec.encode_request(self.name, key_data, + self._reference_id_generator.get_and_increment()) + return self._invoke_on_key(request, key_data) def get(self, key): """ @@ -371,7 +422,7 @@ def get_all(self, keys): if not keys: return ImmediateFuture({}) - partition_service = self._client.partition_service + partition_service = self._context.partition_service partition_to_keys = {} for key in keys: @@ -402,8 +453,20 @@ def get_entry_view(self, key): .. seealso:: :class:`~hazelcast.core.EntryView` for more info about EntryView. """ check_not_none(key, "key can't be None") + + def handler(message): + response = map_get_entry_view_codec.decode_response(message) + entry_view = response["response"] + if not entry_view: + return None + + entry_view.key = self._to_object(entry_view.key) + entry_view.value = self._to_object(entry_view.value) + return entry_view + key_data = self._to_data(key) - return self._encode_invoke_on_key(map_get_entry_view_codec, key_data, key=key_data, thread_id=thread_id()) + request = map_get_entry_view_codec.encode_request(self.name, key_data, thread_id()) + return self._invoke_on_key(request, key_data, handler) def is_empty(self): """ @@ -411,7 +474,8 @@ def is_empty(self): :return: (bool), ``true`` if this map contains no key-value mappings. """ - return self._encode_invoke(map_is_empty_codec) + request = map_is_empty_codec.encode_request(self.name) + return self._invoke(request, map_is_empty_codec.decode_response) def is_locked(self, key): """ @@ -425,7 +489,9 @@ def is_locked(self, key): """ check_not_none(key, "key can't be None") key_data = self._to_data(key) - return self._encode_invoke_on_key(map_is_locked_codec, key_data, key=key_data) + + request = map_is_locked_codec.encode_request(self.name, key_data) + return self._invoke_on_key(request, key_data, map_is_locked_codec.decode_response) def key_set(self, predicate=None): """ @@ -441,10 +507,18 @@ def key_set(self, predicate=None): .. seealso:: :class:`~hazelcast.serialization.predicate.Predicate` for more info about predicates. 
""" if predicate: + def handler(message): + return ImmutableLazyDataList(map_key_set_with_predicate_codec.decode_response(message), self._to_object) + predicate_data = self._to_data(predicate) - return self._encode_invoke(map_key_set_with_predicate_codec, predicate=predicate_data) + request = map_key_set_with_predicate_codec.encode_request(self.name, predicate_data) else: - return self._encode_invoke(map_key_set_codec) + def handler(message): + return ImmutableLazyDataList(map_key_set_codec.decode_response(message), self._to_object) + + request = map_key_set_codec.encode_request(self.name) + + return self._invoke(request, handler) def load_all(self, keys=None, replace_existing_values=True): """ @@ -458,7 +532,8 @@ def load_all(self, keys=None, replace_existing_values=True): key_data_list = list(map(self._to_data, keys)) return self._load_all_internal(key_data_list, replace_existing_values) else: - return self._encode_invoke(map_load_all_codec, replace_existing_values=replace_existing_values) + request = map_load_all_codec.encode_request(self.name, replace_existing_values) + return self._invoke(request) def lock(self, key, ttl=-1): """ @@ -485,9 +560,13 @@ def lock(self, key, ttl=-1): """ check_not_none(key, "key can't be None") key_data = self._to_data(key) - return self._encode_invoke_on_key(map_lock_codec, key_data, invocation_timeout=MAX_SIZE, key=key_data, - thread_id=thread_id(), ttl=to_millis(ttl), - reference_id=self.reference_id_generator.get_and_increment()) + + request = map_lock_codec.encode_request(self.name, key_data, thread_id(), to_millis(ttl), + self._reference_id_generator.get_and_increment()) + partition_id = self._context.partition_service.get_partition_id(key_data) + invocation = Invocation(request, partition_id=partition_id, timeout=MAX_SIZE) + self._invocation_service.invoke(invocation) + return invocation.future def put(self, key, value, ttl=-1): """ @@ -525,7 +604,7 @@ def put_all(self, map): if not map: return ImmediateFuture(None) - partition_service = self._client.partition_service + partition_service = self._context.partition_service partition_map = {} for key, value in six.iteritems(map): @@ -540,7 +619,8 @@ def put_all(self, map): futures = [] for partition_id, entry_list in six.iteritems(partition_map): - future = self._encode_invoke_on_partition(map_put_all_codec, partition_id, entries=dict(entry_list)) + request = map_put_all_codec.encode_request(self.name, entry_list, False) # TODO trigger map loader + future = self._invoke_on_partition(request, partition_id) futures.append(future) return combine_futures(*futures) @@ -727,10 +807,9 @@ def set(self, key, value, ttl=-1): def set_ttl(self, key, ttl): """ - Updates the TTL (time to live) value of the entry specified by the given key with a new TTL value. New TTL + Updates the TTL (time to live) value of the entry specified by the given key with a new TTL value. New TTL value is valid starting from the time this operation is invoked, not since the time the entry was created. If the entry does not exist or is already expired, this call has no effect. - :param key: (object), the key of the map entry. :param ttl: (int), maximum time for this entry to stay in the map (0 means infinite, negative means map config default) @@ -746,7 +825,8 @@ def size(self): :return: (int), number of entries in this map. 
""" - return self._encode_invoke(map_size_codec) + request = map_size_codec.encode_request(self.name) + return self._invoke(request, map_size_codec.decode_response) def try_lock(self, key, ttl=-1, timeout=0): """ @@ -768,10 +848,14 @@ def try_lock(self, key, ttl=-1, timeout=0): check_not_none(key, "key can't be None") key_data = self._to_data(key) - - return self._encode_invoke_on_key(map_try_lock_codec, key_data, invocation_timeout=MAX_SIZE, key=key_data, - thread_id=thread_id(), lease=to_millis(ttl), timeout=to_millis(timeout), - reference_id=self.reference_id_generator.get_and_increment()) + request = map_try_lock_codec.encode_request(self.name, key_data, thread_id(), + to_millis(ttl), to_millis(timeout), + self._reference_id_generator.get_and_increment()) + partition_id = self._context.partition_service.get_partition_id(key_data) + invocation = Invocation(request, partition_id=partition_id, timeout=MAX_SIZE, + response_handler=map_try_lock_codec.decode_response) + self._invocation_service.invoke(invocation) + return invocation.future def try_put(self, key, value, timeout=0): """ @@ -815,9 +899,9 @@ def unlock(self, key): check_not_none(key, "key can't be None") key_data = self._to_data(key) - - return self._encode_invoke_on_key(map_unlock_codec, key_data, key=key_data, thread_id=thread_id(), - reference_id=self.reference_id_generator.get_and_increment()) + request = map_unlock_codec.encode_request(self.name, key_data, thread_id(), + self._reference_id_generator.get_and_increment()) + return self._invoke_on_key(request, key_data) def values(self, predicate=None): """ @@ -834,23 +918,41 @@ def values(self, predicate=None): .. seealso:: :class:`~hazelcast.serialization.predicate.Predicate` for more info about predicates. """ if predicate: + def handler(message): + return ImmutableLazyDataList(map_values_with_predicate_codec.decode_response(message), self._to_object) + predicate_data = self._to_data(predicate) - return self._encode_invoke(map_values_with_predicate_codec, predicate=predicate_data) + request = map_values_with_predicate_codec.encode_request(self.name, predicate_data) else: - return self._encode_invoke(map_values_codec) + def handler(message): + return ImmutableLazyDataList(map_values_codec.decode_response(message), self._to_object) + + request = map_values_codec.encode_request(self.name) + + return self._invoke(request, handler) # internals def _contains_key_internal(self, key_data): - return self._encode_invoke_on_key(map_contains_key_codec, key_data, key=key_data, thread_id=thread_id()) + request = map_contains_key_codec.encode_request(self.name, key_data, thread_id()) + return self._invoke_on_key(request, key_data, map_contains_key_codec.decode_response) def _get_internal(self, key_data): - return self._encode_invoke_on_key(map_get_codec, key_data, key=key_data, thread_id=thread_id()) + def handler(message): + return self._to_object(map_get_codec.decode_response(message)) + + request = map_get_codec.encode_request(self.name, key_data, thread_id()) + return self._invoke_on_key(request, key_data, handler) def _get_all_internal(self, partition_to_keys, futures=None): if futures is None: futures = [] + + def handler(message): + return ImmutableLazyDataList(map_get_all_codec.decode_response(message), self._to_object) + for partition_id, key_dict in six.iteritems(partition_to_keys): - future = self._encode_invoke_on_partition(map_get_all_codec, partition_id, keys=list(key_dict.values())) + request = map_get_all_codec.encode_request(self.name, six.itervalues(key_dict)) + future = 
self._invoke_on_partition(request, partition_id, handler) futures.append(future) def merge(f): @@ -859,68 +961,91 @@ def merge(f): return combine_futures(*futures).continue_with(merge) def _remove_internal(self, key_data): - return self._encode_invoke_on_key(map_remove_codec, key_data, key=key_data, thread_id=thread_id()) + def handler(message): + return self._to_object(map_remove_codec.decode_response(message)) + + request = map_remove_codec.encode_request(self.name, key_data, thread_id()) + return self._invoke_on_key(request, key_data, handler) def _remove_if_same_internal_(self, key_data, value_data): - return self._encode_invoke_on_key(map_remove_if_same_codec, key_data, key=key_data, value=value_data, - thread_id=thread_id()) + request = map_remove_if_same_codec.encode_request(self.name, key_data, value_data, thread_id()) + return self._invoke_on_key(request, key_data, response_handler=map_remove_if_same_codec.decode_response) def _delete_internal(self, key_data): - return self._encode_invoke_on_key(map_delete_codec, key_data, key=key_data, thread_id=thread_id()) + request = map_delete_codec.encode_request(self.name, key_data, thread_id()) + return self._invoke_on_key(request, key_data) def _put_internal(self, key_data, value_data, ttl): - return self._encode_invoke_on_key(map_put_codec, key_data, key=key_data, value=value_data, thread_id=thread_id(), - ttl=to_millis(ttl)) + def handler(message): + return self._to_object(map_put_codec.decode_response(message)) + + request = map_put_codec.encode_request(self.name, key_data, value_data, thread_id(), to_millis(ttl)) + return self._invoke_on_key(request, key_data, handler) def _set_internal(self, key_data, value_data, ttl): - return self._encode_invoke_on_key(map_set_codec, key_data, key=key_data, value=value_data, thread_id=thread_id(), - ttl=to_millis(ttl)) + request = map_set_codec.encode_request(self.name, key_data, value_data, thread_id(), to_millis(ttl)) + return self._invoke_on_key(request, key_data) def _set_ttl_internal(self, key_data, ttl): - return self._encode_invoke_on_key(map_set_ttl_codec, key_data, key=key_data, ttl=to_millis(ttl)) - + request = map_set_ttl_codec.encode_request(self.name, key_data, to_millis(ttl)) + return self._invoke_on_key(request, key_data, map_set_ttl_codec.decode_response) + def _try_remove_internal(self, key_data, timeout): - return self._encode_invoke_on_key(map_try_remove_codec, key_data, key=key_data, thread_id=thread_id(), - timeout=to_millis(timeout)) + request = map_try_remove_codec.encode_request(self.name, key_data, thread_id(), to_millis(timeout)) + return self._invoke_on_key(request, key_data, map_try_remove_codec.decode_response) def _try_put_internal(self, key_data, value_data, timeout): - return self._encode_invoke_on_key(map_try_put_codec, key_data, key=key_data, value=value_data, - thread_id=thread_id(), timeout=to_millis(timeout)) + request = map_try_put_codec.encode_request(self.name, key_data, value_data, thread_id(), to_millis(timeout)) + return self._invoke_on_key(request, key_data, map_try_put_codec.decode_response) def _put_transient_internal(self, key_data, value_data, ttl): - return self._encode_invoke_on_key(map_put_transient_codec, key_data, key=key_data, value=value_data, - thread_id=thread_id(), ttl=to_millis(ttl)) + request = map_put_transient_codec.encode_request(self.name, key_data, value_data, thread_id(), to_millis(ttl)) + return self._invoke_on_key(request, key_data) def _put_if_absent_internal(self, key_data, value_data, ttl): - return 
self._encode_invoke_on_key(map_put_if_absent_codec, key_data, key=key_data, value=value_data, - thread_id=thread_id(), ttl=to_millis(ttl)) + def handler(message): + return self._to_object(map_put_if_absent_codec.decode_response(message)) + + request = map_put_if_absent_codec.encode_request(self.name, key_data, value_data, thread_id(), to_millis(ttl)) + return self._invoke_on_key(request, key_data, handler) def _replace_if_same_internal(self, key_data, old_value_data, new_value_data): - return self._encode_invoke_on_key(map_replace_if_same_codec, key_data, key=key_data, test_value=old_value_data, - value=new_value_data, thread_id=thread_id()) + request = map_replace_if_same_codec.encode_request(self.name, key_data, old_value_data, new_value_data, + thread_id()) + return self._invoke_on_key(request, key_data, map_replace_if_same_codec.decode_response) def _replace_internal(self, key_data, value_data): - return self._encode_invoke_on_key(map_replace_codec, key_data, key=key_data, value=value_data, thread_id=thread_id()) + def handler(message): + return self._to_object(map_replace_codec.decode_response(message)) + + request = map_replace_codec.encode_request(self.name, key_data, value_data, thread_id()) + return self._invoke_on_key(request, key_data, handler) def _evict_internal(self, key_data): - return self._encode_invoke_on_key(map_evict_codec, key_data, key=key_data, thread_id=thread_id()) + request = map_evict_codec.encode_request(self.name, key_data, thread_id()) + return self._invoke_on_key(request, key_data, map_evict_codec.decode_response) def _load_all_internal(self, key_data_list, replace_existing_values): - return self._encode_invoke(map_load_given_keys_codec, keys=key_data_list, replace_existing_values=replace_existing_values) + request = map_load_given_keys_codec.encode_request(self.name, key_data_list, replace_existing_values) + return self._invoke(request) def _execute_on_key_internal(self, key_data, entry_processor): - return self._encode_invoke_on_key(map_execute_on_key_codec, key_data, key=key_data, - entry_processor=self._to_data(entry_processor), thread_id=thread_id()) + def handler(message): + return self._to_object(map_execute_on_key_codec.decode_response(message)) + + entry_processor_data = self._to_data(entry_processor) + request = map_execute_on_key_codec.encode_request(self.name, entry_processor_data, key_data, thread_id()) + return self._invoke_on_key(request, key_data, handler) class MapFeatNearCache(Map): """ Map proxy implementation featuring Near Cache """ - def __init__(self, client, service_name, name): - super(MapFeatNearCache, self).__init__(client, service_name, name) + def __init__(self, service_name, name, context): + super(MapFeatNearCache, self).__init__(service_name, name, context) self._invalidation_listener_id = None - self._near_cache = client.near_cache_manager.get_or_create_near_cache(name) + self._near_cache = context.near_cache_manager.get_or_create_near_cache(name) if self._near_cache.invalidate_on_change: self._add_near_cache_invalidation_listener() @@ -944,21 +1069,20 @@ def _on_destroy(self): def _add_near_cache_invalidation_listener(self): try: - request = map_add_near_cache_entry_listener_codec.encode_request(self.name, EntryEventType.invalidation, - self._is_smart) + codec = map_add_near_cache_invalidation_listener_codec + request = codec.encode_request(self.name, EntryEventType.invalidation, self._is_smart) self._invalidation_listener_id = self._register_listener( - request, lambda r: 
map_add_near_cache_entry_listener_codec.decode_response(r)['response'], + request, lambda r: codec.decode_response(r), lambda reg_id: map_remove_entry_listener_codec.encode_request(self.name, reg_id), - lambda m: map_add_near_cache_entry_listener_codec.handle(m, self._handle_invalidation, - self._handle_batch_invalidation)) + lambda m: codec.handle(m, self._handle_invalidation, self._handle_batch_invalidation)) except: - self.logger.severe("-----------------\n Near Cache is not initialized!!! \n-----------------") + pass def _remove_near_cache_invalidation_listener(self): if self._invalidation_listener_id: self.remove_entry_listener(self._invalidation_listener_id) - def _handle_invalidation(self, key): + def _handle_invalidation(self, key, source_uuid, partition_uuid, sequence): # key is always ``Data`` # null key means near cache has to remove all entries in it. # see MapAddNearCacheEntryListenerMessageTask. @@ -967,7 +1091,7 @@ def _handle_invalidation(self, key): else: self._invalidate_cache(key) - def _handle_batch_invalidation(self, keys): + def _handle_batch_invalidation(self, keys, source_uuids, partition_uuids, sequences): # key_list is always list of ``Data`` for key_data in keys: self._invalidate_cache(key_data) @@ -1074,9 +1198,9 @@ def _delete_internal(self, key_data): return super(MapFeatNearCache, self)._delete_internal(key_data) -def create_map_proxy(client, service_name, name, **kwargs): - near_cache_config = client.config.near_cache_configs.get(name, None) +def create_map_proxy(service_name, name, context): + near_cache_config = context.config.near_caches.get(name, None) if near_cache_config is None: - return Map(client=client, service_name=service_name, name=name) + return Map(service_name, name, context) else: - return MapFeatNearCache(client=client, service_name=service_name, name=name) + return MapFeatNearCache(service_name, name, context) diff --git a/hazelcast/proxy/multi_map.py b/hazelcast/proxy/multi_map.py index bcbd6fdbed..663534b1f7 100644 --- a/hazelcast/proxy/multi_map.py +++ b/hazelcast/proxy/multi_map.py @@ -5,16 +5,16 @@ multi_map_remove_entry_codec, multi_map_remove_entry_listener_codec, multi_map_size_codec, multi_map_try_lock_codec, \ multi_map_unlock_codec, multi_map_value_count_codec, multi_map_values_codec from hazelcast.proxy.base import Proxy, EntryEvent, EntryEventType -from hazelcast.util import check_not_none, thread_id, to_millis +from hazelcast.util import check_not_none, thread_id, to_millis, ImmutableLazyDataList class MultiMap(Proxy): """ A specialized map whose keys can be associated with multiple values. """ - def __init__(self, client, service_name, name): - super(MultiMap, self).__init__(client, service_name, name) - self.reference_id_generator = self._client.lock_reference_id_generator + def __init__(self, service_name, name, context): + super(MultiMap, self).__init__(service_name, name, context) + self._reference_id_generator = context.lock_reference_id_generator def add_entry_listener(self, include_value=False, key=None, added_func=None, removed_func=None, clear_all_func=None): """ @@ -29,15 +29,16 @@ def add_entry_listener(self, include_value=False, key=None, added_func=None, rem :return: (str), a registration id which is used as a key to remove the listener. 
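+
+        A small sketch of registering a callback (the multimap name, the entry and
+        ``client`` are illustrative assumptions, not part of the API):
+
+        >>> def on_added(event):
+        ...     print(event.key, event.value)       # EntryEvent carries key, value and event_type
+        >>> friends = client.get_multi_map("friends")
+        >>> registration_id = friends.add_entry_listener(include_value=True, added_func=on_added)
+        >>> friends.put("alice", "bob").result()    # on_added fires asynchronously for this entry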
""" if key: + codec = multi_map_add_entry_listener_to_key_codec key_data = self._to_data(key) - request = multi_map_add_entry_listener_to_key_codec.encode_request( - name=self.name, key=key_data, include_value=include_value, local_only=False) + request = codec.encode_request(self.name, key_data, include_value, False) else: - request = multi_map_add_entry_listener_codec.encode_request( - name=self.name, include_value=include_value, local_only=False) + codec = multi_map_add_entry_listener_codec + request = codec.encode_request(self.name, include_value, False) - def handle_event_entry(**_kwargs): - event = EntryEvent(self._to_object, **_kwargs) + def handle_event_entry(key, value, old_value, merging_value, event_type, uuid, number_of_affected_entries): + event = EntryEvent(self._to_object, key, value, old_value, merging_value, + event_type, uuid, number_of_affected_entries) if event.event_type == EntryEventType.added and added_func: added_func(event) elif event.event_type == EntryEventType.removed and removed_func: @@ -46,9 +47,9 @@ def handle_event_entry(**_kwargs): clear_all_func(event) return self._register_listener( - request, lambda r: multi_map_add_entry_listener_codec.decode_response(r)['response'], + request, lambda r: codec.decode_response(r), lambda reg_id: multi_map_remove_entry_listener_codec.encode_request(self.name, reg_id), - lambda m: multi_map_add_entry_listener_codec.handle(m, handle_event_entry)) + lambda m: codec.handle(m, handle_event_entry)) def contains_key(self, key): """ @@ -62,8 +63,9 @@ def contains_key(self, key): """ check_not_none(key, "key can't be None") key_data = self._to_data(key) - return self._encode_invoke_on_key(multi_map_contains_key_codec, key_data, key=key_data, - thread_id=thread_id()) + + request = multi_map_contains_key_codec.encode_request(self.name, key_data, thread_id()) + return self._invoke_on_key(request, key_data, multi_map_contains_key_codec.decode_response) def contains_value(self, value): """ @@ -74,7 +76,8 @@ def contains_value(self, value): """ check_not_none(value, "value can't be None") value_data = self._to_data(value) - return self._encode_invoke(multi_map_contains_value_codec, value=value_data) + request = multi_map_contains_value_codec.encode_request(self.name, value_data) + return self._invoke(request, multi_map_contains_value_codec.decode_response) def contains_entry(self, key, value): """ @@ -88,14 +91,16 @@ def contains_entry(self, key, value): check_not_none(value, "value can't be None") key_data = self._to_data(key) value_data = self._to_data(value) - return self._encode_invoke_on_key(multi_map_contains_entry_codec, key_data, key=key_data, - value=value_data, thread_id=thread_id()) + + request = multi_map_contains_entry_codec.encode_request(self.name, key_data, value_data, thread_id()) + return self._invoke_on_key(request, key_data, multi_map_contains_entry_codec.decode_response) def clear(self): """ Clears the multimap. Removes all key-value tuples. """ - return self._encode_invoke(multi_map_clear_codec) + request = multi_map_clear_codec.encode_request(self.name) + return self._invoke(request) def entry_set(self): """ @@ -106,7 +111,11 @@ def entry_set(self): :return: (Sequence), the list of key-value tuples in the multimap. 
""" - return self._encode_invoke(multi_map_entry_set_codec) + def handler(message): + return ImmutableLazyDataList(multi_map_entry_set_codec.decode_response(message), self._to_object) + + request = multi_map_entry_set_codec.encode_request(self.name) + return self._invoke(request, handler) def get(self, key): """ @@ -124,9 +133,13 @@ def get(self, key): :return: (Sequence), the list of the values associated with the specified key. """ check_not_none(key, "key can't be None") + + def handler(message): + return ImmutableLazyDataList(multi_map_get_codec.decode_response(message), self._to_object) + key_data = self._to_data(key) - return self._encode_invoke_on_key(multi_map_get_codec, key_data, key=key_data, - thread_id=thread_id()) + request = multi_map_get_codec.encode_request(self.name, key_data, thread_id()) + return self._invoke_on_key(request, key_data, handler) def is_locked(self, key): """ @@ -140,7 +153,9 @@ def is_locked(self, key): """ check_not_none(key, "key can't be None") key_data = self._to_data(key) - return self._encode_invoke_on_key(multi_map_is_locked_codec, key_data, key=key_data) + + request = multi_map_is_locked_codec.encode_request(self.name, key_data) + return self._invoke_on_key(request, key_data, multi_map_is_locked_codec.decode_response) def force_unlock(self, key): """ @@ -154,8 +169,9 @@ def force_unlock(self, key): """ check_not_none(key, "key can't be None") key_data = self._to_data(key) - return self._encode_invoke_on_key(multi_map_force_unlock_codec, key_data, key=key_data, - reference_id=self.reference_id_generator.get_and_increment()) + request = multi_map_force_unlock_codec.encode_request(self.name, key_data, + self._reference_id_generator.get_and_increment()) + return self._invoke_on_key(request, key_data) def key_set(self): """ @@ -166,7 +182,11 @@ def key_set(self): :return: (Sequence), a list of the clone of the keys. """ - return self._encode_invoke(multi_map_key_set_codec) + def handler(message): + return ImmutableLazyDataList(multi_map_key_set_codec.decode_response(message), self._to_object) + + request = multi_map_key_set_codec.encode_request(self.name) + return self._invoke(request, handler) def lock(self, key, lease_time=-1): """ @@ -188,9 +208,9 @@ def lock(self, key, lease_time=-1): """ check_not_none(key, "key can't be None") key_data = self._to_data(key) - return self._encode_invoke_on_key(multi_map_lock_codec, key_data, key=key_data, - thread_id=thread_id(), ttl=to_millis(lease_time), - reference_id=self.reference_id_generator.get_and_increment()) + request = multi_map_lock_codec.encode_request(self.name, key_data, thread_id(), to_millis(lease_time), + self._reference_id_generator.get_and_increment()) + return self._invoke_on_key(request, key_data) def remove(self, key, value): """ @@ -207,8 +227,8 @@ def remove(self, key, value): check_not_none(key, "value can't be None") key_data = self._to_data(key) value_data = self._to_data(value) - return self._encode_invoke_on_key(multi_map_remove_entry_codec, key_data, key=key_data, - value=value_data, thread_id=thread_id()) + request = multi_map_remove_entry_codec.encode_request(self.name, key_data, value_data, thread_id()) + return self._invoke_on_key(request, key_data, multi_map_remove_entry_codec.decode_response) def remove_all(self, key): """ @@ -225,9 +245,13 @@ def remove_all(self, key): :return: (Sequence), the collection of removed values associated with the given key. 
""" check_not_none(key, "key can't be None") + + def handler(message): + return ImmutableLazyDataList(multi_map_remove_codec.decode_response(message), self._to_object) + key_data = self._to_data(key) - return self._encode_invoke_on_key(multi_map_remove_codec, key_data, key=key_data, - thread_id=thread_id()) + request = multi_map_remove_codec.encode_request(self.name, key_data, thread_id()) + return self._invoke_on_key(request, key_data, handler) def put(self, key, value): """ @@ -245,8 +269,8 @@ def put(self, key, value): check_not_none(value, "value can't be None") key_data = self._to_data(key) value_data = self._to_data(value) - return self._encode_invoke_on_key(multi_map_put_codec, key_data, key=key_data, value=value_data, - thread_id=thread_id()) + request = multi_map_put_codec.encode_request(self.name, key_data, value_data, thread_id()) + return self._invoke_on_key(request, key_data, multi_map_put_codec.decode_response) def remove_entry_listener(self, registration_id): """ @@ -263,7 +287,8 @@ def size(self): :return: (int), number of entries in this multimap. """ - return self._encode_invoke(multi_map_size_codec) + request = multi_map_size_codec.encode_request(self.name) + return self._invoke(request, multi_map_size_codec.decode_response) def value_count(self, key): """ @@ -277,8 +302,8 @@ def value_count(self, key): """ check_not_none(key, "key can't be None") key_data = self._to_data(key) - return self._encode_invoke_on_key(multi_map_value_count_codec, key_data, key=key_data, - thread_id=thread_id()) + request = multi_map_value_count_codec.encode_request(self.name, key_data, thread_id()) + return self._invoke_on_key(request, key_data, multi_map_value_count_codec.decode_response) def values(self): """ @@ -290,7 +315,11 @@ def values(self): :return: (Sequence), the list of values in the multimap. 
""" - return self._encode_invoke(multi_map_values_codec) + def handler(message): + return ImmutableLazyDataList(multi_map_values_codec.decode_response(message), self._to_object) + + request = multi_map_values_codec.encode_request(self.name) + return self._invoke(request, handler) def try_lock(self, key, lease_time=-1, timeout=-1): """ @@ -311,10 +340,10 @@ def try_lock(self, key, lease_time=-1, timeout=-1): """ check_not_none(key, "key can't be None") key_data = self._to_data(key) - return self._encode_invoke_on_key(multi_map_try_lock_codec, key_data, key=key_data, - thread_id=thread_id(), lease=to_millis(lease_time), - timeout=to_millis(timeout), - reference_id=self.reference_id_generator.get_and_increment()) + request = multi_map_try_lock_codec.encode_request(self.name, key_data, thread_id(), + to_millis(lease_time), to_millis(timeout), + self._reference_id_generator.get_and_increment()) + return self._invoke_on_key(request, key_data, multi_map_try_lock_codec.decode_response) def unlock(self, key): """ @@ -327,6 +356,6 @@ def unlock(self, key): """ check_not_none(key, "key can't be None") key_data = self._to_data(key) - return self._encode_invoke_on_key(multi_map_unlock_codec, key_data, key=key_data, - thread_id=thread_id(), - reference_id=self.reference_id_generator.get_and_increment()) + request = multi_map_unlock_codec.encode_request(self.name, key_data, thread_id(), + self._reference_id_generator.get_and_increment()) + return self._invoke_on_key(request, key_data) \ No newline at end of file diff --git a/hazelcast/proxy/pn_counter.py b/hazelcast/proxy/pn_counter.py index 8ad1af2ff6..bd3d07a252 100644 --- a/hazelcast/proxy/pn_counter.py +++ b/hazelcast/proxy/pn_counter.py @@ -1,13 +1,13 @@ import functools +import logging import random -from hazelcast.core import DataMemberSelector from hazelcast.future import Future from hazelcast.proxy.base import Proxy from hazelcast.cluster import VectorClock from hazelcast.protocol.codec import pn_counter_add_codec, pn_counter_get_codec, \ pn_counter_get_configured_replica_count_codec -from hazelcast.exception import NoDataMemberInClusterError +from hazelcast.errors import NoDataMemberInClusterError from hazelcast.six.moves import range @@ -55,8 +55,8 @@ class PNCounter(Proxy): _EMPTY_ADDRESS_LIST = [] - def __init__(self, client, service_name, name): - super(PNCounter, self).__init__(client, service_name, name) + def __init__(self, service_name, name, context): + super(PNCounter, self).__init__(service_name, name, context) self._observed_clock = VectorClock() self._max_replica_count = 0 self._current_target_replica_address = None @@ -71,7 +71,6 @@ def get(self): :return: (int), the current value of the counter. 
""" - return self._invoke_internal(pn_counter_get_codec) def get_and_add(self, delta): @@ -198,18 +197,17 @@ def _invoke_internal(self, codec, **kwargs): def _set_result_or_error(self, delegated_future, excluded_addresses, last_error, codec, **kwargs): target = self._get_crdt_operation_target(excluded_addresses) - if target is None: + if not target: if last_error: delegated_future.set_exception(last_error) return delegated_future.set_exception(NoDataMemberInClusterError("Cannot invoke operations on a CRDT because " "the cluster does not contain any data members")) return + request = codec.encode_request(name=self.name, replica_timestamps=self._observed_clock.entry_set(), + target_replica_uuid=target.uuid, **kwargs) - future = self._encode_invoke_on_target(codec, target, self._response_handler, - replica_timestamps=self._observed_clock.entry_set(), - target_replica=target, - **kwargs) + future = self._invoke_on_target(request, target.uuid, codec.decode_response) checker_func = functools.partial(self._check_invocation_result, delegated_future=delegated_future, excluded_addresses=excluded_addresses, target=target, codec=codec, **kwargs) @@ -221,11 +219,9 @@ def _check_invocation_result(self, future, delegated_future, excluded_addresses, self._update_observed_replica_timestamp(result["replica_timestamps"]) delegated_future.set_result(result["value"]) except Exception as ex: - self.logger.debug("Exception occurred while invoking operation on target {}, " - "choosing different target. Cause: {}".format(target, ex), - extra={"client_name": self._client.name, - "group_name": self._client.config.group_config.name} -) + self.logger.exception("Exception occurred while invoking operation on target %s, " + "choosing different target" % target, + extra=self._context.logger_extras) if excluded_addresses == PNCounter._EMPTY_ADDRESS_LIST: excluded_addresses = [] @@ -250,14 +246,14 @@ def _choose_target_replica(self, excluded_addresses): return replica_addresses[random_replica_index] def _get_replica_addresses(self, excluded_addresses): - data_members = self._client.cluster.get_members(DataMemberSelector()) + data_members = self._context.cluster_service.get_members(lambda member: not member.lite_member) replica_count = self._get_max_configured_replica_count() current_count = min(replica_count, len(data_members)) replica_addresses = [] for i in range(current_count): - member_address = data_members[i].address + member_address = data_members[i] if member_address not in excluded_addresses: replica_addresses.append(member_address) @@ -267,7 +263,8 @@ def _get_max_configured_replica_count(self): if self._max_replica_count > 0: return self._max_replica_count - count = self._encode_invoke(pn_counter_get_configured_replica_count_codec).result() + request = pn_counter_get_configured_replica_count_codec.encode_request(self.name) + count = self._invoke(request, pn_counter_get_configured_replica_count_codec.decode_response).result() self._max_replica_count = count return self._max_replica_count @@ -282,8 +279,3 @@ def _to_vector_clock(self, timestamps): vector_clock.set_replica_timestamp(replica_id, timestamp) return vector_clock - - def _response_handler(self, future, codec, to_object): - response = future.result() - if response: - return codec.decode_response(response, to_object) diff --git a/hazelcast/proxy/queue.py b/hazelcast/proxy/queue.py index d7f918ff0e..c49587fc91 100644 --- a/hazelcast/proxy/queue.py +++ b/hazelcast/proxy/queue.py @@ -19,7 +19,7 @@ queue_size_codec, \ queue_take_codec from 
hazelcast.proxy.base import PartitionSpecificProxy, ItemEvent, ItemEventType -from hazelcast.util import check_not_none, to_millis +from hazelcast.util import check_not_none, to_millis, ImmutableLazyDataList class Empty(Exception): @@ -62,7 +62,9 @@ def add_all(self, items): for item in items: check_not_none(item, "Value can't be None") data_items.append(self._to_data(item)) - return self._encode_invoke(queue_add_all_codec, data_list=data_items) + + request = queue_add_all_codec.encode_request(self.name, data_items) + return self._invoke(request, queue_add_all_codec.decode_response) def add_listener(self, include_value=False, item_added_func=None, item_removed_func=None): """ @@ -73,11 +75,12 @@ def add_listener(self, include_value=False, item_added_func=None, item_removed_f :param item_removed_func: Function to be called when an item is deleted from this set (optional). :return: (str), a registration id which is used as a key to remove the listener. """ - request = queue_add_listener_codec.encode_request(self.name, include_value, self._is_smart) + codec = queue_add_listener_codec + request = codec.encode_request(self.name, include_value, self._is_smart) def handle_event_item(item, uuid, event_type): item = item if include_value else None - member = self._client.cluster.get_member_by_uuid(uuid) + member = self._context.cluster_service.get_member(uuid) item_event = ItemEvent(self.name, item, event_type, member, self._to_object) if event_type == ItemEventType.added: @@ -87,15 +90,16 @@ def handle_event_item(item, uuid, event_type): if item_removed_func: item_removed_func(item_event) - return self._register_listener(request, lambda r: queue_add_listener_codec.decode_response(r)['response'], + return self._register_listener(request, lambda r: codec.decode_response(r), lambda reg_id: queue_remove_listener_codec.encode_request(self.name, reg_id), - lambda m: queue_add_listener_codec.handle(m, handle_event_item)) + lambda m: codec.handle(m, handle_event_item)) def clear(self): """ Clears this queue. Queue will be empty after this call. """ - return self._encode_invoke(queue_clear_codec) + request = queue_clear_codec.encode_request(self.name) + return self._invoke(request) def contains(self, item): """ @@ -106,7 +110,8 @@ def contains(self, item): """ check_not_none(item, "Item can't be None") item_data = self._to_data(item) - return self._encode_invoke(queue_contains_codec, value=item_data) + request = queue_contains_codec.encode_request(self.name, item_data) + return self._invoke(request, queue_contains_codec.decode_response) def contains_all(self, items): """ @@ -120,29 +125,30 @@ def contains_all(self, items): for item in items: check_not_none(item, "item can't be None") data_items.append(self._to_data(item)) - return self._encode_invoke(queue_contains_all_codec, data_list=data_items) - def drain_to(self, list, max_size=-1): + request = queue_contains_all_codec.encode_request(self.name, data_items) + return self._invoke(request, queue_contains_all_codec.decode_response) + + def drain_to(self, target_list, max_size=-1): """ - Transfers all available items to the given `list`_ and removes these items from this queue. If a max_size is + Transfers all available items to the given `target_list` and removes these items from this queue. If a max_size is specified, it transfers at most the given number of items. In case of a failure, an item can exist in both collections or none of them. This operation may be more efficient than polling elements repeatedly and putting into collection. 
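+
+        A short sketch (the queue name, its contents and ``client`` are illustrative
+        assumptions):
+
+        >>> tasks = client.get_queue("tasks")
+        >>> tasks.offer("task-1").result()
+        >>> buffer = []
+        >>> tasks.drain_to(buffer).result()   # returns the number of transferred items
+        >>> buffer                            # now holds "task-1"
+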
- :param list: (`list`_), the list where the items in this queue will be transferred. + :param target_list: (`list`), the list where the items in this queue will be transferred. :param max_size: (int), the maximum number items to transfer (optional). :return: (int), number of transferred items. - .. _list: https://docs.python.org/2/library/functions.html#list """ - def drain_result(f): - resp = f.result() - list.extend(resp) - return len(resp) - - return self._encode_invoke(queue_drain_to_max_size_codec, max_size=max_size).continue_with( - drain_result) + def handler(message): + response = queue_drain_to_max_size_codec.decode_response(message) + target_list.extend(map(self._to_object, response)) + return len(response) + + request = queue_drain_to_max_size_codec.encode_request(self.name, max_size) + return self._invoke(request, handler) def iterator(self): """ @@ -150,7 +156,11 @@ def iterator(self): :return: (Sequence), collection of items in this queue. """ - return self._encode_invoke(queue_iterator_codec) + def handler(message): + return ImmutableLazyDataList(queue_iterator_codec.decode_response(message), self._to_object) + + request = queue_iterator_codec.encode_request(self.name) + return self._invoke(request, handler) def is_empty(self): """ @@ -158,7 +168,8 @@ def is_empty(self): :return: (bool), ``true`` if this queue is empty, ``false`` otherwise. """ - return self._encode_invoke(queue_is_empty_codec) + request = queue_is_empty_codec.encode_request(self.name) + return self._invoke(request, queue_is_empty_codec.decode_response) def offer(self, item, timeout=0): """ @@ -173,7 +184,8 @@ def offer(self, item, timeout=0): """ check_not_none(item, "Value can't be None") element_data = self._to_data(item) - return self._encode_invoke(queue_offer_codec, value=element_data, timeout_millis=to_millis(timeout)) + request = queue_offer_codec.encode_request(self.name, element_data, to_millis(timeout)) + return self._invoke(request, queue_offer_codec.decode_response) def peek(self): """ @@ -181,7 +193,11 @@ def peek(self): :return: (object), the head of this queue, or ``None`` if this queue is empty. """ - return self._encode_invoke(queue_peek_codec) + def handler(message): + return self._to_object(queue_peek_codec.decode_response(message)) + + request = queue_peek_codec.encode_request(self.name) + return self._invoke(request, handler) def poll(self, timeout=0): """ @@ -193,7 +209,11 @@ def poll(self, timeout=0): :return: (object), the head of this queue, or ``None`` if this queue is empty or specified timeout elapses before an item is added to the queue. """ - return self._encode_invoke(queue_poll_codec, timeout_millis=to_millis(timeout)) + def handler(message): + return self._to_object(queue_poll_codec.decode_response(message)) + + request = queue_poll_codec.encode_request(self.name, to_millis(timeout)) + return self._invoke(request, handler) def put(self, item): """ @@ -204,7 +224,8 @@ def put(self, item): """ check_not_none(item, "Value can't be None") element_data = self._to_data(item) - return self._encode_invoke(queue_put_codec, value=element_data) + request = queue_put_codec.encode_request(self.name, element_data) + return self._invoke(request) def remaining_capacity(self): """ @@ -212,7 +233,8 @@ def remaining_capacity(self): :return: (int), remaining capacity of this queue. 
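+
+        Sketch for a bounded queue (the queue name, its configured capacity and
+        ``client`` are assumptions; an unbounded queue reports a very large value):
+
+        >>> tasks = client.get_queue("tasks")
+        >>> tasks.offer("task-1").result()
+        >>> tasks.remaining_capacity().result()   # configured capacity minus the current size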
""" - return self._encode_invoke(queue_remaining_capacity_codec) + request = queue_remaining_capacity_codec.encode_request(self.name) + return self._invoke(request, queue_remaining_capacity_codec.decode_response) def remove(self, item): """ @@ -223,7 +245,8 @@ def remove(self, item): """ check_not_none(item, "Value can't be None") item_data = self._to_data(item) - return self._encode_invoke(queue_remove_codec, value=item_data) + request = queue_remove_codec.encode_request(self.name, item_data) + return self._invoke(request, queue_remove_codec.decode_response) def remove_all(self, items): """ @@ -237,7 +260,9 @@ def remove_all(self, items): for item in items: check_not_none(item, "Value can't be None") data_items.append(self._to_data(item)) - return self._encode_invoke(queue_compare_and_remove_all_codec, data_list=data_items) + + request = queue_compare_and_remove_all_codec.encode_request(self.name, data_items) + return self._invoke(request, queue_compare_and_remove_all_codec.decode_response) def remove_listener(self, registration_id): """ @@ -261,7 +286,8 @@ def retain_all(self, items): for item in items: check_not_none(item, "Value can't be None") data_items.append(self._to_data(item)) - return self._encode_invoke(queue_compare_and_retain_all_codec, data_list=data_items) + request = queue_compare_and_retain_all_codec.encode_request(self.name, data_items) + return self._invoke(request, queue_compare_and_retain_all_codec.decode_response) def size(self): """ @@ -270,7 +296,8 @@ def size(self): :return: (int), size of the queue. """ - return self._encode_invoke(queue_size_codec) + request = queue_size_codec.encode_request(self.name) + return self._invoke(request, queue_size_codec.decode_response) def take(self): """ @@ -278,5 +305,8 @@ def take(self): :return: (object), the head of this queue. """ + def handler(message): + return self._to_object(queue_take_codec.decode_response(message)) - return self._encode_invoke(queue_take_codec) + request = queue_take_codec.encode_request(self.name) + return self._invoke(request, handler) diff --git a/hazelcast/proxy/replicated_map.py b/hazelcast/proxy/replicated_map.py index 3c2f9db757..201b21e67e 100644 --- a/hazelcast/proxy/replicated_map.py +++ b/hazelcast/proxy/replicated_map.py @@ -6,8 +6,8 @@ replicated_map_is_empty_codec, replicated_map_key_set_codec, replicated_map_put_all_codec, replicated_map_put_codec, \ replicated_map_remove_codec, replicated_map_remove_entry_listener_codec, replicated_map_size_codec, \ replicated_map_values_codec -from hazelcast.proxy.base import Proxy, default_response_handler, EntryEvent, EntryEventType -from hazelcast.util import to_millis, check_not_none +from hazelcast.proxy.base import Proxy, EntryEvent, EntryEventType +from hazelcast.util import to_millis, check_not_none, ImmutableLazyDataList from hazelcast import six @@ -22,7 +22,10 @@ class ReplicatedMap(Proxy): When a new node joins the cluster, the new node initially will request existing values from older nodes and replicate them locally. """ - _partition_id = None + def __init__(self, service_name, name, context): + super(ReplicatedMap, self).__init__(service_name, name, context) + partition_service = context.partition_service + self._partition_id = randint(0, partition_service.partition_count - 1) def add_entry_listener(self, key=None, predicate=None, added_func=None, removed_func=None, updated_func=None, evicted_func=None, clear_all_func=None): @@ -42,22 +45,26 @@ def add_entry_listener(self, key=None, predicate=None, added_func=None, removed_ .. 
seealso:: :class:`~hazelcast.serialization.predicate.Predicate` for more info about predicates. """ if key and predicate: + codec = replicated_map_add_entry_listener_to_key_with_predicate_codec key_data = self._to_data(key) predicate_data = self._to_data(predicate) - request = replicated_map_add_entry_listener_to_key_with_predicate_codec.encode_request( - self.name, key_data, predicate_data, self._is_smart) + request = codec.encode_request(self.name, key_data, predicate_data, self._is_smart) elif key and not predicate: + codec = replicated_map_add_entry_listener_to_key_codec key_data = self._to_data(key) - request = replicated_map_add_entry_listener_to_key_codec.encode_request(self.name, key_data, self._is_smart) + request = codec.encode_request(self.name, key_data, self._is_smart) elif not key and predicate: + codec = replicated_map_add_entry_listener_with_predicate_codec predicate = self._to_data(predicate) - request = replicated_map_add_entry_listener_with_predicate_codec.encode_request( + request = codec.encode_request( self.name, predicate, self._is_smart) else: - request = replicated_map_add_entry_listener_codec.encode_request(self.name, self._is_smart) + codec = replicated_map_add_entry_listener_codec + request = codec.encode_request(self.name, self._is_smart) - def handle_event_entry(**_kwargs): - event = EntryEvent(self._to_object, **_kwargs) + def handle_event_entry(key, value, old_value, merging_value, event_type, uuid, number_of_affected_entries): + event = EntryEvent(self._to_object, key, value, old_value, merging_value, + event_type, uuid, number_of_affected_entries) if event.event_type == EntryEventType.added and added_func: added_func(event) elif event.event_type == EntryEventType.removed and removed_func: @@ -70,15 +77,16 @@ def handle_event_entry(**_kwargs): clear_all_func(event) return self._register_listener( - request, lambda r: replicated_map_add_entry_listener_codec.decode_response(r)['response'], + request, lambda r: codec.decode_response(r), lambda reg_id: replicated_map_remove_entry_listener_codec.encode_request(self.name, reg_id), - lambda m: replicated_map_add_entry_listener_codec.handle(m, handle_event_entry)) + lambda m: codec.handle(m, handle_event_entry)) def clear(self): """ Wipes data out of the replicated map. """ - return self._encode_invoke(replicated_map_clear_codec) + request = replicated_map_clear_codec.encode_request(self.name) + return self._invoke(request) def contains_key(self, key): """ @@ -92,7 +100,8 @@ def contains_key(self, key): """ check_not_none(key, "key can't be None") key_data = self._to_data(key) - return self._encode_invoke_on_key(replicated_map_contains_key_codec, key_data, key=key_data) + request = replicated_map_contains_key_codec.encode_request(self.name, key_data) + return self._invoke_on_key(request, key_data, replicated_map_contains_key_codec.decode_response) def contains_value(self, value): """ @@ -102,7 +111,10 @@ def contains_value(self, value): :return: (bool), ``true`` if this map contains an entry for the specified value. 
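+
+        For illustration (the map name, the entry and ``client`` are assumptions):
+
+        >>> settings = client.get_replicated_map("settings")
+        >>> settings.put("theme", "dark").result()
+        >>> settings.contains_value("dark").result()   # True once the entry reaches the queried member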
""" check_not_none(value, "value can't be None") - return self._encode_invoke_on_target_partition(replicated_map_contains_value_codec, value=self._to_data(value)) + value_data = self._to_data(value) + request = replicated_map_contains_value_codec.encode_request(self.name, value_data) + return self._invoke_on_partition(request, self._partition_id, +replicated_map_contains_value_codec.decode_response) def entry_set(self): """ @@ -113,7 +125,11 @@ def entry_set(self): :return: (Sequence), the list of key-value tuples in the map. """ - return self._encode_invoke_on_target_partition(replicated_map_entry_set_codec) + def handler(message): + return ImmutableLazyDataList(replicated_map_entry_set_codec.decode_response(message), self._to_object) + + request = replicated_map_entry_set_codec.encode_request(self.name) + return self._invoke_on_partition(request, self._partition_id, handler) def get(self, key): """ @@ -124,11 +140,16 @@ def get(self, key): and equals defined in the key's class.** :param key: (object), the specified key. - :return: (Sequence), the list of the values associated with the specified key. + :return: (object), the value associated with the specified key. """ check_not_none(key, "key can't be None") + + def handler(message): + return self._to_object(replicated_map_get_codec.decode_response(message)) + key_data = self._to_data(key) - return self._encode_invoke_on_key(replicated_map_get_codec, key_data, key=key_data) + request = replicated_map_get_codec.encode_request(self.name, key_data) + return self._invoke_on_key(request, key_data, handler) def is_empty(self): """ @@ -136,7 +157,8 @@ def is_empty(self): :return: (bool), ``true`` if this map contains no key-value mappings. """ - return self._encode_invoke_on_target_partition(replicated_map_is_empty_codec) + request = replicated_map_is_empty_codec.encode_request(self.name) + return self._invoke_on_partition(request, self._partition_id, replicated_map_is_empty_codec.decode_response) def key_set(self): """ @@ -147,7 +169,11 @@ def key_set(self): :return: (Sequence), a list of the clone of the keys. """ - return self._encode_invoke_on_target_partition(replicated_map_key_set_codec) + def handler(message): + return ImmutableLazyDataList(replicated_map_key_set_codec.decode_response(message), self._to_object) + + request = replicated_map_key_set_codec.encode_request(self.name) + return self._invoke_on_partition(request, self._partition_id, handler) def put(self, key, value, ttl=0): """ @@ -163,24 +189,30 @@ def put(self, key, value, ttl=0): """ check_not_none(key, "key can't be None") check_not_none(key, "value can't be None") + + def handler(message): + return self._to_object(replicated_map_put_codec.decode_response(message)) + key_data = self._to_data(key) value_data = self._to_data(value) - return self._encode_invoke_on_key(replicated_map_put_codec, key_data, key=key_data, value=value_data, - ttl=to_millis(ttl)) + request = replicated_map_put_codec.encode_request(self.name, key_data, value_data, to_millis(ttl)) + return self._invoke_on_key(request, key_data, handler) - def put_all(self, map): + def put_all(self, source): """ Copies all of the mappings from the specified map to this map. No atomicity guarantees are given. In the case of a failure, some of the key-value tuples may get written, while others are not. - :param map: (dict), map which includes mappings to be stored in this map. + :param source: (dict), map which includes mappings to be stored in this map. 
""" - entries = {} - for key, value in six.iteritems(map): + entries = [] + for key, value in six.iteritems(source): check_not_none(key, "key can't be None") check_not_none(value, "value can't be None") - entries[self._to_data(key)] = self._to_data(value) - self._encode_invoke(replicated_map_put_all_codec, entries=entries) + entries.append((self._to_data(key), self._to_data(value))) + + request = replicated_map_put_all_codec.encode_request(self.name, entries) + return self._invoke(request) def remove(self, key): """ @@ -194,8 +226,13 @@ def remove(self, key): :return: (object), the previous value associated with key, or None if there was no mapping for key. """ check_not_none(key, "key can't be None") + + def handler(message): + return self._to_object(replicated_map_remove_codec.decode_response(message)) + key_data = self._to_data(key) - return self._encode_invoke_on_key(replicated_map_remove_codec, key_data, key=key_data) + request = replicated_map_remove_codec.encode_request(self.name, key_data) + return self._invoke_on_key(request, key_data, handler) def remove_entry_listener(self, registration_id): """ @@ -212,7 +249,8 @@ def size(self): :return: (int), number of entries in this multimap. """ - return self._encode_invoke_on_target_partition(replicated_map_size_codec) + request = replicated_map_size_codec.encode_request(self.name) + return self._invoke_on_partition(request, self._partition_id, replicated_map_size_codec.decode_response) def values(self): """ @@ -224,12 +262,8 @@ def values(self): :return: (Sequence), the list of values in the map. """ - return self._encode_invoke_on_target_partition(replicated_map_values_codec) - - def _get_partition_id(self): - if not self._partition_id: - self._partition_id = randint(0, self._client.partition_service.get_partition_count() - 1) - return self._partition_id + def handler(message): + return ImmutableLazyDataList(replicated_map_values_codec.decode_response(message), self._to_object) - def _encode_invoke_on_target_partition(self, codec, response_handler=default_response_handler, **kwargs): - return self._encode_invoke_on_partition(codec, self._get_partition_id(), response_handler, **kwargs) + request = replicated_map_values_codec.encode_request(self.name) + return self._invoke_on_partition(request, self._partition_id, handler) \ No newline at end of file diff --git a/hazelcast/proxy/ringbuffer.py b/hazelcast/proxy/ringbuffer.py index c5e37ef890..06698add6a 100644 --- a/hazelcast/proxy/ringbuffer.py +++ b/hazelcast/proxy/ringbuffer.py @@ -1,9 +1,9 @@ -from hazelcast.future import ImmediateFuture +from hazelcast.future import ImmediateFuture, Future from hazelcast.protocol.codec import ringbuffer_add_all_codec, ringbuffer_add_codec, ringbuffer_capacity_codec, \ ringbuffer_head_sequence_codec, ringbuffer_read_many_codec, ringbuffer_read_one_codec, \ ringbuffer_remaining_capacity_codec, ringbuffer_size_codec, ringbuffer_tail_sequence_codec from hazelcast.proxy.base import PartitionSpecificProxy -from hazelcast.util import check_not_negative, check_not_none, check_not_empty, check_true +from hazelcast.util import check_not_negative, check_not_none, check_not_empty, check_true, ImmutableLazyDataList OVERFLOW_POLICY_OVERWRITE = 0 """ @@ -48,7 +48,9 @@ class Ringbuffer(PartitionSpecificProxy): meaning that only 1 thread is able to take an item. A Ringbuffer.read is not destructive, so you can have multiple threads reading the same item multiple times. 
""" - _capacity = None + def __init__(self, service_name, name, context): + super(Ringbuffer, self).__init__(service_name, name, context) + self._capacity = None def capacity(self): """ @@ -57,11 +59,12 @@ def capacity(self): :return: (long), the capacity of Ringbuffer. """ if not self._capacity: - def cache_capacity(f): - self._capacity = f.result() - return f.result() + def handler(message): + self._capacity = ringbuffer_capacity_codec.decode_response(message) + return self._capacity - return self._encode_invoke(ringbuffer_capacity_codec).continue_with(cache_capacity) + request = ringbuffer_capacity_codec.encode_request(self.name) + return self._invoke(request, handler) return ImmediateFuture(self._capacity) def size(self): @@ -70,7 +73,8 @@ def size(self): :return: (long), the size of Ringbuffer. """ - return self._encode_invoke(ringbuffer_size_codec) + request = ringbuffer_size_codec.encode_request(self.name) + return self._invoke(request, ringbuffer_size_codec.decode_response) def tail_sequence(self): """ @@ -79,7 +83,8 @@ def tail_sequence(self): :return: (long), the sequence of the tail. """ - return self._encode_invoke(ringbuffer_tail_sequence_codec) + request = ringbuffer_tail_sequence_codec.encode_request(self.name) + return self._invoke(request, ringbuffer_tail_sequence_codec.decode_response) def head_sequence(self): """ @@ -89,7 +94,8 @@ def head_sequence(self): :return: (long), the sequence of the head. """ - return self._encode_invoke(ringbuffer_head_sequence_codec) + request = ringbuffer_head_sequence_codec.encode_request(self.name) + return self._invoke(request, ringbuffer_head_sequence_codec.decode_response) def remaining_capacity(self): """ @@ -97,7 +103,8 @@ def remaining_capacity(self): :return: (long), the remaining capacity of Ringbuffer. """ - return self._encode_invoke(ringbuffer_remaining_capacity_codec) + request = ringbuffer_remaining_capacity_codec.encode_request(self.name) + return self._invoke(request, ringbuffer_remaining_capacity_codec.decode_response) def add(self, item, overflow_policy=OVERFLOW_POLICY_OVERWRITE): """ @@ -108,7 +115,9 @@ def add(self, item, overflow_policy=OVERFLOW_POLICY_OVERWRITE): :param overflow_policy: (int), the OverflowPolicy to be used when there is no space (optional). :return: (long), the sequenceId of the added item, or -1 if the add failed. """ - return self._encode_invoke(ringbuffer_add_codec, value=self._to_data(item), overflow_policy=overflow_policy) + item_data = self._to_data(item) + request = ringbuffer_add_codec.encode_request(self.name, overflow_policy, item_data) + return self._invoke(request, ringbuffer_add_codec.decode_response) def add_all(self, items, overflow_policy=OVERFLOW_POLICY_OVERWRITE): """ @@ -126,11 +135,14 @@ def add_all(self, items, overflow_policy=OVERFLOW_POLICY_OVERWRITE): check_not_empty(items, "items can't be empty") if len(items) > MAX_BATCH_SIZE: raise AssertionError("Batch size can't be greater than %d" % MAX_BATCH_SIZE) + + item_data_list = [] for item in items: check_not_none(item, "item can't be None") + item_data_list.append(self._to_data(item)) - item_list = [self._to_data(x) for x in items] - return self._encode_invoke(ringbuffer_add_all_codec, value_list=item_list, overflow_policy=overflow_policy) + request = ringbuffer_add_all_codec.encode_request(self.name, item_data_list, overflow_policy) + return self._invoke(request, ringbuffer_add_all_codec.decode_response) def read_one(self, sequence): """ @@ -141,7 +153,12 @@ def read_one(self, sequence): :return: (object), the read item. 
""" check_not_negative(sequence, "sequence can't be smaller than 0") - return self._encode_invoke(ringbuffer_read_one_codec, sequence=sequence) + + def handler(message): + return self._to_object(ringbuffer_read_one_codec.decode_response(message)) + + request = ringbuffer_read_one_codec.encode_request(self.name, sequence) + return self._invoke(request, handler) def read_many(self, start_sequence, min_count, max_count): """ @@ -157,13 +174,29 @@ def read_many(self, start_sequence, min_count, max_count): """ check_not_negative(start_sequence, "sequence can't be smaller than 0") check_true(max_count >= min_count, "max count should be greater or equal to min count") - check_true(min_count <= self.capacity().result(), "min count should be smaller or equal to capacity") check_true(max_count < MAX_BATCH_SIZE, "max count can't be greater than %d" % MAX_BATCH_SIZE) - return self._encode_invoke(ringbuffer_read_many_codec, response_handler=self._read_many_response_handler, - start_sequence=start_sequence, min_count=min_count, - max_count=max_count, filter=None) - - @staticmethod - def _read_many_response_handler(future, codec, to_object): - return codec.decode_response(future.result(), to_object)['items'] + future = Future() + request = ringbuffer_read_many_codec.encode_request(self.name, start_sequence, min_count, max_count, None) + + def handler(message): + return ImmutableLazyDataList(ringbuffer_read_many_codec.decode_response(message)["items"], self._to_object) + + def check_capacity(capacity): + try: + capacity = capacity.result() + check_true(min_count <= capacity, "min count: %d should be smaller or equal to capacity: %d" + % (min_count, capacity)) + f = self._invoke(request, handler) + f.add_done_callback(set_result) + except Exception as e: + future.set_exception(e) + + def set_result(f): + try: + future.set_result(f.result()) + except Exception as e: + future.set_exception(e) + + self.capacity().add_done_callback(check_capacity) + return future diff --git a/hazelcast/proxy/semaphore.py b/hazelcast/proxy/semaphore.py deleted file mode 100644 index 79113ad539..0000000000 --- a/hazelcast/proxy/semaphore.py +++ /dev/null @@ -1,120 +0,0 @@ -from hazelcast.protocol.codec import \ - semaphore_acquire_codec, \ - semaphore_available_permits_codec, \ - semaphore_drain_permits_codec, \ - semaphore_init_codec, \ - semaphore_reduce_permits_codec, \ - semaphore_release_codec, \ - semaphore_try_acquire_codec -from hazelcast.proxy.base import PartitionSpecificProxy -from hazelcast.util import check_not_negative, to_millis - - -class Semaphore(PartitionSpecificProxy): - """ - Semaphore is a backed-up distributed alternative to the Python `asyncio.Semaphore `_ - - Semaphore is a cluster-wide counting semaphore. Conceptually, it maintains a set of permits. Each acquire() blocks - if necessary until a permit is available, and then takes it. Each release() adds a permit, potentially releasing a - blocking acquirer. However, no actual permit objects are used; the semaphore just keeps a count of the number - available and acts accordingly. - - The Hazelcast distributed semaphore implementation guarantees that threads invoking any of the acquire methods are - selected to obtain permits in the order in which their invocation of those methods was processed(first-in-first-out; - FIFO). Note that FIFO ordering necessarily applies to specific internal points of execution within the cluster. 
- Therefore, it is possible for one member to invoke acquire before another, but reach the ordering point after the - other, and similarly upon return from the method. - - This class also provides convenience methods to acquire and release multiple permits at a time. Beware of the - increased risk of indefinite postponement when using the multiple acquire. If a single permit is released to a - semaphore that is currently blocking, a thread waiting for one permit will acquire it before a thread waiting for - multiple permits regardless of the call order. - """ - def init(self, permits): - """ - Try to initialize this Semaphore instance with the given permit count. - - :param permits: (int), the given permit count. - :return: (bool), ``true`` if initialization success. - """ - check_not_negative(permits, "Permits cannot be negative!") - return self._encode_invoke(semaphore_init_codec, permits=permits) - - def acquire(self, permits=1): - """ - Acquires one or specified amount of permits if available, and returns immediately, reducing the number of - available permits by one or given amount. - - If insufficient permits are available then the current thread becomes disabled for thread scheduling purposes - and lies dormant until one of following happens: - - * some other thread invokes one of the release methods for this semaphore, the current thread is next to be - assigned permits and the number of available permits satisfies this request, - * this Semaphore instance is destroyed, or - * some other thread interrupts the current thread. - - :param permits: (int), the number of permits to acquire (optional). - """ - check_not_negative(permits, "Permits cannot be negative!") - return self._encode_invoke(semaphore_acquire_codec, permits=permits) - - def available_permits(self): - """ - Returns the current number of permits currently available in this semaphore. - - * This method is typically used for debugging and testing purposes. - :return: (int), the number of available permits in this semaphore. - """ - return self._encode_invoke(semaphore_available_permits_codec) - - def drain_permits(self): - """ - Acquires and returns all permits that are immediately available. - - :return: (int), the number of permits drained. - """ - return self._encode_invoke(semaphore_drain_permits_codec) - - def reduce_permits(self, reduction): - """ - Shrinks the number of available permits by the indicated reduction. This method differs from acquire in that it - does not block waiting for permits to become available. - - :param reduction: (int), the number of permits to remove. - """ - check_not_negative(reduction, "Reduction cannot be negative!") - return self._encode_invoke(semaphore_reduce_permits_codec, reduction=reduction) - - def release(self, permits=1): - """ - Releases one or given number of permits, increasing the number of available permits by one or that amount. - - There is no requirement that a thread that releases a permit must have acquired that permit by calling one of - the acquire methods. Correct usage of a semaphore is established by programming convention in the application. - - :param permits: (int), the number of permits to release (optional). 
- """ - check_not_negative(permits, "Permits cannot be negative!") - return self._encode_invoke(semaphore_release_codec, permits=permits) - - def try_acquire(self, permits=1, timeout=0): - """ - Tries to acquire one or the given number of permits, if they are available, and returns immediately, with the - value ``true``, reducing the number of available permits by the given amount. - - If there are insufficient permits and a timeout is provided, the current thread becomes disabled for thread - scheduling purposes and lies dormant until one of following happens: - * some other thread invokes the release() method for this semaphore and the current thread is next to be - assigned a permit, or - * some other thread interrupts the current thread, or - * the specified waiting time elapses. - - If there are insufficient permits and no timeout is provided, this method will return immediately with the value - ``false`` and the number of available permits is unchanged. - - :param permits: (int), the number of permits to acquire (optional). - :param timeout: (long), the maximum time in seconds to wait for the permit(s) (optional). - :return: (bool), ``true`` if desired amount of permits was acquired, ``false`` otherwise. - """ - check_not_negative(permits, "Permits cannot be negative!") - return self._encode_invoke(semaphore_try_acquire_codec, permits=permits, timeout=to_millis(timeout)) diff --git a/hazelcast/proxy/set.py b/hazelcast/proxy/set.py index 2416fa2a3a..0431b65f06 100644 --- a/hazelcast/proxy/set.py +++ b/hazelcast/proxy/set.py @@ -14,7 +14,7 @@ set_size_codec from hazelcast.proxy.base import PartitionSpecificProxy, ItemEvent, ItemEventType -from hazelcast.util import check_not_none +from hazelcast.util import check_not_none, ImmutableLazyDataList class Set(PartitionSpecificProxy): @@ -30,7 +30,8 @@ def add(self, item): """ check_not_none(item, "Value can't be None") element_data = self._to_data(item) - return self._encode_invoke(set_add_codec, value=element_data) + request = set_add_codec.encode_request(self.name, element_data) + return self._invoke(request, set_add_codec.decode_response) def add_all(self, items): """ @@ -44,7 +45,9 @@ def add_all(self, items): for item in items: check_not_none(item, "Value can't be None") data_items.append(self._to_data(item)) - return self._encode_invoke(set_add_all_codec, value_list=data_items) + + request = set_add_all_codec.encode_request(self.name, data_items) + return self._invoke(request, set_add_all_codec.decode_response) def add_listener(self, include_value=False, item_added_func=None, item_removed_func=None): """ @@ -59,7 +62,7 @@ def add_listener(self, include_value=False, item_added_func=None, item_removed_f def handle_event_item(item, uuid, event_type): item = item if include_value else None - member = self._client.cluster.get_member_by_uuid(uuid) + member = self._context.cluster_service.get_member(uuid) item_event = ItemEvent(self.name, item, event_type, member, self._to_object) if event_type == ItemEventType.added: @@ -69,7 +72,7 @@ def handle_event_item(item, uuid, event_type): if item_removed_func: item_removed_func(item_event) - return self._register_listener(request, lambda r: set_add_listener_codec.decode_response(r)['response'], + return self._register_listener(request, lambda r: set_add_listener_codec.decode_response(r), lambda reg_id: set_remove_listener_codec.encode_request(self.name, reg_id), lambda m: set_add_listener_codec.handle(m, handle_event_item)) @@ -77,7 +80,8 @@ def clear(self): """ Clears the set. 
Set will be empty with this call. """ - return self._encode_invoke(set_clear_codec) + request = set_clear_codec.encode_request(self.name) + return self._invoke(request) def contains(self, item): """ @@ -88,7 +92,8 @@ def contains(self, item): """ check_not_none(item, "Value can't be None") item_data = self._to_data(item) - return self._encode_invoke(set_contains_codec, value=item_data) + request = set_contains_codec.encode_request(self.name, item_data) + return self._invoke(request, set_contains_codec.decode_response) def contains_all(self, items): """ @@ -102,7 +107,9 @@ def contains_all(self, items): for item in items: check_not_none(item, "Value can't be None") data_items.append(self._to_data(item)) - return self._encode_invoke(set_contains_all_codec, items=data_items) + + request = set_contains_all_codec.encode_request(self.name, data_items) + return self._invoke(request, set_contains_all_codec.decode_response) def get_all(self): """ @@ -110,7 +117,11 @@ def get_all(self): :return: (Sequence), list of the items in this set. """ - return self._encode_invoke(set_get_all_codec) + def handler(message): + return ImmutableLazyDataList(set_get_all_codec.decode_response(message), self._to_object) + + request = set_get_all_codec.encode_request(self.name) + return self._invoke(request, handler) def is_empty(self): """ @@ -118,7 +129,8 @@ def is_empty(self): :return: (bool), ``true`` if this set is empty, ``false`` otherwise. """ - return self._encode_invoke(set_is_empty_codec) + request = set_is_empty_codec.encode_request(self.name) + return self._invoke(request, set_is_empty_codec.decode_response) def remove(self, item): """ @@ -129,7 +141,8 @@ def remove(self, item): """ check_not_none(item, "Value can't be None") item_data = self._to_data(item) - return self._encode_invoke(set_remove_codec, value=item_data) + request = set_remove_codec.encode_request(self.name, item_data) + return self._invoke(request, set_remove_codec.decode_response) def remove_all(self, items): """ @@ -143,7 +156,9 @@ def remove_all(self, items): for item in items: check_not_none(item, "Value can't be None") data_items.append(self._to_data(item)) - return self._encode_invoke(set_compare_and_remove_all_codec, values=data_items) + + request = set_compare_and_remove_all_codec.encode_request(self.name, data_items) + return self._invoke(request, set_compare_and_remove_all_codec.decode_response) def remove_listener(self, registration_id): """ @@ -167,7 +182,9 @@ def retain_all(self, items): for item in items: check_not_none(item, "Value can't be None") data_items.append(self._to_data(item)) - return self._encode_invoke(set_compare_and_retain_all_codec, values=data_items) + + request = set_compare_and_retain_all_codec.encode_request(self.name, data_items) + return self._invoke(request, set_compare_and_retain_all_codec.decode_response) def size(self): """ @@ -175,4 +192,5 @@ def size(self): :return: (int), number of items in this set. """ - return self._encode_invoke(set_size_codec) \ No newline at end of file + request = set_size_codec.encode_request(self.name) + return self._invoke(request, set_size_codec.decode_response) diff --git a/hazelcast/proxy/topic.py b/hazelcast/proxy/topic.py index fb43e68935..53109a5aba 100644 --- a/hazelcast/proxy/topic.py +++ b/hazelcast/proxy/topic.py @@ -24,17 +24,18 @@ def add_listener(self, on_message=None): :param on_message: (Function), function to be called when a message is published. :return: (str), a registration id which is used as a key to remove the listener. 
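`get_all()` and the other read methods above now hand back an `ImmutableLazyDataList`, which keeps the raw serialized entries and only deserializes an element when it is touched. A simplified stand-in for that idea (not the real class):

```python
class LazyList(object):
    """Simplified stand-in for ImmutableLazyDataList: keeps the raw
    serialized entries and only deserializes an element on access."""

    def __init__(self, raw_items, to_object):
        self._raw = raw_items
        self._to_object = to_object

    def __len__(self):
        return len(self._raw)

    def __getitem__(self, index):
        return self._to_object(self._raw[index])


values = LazyList([b"alice", b"bob"], to_object=lambda data: data.decode("utf-8"))
print(len(values), values[1])  # 2 bob -- only the second element is deserialized
```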
""" - request = topic_add_message_listener_codec.encode_request(self.name, self._is_smart) + codec = topic_add_message_listener_codec + request = codec.encode_request(self.name, self._is_smart) def handle(item, publish_time, uuid): - member = self._client.cluster.get_member_by_uuid(uuid) + member = self._context.cluster_service.get_member(uuid) item_event = TopicMessage(self.name, item, publish_time, member, self._to_object) on_message(item_event) return self._register_listener( - request, lambda r: topic_add_message_listener_codec.decode_response(r)['response'], + request, lambda r: codec.decode_response(r), lambda reg_id: topic_remove_message_listener_codec.encode_request(self.name, reg_id), - lambda m: topic_add_message_listener_codec.handle(m, handle)) + lambda m: codec.handle(m, handle)) def publish(self, message): """ @@ -43,7 +44,8 @@ def publish(self, message): :param message: (object), the message to be published. """ message_data = self._to_data(message) - self._encode_invoke(topic_publish_codec, message=message_data) + request = topic_publish_codec.encode_request(self.name, message_data) + return self._invoke(request) def remove_listener(self, registration_id): """ diff --git a/hazelcast/proxy/transactional_list.py b/hazelcast/proxy/transactional_list.py index 92162addc3..4b63521141 100644 --- a/hazelcast/proxy/transactional_list.py +++ b/hazelcast/proxy/transactional_list.py @@ -1,7 +1,7 @@ from hazelcast.protocol.codec import transactional_list_add_codec, transactional_list_remove_codec, \ transactional_list_size_codec from hazelcast.proxy.base import TransactionalProxy -from hazelcast.util import check_not_none +from hazelcast.util import check_not_none, thread_id class TransactionalList(TransactionalProxy): @@ -16,7 +16,9 @@ def add(self, item): :return: (bool), ``true`` if the item is added successfully, ``false`` otherwise. """ check_not_none(item, "item can't be none") - return self._encode_invoke(transactional_list_add_codec, item=self._to_data(item)) + item_data = self._to_data(item) + request = transactional_list_add_codec.encode_request(self.name, self.transaction.id, thread_id(), item_data) + return self._invoke(request, transactional_list_add_codec.decode_response) def remove(self, item): """ @@ -26,7 +28,9 @@ def remove(self, item): :return: (bool), ``true`` if the item is removed successfully, ``false`` otherwise. """ check_not_none(item, "item can't be none") - return self._encode_invoke(transactional_list_remove_codec, item=self._to_data(item)) + item_data = self._to_data(item) + request = transactional_list_remove_codec.encode_request(self.name, self.transaction.id, thread_id(), item_data) + return self._invoke(request, transactional_list_remove_codec.decode_response) def size(self): """ @@ -34,4 +38,5 @@ def size(self): :return: (int), the size of the list. 
""" - return self._encode_invoke(transactional_list_size_codec) + request = transactional_list_size_codec.encode_request(self.name, self.transaction.id, thread_id()) + return self._invoke(request, transactional_list_size_codec.decode_response) diff --git a/hazelcast/proxy/transactional_map.py b/hazelcast/proxy/transactional_map.py index 2015815afe..706f0d1d25 100644 --- a/hazelcast/proxy/transactional_map.py +++ b/hazelcast/proxy/transactional_map.py @@ -5,13 +5,14 @@ transactional_map_replace_codec, transactional_map_replace_if_same_codec, transactional_map_set_codec, \ transactional_map_size_codec, transactional_map_values_codec, transactional_map_values_with_predicate_codec from hazelcast.proxy.base import TransactionalProxy -from hazelcast.util import check_not_none, to_millis +from hazelcast.util import check_not_none, to_millis, thread_id, ImmutableLazyDataList class TransactionalMap(TransactionalProxy): """ Transactional implementation of :class:`~hazelcast.proxy.map.Map`. """ + def contains_key(self, key): """ Transactional implementation of :func:`Map.contains_key(key) ` @@ -20,7 +21,10 @@ def contains_key(self, key): :return: (bool), ``true`` if this map contains an entry for the specified key, ``false`` otherwise. """ check_not_none(key, "key can't be none") - return self._encode_invoke(transactional_map_contains_key_codec, key=self._to_data(key)) + key_data = self._to_data(key) + request = transactional_map_contains_key_codec.encode_request(self.name, self.transaction.id, thread_id(), + key_data) + return self._invoke(request, transactional_map_contains_key_codec.decode_response) def get(self, key): """ @@ -30,7 +34,13 @@ def get(self, key): :return: (object), the value for the specified key. """ check_not_none(key, "key can't be none") - return self._encode_invoke(transactional_map_get_codec, key=self._to_data(key)) + + def handler(message): + return self._to_object(transactional_map_get_codec.decode_response(message)) + + key_data = self._to_data(key) + request = transactional_map_get_codec.encode_request(self.name, self.transaction.id, thread_id(), key_data) + return self._invoke(request, handler) def get_for_update(self, key): """ @@ -44,7 +54,14 @@ def get_for_update(self, key): :func:`Map.get(key) ` """ check_not_none(key, "key can't be none") - return self._encode_invoke(transactional_map_get_for_update_codec, key=self._to_data(key)) + + def handler(message): + return self._to_object(transactional_map_get_for_update_codec.decode_response(message)) + + key_data = self._to_data(key) + request = transactional_map_get_for_update_codec.encode_request(self.name, self.transaction.id, thread_id(), + key_data) + return self._invoke(request, handler) def size(self): """ @@ -52,7 +69,8 @@ def size(self): :return: (int), number of entries in this map. """ - return self._encode_invoke(transactional_map_size_codec) + request = transactional_map_size_codec.encode_request(self.name, self.transaction.id, thread_id()) + return self._invoke(request, transactional_map_size_codec.decode_response) def is_empty(self): """ @@ -60,8 +78,8 @@ def is_empty(self): :return: (bool), ``true`` if this map contains no key-value mappings, ``false`` otherwise. 
""" - - return self._encode_invoke(transactional_map_is_empty_codec) + request = transactional_map_is_empty_codec.encode_request(self.name, self.transaction.id, thread_id()) + return self._invoke(request, transactional_map_is_empty_codec.decode_response) def put(self, key, value, ttl=-1): """ @@ -77,8 +95,15 @@ def put(self, key, value, ttl=-1): """ check_not_none(key, "key can't be none") check_not_none(value, "value can't be none") - return self._encode_invoke(transactional_map_put_codec, key=self._to_data(key), - value=self._to_data(value), ttl=to_millis(ttl)) + + def handler(message): + return self._to_object(transactional_map_put_codec.decode_response(message)) + + key_data = self._to_data(key) + value_data = self._to_data(value) + request = transactional_map_put_codec.encode_request(self.name, self.transaction.id, thread_id(), key_data, + value_data, to_millis(ttl)) + return self._invoke(request, handler) def put_if_absent(self, key, value): """ @@ -90,13 +115,19 @@ def put_if_absent(self, key, value): :param key: (object), key of the entry. :param value: (object), value of the entry. - :param ttl: (int), maximum time in seconds for this entry to stay in the map (optional). :return: (object), old value of the entry. """ check_not_none(key, "key can't be none") check_not_none(value, "value can't be none") - return self._encode_invoke(transactional_map_put_if_absent_codec, key=self._to_data(key), - value=self._to_data(value)) + + def handler(message): + return self._to_object(transactional_map_put_if_absent_codec.decode_response(message)) + + key_data = self._to_data(key) + value_data = self._to_data(value) + request = transactional_map_put_if_absent_codec.encode_request(self.name, self.transaction.id, thread_id(), + key_data, value_data) + return self._invoke(request, handler) def set(self, key, value): """ @@ -110,8 +141,12 @@ def set(self, key, value): """ check_not_none(key, "key can't be none") check_not_none(value, "value can't be none") - return self._encode_invoke(transactional_map_set_codec, key=self._to_data(key), - value=self._to_data(value)) + + key_data = self._to_data(key) + value_data = self._to_data(value) + request = transactional_map_set_codec.encode_request(self.name, self.transaction.id, + thread_id(), key_data, value_data) + return self._invoke(request) def replace(self, key, value): """ @@ -126,8 +161,15 @@ def replace(self, key, value): """ check_not_none(key, "key can't be none") check_not_none(value, "value can't be none") - return self._encode_invoke(transactional_map_replace_codec, key=self._to_data(key), - value=self._to_data(value)) + + def handler(message): + return self._to_object(transactional_map_replace_codec.decode_response(message)) + + key_data = self._to_data(key) + value_data = self._to_data(value) + request = transactional_map_replace_codec.encode_request(self.name, self.transaction.id, thread_id(), + key_data, value_data) + return self._invoke(request, handler) def replace_if_same(self, key, old_value, new_value): """ @@ -145,8 +187,13 @@ def replace_if_same(self, key, old_value, new_value): check_not_none(key, "key can't be none") check_not_none(old_value, "old_value can't be none") check_not_none(new_value, "new_value can't be none") - return self._encode_invoke(transactional_map_replace_if_same_codec, key=self._to_data(key), - old_value=self._to_data(old_value), new_value=self._to_data(new_value)) + + key_data = self._to_data(key) + old_value_data = self._to_data(old_value) + new_value_data = self._to_data(new_value) + request = 
transactional_map_replace_if_same_codec.encode_request(self.name, self.transaction.id, thread_id(), + key_data, old_value_data, new_value_data) + return self._invoke(request, transactional_map_replace_if_same_codec.decode_response) def remove(self, key): """ @@ -159,7 +206,13 @@ def remove(self, key): :return: (object), the previous value associated with key, or ``None`` if there was no mapping for key. """ check_not_none(key, "key can't be none") - return self._encode_invoke(transactional_map_remove_codec, key=self._to_data(key)) + + def handler(message): + return self._to_object(transactional_map_remove_codec.decode_response(message)) + + key_data = self._to_data(key) + request = transactional_map_remove_codec.encode_request(self.name, self.transaction.id, thread_id(), key_data) + return self._invoke(request, handler) def remove_if_same(self, key, value): """ @@ -176,8 +229,12 @@ def remove_if_same(self, key, value): check_not_none(key, "key can't be none") check_not_none(value, "value can't be none") - return self._encode_invoke(transactional_map_remove_if_same_codec, key=self._to_data(key), - value=self._to_data(value)) + + key_data = self._to_data(key) + value_data = self._to_data(value) + request = transactional_map_remove_if_same_codec.encode_request(self.name, self.transaction.id, thread_id(), + key_data, value_data) + return self._invoke(request, transactional_map_remove_if_same_codec.decode_response) def delete(self, key): """ @@ -189,7 +246,10 @@ def delete(self, key): :param key: (object), key of the mapping to be deleted. """ check_not_none(key, "key can't be none") - return self._encode_invoke(transactional_map_delete_codec, key=self._to_data(key)) + + key_data = self._to_data(key) + request = transactional_map_delete_codec.encode_request(self.name, self.transaction.id, thread_id(), key_data) + return self._invoke(request) def key_set(self, predicate=None): """ @@ -201,11 +261,21 @@ def key_set(self, predicate=None): .. seealso:: :class:`~hazelcast.serialization.predicate.Predicate` for more info about predicates. """ - if predicate: - return self._encode_invoke(transactional_map_key_set_with_predicate_codec, - predicate=self._to_data(predicate)) - return self._encode_invoke(transactional_map_key_set_codec) + def handler(message): + return ImmutableLazyDataList(transactional_map_key_set_with_predicate_codec.decode_response(message), + self._to_object) + + predicate_data = self._to_data(predicate) + request = transactional_map_key_set_with_predicate_codec.encode_request(self.name, self.transaction.id, + thread_id(), predicate_data) + else: + def handler(message): + return ImmutableLazyDataList(transactional_map_key_set_codec.decode_response(message), self._to_object) + + request = transactional_map_key_set_codec.encode_request(self.name, self.transaction.id, thread_id()) + + return self._invoke(request, handler) def values(self, predicate=None): """ @@ -218,6 +288,17 @@ def values(self, predicate=None): :class:`~hazelcast.serialization.predicate.Predicate` for more info about predicates. 
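`key_set` and `values` above branch on whether a predicate is supplied. A hedged usage sketch of the predicate helpers referenced in the docstring, shown here against the regular (blocking) Map proxy and assuming a member on `localhost`; `sql` and `is_equal_to` are the aliases defined in `hazelcast/serialization/predicate.py` later in this diff:

```python
import hazelcast
from hazelcast.serialization.predicate import is_equal_to, sql

client = hazelcast.HazelcastClient()
scores = client.get_map("scores").blocking()
scores.put("alice", 42)
scores.put("bob", 7)

high_scorers = scores.key_set(sql("this > 10"))      # keys whose value is greater than 10
exactly_seven = scores.key_set(is_equal_to("this", 7))  # keys whose value equals 7

client.shutdown()
```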
""" if predicate: - return self._encode_invoke(transactional_map_values_with_predicate_codec, - predicate=self._to_data(predicate)) - return self._encode_invoke(transactional_map_values_codec) + def handler(message): + return ImmutableLazyDataList(transactional_map_values_with_predicate_codec.decode_response(message), + self._to_object) + + predicate_data = self._to_data(predicate) + request = transactional_map_values_with_predicate_codec.encode_request(self.name, self.transaction.id, + thread_id(), predicate_data) + else: + def handler(message): + return ImmutableLazyDataList(transactional_map_values_codec.decode_response(message), self._to_object) + + request = transactional_map_values_codec.encode_request(self.name, self.transaction.id, thread_id()) + + return self._invoke(request, handler) diff --git a/hazelcast/proxy/transactional_multi_map.py b/hazelcast/proxy/transactional_multi_map.py index e0e0fed4c8..62a381aaf4 100644 --- a/hazelcast/proxy/transactional_multi_map.py +++ b/hazelcast/proxy/transactional_multi_map.py @@ -2,26 +2,31 @@ transactional_multi_map_remove_codec, transactional_multi_map_remove_entry_codec, \ transactional_multi_map_size_codec, transactional_multi_map_value_count_codec from hazelcast.proxy.base import TransactionalProxy -from hazelcast.util import check_not_none +from hazelcast.util import check_not_none, thread_id, ImmutableLazyDataList class TransactionalMultiMap(TransactionalProxy): """ Transactional implementation of :class:`~hazelcast.proxy.multi_map.MultiMap`. """ + def put(self, key, value): """ Transactional implementation of :func:`MultiMap.put(key, value) ` :param key: (object), the key to be stored. :param value: (object), the value to be stored. - :return: (bool), ``true`` if the size of the multimap is increased, ``false`` if the multimap already contains the - key-value tuple. + :return: (bool), ``true`` if the size of the multimap is increased, ``false`` if the multimap already contains + the key-value tuple. """ check_not_none(key, "key can't be none") check_not_none(value, "value can't be none") - return self._encode_invoke(transactional_multi_map_put_codec, key=self._to_data(key), - value=self._to_data(value)) + + key_data = self._to_data(key) + value_data = self._to_data(value) + request = transactional_multi_map_put_codec.encode_request(self.name, self.transaction.id, + thread_id(), key_data, value_data) + return self._invoke(request, transactional_multi_map_put_codec.decode_response) def get(self, key): """ @@ -31,7 +36,14 @@ def get(self, key): :return: (Sequence), the collection of the values associated with the key. """ check_not_none(key, "key can't be none") - return self._encode_invoke(transactional_multi_map_get_codec, key=self._to_data(key)) + + def handler(message): + return ImmutableLazyDataList(transactional_multi_map_get_codec.decode_response(message), self._to_object) + + key_data = self._to_data(key) + request = transactional_multi_map_get_codec.encode_request(self.name, self.transaction.id, thread_id(), + key_data) + return self._invoke(request, handler) def remove(self, key, value): """ @@ -40,12 +52,16 @@ def remove(self, key, value): :param key: (object), the key of the entry to remove. :param value: (object), the value of the entry to remove. 
- :return: + :return: (bool), True if the item is removed, False otherwise """ check_not_none(key, "key can't be none") check_not_none(value, "value can't be none") - return self._encode_invoke(transactional_multi_map_remove_entry_codec, key=self._to_data(key), - value=self._to_data(value)) + + key_data = self._to_data(key) + value_data = self._to_data(value) + request = transactional_multi_map_remove_entry_codec.encode_request(self.name, self.transaction.id, + thread_id(), key_data, value_data) + return self._invoke(request, transactional_multi_map_remove_entry_codec.decode_response) def remove_all(self, key): """ @@ -56,7 +72,14 @@ def remove_all(self, key): :return: (list), the collection of the values associated with the key. """ check_not_none(key, "key can't be none") - return self._encode_invoke(transactional_multi_map_remove_codec, key=self._to_data(key)) + + def handler(message): + return ImmutableLazyDataList(transactional_multi_map_remove_codec.decode_response(message), self._to_object) + + key_data = self._to_data(key) + request = transactional_multi_map_remove_codec.encode_request(self.name, self.transaction.id, thread_id(), + key_data) + return self._invoke(request, handler) def value_count(self, key): """ @@ -67,7 +90,11 @@ def value_count(self, key): :return: (int), the number of values matching the given key in the multimap. """ check_not_none(key, "key can't be none") - return self._encode_invoke(transactional_multi_map_value_count_codec, key=self._to_data(key)) + + key_data = self._to_data(key) + request = transactional_multi_map_value_count_codec.encode_request(self.name, self.transaction.id, thread_id(), + key_data) + return self._invoke(request, transactional_multi_map_value_count_codec.decode_response) def size(self): """ @@ -75,4 +102,5 @@ def size(self): :return: (int), the number of key-value tuples in the multimap. """ - return self._encode_invoke(transactional_multi_map_size_codec) + request = transactional_multi_map_size_codec.encode_request(self.name, self.transaction.id, thread_id()) + return self._invoke(request, transactional_multi_map_size_codec.decode_response) diff --git a/hazelcast/proxy/transactional_queue.py b/hazelcast/proxy/transactional_queue.py index 44442efeae..a4687e2b7d 100644 --- a/hazelcast/proxy/transactional_queue.py +++ b/hazelcast/proxy/transactional_queue.py @@ -1,7 +1,7 @@ from hazelcast.protocol.codec import transactional_queue_offer_codec, transactional_queue_peek_codec, \ transactional_queue_poll_codec, transactional_queue_size_codec, transactional_queue_take_codec from hazelcast.proxy.base import TransactionalProxy -from hazelcast.util import check_not_none, to_millis +from hazelcast.util import check_not_none, to_millis, thread_id class TransactionalQueue(TransactionalProxy): @@ -17,8 +17,11 @@ def offer(self, item, timeout=0): :return: (bool), ``true`` if the element was added to this queue, ``false`` otherwise. """ check_not_none(item, "item can't be none") - return self._encode_invoke(transactional_queue_offer_codec, item=self._to_data(item), - timeout=to_millis(timeout)) + + item_data = self._to_data(item) + request = transactional_queue_offer_codec.encode_request(self.name, self.transaction.id, thread_id(), + item_data, to_millis(timeout)) + return self._invoke(request, transactional_queue_offer_codec.decode_response) def take(self): """ @@ -26,27 +29,41 @@ def take(self): :return: (object), the head of this queue. 
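The queue operations above accept timeouts in seconds and convert them with `to_millis()` before encoding. A stand-in for that conversion (the real helper lives in `hazelcast.util` and may treat sentinel values such as `-1` specially):

```python
def to_millis(seconds):
    # illustrative stand-in: the protocol expects milliseconds,
    # while the public API takes seconds
    return int(seconds * 1000)


assert to_millis(2.5) == 2500
```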
""" - return self._encode_invoke(transactional_queue_take_codec) + def handler(message): + return self._to_object(transactional_queue_take_codec.decode_response(message)) + + request = transactional_queue_take_codec.encode_request(self.name, self.transaction.id, thread_id()) + return self._invoke(request, handler) def poll(self, timeout=0): """ Transactional implementation of :func:`Queue.poll(timeout) ` :param timeout: (long), maximum time in seconds to wait for addition (optional). - :return: (object), the head of this queue, or ``None`` if this queue is empty or specified timeout elapses before an - item is added to the queue. + :return: (object), the head of this queue, or ``None`` if this queue is empty or specified timeout elapses + before an item is added to the queue. """ - return self._encode_invoke(transactional_queue_poll_codec, timeout=to_millis(timeout)) + def handler(message): + return self._to_object(transactional_queue_poll_codec.decode_response(message)) + + request = transactional_queue_poll_codec.encode_request(self.name, self.transaction.id, thread_id(), + to_millis(timeout)) + return self._invoke(request, handler) def peek(self, timeout=0): """ Transactional implementation of :func:`Queue.peek(timeout) ` :param timeout: (long), maximum time in seconds to wait for addition (optional). - :return: (object), the head of this queue, or ``None`` if this queue is empty or specified timeout elapses before an - item is added to the queue. + :return: (object), the head of this queue, or ``None`` if this queue is empty or specified timeout elapses + before an item is added to the queue. """ - return self._encode_invoke(transactional_queue_peek_codec, timeout=to_millis(timeout)) + def handler(message): + return self._to_object(transactional_queue_peek_codec.decode_response(message)) + + request = transactional_queue_peek_codec.encode_request(self.name, self.transaction.id, thread_id(), + to_millis(timeout)) + return self._invoke(request, handler) def size(self): """ @@ -54,4 +71,5 @@ def size(self): :return: (int), size of the queue. """ - return self._encode_invoke(transactional_queue_size_codec) + request = transactional_queue_size_codec.encode_request(self.name, self.transaction.id, thread_id()) + return self._invoke(request, transactional_queue_size_codec.decode_response) diff --git a/hazelcast/proxy/transactional_set.py b/hazelcast/proxy/transactional_set.py index 1379892314..0ec3b11fdf 100644 --- a/hazelcast/proxy/transactional_set.py +++ b/hazelcast/proxy/transactional_set.py @@ -1,7 +1,7 @@ from hazelcast.protocol.codec import transactional_set_add_codec, transactional_set_remove_codec, \ transactional_set_size_codec from hazelcast.proxy.base import TransactionalProxy -from hazelcast.util import check_not_none +from hazelcast.util import check_not_none, thread_id class TransactionalSet(TransactionalProxy): @@ -16,7 +16,9 @@ def add(self, item): :return: (bool), ``true`` if item is added successfully, ``false`` otherwise. """ check_not_none(item, "item can't be none") - return self._encode_invoke(transactional_set_add_codec, item=self._to_data(item)) + item_data = self._to_data(item) + request = transactional_set_add_codec.encode_request(self.name, self.transaction.id, thread_id(), item_data) + return self._invoke(request, transactional_set_add_codec.decode_response) def remove(self, item): """ @@ -26,7 +28,9 @@ def remove(self, item): :return: (bool), ``true`` if item is remove successfully, ``false`` otherwise. 
""" check_not_none(item, "item can't be none") - return self._encode_invoke(transactional_set_remove_codec, item=self._to_data(item)) + item_data = self._to_data(item) + request = transactional_set_remove_codec.encode_request(self.name, self.transaction.id, thread_id(), item_data) + return self._invoke(request, transactional_set_remove_codec.decode_response) def size(self): """ @@ -34,4 +38,5 @@ def size(self): :return: (int), size of the set. """ - return self._encode_invoke(transactional_set_size_codec) + request = transactional_set_size_codec.encode_request(self.name, self.transaction.id, thread_id()) + return self._invoke(request, transactional_set_size_codec.decode_response) diff --git a/hazelcast/reactor.py b/hazelcast/reactor.py index a3ca825835..e0e0cff3bb 100644 --- a/hazelcast/reactor.py +++ b/hazelcast/reactor.py @@ -9,9 +9,12 @@ from collections import deque from functools import total_ordering + +from hazelcast import six from hazelcast.config import PROTOCOL -from hazelcast.connection import Connection, BUFFER_SIZE -from hazelcast.exception import HazelcastError +from hazelcast.connection import Connection +from hazelcast.core import Address +from hazelcast.errors import HazelcastError from hazelcast.future import Future from hazelcast.six.moves import queue @@ -26,7 +29,7 @@ class AsyncoreReactor(object): _is_live = False logger = logging.getLogger("HazelcastClient.AsyncoreReactor") - def __init__(self, logger_extras=None): + def __init__(self, logger_extras): self._logger_extras = logger_extras self._timers = queue.PriorityQueue() self._map = {} @@ -59,7 +62,7 @@ def _check_timers(self): now = time.time() while not self._timers.empty(): try: - _, timer = self._timers.queue[0] + timer = self._timers.queue[0][1] except IndexError: return @@ -82,26 +85,28 @@ def add_timer(self, delay, callback): def shutdown(self): if not self._is_live: return + self._is_live = False + + if self._thread is not threading.current_thread(): + self._thread.join() + for connection in list(self._map.values()): try: - connection.close(HazelcastError("Client is shutting down")) + connection.close(None, HazelcastError("Client is shutting down")) except OSError as connection: if connection.args[0] == socket.EBADF: pass else: raise self._map.clear() - self._thread.join() - def new_connection(self, address, connect_timeout, socket_options, connection_closed_callback, message_callback, - network_config): - return AsyncoreConnection(self._map, address, connect_timeout, socket_options, - connection_closed_callback, message_callback, network_config, self._logger_extras) + def connection_factory(self, connection_manager, connection_id, address, network_config, message_callback): + return AsyncoreConnection(self._map, connection_manager, connection_id, address, + network_config, message_callback, self._logger_extras) def _cleanup_timer(self, timer): try: - self.logger.debug("Cancel timer %s" % timer, extra=self._logger_extras) self._timers.queue.remove((timer.end, timer)) except ValueError: pass @@ -115,35 +120,44 @@ def _cleanup_all_timers(self): return +_BUFFER_SIZE = 128000 + + class AsyncoreConnection(Connection, asyncore.dispatcher): sent_protocol_bytes = False - read_buffer_size = BUFFER_SIZE + read_buffer_size = _BUFFER_SIZE - def __init__(self, map, address, connect_timeout, socket_options, connection_closed_callback, - message_callback, network_config, logger_extras=None): - asyncore.dispatcher.__init__(self, map=map) - Connection.__init__(self, address, connection_closed_callback, message_callback, 
logger_extras) + def __init__(self, dispatcher_map, connection_manager, connection_id, address, + network_config, message_callback, logger_extras): + asyncore.dispatcher.__init__(self, map=dispatcher_map) + Connection.__init__(self, connection_manager, connection_id, message_callback, logger_extras) + self.connected_address = address self._write_lock = threading.Lock() self._write_queue = deque() self.create_socket(socket.AF_INET, socket.SOCK_STREAM) - self.socket.settimeout(connect_timeout) + + timeout = network_config.connection_timeout + if not timeout: + timeout = six.MAXSIZE + + self.socket.settimeout(timeout) # set tcp no delay self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) # set socket buffer - self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUFFER_SIZE) - self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUFFER_SIZE) + self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, _BUFFER_SIZE) + self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, _BUFFER_SIZE) - for socket_option in socket_options: + for socket_option in network_config.socket_options: if socket_option.option is socket.SO_RCVBUF: self.read_buffer_size = socket_option.value self.socket.setsockopt(socket_option.level, socket_option.option, socket_option.value) - self.connect(self._address) + self.connect((address.host, address.port)) - ssl_config = network_config.ssl_config + ssl_config = network_config.ssl if ssl and ssl_config.enabled: ssl_context = ssl.SSLContext(ssl.PROTOCOL_SSLv23) @@ -184,16 +198,25 @@ def __init__(self, map, address, connect_timeout, socket_options, connection_clo # the socket should be non-blocking from now on self.socket.settimeout(0) - self._write_queue.append(b"CB2") + self.local_address = Address(*self.socket.getsockname()) + + self._write_queue.append(b"CP2") def handle_connect(self): - self.start_time_in_seconds = time.time() - self.logger.debug("Connected to %s", self._address, extra=self._logger_extras) + self.start_time = time.time() + self.logger.debug("Connected to %s", self.connected_address, extra=self._logger_extras) def handle_read(self): - self._read_buffer.extend(self.recv(self.read_buffer_size)) - self.last_read_in_seconds = time.time() - self.receive_message() + reader = self._reader + while True: + data = self.recv(self.read_buffer_size) + reader.read(data) + self.last_read_time = time.time() + if len(data) < self.read_buffer_size: + break + + if reader.length: + reader.process() def handle_write(self): with self._write_lock: @@ -202,59 +225,63 @@ def handle_write(self): except IndexError: return sent = self.send(data) - self.last_write_in_seconds = time.time() + self.last_write_time = time.time() self.sent_protocol_bytes = True if sent < len(data): self._write_queue.appendleft(data[sent:]) def handle_close(self): self.logger.warning("Connection closed by server", extra=self._logger_extras) - self.close(IOError("Connection closed by server")) + self.close(None, IOError("Connection closed by server")) def handle_error(self): error = sys.exc_info()[1] if sys.exc_info()[0] is socket.error: if error.errno != errno.EAGAIN and error.errno != errno.EDEADLK: self.logger.exception("Received error", extra=self._logger_extras) - self.close(IOError(error)) + self.close(None, IOError(error)) else: - self.logger.warning("Received unexpected error: " + str(error), extra=self._logger_extras) + self.logger.exception("Received unexpected error: %s" % error, extra=self._logger_extras) def readable(self): - return not self._closed and 
self.sent_protocol_bytes + return self.live and self.sent_protocol_bytes - def write(self, data): + def _write(self, buf): # if write queue is empty, send the data right away, otherwise add to queue if len(self._write_queue) == 0 and self._write_lock.acquire(False): try: - sent = self.send(data) - self.last_write_in_seconds = time.time() - if sent < len(data): + sent = self.send(buf) + self.last_write_time = time.time() + if sent < len(buf): self.logger.info("Adding to queue", extra=self._logger_extras) - self._write_queue.appendleft(data[sent:]) + self._write_queue.appendleft(buf[sent:]) finally: self._write_lock.release() else: - self._write_queue.append(data) + self._write_queue.append(buf) def writable(self): return len(self._write_queue) > 0 - def close(self, cause): - if not self._closed: - self._closed = True - asyncore.dispatcher.close(self) - self._connection_closed_callback(self, cause) + def _inner_close(self): + asyncore.dispatcher.close(self) + + def __repr__(self): + return "Connection(id=%s, live=%s, remote_address=%s)" % (self._id, self.live, self.remote_address) + + def __str__(self): + return self.__repr__() @total_ordering class Timer(object): - canceled = False + __slots__ = ("end", "timer_ended_cb", "timer_canceled_cb", "canceled") def __init__(self, end, timer_ended_cb, timer_canceled_cb): self.end = end self.timer_ended_cb = timer_ended_cb self.timer_canceled_cb = timer_canceled_cb + self.canceled = False def __eq__(self, other): return self.end == other.end @@ -273,6 +300,8 @@ def check_timer(self, now): if self.canceled: return True - if now > self.end: + if now >= self.end: self.timer_ended_cb() return True + + return False diff --git a/hazelcast/serialization/base.py b/hazelcast/serialization/base.py index 5c1bc9fd9c..3f5ff23218 100644 --- a/hazelcast/serialization/base.py +++ b/hazelcast/serialization/base.py @@ -1,10 +1,10 @@ import sys from threading import RLock +from hazelcast.config import INTEGER_TYPE from hazelcast.serialization.api import * from hazelcast.serialization.data import * -from hazelcast.config import INTEGER_TYPE -from hazelcast.exception import HazelcastInstanceNotActiveError, HazelcastSerializationError +from hazelcast.errors import HazelcastInstanceNotActiveError, HazelcastSerializationError from hazelcast.serialization.input import _ObjectDataInput from hazelcast.serialization.output import _ObjectDataOutput from hazelcast.serialization.serializer import * diff --git a/hazelcast/serialization/bits.py b/hazelcast/serialization/bits.py index ab50a73736..e0cae0c017 100644 --- a/hazelcast/serialization/bits.py +++ b/hazelcast/serialization/bits.py @@ -1,3 +1,5 @@ +import struct + """ Constants """ @@ -9,24 +11,27 @@ FLOAT_SIZE_IN_BYTES = 4 LONG_SIZE_IN_BYTES = 8 DOUBLE_SIZE_IN_BYTES = 8 +UUID_SIZE_IN_BYTES = 17 # bool + long + long + +LE_INT = struct.Struct("i") +BE_INT8 = struct.Struct(">b") +BE_UINT8 = struct.Struct(">B") +BE_INT16 = struct.Struct(">h") +BE_UINT16 = struct.Struct(">H") +BE_LONG = struct.Struct(">q") +BE_FLOAT = struct.Struct(">f") +BE_DOUBLE = struct.Struct(">d") BIG_ENDIAN = 2 LITTLE_ENDIAN = 1 @@ -59,4 +64,4 @@ def calculate_size_data(val): def calculate_size_address(val): - return calculate_size_str(val.host) + INT_SIZE_IN_BYTES + return calculate_size_str(val.host) + INT_SIZE_IN_BYTES \ No newline at end of file diff --git a/hazelcast/serialization/data.py b/hazelcast/serialization/data.py index 725a38ae15..f033472535 100644 --- a/hazelcast/serialization/data.py +++ b/hazelcast/serialization/data.py @@ -1,7 +1,5 @@ 
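The `bits.py` change above swaps per-call format strings for precompiled `struct.Struct` objects, which `data.py`, `input.py` and `output.py` then reuse; a compiled `Struct` parses its format once, so repeated packing and unpacking avoids re-parsing it. A small self-contained illustration:

```python
import struct

BE_INT = struct.Struct(">i")   # same shape as the new module-level constants

buf = bytearray(4)
BE_INT.pack_into(buf, 0, 42)              # write a big-endian int in place
print(BE_INT.unpack_from(buf, 0)[0])      # -> 42, without re-parsing ">i"
```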
-from struct import unpack_from - from hazelcast.hash import murmur_hash3_x86_32 -from hazelcast.serialization.bits import * +from hazelcast.serialization import BE_INT from hazelcast.serialization.serialization_const import * PARTITION_HASH_OFFSET = 0 @@ -35,7 +33,7 @@ def get_type(self): """ if self.total_size() == 0: return CONSTANT_TYPE_NULL - return unpack_from(FMT_BE_INT, self._buffer, TYPE_OFFSET)[0] + return BE_INT.unpack_from(self._buffer, TYPE_OFFSET)[0] def total_size(self): """ @@ -63,7 +61,7 @@ def get_partition_hash(self): :return: partition hash """ if self.has_partition_hash(): - return unpack_from(FMT_BE_INT, self._buffer, PARTITION_HASH_OFFSET)[0] + return BE_INT.unpack_from(self._buffer, PARTITION_HASH_OFFSET)[0] return self.hash_code() def is_portable(self): @@ -82,7 +80,7 @@ def has_partition_hash(self): """ return self._buffer is not None \ and len(self._buffer) >= HEAP_DATA_OVERHEAD \ - and unpack_from(FMT_BE_INT, self._buffer, PARTITION_HASH_OFFSET)[0] != 0 + and BE_INT.unpack_from(self._buffer, PARTITION_HASH_OFFSET)[0] != 0 def hash_code(self): """ @@ -96,8 +94,8 @@ def __hash__(self): return self.hash_code() def __eq__(self, other): - return isinstance(other, self.__class__) and self.total_size() == other.total_size() and self._buffer == other.to_bytes() + return isinstance(other, Data) and self.total_size() == other.total_size() \ + and self._buffer == other.to_bytes() def __len__(self): return self.total_size() - diff --git a/hazelcast/serialization/input.py b/hazelcast/serialization/input.py index 7be39c1661..785390e3a3 100644 --- a/hazelcast/serialization/input.py +++ b/hazelcast/serialization/input.py @@ -1,11 +1,8 @@ -import struct - from hazelcast.serialization.api import * from hazelcast.serialization.bits import * from hazelcast.serialization.data import Data from hazelcast import six from hazelcast.six.moves import range -from hazelcast.config import ClientProperties class _ObjectDataInput(ObjectDataInput): @@ -15,16 +12,15 @@ def __init__(self, buff, offset=0, serialization_service=None, is_big_endian=Tru self._is_big_endian = is_big_endian self._pos = offset self._size = len(buff) - self._respect_bytearrays = False if serialization_service is None else serialization_service.properties.get_bool(ClientProperties.SERIALIZATION_INPUT_RETURNS_BYTEARRAY) # Local cache struct formats according to endianness - self._FMT_INT8 = FMT_BE_INT8 if self._is_big_endian else FMT_LE_INT8 - self._FMT_UINT8 = FMT_BE_UINT8 if self._is_big_endian else FMT_LE_UINT8 - self._FMT_INT = FMT_BE_INT if self._is_big_endian else FMT_LE_INT - self._FMT_SHORT = FMT_BE_INT16 if self._is_big_endian else FMT_LE_INT16 - self._FMT_CHAR = FMT_BE_UINT16 if self._is_big_endian else FMT_LE_UINT16 - self._FMT_LONG = FMT_BE_LONG if self._is_big_endian else FMT_LE_LONG - self._FMT_FLOAT = FMT_BE_FLOAT if self._is_big_endian else FMT_LE_FLOAT - self._FMT_DOUBLE = FMT_BE_DOUBLE if self._is_big_endian else FMT_LE_DOUBLE + self._FMT_INT8 = BE_INT8 if self._is_big_endian else LE_INT8 + self._FMT_UINT8 = BE_UINT8 if self._is_big_endian else LE_UINT8 + self._FMT_INT = BE_INT if self._is_big_endian else LE_INT + self._FMT_SHORT = BE_INT16 if self._is_big_endian else LE_INT16 + self._FMT_CHAR = BE_UINT16 if self._is_big_endian else LE_UINT16 + self._FMT_LONG = BE_LONG if self._is_big_endian else LE_LONG + self._FMT_FLOAT = BE_FLOAT if self._is_big_endian else LE_FLOAT + self._FMT_DOUBLE = BE_DOUBLE if self._is_big_endian else LE_DOUBLE def read_into(self, buff, offset=None, length=None): _off = offset if 
offset is not None else 0 @@ -86,23 +82,9 @@ def read_utf(self): length = self.read_int() if length == NULL_ARRAY_LENGTH: return None - result = bytearray() - for i in range(0, length): - _first_byte = self.read_byte() & 0xFF - b = _first_byte >> 4 - if 0 <= b <= 7: - result.append(_first_byte) - continue - if 12 <= b <= 13: - result.append(_first_byte) - result.append(self.read_byte() & 0xFF) - continue - if b == 14: - result.append(_first_byte) - result.append(self.read_byte() & 0xFF) - result.append(self.read_byte() & 0xFF) - continue - raise UnicodeDecodeError("Malformed utf-8 content") + result = bytearray(length) + if length > 0: + self.read_into(result, 0, length) return result.decode("utf-8") def read_byte_array(self): @@ -113,10 +95,7 @@ def read_byte_array(self): if length > 0: self.read_into(result, 0, length) - if self._respect_bytearrays: - return result - - return list(result) + return result def read_boolean_array(self): return self._read_array_fnc(self.read_boolean) @@ -171,10 +150,10 @@ def _check_available(self, position, size): def _read_from_buff(self, fmt, size, position=None): if position is None: - val = struct.unpack_from(fmt, self._buffer, self._pos) + val = fmt.unpack_from(self._buffer, self._pos) self._pos += size else: - val = struct.unpack_from(fmt, self._buffer, position) + val = fmt.unpack_from(self._buffer, position) return val[0] def _read_array_fnc(self, read_item_fnc): diff --git a/hazelcast/serialization/output.py b/hazelcast/serialization/output.py index de87ede17f..b7e5d71c06 100644 --- a/hazelcast/serialization/output.py +++ b/hazelcast/serialization/output.py @@ -1,5 +1,3 @@ -import struct - from hazelcast.serialization.api import * from hazelcast.serialization.bits import * from hazelcast.six.moves import range @@ -13,12 +11,12 @@ def __init__(self, init_size, serialization_service, is_big_endian=True): self._is_big_endian = is_big_endian self._pos = 0 # Local cache struct formats according to endianness - self._FMT_INT = FMT_BE_INT if self._is_big_endian else FMT_LE_INT - self._FMT_SHORT = FMT_BE_INT16 if self._is_big_endian else FMT_LE_INT16 + self._FMT_INT = BE_INT if self._is_big_endian else LE_INT + self._FMT_SHORT = BE_INT16 if self._is_big_endian else LE_INT16 self._CHAR_ENCODING = "utf_16_be" if self._is_big_endian else "utf_16_le" - self._FMT_LONG = FMT_BE_LONG if self._is_big_endian else FMT_LE_LONG - self._FMT_FLOAT = FMT_BE_FLOAT if self._is_big_endian else FMT_LE_FLOAT - self._FMT_DOUBLE = FMT_BE_DOUBLE if self._is_big_endian else FMT_LE_DOUBLE + self._FMT_LONG = BE_LONG if self._is_big_endian else LE_LONG + self._FMT_FLOAT = BE_FLOAT if self._is_big_endian else LE_FLOAT + self._FMT_DOUBLE = BE_DOUBLE if self._is_big_endian else LE_DOUBLE def _write(self, val): self._ensure_available(BYTE_SIZE_IN_BYTES) @@ -44,7 +42,7 @@ def write_byte(self, val): def write_short(self, val): self._ensure_available(SHORT_SIZE_IN_BYTES) - struct.pack_into(self._FMT_SHORT, self._buffer, self._pos, val) + self._FMT_SHORT.pack_into(self._buffer, self._pos, val) self._pos += SHORT_SIZE_IN_BYTES def write_char(self, val): @@ -54,36 +52,38 @@ def write_char(self, val): def write_int(self, val, position=None): self._ensure_available(INT_SIZE_IN_BYTES) if position is None: - struct.pack_into(self._FMT_INT, self._buffer, self._pos, val) + self._FMT_INT.pack_into(self._buffer, self._pos, val) self._pos += INT_SIZE_IN_BYTES else: - struct.pack_into(self._FMT_INT, self._buffer, position, val) + self._FMT_INT.pack_into(self._buffer, position, val) def 
write_int_big_endian(self, val): self._ensure_available(INT_SIZE_IN_BYTES) - struct.pack_into(FMT_BE_INT, self._buffer, self._pos, val) + BE_INT.pack_into(self._buffer, self._pos, val) self._pos += INT_SIZE_IN_BYTES def write_long(self, val): self._ensure_available(LONG_SIZE_IN_BYTES) - struct.pack_into(self._FMT_LONG, self._buffer, self._pos, val) + self._FMT_LONG.pack_into(self._buffer, self._pos, val) self._pos += LONG_SIZE_IN_BYTES def write_float(self, val): self._ensure_available(FLOAT_SIZE_IN_BYTES) - struct.pack_into(self._FMT_FLOAT, self._buffer, self._pos, val) + self._FMT_FLOAT.pack_into(self._buffer, self._pos, val) self._pos += FLOAT_SIZE_IN_BYTES def write_double(self, val): self._ensure_available(DOUBLE_SIZE_IN_BYTES) - struct.pack_into(self._FMT_DOUBLE, self._buffer, self._pos, val) + self._FMT_DOUBLE.pack_into(self._buffer, self._pos, val) self._pos += DOUBLE_SIZE_IN_BYTES def write_utf(self, val): - _len = len(val) if val is not None else NULL_ARRAY_LENGTH - self.write_int(_len) - if _len > 0: - self.write_from(val.encode("utf-8")) + if val is None: + self.write_int(NULL_ARRAY_LENGTH) + else: + encoded_data = val.encode("utf-8") + self.write_int(len(encoded_data)) + self.write_from(encoded_data) def write_byte_array(self, val): _len = len(val) if val is not None else NULL_ARRAY_LENGTH diff --git a/hazelcast/serialization/portable/classdef.py b/hazelcast/serialization/portable/classdef.py index a3c66e00c6..de115cbf28 100644 --- a/hazelcast/serialization/portable/classdef.py +++ b/hazelcast/serialization/portable/classdef.py @@ -1,4 +1,4 @@ -from hazelcast.exception import HazelcastSerializationError +from hazelcast.errors import HazelcastSerializationError from hazelcast.util import enum from hazelcast import six diff --git a/hazelcast/serialization/portable/context.py b/hazelcast/serialization/portable/context.py index 3e0a41736e..f693c4cef3 100644 --- a/hazelcast/serialization/portable/context.py +++ b/hazelcast/serialization/portable/context.py @@ -1,7 +1,7 @@ import threading from hazelcast import util -from hazelcast.exception import HazelcastSerializationError +from hazelcast.errors import HazelcastSerializationError from hazelcast.serialization import bits from hazelcast.serialization.portable.classdef import ClassDefinition, ClassDefinitionBuilder, FieldType, FieldDefinition from hazelcast.serialization.portable.writer import ClassDefinitionWriter diff --git a/hazelcast/serialization/portable/reader.py b/hazelcast/serialization/portable/reader.py index 27f95ca4af..9960afc82c 100644 --- a/hazelcast/serialization/portable/reader.py +++ b/hazelcast/serialization/portable/reader.py @@ -1,9 +1,10 @@ -from hazelcast.exception import HazelcastSerializationError +from hazelcast.errors import HazelcastSerializationError from hazelcast.serialization import bits from hazelcast.serialization.api import PortableReader from hazelcast.serialization.portable.classdef import FieldType from hazelcast.six.moves import range + class DefaultPortableReader(PortableReader): def __init__(self, portable_serializer, data_input, class_def): self._portable_serializer = portable_serializer diff --git a/hazelcast/serialization/portable/serializer.py b/hazelcast/serialization/portable/serializer.py index 429aa69fd1..34a3033bdc 100644 --- a/hazelcast/serialization/portable/serializer.py +++ b/hazelcast/serialization/portable/serializer.py @@ -1,5 +1,5 @@ import hazelcast.util as util -from hazelcast.exception import HazelcastSerializationError +from hazelcast.errors import 
HazelcastSerializationError from hazelcast.serialization.api import StreamSerializer, Portable from hazelcast.serialization.portable.reader import DefaultPortableReader, MorphingPortableReader from hazelcast.serialization.portable.writer import DefaultPortableWriter diff --git a/hazelcast/serialization/portable/writer.py b/hazelcast/serialization/portable/writer.py index 5221765820..a759d9d1e7 100644 --- a/hazelcast/serialization/portable/writer.py +++ b/hazelcast/serialization/portable/writer.py @@ -1,5 +1,5 @@ import hazelcast.util as util -from hazelcast.exception import HazelcastSerializationError +from hazelcast.errors import HazelcastSerializationError from hazelcast.serialization import INT_SIZE_IN_BYTES, NULL_ARRAY_LENGTH from hazelcast.serialization.api import PortableWriter from hazelcast.serialization.output import EmptyObjectDataOutput diff --git a/hazelcast/serialization/predicate.py b/hazelcast/serialization/predicate.py index b83515c6fc..0d6569fb12 100644 --- a/hazelcast/serialization/predicate.py +++ b/hazelcast/serialization/predicate.py @@ -1,6 +1,6 @@ from hazelcast.serialization.api import IdentifiedDataSerializable -PREDICATE_FACTORY_ID = -32 +PREDICATE_FACTORY_ID = -20 class Predicate(IdentifiedDataSerializable): @@ -204,6 +204,7 @@ def write_data(self, object_data_output): def __repr__(self): return "TruePredicate()" + sql = SqlPredicate is_equal_to = EqualPredicate is_not_equal_to = NotEqualPredicate diff --git a/hazelcast/serialization/serialization_const.py b/hazelcast/serialization/serialization_const.py index 8565873fce..5e342e2f3c 100644 --- a/hazelcast/serialization/serialization_const.py +++ b/hazelcast/serialization/serialization_const.py @@ -27,18 +27,14 @@ # ------------------------------------------------------------ # DEFAULT SERIALIZERS -JAVA_DEFAULT_TYPE_CLASS = -21 -JAVA_DEFAULT_TYPE_DATE = -22 -JAVA_DEFAULT_TYPE_BIG_INTEGER = -23 -JAVA_DEFAULT_TYPE_BIG_DECIMAL = -24 -JAVA_DEFAULT_TYPE_ENUM = -25 -JAVA_DEFAULT_TYPE_ARRAY_LIST = -26 -JAVA_DEFAULT_TYPE_LINKED_LIST = -27 +JAVA_DEFAULT_TYPE_CLASS = -24 +JAVA_DEFAULT_TYPE_DATE = -25 +JAVA_DEFAULT_TYPE_BIG_INTEGER = -26 +JAVA_DEFAULT_TYPE_BIG_DECIMAL = -27 +JAVA_DEFAULT_TYPE_ARRAY_LIST = -29 +JAVA_DEFAULT_TYPE_LINKED_LIST = -30 JAVASCRIPT_JSON_SERIALIZATION_TYPE = -130 -# NUMBER OF CONSTANT SERIALIZERS... 
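Alongside the `hazelcast.exception` to `hazelcast.errors` rename, the predicate factory ID moves to the value expected by the 4.0 protocol, and the lowercase aliases (`sql`, `is_equal_to`, ...) stay the convenient way to build predicates. A hedged usage sketch; the map name and attribute names are made up for illustration:

```python
from hazelcast import HazelcastClient
from hazelcast.serialization.predicate import sql, is_equal_to

client = HazelcastClient()  # assumes a reachable member with default settings
employees = client.get_map("employees").blocking()

seniors = employees.values(sql("age > 30"))
admins = employees.values(is_equal_to("role", "admin"))

client.shutdown()
```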
-CONSTANT_SERIALIZERS_LENGTH = 28 - # ------------------------------------------------------------ # JAVA SERIALIZATION diff --git a/hazelcast/serialization/serializer.py b/hazelcast/serialization/serializer.py index ed475033e1..5f4ea938cc 100644 --- a/hazelcast/serialization/serializer.py +++ b/hazelcast/serialization/serializer.py @@ -306,23 +306,6 @@ def get_type_id(self): return JAVA_DEFAULT_TYPE_CLASS -class JavaEnumSerializer(BaseSerializer): - def read(self, inp): - """ - :param inp: - :return: a tuple of (Enum-name, Enum-value-name) - """ - return tuple(inp.read_utf(), inp.read_utf()) - - def write(self, out, obj): - enum_name, enum_val_name = obj - out.write_utf(enum_name) - out.write_utf(enum_val_name) - - def get_type_id(self): - return JAVA_DEFAULT_TYPE_ENUM - - class ArrayListSerializer(BaseSerializer): def read(self, inp): size = inp.read_int() diff --git a/hazelcast/serialization/service.py b/hazelcast/serialization/service.py index 1ceda5c251..11cd7ed74f 100644 --- a/hazelcast/serialization/service.py +++ b/hazelcast/serialization/service.py @@ -1,11 +1,9 @@ -from hazelcast.exception import HazelcastSerializationError from hazelcast.serialization.base import BaseSerializationService from hazelcast.serialization.portable.classdef import FieldType from hazelcast.serialization.portable.context import PortableContext from hazelcast.serialization.portable.serializer import PortableSerializer from hazelcast.serialization.serializer import * from hazelcast import six -from hazelcast.config import ClientProperties DEFAULT_OUT_BUFFER_SIZE = 4 * 1024 @@ -18,7 +16,7 @@ def default_partition_strategy(key): class SerializationServiceV1(BaseSerializationService): - def __init__(self, serialization_config, properties=ClientProperties({}), version=1, global_partition_strategy=default_partition_strategy, + def __init__(self, serialization_config, version=1, global_partition_strategy=default_partition_strategy, output_buffer_size=DEFAULT_OUT_BUFFER_SIZE): super(SerializationServiceV1, self).__init__(version, global_partition_strategy, output_buffer_size, serialization_config.is_big_endian, @@ -42,8 +40,6 @@ def __init__(self, serialization_config, properties=ClientProperties({}), versio if global_serializer: self._registry._global_serializer = global_serializer() - self.properties = properties - def _register_constant_serializers(self): self._registry.register_constant_serializer(self._registry._null_serializer, type(None)) self._registry.register_constant_serializer(self._registry._data_serializer) @@ -72,7 +68,6 @@ def _register_constant_serializers(self): self._registry.register_constant_serializer(BigIntegerSerializer()) self._registry.register_constant_serializer(BigDecimalSerializer()) self._registry.register_constant_serializer(JavaClassSerializer()) - self._registry.register_constant_serializer(JavaEnumSerializer()) self._registry.register_constant_serializer(ArrayListSerializer(), list) self._registry.register_constant_serializer(LinkedListSerializer()) self._registry.register_constant_serializer(HazelcastJsonValueSerializer(), HazelcastJsonValue) diff --git a/hazelcast/statistics.py b/hazelcast/statistics.py index 1d8e3a3459..5d108f346d 100644 --- a/hazelcast/statistics.py +++ b/hazelcast/statistics.py @@ -1,38 +1,38 @@ import logging import os +from hazelcast.invocation import Invocation from hazelcast.protocol.codec import client_statistics_codec -from hazelcast.util import calculate_version, current_time_in_millis, to_millis, to_nanos, current_time +from hazelcast.util import 
current_time_in_millis, to_millis, to_nanos, current_time from hazelcast.config import ClientProperties from hazelcast.version import CLIENT_VERSION, CLIENT_TYPE from hazelcast import six try: import psutil + PSUTIL_ENABLED = True except ImportError: PSUTIL_ENABLED = False class Statistics(object): - - _SINCE_VERSION_STRING = "3.9" - _SINCE_VERSION = calculate_version(_SINCE_VERSION_STRING) - _NEAR_CACHE_CATEGORY_PREFIX = "nc." _STAT_SEPARATOR = "," _KEY_VALUE_SEPARATOR = "=" _EMPTY_STAT_VALUE = "" _DEFAULT_PROBE_VALUE = 0 - logger = logging.getLogger("HazelcastClient.Statistics") - def __init__(self, client): + def __init__(self, client, reactor, connection_manager, invocation_service, near_cache_manager, logger_extras): self._client = client - self._logger_extras = {"client_name": client.name, "group_name": client.config.group_config.name} + self._reactor = reactor + self._connection_manager = connection_manager + self._invocation_service = invocation_service + self._near_cache_manager = near_cache_manager + self._logger_extras = logger_extras self._enabled = client.properties.get_bool(ClientProperties.STATISTICS_ENABLED) - self._cached_owner_address = None self._statistics_timer = None self._failed_gauges = set() @@ -45,18 +45,20 @@ def start(self): default_period = self._client.properties.get_seconds_positive_or_default( ClientProperties.STATISTICS_PERIOD_SECONDS) - self.logger.warning("Provided client statistics {} cannot be less than or equal to 0." + self.logger.warning("Provided client statistics {} cannot be less than or equal to 0. " "You provided {} as the configuration. Client will use the default value " "{} instead.".format(ClientProperties.STATISTICS_PERIOD_SECONDS.name, - period, - default_period), extra=self._logger_extras) + period, default_period), extra=self._logger_extras) period = default_period def _statistics_task(): + if not self._client.lifecycle_service.is_running(): + return + self._send_statistics() - self._statistics_timer = self._client.reactor.add_timer(period, _statistics_task) + self._statistics_timer = self._reactor.add_timer(period, _statistics_task) - self._statistics_timer = self._client.reactor.add_timer(period, _statistics_task) + self._statistics_timer = self._reactor.add_timer(period, _statistics_task) self.logger.info("Client statistics enabled with the period of {} seconds.".format(period), extra=self._logger_extras) @@ -66,21 +68,23 @@ def shutdown(self): self._statistics_timer.cancel() def _send_statistics(self): - owner_connection = self._get_owner_connection() - if owner_connection is None: - self.logger.debug("Cannot send client statistics to the server. No owner connection.", + connection = self._connection_manager.get_random_connection() + if not connection: + self.logger.debug("Cannot send client statistics to the server. 
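Client statistics remain opt-in through client properties, and a non-positive period falls back to the default with the warning shown above. A minimal configuration sketch; the property values are illustrative:

```python
from hazelcast import HazelcastClient
from hazelcast.config import ClientConfig, ClientProperties

config = ClientConfig()
config.set_property(ClientProperties.STATISTICS_ENABLED.name, True)
config.set_property(ClientProperties.STATISTICS_PERIOD_SECONDS.name, 5)  # report every 5 seconds

client = HazelcastClient(config)
```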
No connection found.", extra=self._logger_extras) return + collection_timestamp = current_time_in_millis() stats = [] - self._fill_metrics(stats, owner_connection) + self._fill_metrics(stats, connection) self._add_near_cache_stats(stats) self._add_runtime_and_os_stats(stats) - self._send_stats_to_owner("".join(stats), owner_connection) + self._send_stats_to_owner(collection_timestamp, "".join(stats), connection) - def _send_stats_to_owner(self, stats, owner_connection): - request = client_statistics_codec.encode_request(stats) - self._client.invoker.invoke_on_connection(request, owner_connection) + def _send_stats_to_owner(self, collection_timestamp, stats, connection): + request = client_statistics_codec.encode_request(collection_timestamp, stats, bytearray(0)) + invocation = Invocation(request, connection=connection) + self._invocation_service.invoke(invocation) def _add_runtime_and_os_stats(self, stats): os_and_runtime_stats = self._get_os_and_runtime_stats() @@ -129,20 +133,20 @@ def _get_os_and_runtime_stats(self): return psutil_stats - def _fill_metrics(self, stats, owner_connection): + def _fill_metrics(self, stats, connection): self._add_stat(stats, "lastStatisticsCollectionTime", current_time_in_millis()) self._add_stat(stats, "enterprise", "false") self._add_stat(stats, "clientType", CLIENT_TYPE) self._add_stat(stats, "clientVersion", CLIENT_VERSION) - self._add_stat(stats, "clusterConnectionTimestamp", to_millis(owner_connection.start_time_in_seconds)) + self._add_stat(stats, "clusterConnectionTimestamp", to_millis(connection.start_time)) - local_host, local_ip = owner_connection.socket.getsockname() - local_address = str(local_host) + ":" + str(local_ip) + local_address = connection.local_address + local_address = str(local_address.host) + ":" + str(local_address.port) self._add_stat(stats, "clientAddress", local_address) self._add_stat(stats, "clientName", self._client.name) def _add_near_cache_stats(self, stats): - for near_cache in self._client.near_cache_manager.list_all_near_caches(): + for near_cache in self._near_cache_manager.list_near_caches(): near_cache_name_with_prefix = self._get_name_with_prefix(near_cache.name) near_cache_name_with_prefix.append(".") prefix = "".join(near_cache_name_with_prefix) @@ -172,28 +176,6 @@ def _add_stat(self, stats, name, value, key_prefix=None): def _add_empty_stat(self, stats, name, key_prefix=None): self._add_stat(stats, name, Statistics._EMPTY_STAT_VALUE, key_prefix) - def _get_owner_connection(self): - current_owner_address = self._client.cluster.owner_connection_address - connection = self._client.connection_manager.get_connection(current_owner_address) - - if connection is None: - return None - - server_version = connection.server_version - if server_version < Statistics._SINCE_VERSION: - # do not print too many logs if connected to an old version server - if self._cached_owner_address and self._cached_owner_address != current_owner_address: - self.logger.debug("Client statistics cannot be sent to server {} since," - "connected owner server version is less than the minimum supported server version ," - "{}.".format(current_owner_address, Statistics._SINCE_VERSION_STRING), - extra=self._logger_extras) - - # cache the last connected server address for decreasing the log prints - self._cached_owner_address = current_owner_address - return None - - return connection - def _get_name_with_prefix(self, name): return [Statistics._NEAR_CACHE_CATEGORY_PREFIX, self._escape_special_characters(name)] @@ -301,5 +283,3 @@ def 
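The statistics payload itself is unchanged: a flat string of `key=value` pairs joined with commas (the `_STAT_SEPARATOR` and `_KEY_VALUE_SEPARATOR` constants), only the transport moved to a plain `Invocation` over any available connection. A simplified sketch of the joining scheme, not the client's actual `_add_stat` helper:

```python
def add_stat(stats, name, value):
    # Pairs are appended as "name=value" and separated by commas.
    if stats:
        stats.append(",")
    stats.append("{}={}".format(name, value))

stats = []
add_stat(stats, "clientType", "PYH")
add_stat(stats, "clientVersion", "4.0.0")
print("".join(stats))  # clientType=PYH,clientVersion=4.0.0
```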
_collect_process_cpu_time(self, psutil_stats, probe_name, process): @_safe_psutil_stat_collector def _collect_process_uptime(self, psutil_stats, probe_name, process): return to_millis(current_time() - process.create_time()) - - diff --git a/hazelcast/transaction.py b/hazelcast/transaction.py index 68a1806cfd..1580d8b307 100644 --- a/hazelcast/transaction.py +++ b/hazelcast/transaction.py @@ -1,8 +1,9 @@ import logging import threading import time -from hazelcast.exception import HazelcastInstanceNotActiveError, TransactionError +from hazelcast.errors import TransactionError, IllegalStateError from hazelcast.future import make_blocking +from hazelcast.invocation import Invocation from hazelcast.protocol.codec import transaction_create_codec, transaction_commit_codec, transaction_rollback_codec from hazelcast.proxy.transactional_list import TransactionalList from hazelcast.proxy.transactional_map import TransactionalMap @@ -12,13 +13,13 @@ from hazelcast.util import thread_id from hazelcast.six.moves import range + _STATE_ACTIVE = "active" _STATE_NOT_STARTED = "not_started" _STATE_COMMITTED = "committed" _STATE_ROLLED_BACK = "rolled_back" _STATE_PARTIAL_COMMIT = "rolling_back" - TWO_PHASE = 1 """ The two phase commit is separated in 2 parts. First it tries to execute the prepare; if there are any conflicts, @@ -45,20 +46,21 @@ class TransactionManager(object): """ logger = logging.getLogger("HazelcastClient.TransactionManager") - def __init__(self, client): - self._client = client - self._logger_extras = {"client_name": client.name, "group_name": client.config.group_config.name} + def __init__(self, context, logger_extras): + self._context = context + self._logger_extras = logger_extras def _connect(self): + connection_manager = self._context.connection_manager for count in range(0, RETRY_COUNT): - try: - address = self._client.load_balancer.next_address() - return self._client.connection_manager.get_or_connect(address).result() - except (IOError, HazelcastInstanceNotActiveError): - self.logger.debug("Could not get a connection for the transaction. Attempt %d of %d", count, - RETRY_COUNT, exc_info=True, extra=self._logger_extras) - if count + 1 == RETRY_COUNT: - raise + connection = connection_manager.get_random_connection() + if connection: + return connection + + self.logger.debug("Could not get a connection for the transaction. Attempt %d of %d", count, + RETRY_COUNT, exc_info=True, extra=self._logger_extras) + if count + 1 == RETRY_COUNT: + raise IllegalStateError("No active connection is found") def new_transaction(self, timeout, durability, transaction_type): """ @@ -71,7 +73,7 @@ def new_transaction(self, timeout, durability, transaction_type): :return: (:class:`~hazelcast.transaction.Transaction`), new created Transaction. 
""" connection = self._connect() - return Transaction(self._client, connection, timeout, durability, transaction_type) + return Transaction(self._context, connection, timeout, durability, transaction_type) class Transaction(object): @@ -84,14 +86,13 @@ class Transaction(object): start_time = None _locals = threading.local() thread_id = None - logger = logging.getLogger("HazelcastClient.Transaction") - def __init__(self, client, connection, timeout, durability, transaction_type): + def __init__(self, context, connection, timeout, durability, transaction_type): + self._context = context self.connection = connection self.timeout = timeout self.durability = durability self.transaction_type = transaction_type - self.client = client self._objects = {} def begin(self): @@ -106,11 +107,15 @@ def begin(self): self.start_time = time.time() self.thread_id = thread_id() try: - request = transaction_create_codec.encode_request(timeout=int(self.timeout * 1000), durability=self.durability, + request = transaction_create_codec.encode_request(timeout=int(self.timeout * 1000), + durability=self.durability, transaction_type=self.transaction_type, thread_id=self.thread_id) - response = self.client.invoker.invoke_on_connection(request, self.connection).result() - self.id = transaction_create_codec.decode_response(response)["response"] + invocation = Invocation(request, connection=self.connection, response_handler=lambda m: m) + invocation_service = self._context.invocation_service + invocation_service.invoke(invocation) + response = invocation.future.result() + self.id = transaction_create_codec.decode_response(response) self.state = _STATE_ACTIVE except: self._locals.transaction_exists = False @@ -126,7 +131,10 @@ def commit(self): try: self._check_timeout() request = transaction_commit_codec.encode_request(self.id, self.thread_id) - self.client.invoker.invoke_on_connection(request, self.connection).result() + invocation = Invocation(request, connection=self.connection) + invocation_service = self._context.invocation_service + invocation_service.invoke(invocation) + invocation.future.result() self.state = _STATE_COMMITTED except: self.state = _STATE_PARTIAL_COMMIT @@ -144,7 +152,10 @@ def rollback(self): try: if self.state != _STATE_PARTIAL_COMMIT: request = transaction_rollback_codec.encode_request(self.id, self.thread_id) - self.client.invoker.invoke_on_connection(request, self.connection).result() + invocation = Invocation(request, connection=self.connection) + invocation_service = self._context.invocation_service + invocation_service.invoke(invocation) + invocation.future.result() self.state = _STATE_ROLLED_BACK finally: self._locals.transaction_exists = False @@ -202,7 +213,7 @@ def _get_or_create_object(self, name, proxy_type): try: return self._objects[key] except KeyError: - proxy = proxy_type(name, self) + proxy = proxy_type(name, self, self._context) self._objects[key] = proxy return make_blocking(proxy) diff --git a/hazelcast/util.py b/hazelcast/util.py index 7d01411326..0572c53f50 100644 --- a/hazelcast/util.py +++ b/hazelcast/util.py @@ -1,12 +1,9 @@ -import itertools import threading import time import logging -import hazelcast from collections import Sequence, Iterable from hazelcast import six -from hazelcast.six.moves import range from hazelcast.version import GIT_COMMIT_ID, GIT_COMMIT_DATE, CLIENT_VERSION DEFAULT_ADDRESS = "127.0.0.1" @@ -132,7 +129,8 @@ class AtomicInteger(object): AtomicInteger is an Integer which can work atomically. 
""" def __init__(self, initial=0): - self.count = itertools.count(start=initial) + self._mux = threading.RLock() + self._counter = initial def get_and_increment(self): """ @@ -140,14 +138,10 @@ def get_and_increment(self): :return: (int), current value of AtomicInteger. """ - return next(self.count) - - def set(self, value): - """ - Sets the value of this AtomicInteger. - :param value: (int), the new value of AtomicInteger. - """ - self.count = itertools.count(start=value) + with self._mux: + res = self._counter + self._counter += 1 + return res def enum(**enums): @@ -160,25 +154,6 @@ def enum(**enums): return type('Enum', (), enums) -def _parse_address(address): - if ":" in address: - host, port = address.split(":") - return [hazelcast.core.Address(host, int(port))] - return [hazelcast.core.Address(address, p) for p in range(DEFAULT_PORT, DEFAULT_PORT + 3)] - - -def get_possible_addresses(addresses=[], member_list=[]): - return set((addresses + [m.address for m in member_list])) or _parse_address(DEFAULT_ADDRESS) - - -def get_provider_addresses(providers=[]): - return list(itertools.chain(*[p.load_addresses() for p in providers])) - - -def parse_addresses(addresses=[]): - return list(itertools.chain(*[_parse_address(a) for a in addresses])) - - class ImmutableLazyDataList(Sequence): def __init__(self, list_data, to_object): super(ImmutableLazyDataList, self).__init__() @@ -326,9 +301,9 @@ def filter(self, record): class HazelcastFormatter(logging.Formatter): def format(self, record): client_name = getattr(record, "client_name", None) - group_name = getattr(record, "group_name", None) - if client_name and group_name: - record.msg = "[" + group_name + "] [" + client_name + "] " + record.msg + cluster_name = getattr(record, "cluster_name", None) + if client_name and cluster_name: + record.msg = "[" + cluster_name + "] [" + client_name + "] " + record.msg return super(HazelcastFormatter, self).format(record) diff --git a/hazelcast/version.py b/hazelcast/version.py index 8d03d0316e..434f9cc7e0 100644 --- a/hazelcast/version.py +++ b/hazelcast/version.py @@ -5,7 +5,7 @@ SERIALIZATION_VERSION = 1 CLIENT_TYPE = "PYH" -CLIENT_VERSION_INFO = (3, 12, 3) +CLIENT_VERSION_INFO = (4, 0, 0) CLIENT_VERSION = ".".join(map(str, CLIENT_VERSION_INFO)) diff --git a/run-tests.ps1 b/run-tests.ps1 index 38619a804f..ebc4406bda 100644 --- a/run-tests.ps1 +++ b/run-tests.ps1 @@ -1,10 +1,10 @@ -$serverVersion = "3.12.5" +$serverVersion = "4.0" $hazelcastTestVersion=$serverVersion $hazelcastEnterpriseTestVersion=$serverVersion $hazelcastVersion=$serverVersion $hazelcastEnterpriseVersion=$serverVersion -$hazelcastRCVersion="0.3-SNAPSHOT" +$hazelcastRCVersion="0.8-SNAPSHOT" $snapshotRepo="https://oss.sonatype.org/content/repositories/snapshots" $releaseRepo="http://repo1.maven.apache.org/maven2" $enterpriseReleaseRepo="https://repository.hazelcast.com/release/" @@ -62,7 +62,7 @@ if(Test-Path env:HAZELCAST_ENTERPRISE_KEY){ pip install -r test-requirements.txt --user Write-Host Starting Hazelcast ... 
-$remoteControllerApp = Start-Process -FilePath java -ArgumentList ( "-Dhazelcast.enterprise.license.key=$env:HAZELCAST_ENTERPRISE_KEY","-cp", "$classpath", "com.hazelcast.remotecontroller.Main" ) -RedirectStandardOutput "rc_stdout.log" -RedirectStandardError "rc_stderr.log" -PassThru +$remoteControllerApp = Start-Process -FilePath java -ArgumentList ( "-Dhazelcast.enterprise.license.key=$env:HAZELCAST_ENTERPRISE_KEY","-cp", "$classpath", "com.hazelcast.remotecontroller.Main", "--use-simple-server") -RedirectStandardOutput "rc_stdout.log" -RedirectStandardError "rc_stderr.log" -PassThru Write-Host Wait for Hazelcast to start ... Start-Sleep -s 15 diff --git a/run-tests.sh b/run-tests.sh index 6bff3798f3..3a5db285b7 100644 --- a/run-tests.sh +++ b/run-tests.sh @@ -1,18 +1,32 @@ #!/bin/bash +function cleanup { + if [ "x${rcPid}" != "x" ] + then + echo "Killing remote controller server with pid ${rcPid}" + kill -9 "${rcPid}" + fi + exit +} + +# Disables printing security sensitive data to the logs +set +x + +trap cleanup EXIT + if [ "$1" = "--local" ] ; then USER="--user" else USER="" fi -HZ_VERSION="3.12.5" +HZ_VERSION="4.0" HAZELCAST_TEST_VERSION=${HZ_VERSION} HAZELCAST_ENTERPRISE_TEST_VERSION=${HZ_VERSION} HAZELCAST_VERSION=${HZ_VERSION} HAZELCAST_ENTERPRISE_VERSION=${HZ_VERSION} -HAZELCAST_RC_VERSION="0.3-SNAPSHOT" +HAZELCAST_RC_VERSION="0.8-SNAPSHOT" SNAPSHOT_REPO="https://oss.sonatype.org/content/repositories/snapshots" RELEASE_REPO="http://repo1.maven.apache.org/maven2" ENTERPRISE_RELEASE_REPO="https://repository.hazelcast.com/release/" @@ -94,7 +108,8 @@ fi pip install -r test-requirements.txt ${USER} --no-cache-dir -java -Dhazelcast.enterprise.license.key=${HAZELCAST_ENTERPRISE_KEY} -cp ${CLASSPATH} com.hazelcast.remotecontroller.Main>rc_stdout.log 2>rc_stderr.log & +java -Dhazelcast.enterprise.license.key="${HAZELCAST_ENTERPRISE_KEY}" -cp ${CLASSPATH} com.hazelcast.remotecontroller.Main --use-simple-server>rc_stdout.log 2>rc_stderr.log & +rcPid=$! sleep 15 diff --git a/start-rc.sh b/start-rc.sh index 566f6e18a9..222858b6c9 100644 --- a/start-rc.sh +++ b/start-rc.sh @@ -1,18 +1,32 @@ #!/bin/bash +function cleanup { + if [ "x${rcPid}" != "x" ] + then + echo "Killing remote controller server with pid ${rcPid}" + kill -9 "${rcPid}" + fi + exit +} + +# Disables printing security sensitive data to the logs +set +x + +trap cleanup EXIT + if [ "$1" = "--local" ] ; then USER="--user" else USER="" fi -HZ_VERSION="3.12.5" +HZ_VERSION="4.0" HAZELCAST_TEST_VERSION=${HZ_VERSION} HAZELCAST_ENTERPRISE_TEST_VERSION=${HZ_VERSION} HAZELCAST_VERSION=${HZ_VERSION} HAZELCAST_ENTERPRISE_VERSION=${HZ_VERSION} -HAZELCAST_RC_VERSION="0.3-SNAPSHOT" +HAZELCAST_RC_VERSION="0.8-SNAPSHOT" SNAPSHOT_REPO="https://oss.sonatype.org/content/repositories/snapshots" RELEASE_REPO="http://repo1.maven.apache.org/maven2" ENTERPRISE_RELEASE_REPO="https://repository.hazelcast.com/release/" @@ -94,4 +108,5 @@ fi pip install -r test-requirements.txt ${USER} --no-cache-dir -java -Dhazelcast.enterprise.license.key=${HAZELCAST_ENTERPRISE_KEY} -cp ${CLASSPATH} com.hazelcast.remotecontroller.Main +java -Dhazelcast.enterprise.license.key="${HAZELCAST_ENTERPRISE_KEY}" -cp ${CLASSPATH} com.hazelcast.remotecontroller.Main --use-simple-server +rcPid=$! 
diff --git a/test-requirements.txt b/test-requirements.txt index ddb92a6803..b42a29b59b 100644 --- a/test-requirements.txt +++ b/test-requirements.txt @@ -1,4 +1,4 @@ -thrift==0.10.0 +thrift==0.13.0 nose==1.3.7 coverage==4.5.1 psutil>=5.4.8 \ No newline at end of file diff --git a/tests/address_test.py b/tests/address_test.py index 38bab0e8aa..32fdbdc07f 100644 --- a/tests/address_test.py +++ b/tests/address_test.py @@ -1,61 +1,55 @@ import unittest -from hazelcast.core import Address, Member -from hazelcast.util import get_possible_addresses, get_provider_addresses -from hazelcast.connection import DefaultAddressProvider -from hazelcast.config import ClientNetworkConfig -from hazelcast import six +from hazelcast.core import AddressHelper -class AddressTest(unittest.TestCase): +class AddressHelperTest(unittest.TestCase): + v4_address = "127.0.0.1" + v6_address = "2001:0db8:85a3:0000:0000:8a2e:0370:7334" + localhost = "localhost" + port = 8080 + default_port = 5701 + default_port_count = 3 - def setUp(self): - self.network_config = ClientNetworkConfig() - self.address_provider = DefaultAddressProvider(self.network_config) + def test_v4_address_with_port(self): + self._validate_with_port(self.v4_address + ":" + str(self.port), self.v4_address, self.port) - def test_no_given_address(self): - self.network_config.addresses = [] - provider_addresses = get_provider_addresses([self.address_provider]) - addresses = get_possible_addresses(provider_addresses) - six.assertCountEqual(self, addresses, - [Address("127.0.0.1", 5701), Address("127.0.0.1", 5702), Address("127.0.0.1", 5703)]) + def test_v4_address_without_port(self): + self._validate_without_port(self.v4_address, self.v4_address) - def test_single_given_address_with_no_port(self): - self.network_config.addresses = ["127.0.0.1"] - provider_addresses = get_provider_addresses([self.address_provider]) - addresses = get_possible_addresses(provider_addresses) + def test_v6_address_with_port(self): + self._validate_with_port("[" + self.v6_address + "]:" + str(self.port), self.v6_address, self.port) - six.assertCountEqual(self, addresses, - [Address("127.0.0.1", 5701), Address("127.0.0.1", 5702), Address("127.0.0.1", 5703)]) + def test_v6_address_without_port(self): + self._validate_without_port(self.v6_address, self.v6_address) - def test_single_address_and_port(self): - self.network_config.addresses = ["127.0.0.1:5701"] - provider_addresses = get_provider_addresses([self.address_provider]) - addresses = get_possible_addresses(provider_addresses) + def test_v6_address_without_port_with_brackets(self): + self._validate_without_port("[" + self.v6_address + "]", self.v6_address) - six.assertCountEqual(self, addresses, [Address("127.0.0.1", 5701)]) + def test_localhost_with_port(self): + self._validate_with_port(self.localhost + ":" + str(self.port), self.localhost, self.port) - def test_multiple_addresses(self): - self.network_config.addresses = ["127.0.0.1:5701", "10.0.0.1"] - provider_addresses = get_provider_addresses([self.address_provider]) - addresses = get_possible_addresses(provider_addresses) + def test_localhost_without_port(self): + self._validate_without_port(self.localhost, self.localhost) - six.assertCountEqual(self, addresses, - [Address("127.0.0.1", 5701), Address("10.0.0.1", 5701), Address("10.0.0.1", 5702), - Address("10.0.0.1", 5703)]) + def _validate_with_port(self, address, host, port): + primaries, secondaries = AddressHelper.get_possible_addresses(address) + self.assertEqual(1, len(primaries)) + self.assertEqual(0, 
len(secondaries)) - def test_multiple_addresses_non_unique(self): - self.network_config.addresses = ["127.0.0.1:5701", "127.0.0.1:5701"] - provider_addresses = get_provider_addresses([self.address_provider]) - addresses = get_possible_addresses(provider_addresses) + address = primaries[0] + self.assertEqual(host, address.host) + self.assertEqual(port, address.port) - six.assertCountEqual(self, addresses, [Address("127.0.0.1", 5701)]) + def _validate_without_port(self, address, host): + primaries, secondaries = AddressHelper.get_possible_addresses(address) + self.assertEqual(1, len(primaries)) + self.assertEqual(self.default_port_count - 1, len(secondaries)) - def test_addresses_and_members(self): - self.network_config.addresses = ["127.0.0.1:5701"] - member_list = [Member(Address("10.0.0.1", 5703), "uuid1"), Member(Address("10.0.0.2", 5701), "uuid2")] - provider_addresses = get_provider_addresses([self.address_provider]) - addresses = get_possible_addresses(provider_addresses, member_list) - - six.assertCountEqual(self, addresses, - [Address("127.0.0.1", 5701), Address("10.0.0.1", 5703), Address("10.0.0.2", 5701)]) + for i in range(self.default_port_count): + if i == 0: + address = primaries[i] + else: + address = secondaries[i - 1] + self.assertEqual(host, address.host) + self.assertEqual(self.default_port + i, address.port) diff --git a/tests/base.py b/tests/base.py index c7fdc7592a..6fb7bdf9e8 100644 --- a/tests/base.py +++ b/tests/base.py @@ -110,6 +110,7 @@ def setUpClass(cls): @classmethod def tearDownClass(cls): cls.client.shutdown() + cls.rc.terminateCluster(cls.cluster.id) cls.rc.exit() @classmethod diff --git a/tests/client_message_test.py b/tests/client_message_test.py index cf3c4a34f7..bbb795ccd1 100644 --- a/tests/client_message_test.py +++ b/tests/client_message_test.py @@ -1,216 +1,314 @@ # coding: utf-8 import unittest - +import uuid + +from hazelcast import six +from hazelcast.connection import _Reader +from hazelcast.errors import _ErrorsCodec +from hazelcast.protocol import ErrorHolder +from hazelcast.protocol.builtin import CodecUtil, FixSizedTypesCodec, ByteArrayCodec, DataCodec, EntryListCodec, \ + StringCodec, EntryListUUIDListIntegerCodec, EntryListUUIDLongCodec, ListMultiFrameCodec, ListIntegerCodec, \ + ListLongCodec, ListUUIDCodec, MapCodec from hazelcast.protocol.client_message import * from hazelcast.protocol.codec import client_authentication_codec - -READ_HEADER = "00" * 20 + "1600" +from hazelcast.protocol.codec.custom.error_holder_codec import ErrorHolderCodec +from hazelcast.serialization.data import Data -class ClientMessageTest(unittest.TestCase): +class OutboundMessageTest(unittest.TestCase): def test_header_fields(self): - message = ClientMessage(payload_size=30) - - correlation_id = 6474838 - message_type = 987 - flags = 5 - partition_id = 27 - frame_length = 100 - data_offset = 17 - - message.set_correlation_id(correlation_id) - message.set_message_type(message_type) - message.set_flags(flags) - message.set_partition_id(partition_id) - message.set_frame_length(frame_length) - message.set_data_offset(data_offset) - - self.assertEqual(correlation_id, message.get_correlation_id()) - self.assertEqual(message_type, message.get_message_type()) - self.assertEqual(flags, message.get_flags()) - self.assertEqual(partition_id, message.get_partition_id()) - self.assertEqual(frame_length, message.get_frame_length()) - self.assertEqual(data_offset, message.get_data_offset()) - - def test_append_byte(self): - message = ClientMessage(payload_size=30) - - 
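The reworked address tests above pin down the behaviour of `AddressHelper.get_possible_addresses`, which replaces the removed `get_possible_addresses`/`_parse_address` helpers: the explicitly given address is the primary, and port expansion only produces secondary fallbacks. Expected behaviour, per those tests:

```python
from hazelcast.core import AddressHelper

# An address without an explicit port expands to the default port plus two fallbacks.
primaries, secondaries = AddressHelper.get_possible_addresses("127.0.0.1")
print([(a.host, a.port) for a in primaries])    # [("127.0.0.1", 5701)]
print([(a.host, a.port) for a in secondaries])  # [("127.0.0.1", 5702), ("127.0.0.1", 5703)]

# An explicit port yields a single primary and no fallbacks.
primaries, secondaries = AddressHelper.get_possible_addresses("127.0.0.1:8080")
print([(a.host, a.port) for a in primaries])    # [("127.0.0.1", 8080)]
print(secondaries)                              # []
```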
message.append_byte(0x21) - message.append_byte(0xF2) - message.append_byte(0x34) - - data_offset = message.get_data_offset() - self.assertEqual(b"21f234", binascii.hexlify(message.buffer[data_offset:data_offset + 3])) - - def test_append_bool(self): - message = ClientMessage(payload_size=30) - - message.append_bool(True) - - data_offset = message.get_data_offset() - self.assertEqual(b"01", binascii.hexlify(message.buffer[data_offset:data_offset + 1])) - - def test_append_int(self): - message = ClientMessage(payload_size=30) - - message.append_int(0x1feeddcc) - - data_offset = message.get_data_offset() - self.assertEqual(b"ccddee1f", binascii.hexlify(message.buffer[data_offset:data_offset + 4])) - - def test_append_long(self): - message = ClientMessage(payload_size=30) - - message.append_long(0x1feeddccbbaa8765) - - data_offset = message.get_data_offset() - self.assertEqual(b"6587aabbccddee1f", binascii.hexlify(message.buffer[data_offset:data_offset + 8])) - - def test_append_str(self): - message = ClientMessage(payload_size=30) - - frame_length = 1 - flags = 2 - message_type = 3 - correlation_id = 4 - partition_id = 5 - - message.set_correlation_id(correlation_id) - message.set_message_type(message_type) - message.set_flags(flags) - message.set_partition_id(partition_id) - message.set_frame_length(frame_length) - - message.append_str("abc") - - # buffer content should be - # 01000000 00 02 0300 0400000000000000 05000000 1600 03000000 616263 0000000000000000000000000000000000000000000000 - self.assertEqual(b"01000000", binascii.hexlify(message.buffer[0:4])) - self.assertEqual(b"00", binascii.hexlify(message.buffer[4:5])) - self.assertEqual(b"02", binascii.hexlify(message.buffer[5:6])) - self.assertEqual(b"0300", binascii.hexlify(message.buffer[6:8])) - self.assertEqual(b"0400000000000000", binascii.hexlify(message.buffer[8:16])) - self.assertEqual(b"05000000", binascii.hexlify(message.buffer[16:20])) - self.assertEqual(b"1600", binascii.hexlify(message.buffer[20:22])) - self.assertEqual(b"03000000", binascii.hexlify(message.buffer[22:26])) - self.assertEqual(b"616263", binascii.hexlify(message.buffer[26:29])) - - def test_read_byte(self): - hexstr = READ_HEADER + "78" - buf = binascii.unhexlify(hexstr) - - message = ClientMessage(buff=buf) - self.assertEqual(0x78, message.read_byte()) - - def test_read_bool(self): - hexstr = READ_HEADER + "01" - buf = binascii.unhexlify(hexstr) - - message = ClientMessage(buff=buf) - self.assertEqual(True, message.read_bool()) - - def test_read_int(self): - hexstr = READ_HEADER + "12345678" - buf = binascii.unhexlify(hexstr) - - message = ClientMessage(buff=buf) - self.assertEqual(0x78563412, message.read_int()) - - def test_read_long(self): - hexstr = READ_HEADER + "6587aabbccddee1f" - buf = binascii.unhexlify(hexstr) - - message = ClientMessage(buff=buf) - self.assertEqual(0x1feeddccbbaa8765, message.read_long()) - - def test_read_str(self): - hexstr = READ_HEADER + "03000000616263" - buf = binascii.unhexlify(hexstr) - - message = ClientMessage(buff=buf) - self.assertEqual("abc", message.read_str()) - - def test_no_flag(self): - message = ClientMessage(payload_size=30) - message.set_flags(0) - - self.assertFalse(message.is_flag_set(BEGIN_FLAG)) - self.assertFalse(message.is_flag_set(END_FLAG)) - self.assertFalse(message.is_flag_set(LISTENER_FLAG)) - - def test_set_flag_begin(self): - message = ClientMessage(payload_size=30) - message.set_flags(0) - - message.add_flag(BEGIN_FLAG) - - self.assertTrue(message.is_flag_set(BEGIN_FLAG)) - 
self.assertFalse(message.is_flag_set(END_FLAG)) - self.assertFalse(message.is_flag_set(LISTENER_FLAG)) - - def test_set_flag_end(self): - message = ClientMessage(payload_size=30) - message.set_flags(0) - - message.add_flag(END_FLAG) - - self.assertFalse(message.is_flag_set(BEGIN_FLAG)) - self.assertTrue(message.is_flag_set(END_FLAG)) - self.assertFalse(message.is_flag_set(LISTENER_FLAG)) - - def test_set_flag_listener(self): - message = ClientMessage(payload_size=30) - message.set_flags(0) - - message.add_flag(LISTENER_FLAG) - - self.assertFalse(message.is_flag_set(BEGIN_FLAG)) - self.assertFalse(message.is_flag_set(END_FLAG)) - self.assertTrue(message.is_flag_set(LISTENER_FLAG)) - - def test_clone(self): - message = ClientMessage(payload_size=0) - message.set_flags(0) - - message.add_flag(LISTENER_FLAG) - clone = message.clone() - clone.add_flag(BEGIN_FLAG) - - self.assertTrue(message.is_flag_set(LISTENER_FLAG)) - self.assertTrue(clone.is_flag_set(LISTENER_FLAG)) - self.assertFalse(message.is_flag_set(BEGIN_FLAG)) - self.assertTrue(clone.is_flag_set(BEGIN_FLAG)) + # 6 bytes for the length + flags + 4 bytes message type + 8 bytes correlation id + 4 bytes partition id + buf = bytearray(22) + message = OutboundMessage(buf, False) + self.assertFalse(message.retryable) + message.set_correlation_id(42) + message.set_partition_id(23) + + correlation_id = LE_LONG.unpack_from(message.buf, 6 + 4)[0] + partition_id = LE_INT.unpack_from(message.buf, 6 + 4 + 8)[0] + self.assertEqual(42, correlation_id) + self.assertEqual(42, message.get_correlation_id()) + self.assertEqual(23, partition_id) + + def test_copy(self): + buf = bytearray(range(20)) + message = OutboundMessage(buf, True) + + copy = message.copy() + self.assertTrue(copy.retryable) + buf[0] = 99 + self.assertEqual(99, message.buf[0]) + self.assertEqual(0, copy.buf[0]) # should be a deep copy + + +BEGIN_FRAME = Frame(bytearray(0), 1 << 12) +END_FRAME = Frame(bytearray(), 1 << 11) + + +class InboundMessageTest(unittest.TestCase): + def test_fast_forward(self): + message = InboundMessage(BEGIN_FRAME.copy()) + + # New custom-typed parameter with its own begin and end frames + message.add_frame(BEGIN_FRAME.copy()) + message.add_frame(Frame(bytearray(0), 0)) + message.add_frame(END_FRAME.copy()) + + message.add_frame(END_FRAME.copy()) + + # begin frame + message.next_frame() + CodecUtil.fast_forward_to_end_frame(message) + self.assertFalse(message.has_next_frame()) + + +class EncodeDecodeTest(unittest.TestCase): + @classmethod + def setUpClass(cls): + cls.reader = _Reader(None) + + def setUp(self): + self.buf = create_initial_buffer(50, 0, True) + self.message = OutboundMessage(self.buf, False) + + def write_and_decode(self): + self.reader.read(self.message.buf) + return self.reader._read_message() + + def mark_initial_frame_as_non_final(self): + flags = 1 << 11 | 1 << 12 + LE_UINT16.pack_into(self.buf, INT_SIZE_IN_BYTES, flags) + + def test_byte(self): + FixSizedTypesCodec.encode_byte(self.buf, 16, 3) + message = self.write_and_decode() + buf = message.next_frame().buf + self.assertEqual(3, FixSizedTypesCodec.decode_byte(buf, 10)) + + def test_boolean(self): + FixSizedTypesCodec.encode_boolean(self.buf, 16, True) + message = self.write_and_decode() + buf = message.next_frame().buf + self.assertEqual(True, FixSizedTypesCodec.decode_boolean(buf, 10)) + + def test_int(self): + FixSizedTypesCodec.encode_int(self.buf, 16, 1234) + message = self.write_and_decode() + buf = message.next_frame().buf + self.assertEqual(1234, 
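The new `OutboundMessage` test documents the fixed initial-frame layout of the 4.0 protocol: 6 bytes of frame length plus flags, then a 4-byte message type, an 8-byte correlation id and a 4-byte partition id, all little-endian. A standalone sketch of those offsets, using plain `struct` instead of the client's internal constants:

```python
import struct

LE_INT = struct.Struct("<i")
LE_LONG = struct.Struct("<q")

SIZE_OF_FRAME_LENGTH_AND_FLAGS = 6
MESSAGE_TYPE_OFFSET = SIZE_OF_FRAME_LENGTH_AND_FLAGS   # 4-byte message type
CORRELATION_ID_OFFSET = MESSAGE_TYPE_OFFSET + 4        # 8-byte correlation id
PARTITION_ID_OFFSET = CORRELATION_ID_OFFSET + 8        # 4-byte partition id

buf = bytearray(22)
LE_LONG.pack_into(buf, CORRELATION_ID_OFFSET, 42)
LE_INT.pack_into(buf, PARTITION_ID_OFFSET, 23)

assert LE_LONG.unpack_from(buf, CORRELATION_ID_OFFSET)[0] == 42
assert LE_INT.unpack_from(buf, PARTITION_ID_OFFSET)[0] == 23
```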
FixSizedTypesCodec.decode_int(buf, 10)) + + def test_uuid(self): + random_uuid = uuid.uuid4() + FixSizedTypesCodec.encode_uuid(self.buf, 16, random_uuid) + message = self.write_and_decode() + buf = message.next_frame().buf + self.assertEqual(random_uuid, FixSizedTypesCodec.decode_uuid(buf, 10)) + + def test_none_uuid(self): + FixSizedTypesCodec.encode_uuid(self.buf, 16, None) + message = self.write_and_decode() + buf = message.next_frame().buf + self.assertIsNone(FixSizedTypesCodec.decode_uuid(buf, 10)) + + def test_long(self): + FixSizedTypesCodec.encode_long(self.buf, 16, 1234567890123) + message = self.write_and_decode() + buf = message.next_frame().buf + self.assertEqual(1234567890123, FixSizedTypesCodec.decode_long(buf, 10)) + + def test_byte_array(self): + self.mark_initial_frame_as_non_final() + b = six.u("abc©☺𩸽").encode("utf-8") + ByteArrayCodec.encode(self.buf, b, True) + message = self.write_and_decode() + message.next_frame() # initial frame + self.assertEqual(b, ByteArrayCodec.decode(message)) + + def test_data(self): + self.mark_initial_frame_as_non_final() + data = Data("123456789".encode("utf-8")) + DataCodec.encode(self.buf, data) + DataCodec.encode_nullable(self.buf, data) + DataCodec.encode_nullable(self.buf, None, True) + message = self.write_and_decode() + message.next_frame() # initial frame + self.assertEqual(data, DataCodec.decode(message)) + self.assertEqual(data, DataCodec.decode_nullable(message)) + self.assertIsNone(DataCodec.decode_nullable(message)) + + def test_entry_list(self): + self.mark_initial_frame_as_non_final() + entries = [("a", "1"), ("b", "2"), ("c", "3")] + EntryListCodec.encode(self.buf, entries, StringCodec.encode, StringCodec.encode) + EntryListCodec.encode_nullable(self.buf, entries, StringCodec.encode, StringCodec.encode) + EntryListCodec.encode_nullable(self.buf, None, StringCodec.encode, StringCodec.encode, True) + message = self.write_and_decode() + message.next_frame() # initial frame + self.assertEqual(entries, EntryListCodec.decode(message, StringCodec.decode, StringCodec.decode)) + self.assertEqual(entries, EntryListCodec.decode_nullable(message, StringCodec.decode, StringCodec.decode)) + self.assertIsNone(EntryListCodec.decode_nullable(message, StringCodec.decode, StringCodec.decode)) + + def test_uuid_integer_list_entry_list(self): + self.mark_initial_frame_as_non_final() + entries = [(uuid.uuid4(), [1, 2]), (uuid.uuid4(), [3, 4]), (uuid.uuid4(), [5, 6])] + EntryListUUIDListIntegerCodec.encode(self.buf, entries, True) + message = self.write_and_decode() + message.next_frame() # initial frame + self.assertEqual(entries, EntryListUUIDListIntegerCodec.decode(message)) + + def test_uuid_long_entry_list(self): + self.mark_initial_frame_as_non_final() + entries = [(uuid.uuid4(), 0xCAFE), (uuid.uuid4(), 0xBABE), (uuid.uuid4(), 56789123123123)] + EntryListUUIDLongCodec.encode(self.buf, entries, True) + message = self.write_and_decode() + message.next_frame() # initial frame + self.assertEqual(entries, EntryListUUIDLongCodec.decode(message)) + + def test_errors(self): + self.mark_initial_frame_as_non_final() + holder = ErrorHolder(-12345, "class", "message", []) + ListMultiFrameCodec.encode(self.buf, [holder], ErrorHolderCodec.encode, True) + message = self.write_and_decode() + self.assertEqual([holder], _ErrorsCodec.decode(message)) + + def test_integer_list(self): + self.mark_initial_frame_as_non_final() + l = [0xCAFE, 0xBABE, -9999999] + ListIntegerCodec.encode(self.buf, l, True) + message = self.write_and_decode() + message.next_frame() # 
initial frame + self.assertEqual(l, ListIntegerCodec.decode(message)) + + def test_long_list(self): + self.mark_initial_frame_as_non_final() + l = [1, -2, 56789123123123] + ListLongCodec.encode(self.buf, l, True) + message = self.write_and_decode() + message.next_frame() # initial frame + self.assertEqual(l, ListLongCodec.decode(message)) + + def test_list(self): + self.mark_initial_frame_as_non_final() + l = list(map(six.u, ["a", "b", "c", "😃"])) + ListMultiFrameCodec.encode(self.buf, l, StringCodec.encode) + ListMultiFrameCodec.encode_nullable(self.buf, l, StringCodec.encode) + ListMultiFrameCodec.encode_nullable(self.buf, None, StringCodec.encode) + ListMultiFrameCodec.encode_contains_nullable(self.buf, l, StringCodec.encode) + ListMultiFrameCodec.encode_contains_nullable(self.buf, [None], StringCodec.encode, True) + message = self.write_and_decode() + message.next_frame() # initial frame + self.assertEqual(l, ListMultiFrameCodec.decode(message, StringCodec.decode)) + self.assertEqual(l, ListMultiFrameCodec.decode_nullable(message, StringCodec.decode)) + self.assertIsNone(ListMultiFrameCodec.decode_nullable(message, StringCodec.decode)) + self.assertEqual(l, ListMultiFrameCodec.decode_contains_nullable(message, StringCodec.decode)) + self.assertEqual([None], ListMultiFrameCodec.decode_contains_nullable(message, StringCodec.decode)) + + def test_uuid_list(self): + self.mark_initial_frame_as_non_final() + l = [uuid.uuid4(), uuid.uuid4(), uuid.uuid4()] + ListUUIDCodec.encode(self.buf, l, True) + message = self.write_and_decode() + message.next_frame() # initial frame + self.assertEqual(l, ListUUIDCodec.decode(message)) + + def test_map(self): + self.mark_initial_frame_as_non_final() + m = dict() + m["a"] = "b" + m["c"] = "d" + m["e"] = "f" + MapCodec.encode(self.buf, m, StringCodec.encode, StringCodec.encode) + MapCodec.encode_nullable(self.buf, m, StringCodec.encode, StringCodec.encode) + MapCodec.encode_nullable(self.buf, None, StringCodec.encode, StringCodec.encode, True) + message = self.write_and_decode() + message.next_frame() # initial frame + self.assertEqual(m, MapCodec.decode(message, StringCodec.decode, StringCodec.decode)) + self.assertEqual(m, MapCodec.decode_nullable(message, StringCodec.decode, StringCodec.decode)) + self.assertIsNone(MapCodec.decode_nullable(message, StringCodec.decode, StringCodec.decode)) + + def test_string(self): + self.mark_initial_frame_as_non_final() + string = six.u("abc©☺𩸽🐶😁") + StringCodec.encode(self.buf, string, True) + message = self.write_and_decode() + message.next_frame() # initial frame + self.assertEqual(string, StringCodec.decode(message)) + + def test_nullable(self): + self.mark_initial_frame_as_non_final() + CodecUtil.encode_nullable(self.buf, "a", StringCodec.encode) + CodecUtil.encode_nullable(self.buf, None, StringCodec.encode, True) + message = self.write_and_decode() + message.next_frame() # initial frame + self.assertEqual("a", CodecUtil.decode_nullable(message, StringCodec.decode)) + self.assertIsNone(CodecUtil.decode_nullable(message, StringCodec.decode)) + + +class _MutableInteger(object): + def __init__(self, initial_value): + self.value = initial_value + + def increment(self): + self.value += 1 class ClientMessageBuilderTest(unittest.TestCase): - def test_message_accumulate(self): - message = client_authentication_codec.encode_request("user", "pass", "uuid", "owner-uuid", True, "PYH", 1, "3.10") - message.set_correlation_id(1) - - def message_callback(merged_message): - 
self.assertTrue(merged_message.is_flag_set(BEGIN_END_FLAG)) - self.assertEqual(merged_message.get_frame_length(), message.get_frame_length()) - self.assertEqual(merged_message.get_correlation_id(), message.get_correlation_id()) - - builder = ClientMessageBuilder(message_callback=message_callback) - - header = message.buffer[0:message.get_data_offset()] - payload = message.buffer[message.get_data_offset():] - - indx_1 = len(payload) // 3 - indx_2 = 2 * len(payload) // 3 - - p1 = payload[0:indx_1] - p2 = payload[indx_1: indx_2] - p3 = payload[indx_2:] - - cm1 = ClientMessage(buff=header + p1) - cm2 = ClientMessage(buff=header + p2) - cm3 = ClientMessage(buff=header + p3) - cm1.add_flag(BEGIN_FLAG) - cm3.add_flag(END_FLAG) - - builder.on_message(cm1) - builder.on_message(cm2) - builder.on_message(cm3) - + @classmethod + def setUpClass(cls): + cls.reader = _Reader(None) + + def setUp(self): + self.counter = _MutableInteger(0) + self.builder = ClientMessageBuilder(lambda m: self.counter.increment()) + + def test_unfragmented_message(self): + request = client_authentication_codec.encode_request("dev", "user", "pass", uuid.uuid4(), + "PYH", 1, "4.0", "python", []) + self.reader.read(request.buf) + message = self.reader._read_message() + self.builder.on_message(message) + self.assertEqual(1, self.counter.value) + + def test_fragmented_message(self): + size = SIZE_OF_FRAME_LENGTH_AND_FLAGS + LONG_SIZE_IN_BYTES + fragmentation_id = 1234567890123 + begin_buf = bytearray(size) + LE_INT.pack_into(begin_buf, 0, size) + LE_UINT16.pack_into(begin_buf, INT_SIZE_IN_BYTES, 1 << 15) + LE_LONG.pack_into(begin_buf, SIZE_OF_FRAME_LENGTH_AND_FLAGS, fragmentation_id) + StringCodec.encode(begin_buf, "a", True) + + middle_buf = bytearray(size) + LE_INT.pack_into(middle_buf, 0, size) + LE_LONG.pack_into(middle_buf, SIZE_OF_FRAME_LENGTH_AND_FLAGS, fragmentation_id) + StringCodec.encode(middle_buf, "b", True) + + end_buf = bytearray(size) + LE_INT.pack_into(end_buf, 0, size) + LE_UINT16.pack_into(end_buf, INT_SIZE_IN_BYTES, 1 << 14) + LE_LONG.pack_into(end_buf, SIZE_OF_FRAME_LENGTH_AND_FLAGS, fragmentation_id) + StringCodec.encode(end_buf, "c", True) + + self.reader.read(begin_buf) + begin_message = self.reader._read_message() + self.builder.on_message(begin_message) + self.assertEqual(0, self.counter.value) + self.assertEqual(1, len(self.builder._fragmented_messages)) + fragmented_message = self.builder._fragmented_messages[fragmentation_id] + self.assertIsNotNone(fragmented_message) + self.assertEqual("a", fragmented_message.end_frame.buf.decode("utf-8")) + + self.reader.read(middle_buf) + middle_message = self.reader._read_message() + self.builder.on_message(middle_message) + self.assertEqual(0, self.counter.value) + self.assertEqual(1, len(self.builder._fragmented_messages)) + fragmented_message = self.builder._fragmented_messages[fragmentation_id] + self.assertIsNotNone(fragmented_message) + self.assertEqual("b", fragmented_message.end_frame.buf.decode("utf-8")) + + self.reader.read(end_buf) + end_message = self.reader._read_message() + self.builder.on_message(end_message) + self.assertEqual(1, self.counter.value) + self.assertEqual(0, len(self.builder._fragmented_messages)) diff --git a/tests/client_test.py b/tests/client_test.py index 05d664be17..1361f48931 100644 --- a/tests/client_test.py +++ b/tests/client_test.py @@ -3,42 +3,54 @@ from tests.base import HazelcastTestCase from hazelcast.config import ClientConfig, ClientProperties from hazelcast.client import HazelcastClient -from hazelcast.lifecycle import 
LIFECYCLE_STATE_DISCONNECTED +from hazelcast.lifecycle import LifecycleState +from tests.hzrc.ttypes import Lang +from tests.util import configure_logging class ClientTest(HazelcastTestCase): + @classmethod + def setUpClass(cls): + configure_logging() + def test_client_only_listens(self): rc = self.create_rc() client_heartbeat_seconds = 8 - cluster_config = """ + cluster_config = """ {} """.format(client_heartbeat_seconds) cluster = self.create_cluster(rc, cluster_config) - member = cluster.start_member() + cluster.start_member() - client_config = ClientConfig() - client_config.set_property(ClientProperties.HEARTBEAT_INTERVAL.name, 1000) + config = ClientConfig() + config.cluster_name = cluster.id + config.set_property(ClientProperties.HEARTBEAT_INTERVAL.name, 1000) - client1 = HazelcastClient(client_config) + client1 = HazelcastClient(config) def lifecycle_event_collector(): events = [] def event_collector(e): - if e == LIFECYCLE_STATE_DISCONNECTED: + print(e) + if e == LifecycleState.DISCONNECTED: events.append(e) event_collector.events = events return event_collector collector = lifecycle_event_collector() - client1.lifecycle.add_listener(collector) - client2 = HazelcastClient() + client1.lifecycle_service.add_listener(collector) + + config2 = ClientConfig() + config2.cluster_name = cluster.id + client2 = HazelcastClient(config2) key = "topic-name" topic = client1.get_topic(key) @@ -48,14 +60,51 @@ def message_listener(e): topic.add_listener(message_listener) - client2topic = client2.get_topic(key) + topic2 = client2.get_topic(key) begin = time.time() while (time.time() - begin) < 2 * client_heartbeat_seconds: - client2topic.publish("message") + topic2.publish("message") time.sleep(0.5) self.assertEqual(0, len(collector.events)) client1.shutdown() client2.shutdown() rc.exit() + + +class ClientLabelsTest(HazelcastTestCase): + @classmethod + def setUpClass(cls): + configure_logging() + cls.rc = cls.create_rc() + cls.cluster = cls.create_cluster(cls.rc) + cls.cluster.start_member() + + @classmethod + def tearDownClass(cls): + cls.rc.terminateCluster(cls.cluster.id) + cls.rc.exit() + + def tearDown(self): + self.shutdown_all_clients() + + def test_default_config(self): + config = ClientConfig() + config.cluster_name = self.cluster.id + + self.create_client(config) + self.assertIsNone(self.get_labels_from_member()) + + def test_provided_labels_are_received(self): + config = ClientConfig() + config.cluster_name = self.cluster.id + config.labels.add("test-label") + self.create_client(config) + self.assertEqual(b"test-label", self.get_labels_from_member()) + + def get_labels_from_member(self): + script = "var client = instance_0.getClientService().getConnectedClients().iterator().next();\n" \ + "result = client.getLabels().iterator().next();\n" + return self.rc.executeOnController(self.cluster.id, script, Lang.JAVASCRIPT).result + diff --git a/tests/cluster_test.py b/tests/cluster_test.py index 30e95c1c91..8ac88bf46c 100644 --- a/tests/cluster_test.py +++ b/tests/cluster_test.py @@ -1,16 +1,30 @@ -import hazelcast +import unittest + +from hazelcast import ClientConfig, HazelcastClient, six +from hazelcast.cluster import RandomLB, RoundRobinLB from tests.base import HazelcastTestCase +from tests.util import configure_logging class ClusterTest(HazelcastTestCase): rc = None + @classmethod + def setUpClass(cls): + configure_logging() + def setUp(self): self.rc = self.create_rc() self.cluster = self.create_cluster(self.rc) + def create_config(self): + config = ClientConfig() + config.cluster_name = 
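Two API changes show up throughout the updated tests: the group name is replaced by `config.cluster_name`, and lifecycle listeners are registered on `client.lifecycle_service` with `LifecycleState` constants. A configuration sketch; the cluster name and label are illustrative:

```python
from hazelcast import HazelcastClient
from hazelcast.config import ClientConfig
from hazelcast.lifecycle import LifecycleState

config = ClientConfig()
config.cluster_name = "dev"     # replaces the 3.x group name
config.labels.add("analytics")  # labels are visible to the members

def on_state_change(state):
    if state == LifecycleState.DISCONNECTED:
        print("lost the cluster connection")

client = HazelcastClient(config)
client.lifecycle_service.add_listener(on_state_change)
```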
self.cluster.id + return config + def tearDown(self): self.shutdown_all_clients() + self.rc.terminateCluster(self.cluster.id) self.rc.exit() def test_initial_membership_listener(self): @@ -19,7 +33,7 @@ def test_initial_membership_listener(self): def member_added(m): events.append(m) - config = hazelcast.ClientConfig() + config = self.create_config() config.membership_listeners.append((member_added, None)) member = self.cluster.start_member() @@ -27,7 +41,7 @@ def member_added(m): self.create_client(config) self.assertEqual(len(events), 1) - self.assertEqual(events[0].uuid, member.uuid) + self.assertEqual(str(events[0].uuid), member.uuid) self.assertEqual(events[0].address, member.address) def test_for_existing_members(self): @@ -37,12 +51,13 @@ def member_added(member): events.append(member) member = self.cluster.start_member() - client = self.create_client() + config = self.create_config() + client = self.create_client(config) - client.cluster.add_listener(member_added, fire_for_existing=True) + client.cluster_service.add_listener(member_added, fire_for_existing=True) self.assertEqual(len(events), 1) - self.assertEqual(events[0].uuid, member.uuid) + self.assertEqual(str(events[0].uuid), member.uuid) self.assertEqual(events[0].address, member.address) def test_member_added(self): @@ -52,15 +67,16 @@ def member_added(member): events.append(member) self.cluster.start_member() - client = self.create_client() + config = self.create_config() + client = self.create_client(config) - client.cluster.add_listener(member_added, fire_for_existing=True) + client.cluster_service.add_listener(member_added, fire_for_existing=True) new_member = self.cluster.start_member() def assertion(): self.assertEqual(len(events), 2) - self.assertEqual(events[1].uuid, new_member.uuid) + self.assertEqual(str(events[1].uuid), new_member.uuid) self.assertEqual(events[1].address, new_member.address) self.assertTrueEventually(assertion) @@ -74,39 +90,127 @@ def member_removed(member): self.cluster.start_member() member_to_remove = self.cluster.start_member() - client = self.create_client() + config = self.create_config() + client = self.create_client(config) - client.cluster.add_listener(member_removed=member_removed) + client.cluster_service.add_listener(member_removed=member_removed) member_to_remove.shutdown() def assertion(): self.assertEqual(len(events), 1) - self.assertEqual(events[0].uuid, member_to_remove.uuid) + self.assertEqual(str(events[0].uuid), member_to_remove.uuid) self.assertEqual(events[0].address, member_to_remove.address) self.assertTrueEventually(assertion) def test_exception_in_membership_listener(self): - def listener(e): + def listener(_): raise RuntimeError("error") - config = hazelcast.ClientConfig() + config = self.create_config() config.membership_listeners.append((listener, listener)) self.cluster.start_member() self.create_client(config) - def test_cluster_service_get_members_by_property(self): - self.cluster.start_member() - client = self.create_client() - - self.assertEqual(1, len(client.cluster.members)) - - def test_cluster_service_cant_set_members(self): + def test_cluster_service_get_members(self): self.cluster.start_member() - client = self.create_client() - - with self.assertRaises(AttributeError): - client.cluster.members = [] + config = self.create_config() + client = self.create_client(config) + self.assertEqual(1, len(client.cluster_service.get_members())) + def test_cluster_service_get_members_with_selector(self): + member = self.cluster.start_member() + config = 
self.create_config() + client = self.create_client(config) + + self.assertEqual(0, len(client.cluster_service.get_members(lambda m: member.address != m.address))) + + +class _MockClusterService(object): + def __init__(self, members): + self._members = members + + def add_listener(self, listener, *_): + for m in self._members: + listener(m) + + def get_members(self): + return self._members + + +class LoadBalancersTest(unittest.TestCase): + def test_random_lb_with_no_members(self): + cluster = _MockClusterService([]) + lb = RandomLB() + lb.init(cluster, None) + self.assertIsNone(lb.next()) + + def test_round_robin_lb_with_no_members(self): + cluster = _MockClusterService([]) + lb = RoundRobinLB() + lb.init(cluster, None) + self.assertIsNone(lb.next()) + + def test_random_lb_with_members(self): + cluster = _MockClusterService([0, 1, 2]) + lb = RandomLB() + lb.init(cluster, None) + for _ in range(10): + self.assertTrue(0 <= lb.next() <= 2) + + def test_round_robin_lb_with_members(self): + cluster = _MockClusterService([0, 1, 2]) + lb = RoundRobinLB() + lb.init(cluster, None) + for i in range(10): + self.assertEqual(i % 3, lb.next()) + + +class LoadBalancersWithRealClusterTest(HazelcastTestCase): + @classmethod + def setUpClass(cls): + configure_logging() + cls.rc = cls.create_rc() + cls.cluster = cls.create_cluster(cls.rc, None) + cls.member1 = cls.cluster.start_member() + cls.member2 = cls.cluster.start_member() + cls.addresses = [cls.member1.address, cls.member2.address] + + @classmethod + def tearDownClass(cls): + cls.rc.terminateCluster(cls.cluster.id) + cls.rc.exit() + + def test_random_load_balancer(self): + config = ClientConfig() + config.cluster_name = self.cluster.id + config.load_balancer = RandomLB() + client = HazelcastClient(config) + self.assertTrue(client.lifecycle_service.is_running()) + + lb = client._load_balancer + self.assertTrue(isinstance(lb, RandomLB)) + + six.assertCountEqual(self, self.addresses, list(map(lambda m: m.address, lb._members))) + for _ in range(10): + self.assertTrue(lb.next().address in self.addresses) + + client.shutdown() + + def test_round_robin_load_balancer(self): + config = ClientConfig() + config.cluster_name = self.cluster.id + config.load_balancer = RoundRobinLB() + client = HazelcastClient(config) + self.assertTrue(client.lifecycle_service.is_running()) + + lb = client._load_balancer + self.assertTrue(isinstance(lb, RoundRobinLB)) + + six.assertCountEqual(self, self.addresses, list(map(lambda m: m.address, lb._members))) + for i in range(10): + self.assertEqual(self.addresses[i % len(self.addresses)], lb.next().address) + + client.shutdown() diff --git a/tests/connection_strategy_test.py b/tests/connection_strategy_test.py new file mode 100644 index 0000000000..f194880ab2 --- /dev/null +++ b/tests/connection_strategy_test.py @@ -0,0 +1,148 @@ +from hazelcast import ClientConfig, HazelcastClient, six +from hazelcast.config import RECONNECT_MODE +from hazelcast.errors import ClientOfflineError, HazelcastClientNotActiveError +from hazelcast.lifecycle import LifecycleState +from tests.base import HazelcastTestCase +from tests.util import random_string, configure_logging + + +class ConnectionStrategyTest(HazelcastTestCase): + @classmethod + def setUpClass(cls): + configure_logging() + cls.rc = cls.create_rc() + + @classmethod + def tearDownClass(cls): + cls.rc.exit() + + def setUp(self): + self.client = None + self.cluster = None + + def tearDown(self): + if self.client: + self.client.shutdown() + self.client = None + + if self.cluster: + 
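Load balancers are now set directly on the client config and initialised from the cluster service, as the tests above exercise with `RandomLB` and `RoundRobinLB`. A minimal sketch; the cluster name is illustrative:

```python
from hazelcast import HazelcastClient
from hazelcast.cluster import RoundRobinLB
from hazelcast.config import ClientConfig

config = ClientConfig()
config.cluster_name = "dev"            # illustrative cluster name
config.load_balancer = RoundRobinLB()  # members are then picked in round-robin order

client = HazelcastClient(config)
```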
self.rc.terminateCluster(self.cluster.id) + self.cluster = None + + def test_async_start_with_no_cluster(self): + config = ClientConfig() + config.connection_strategy.async_start = True + self.client = HazelcastClient(config) + + with self.assertRaises(ClientOfflineError): + self.client.get_map(random_string()) + + def test_async_start_with_no_cluster_throws_after_shutdown(self): + config = ClientConfig() + config.connection_strategy.async_start = True + self.client = HazelcastClient(config) + + self.client.shutdown() + with self.assertRaises(HazelcastClientNotActiveError): + self.client.get_map(random_string()) + + def test_async_start(self): + self.cluster = self.rc.createCluster(None, None) + self.rc.startMember(self.cluster.id) + config = ClientConfig() + config.cluster_name = self.cluster.id + config.network.addresses.append("localhost:5701") + config.connection_strategy.async_start = True + + def collector(): + events = [] + + def on_state_change(event): + if event == LifecycleState.CONNECTED: + events.append(event) + + on_state_change.events = events + return on_state_change + event_collector = collector() + config.add_lifecycle_listener(event_collector) + self.client = HazelcastClient(config) + + self.assertTrueEventually(lambda: self.assertEqual(1, len(event_collector.events))) + self.client.get_map(random_string()) + + def test_off_reconnect_mode(self): + self.cluster = self.rc.createCluster(None, None) + member = self.rc.startMember(self.cluster.id) + config = ClientConfig() + config.cluster_name = self.cluster.id + config.network.addresses.append("localhost:5701") + config.connection_strategy.reconnect_mode = RECONNECT_MODE.OFF + config.connection_strategy.connection_retry.cluster_connect_timeout = six.MAXSIZE + + def collector(): + events = [] + + def on_state_change(event): + if event == LifecycleState.SHUTDOWN: + events.append(event) + + on_state_change.events = events + return on_state_change + event_collector = collector() + config.add_lifecycle_listener(event_collector) + self.client = HazelcastClient(config) + m = self.client.get_map(random_string()).blocking() + # no exception at this point + m.put(1, 1) + self.rc.shutdownMember(self.cluster.id, member.uuid) + self.assertTrueEventually(lambda: self.assertEqual(1, len(event_collector.events))) + + with self.assertRaises(HazelcastClientNotActiveError): + m.put(1, 1) + + def test_async_reconnect_mode(self): + self.cluster = self.rc.createCluster(None, None) + member = self.rc.startMember(self.cluster.id) + config = ClientConfig() + config.cluster_name = self.cluster.id + config.network.addresses.append("localhost:5701") + config.connection_strategy.reconnect_mode = RECONNECT_MODE.ASYNC + config.connection_strategy.connection_retry.cluster_connect_timeout = six.MAXSIZE + + def collector(event_type): + events = [] + + def on_state_change(event): + if event == event_type: + events.append(event) + + on_state_change.events = events + return on_state_change + disconnected_collector = collector(LifecycleState.DISCONNECTED) + config.add_lifecycle_listener(disconnected_collector) + self.client = HazelcastClient(config) + m = self.client.get_map(random_string()).blocking() + # no exception at this point + m.put(1, 1) + + self.rc.shutdownMember(self.cluster.id, member.uuid) + self.assertTrueEventually(lambda: self.assertEqual(1, len(disconnected_collector.events))) + with self.assertRaises(ClientOfflineError): + m.put(1, 1) + + self.rc.startMember(self.cluster.id) + + connected_collector = collector(LifecycleState.CONNECTED) + 
+        self.client.lifecycle_service.add_listener(connected_collector)
+        self.assertTrueEventually(lambda: self.assertEqual(1, len(connected_collector.events)))
+
+        m.put(1, 1)
+
+    def test_async_start_with_partition_specific_proxies(self):
+        config = ClientConfig()
+        config.connection_strategy.async_start = True
+        self.client = HazelcastClient(config)
+
+        with self.assertRaises(ClientOfflineError):
+            self.client.get_list(random_string())
+
diff --git a/tests/discovery/address_provider_test.py b/tests/discovery/address_provider_test.py
new file mode 100644
index 0000000000..5d8b30514f
--- /dev/null
+++ b/tests/discovery/address_provider_test.py
@@ -0,0 +1,36 @@
+from unittest import TestCase
+from hazelcast.connection import DefaultAddressProvider
+from hazelcast.discovery import HazelcastCloudAddressProvider
+from hazelcast.config import ClientConfig
+from hazelcast import HazelcastClient
+from hazelcast.errors import IllegalStateError
+
+
+class _TestClient(HazelcastClient):
+    def _start(self):
+        pass
+
+
+class AddressProviderTest(TestCase):
+    def test_default_config(self):
+        client = _TestClient()
+        self.assertTrue(isinstance(client._address_provider, DefaultAddressProvider))
+
+    def test_with_nonempty_network_config_addresses(self):
+        config = ClientConfig()
+        config.network.addresses.append("127.0.0.1:5701")
+        client = _TestClient(config)
+        self.assertTrue(isinstance(client._address_provider, DefaultAddressProvider))
+
+    def test_enabled_cloud_config(self):
+        config = ClientConfig()
+        config.network.cloud.enabled = True
+        client = _TestClient(config)
+        self.assertTrue(isinstance(client._address_provider, HazelcastCloudAddressProvider))
+
+    def test_multiple_providers(self):
+        config = ClientConfig()
+        config.network.cloud.enabled = True
+        config.network.addresses.append("127.0.0.1")
+        with self.assertRaises(IllegalStateError):
+            _TestClient(config)
diff --git a/tests/discovery/default_address_provider_test.py b/tests/discovery/default_address_provider_test.py
index f407122a92..2581031b6c 100644
--- a/tests/discovery/default_address_provider_test.py
+++ b/tests/discovery/default_address_provider_test.py
@@ -2,39 +2,57 @@
 from hazelcast import six
 from hazelcast.core import Address
 from hazelcast.connection import DefaultAddressProvider
-from hazelcast.config import ClientNetworkConfig
 
 
 class DefaultAddressProviderTest(TestCase):
-    def setUp(self):
-        self.network_config = ClientNetworkConfig()
-
     def test_load_addresses(self):
-        self.network_config.addresses.append("192.168.0.1:5701")
-        provider = DefaultAddressProvider(self.network_config)
-        addresses = provider.load_addresses()
-        six.assertCountEqual(self, addresses, [Address("192.168.0.1", 5701)])
+        initial_list = ["192.168.0.1:5701"]
+        provider = DefaultAddressProvider(initial_list)
+        primaries, secondaries = provider.load_addresses()
+        six.assertCountEqual(self, primaries, [Address("192.168.0.1", 5701)])
+        six.assertCountEqual(self, secondaries, [])
 
     def test_load_addresses_with_multiple_addresses(self):
-        self.network_config.addresses.append("192.168.0.1:5701")
-        self.network_config.addresses.append("192.168.0.1:5702")
-        self.network_config.addresses.append("192.168.0.2:5701")
-        provider = DefaultAddressProvider(self.network_config)
-        addresses = provider.load_addresses()
-        six.assertCountEqual(self, addresses, [Address("192.168.0.1", 5701),
+        initial_list = ["192.168.0.1:5701", "192.168.0.1:5702", "192.168.0.2:5701"]
+        provider = DefaultAddressProvider(initial_list)
+        primaries, secondaries = provider.load_addresses()
+
six.assertCountEqual(self, primaries, [Address("192.168.0.1", 5701), Address("192.168.0.1", 5702), Address("192.168.0.2", 5701)]) + six.assertCountEqual(self, secondaries, []) - # we deal with duplicate addresses in the util/get_possible_addresses + # we deal with duplicate addresses in the ConnectionManager#_get_possible_addresses def test_load_addresses_with_duplicate_addresses(self): - self.network_config.addresses.append("192.168.0.1:5701") - self.network_config.addresses.append("192.168.0.1:5701") - provider = DefaultAddressProvider(self.network_config) - addresses = provider.load_addresses() - six.assertCountEqual(self, addresses, [Address("192.168.0.1", 5701), + initial_list = ["192.168.0.1:5701", "192.168.0.1:5701"] + provider = DefaultAddressProvider(initial_list) + primaries, secondaries = provider.load_addresses() + six.assertCountEqual(self, primaries, [Address("192.168.0.1", 5701), Address("192.168.0.1", 5701)]) + six.assertCountEqual(self, secondaries, []) def test_load_addresses_with_empty_addresses(self): - provider = DefaultAddressProvider(self.network_config) - addresses = provider.load_addresses() - six.assertCountEqual(self, addresses, []) + initial_list = [] + provider = DefaultAddressProvider(initial_list) + primaries, secondaries = provider.load_addresses() + six.assertCountEqual(self, primaries, [Address("127.0.0.1", 5701)]) + six.assertCountEqual(self, secondaries, [Address("127.0.0.1", 5702), Address("127.0.0.1", 5703)]) + + def test_load_addresses_without_port(self): + initial_list = ["192.168.0.1"] + provider = DefaultAddressProvider(initial_list) + primaries, secondaries = provider.load_addresses() + six.assertCountEqual(self, primaries, [Address("192.168.0.1", 5701)]) + six.assertCountEqual(self, secondaries, [Address("192.168.0.1", 5702), Address("192.168.0.1", 5703)]) + + def test_translate(self): + provider = DefaultAddressProvider([]) + address = Address("192.168.0.1", 5701) + actual = provider.translate(address) + + self.assertEqual(address, actual) + + def test_translate_none(self): + provider = DefaultAddressProvider([]) + actual = provider.translate(None) + + self.assertIsNone(actual) diff --git a/tests/discovery/default_address_translator_test.py b/tests/discovery/default_address_translator_test.py deleted file mode 100644 index b14cf14f3e..0000000000 --- a/tests/discovery/default_address_translator_test.py +++ /dev/null @@ -1,25 +0,0 @@ -from unittest import TestCase -from hazelcast.core import Address -from hazelcast.connection import DefaultAddressTranslator - - -class DefaultAddressTranslatorTest(TestCase): - def setUp(self): - self.translator = DefaultAddressTranslator() - self.address = Address("192.168.0.1", 5701) - - def test_translate(self): - actual = self.translator.translate(self.address) - - self.assertEqual(self.address, actual) - - def test_translate_none(self): - actual = self.translator.translate(None) - - self.assertIsNone(actual) - - def test_refresh_and_translate(self): - self.translator.refresh() - actual = self.translator.translate(self.address) - - self.assertEqual(self.address, actual) diff --git a/tests/discovery/hazelcast_cloud_config_test.py b/tests/discovery/hazelcast_cloud_config_test.py index 005a9577b7..6353f54d20 100644 --- a/tests/discovery/hazelcast_cloud_config_test.py +++ b/tests/discovery/hazelcast_cloud_config_test.py @@ -2,7 +2,7 @@ from hazelcast.client import HazelcastClient, ClientProperties from hazelcast.config import ClientConfig, ClientCloudConfig from hazelcast.discovery import HazelcastCloudDiscovery 
-from hazelcast.exception import HazelcastIllegalStateError +from hazelcast.errors import IllegalStateError class HazelcastCloudConfigTest(TestCase): @@ -12,7 +12,7 @@ def setUp(self): self.config = ClientConfig() def test_cloud_config_defaults(self): - cloud_config = self.config.network_config.cloud_config + cloud_config = self.config.network.cloud self.assertEqual(False, cloud_config.enabled) self.assertEqual("", cloud_config.discovery_token) @@ -20,9 +20,9 @@ def test_cloud_config(self): cloud_config = ClientCloudConfig() cloud_config.enabled = True cloud_config.discovery_token = self.token - self.config.network_config.cloud_config = cloud_config - self.assertEqual(True, self.config.network_config.cloud_config.enabled) - self.assertEqual(self.token, self.config.network_config.cloud_config.discovery_token) + self.config.network.cloud = cloud_config + self.assertEqual(True, self.config.network.cloud.enabled) + self.assertEqual(self.token, self.config.network.cloud.discovery_token) def test_cloud_config_with_property(self): self.config.set_property(ClientProperties.HAZELCAST_CLOUD_DISCOVERY_TOKEN.name, self.token) @@ -31,10 +31,11 @@ def test_cloud_config_with_property(self): self.assertEqual(self.token, token) def test_cloud_config_with_property_and_client_configuration(self): - self.config.network_config.cloud_config.enabled = True + self.config.network.cloud.enabled = True + self.config.connection_strategy.connection_retry.cluster_connect_timeout = 2 self.config.set_property(ClientProperties.HAZELCAST_CLOUD_DISCOVERY_TOKEN.name, self.token) - with self.assertRaises(HazelcastIllegalStateError): - client = HazelcastClient(self.config) + with self.assertRaises(IllegalStateError): + HazelcastClient(self.config) def test_custom_cloud_url(self): self.config.set_property(ClientProperties.HAZELCAST_CLOUD_DISCOVERY_TOKEN.name, self.token) diff --git a/tests/discovery/hazelcast_cloud_discovery_test.py b/tests/discovery/hazelcast_cloud_discovery_test.py index 2eac854db6..38192ab08e 100644 --- a/tests/discovery/hazelcast_cloud_discovery_test.py +++ b/tests/discovery/hazelcast_cloud_discovery_test.py @@ -6,7 +6,7 @@ from hazelcast import six from unittest import TestCase from hazelcast.core import Address -from hazelcast.exception import HazelcastCertificationError +from hazelcast.errors import HazelcastCertificationError from hazelcast.discovery import HazelcastCloudDiscovery from hazelcast.config import ClientConfig from hazelcast.client import HazelcastClient @@ -133,20 +133,20 @@ def test_invalid_certificates(self): def test_client_with_cloud_discovery(self): config = ClientConfig() - config.network_config.cloud_config.enabled = True - config.network_config.cloud_config.discovery_token = TOKEN + config.network.cloud.enabled = True + config.network.cloud.discovery_token = TOKEN config.set_property(HazelcastCloudDiscovery.CLOUD_URL_BASE_PROPERTY.name, HOST + ":" + str(self.server.port)) client = TestClient(config) - client._address_translator.cloud_discovery._ctx = self.ctx - client._address_providers[0].cloud_discovery._ctx = self.ctx + client._address_provider.cloud_discovery._ctx = self.ctx - private_addresses = client._address_providers[0].load_addresses() + private_addresses, secondaries = client._address_provider.load_addresses() six.assertCountEqual(self, list(ADDRESSES.keys()), private_addresses) + six.assertCountEqual(self, secondaries, []) for private_address in private_addresses: - translated_address = client._address_translator.translate(private_address) + translated_address = 
client._address_provider.translate(private_address) self.assertEqual(ADDRESSES[private_address], translated_address) diff --git a/tests/discovery/hazelcast_cloud_provider_test.py b/tests/discovery/hazelcast_cloud_provider_test.py index ca3d716218..7e43963a69 100644 --- a/tests/discovery/hazelcast_cloud_provider_test.py +++ b/tests/discovery/hazelcast_cloud_provider_test.py @@ -1,4 +1,6 @@ from unittest import TestCase + +from hazelcast import six from hazelcast.core import Address from hazelcast.discovery import HazelcastCloudDiscovery, HazelcastCloudAddressProvider @@ -7,28 +9,61 @@ class HazelcastCloudProviderTest(TestCase): expected_addresses = dict() cloud_discovery = None provider = None + private_address = Address("127.0.0.1", 5701) + public_address = Address("192.168.0.1", 5701) def setUp(self): self.expected_addresses[Address("10.0.0.1", 5701)] = Address("198.51.100.1", 5701) self.expected_addresses[Address("10.0.0.1", 5702)] = Address("198.51.100.1", 5702) self.expected_addresses[Address("10.0.0.2", 5701)] = Address("198.51.100.2", 5701) + self.expected_addresses[self.private_address] = self.public_address self.cloud_discovery = HazelcastCloudDiscovery("", "", 0) self.cloud_discovery.discover_nodes = lambda: self.expected_addresses self.provider = HazelcastCloudAddressProvider("", "", 0) self.provider.cloud_discovery = self.cloud_discovery def test_load_addresses(self): - addresses = self.provider.load_addresses() + addresses, secondaries = self.provider.load_addresses() - self.assertEqual(3, len(addresses)) - for address in self.expected_addresses.keys(): - addresses.remove(address) - self.assertEqual(0, len(addresses)) + self.assertEqual(4, len(addresses)) + self.assertEqual(0, len(secondaries)) + six.assertCountEqual(self, list(self.expected_addresses.keys()), addresses) def test_load_addresses_with_exception(self): self.provider.cloud_discovery.discover_nodes = self.mock_discover_nodes_with_exception - addresses = self.provider.load_addresses() + addresses, secondaries = self.provider.load_addresses() self.assertEqual(0, len(addresses)) + self.assertEqual(0, len(secondaries)) + + def test_translate_when_address_is_none(self): + actual = self.provider.translate(None) + + self.assertIsNone(actual) + + def test_translate(self): + actual = self.provider.translate(self.private_address) + + self.assertEqual(self.public_address, actual) + + def test_refresh_and_translate(self): + self.provider.refresh() + actual = self.provider.translate(self.private_address) + + self.assertEqual(self.public_address, actual) + + def test_translate_when_not_found(self): + not_available_address = Address("127.0.0.3", 5701) + actual = self.provider.translate(not_available_address) + + self.assertIsNone(actual) + + def test_refresh_with_exception(self): + cloud_discovery = HazelcastCloudDiscovery("", "", 0) + cloud_discovery.discover_nodes = self.mock_discover_nodes_with_exception + provider = HazelcastCloudAddressProvider("", "", 0) + provider.cloud_discovery = cloud_discovery + provider.refresh() def mock_discover_nodes_with_exception(self): raise Exception("Expected exception") + diff --git a/tests/discovery/hazelcast_cloud_translator_test.py b/tests/discovery/hazelcast_cloud_translator_test.py deleted file mode 100644 index 7e4dcceb36..0000000000 --- a/tests/discovery/hazelcast_cloud_translator_test.py +++ /dev/null @@ -1,55 +0,0 @@ -from unittest import TestCase -from hazelcast.core import Address -from hazelcast.discovery import HazelcastCloudDiscovery, HazelcastCloudAddressTranslator - - -class 
HazelcastCloudTranslatorTest(TestCase): - lookup = dict() - private_address = None - public_address = None - logger = None - translator = None - - def setUp(self): - self.private_address = Address("127.0.0.1", 5701) - self.public_address = Address("192.168.0.1", 5701) - self.lookup[self.private_address] = self.public_address - self.lookup[Address("127.0.0.2", 5701)] = Address("192.168.0.2", 5701) - self.cloud_discovery = HazelcastCloudDiscovery("", "", 0) - self.cloud_discovery.discover_nodes = lambda: self.lookup - self.translator = HazelcastCloudAddressTranslator("", "", 0) - self.translator.cloud_discovery = self.cloud_discovery - - def test_translate_when_address_is_none(self): - actual = self.translator.translate(None) - - self.assertIsNone(actual) - - def test_translate(self): - actual = self.translator.translate(self.private_address) - - self.assertEqual(self.public_address.host, actual.host) - self.assertEqual(self.private_address.port, actual.port) - - def test_refresh_and_translate(self): - self.translator.refresh() - actual = self.translator.translate(self.private_address) - - self.assertEqual(self.public_address.host, actual.host) - self.assertEqual(self.private_address.port, actual.port) - - def test_translate_when_not_found(self): - not_available_address = Address("127.0.0.3", 5701) - actual = self.translator.translate(not_available_address) - - self.assertIsNone(actual) - - def test_refresh_with_exception(self): - cloud_discovery = HazelcastCloudDiscovery("", "", 0) - cloud_discovery.discover_nodes = self.mock_discover_nodes_with_exception - translator = HazelcastCloudAddressTranslator("", "", 0) - translator.cloud_discovery = cloud_discovery - translator.refresh() - - def mock_discover_nodes_with_exception(self): - raise Exception("Expected exception") diff --git a/tests/discovery/multiple_providers_test.py b/tests/discovery/multiple_providers_test.py deleted file mode 100644 index 1121385c2e..0000000000 --- a/tests/discovery/multiple_providers_test.py +++ /dev/null @@ -1,86 +0,0 @@ -from unittest import TestCase -from hazelcast.core import Address -from hazelcast.connection import DefaultAddressProvider -from hazelcast.discovery import HazelcastCloudAddressProvider -from hazelcast.config import ClientNetworkConfig -from hazelcast.util import get_provider_addresses, get_possible_addresses -from hazelcast import six - - -class MultipleProvidersTest(TestCase): - def setUp(self): - self.network_config = ClientNetworkConfig() - self.cloud_address_provider = HazelcastCloudAddressProvider("", "", 0) - self.cloud_address_provider.load_addresses = lambda: [Address("10.0.0.1", 5701)] - - def test_multiple_providers_with_empty_network_config_addresses(self): - default_address_provider = DefaultAddressProvider(self.network_config) - - providers = [default_address_provider, self.cloud_address_provider] - provider_addresses = get_provider_addresses(providers) - six.assertCountEqual(self, provider_addresses, [Address("10.0.0.1", 5701)]) - - addresses = get_possible_addresses(provider_addresses) - six.assertCountEqual(self, addresses, [Address("10.0.0.1", 5701)]) - - def test_multiple_providers_with_nonempty_network_config_addresses(self): - self.network_config.addresses.append("127.0.0.1:5701") - default_address_provider = DefaultAddressProvider(self.network_config) - - providers = [default_address_provider, self.cloud_address_provider] - provider_addresses = get_provider_addresses(providers) - six.assertCountEqual(self, provider_addresses, [Address("10.0.0.1", 5701), 
Address("127.0.0.1", 5701)]) - - addresses = get_possible_addresses(provider_addresses) - six.assertCountEqual(self, addresses, [Address("10.0.0.1", 5701), Address("127.0.0.1", 5701)]) - - def test_multiple_providers_with_nonempty_network_config_addresses_without_port(self): - self.network_config.addresses.append("127.0.0.1") - default_address_provider = DefaultAddressProvider(self.network_config) - - providers = [default_address_provider, self.cloud_address_provider] - provider_addresses = get_provider_addresses(providers) - six.assertCountEqual(self, provider_addresses, [Address("10.0.0.1", 5701), - Address("127.0.0.1", 5701), - Address("127.0.0.1", 5702), - Address("127.0.0.1", 5703)]) - - addresses = get_possible_addresses(provider_addresses) - six.assertCountEqual(self, addresses, [Address("10.0.0.1", 5701), - Address("127.0.0.1", 5701), - Address("127.0.0.1", 5702), - Address("127.0.0.1", 5703)]) - - def test_multiple_providers_with_duplicate_network_config_addresses(self): - self.network_config.addresses.append("127.0.0.1:5701") - self.network_config.addresses.append("127.0.0.1:5701") - default_address_provider = DefaultAddressProvider(self.network_config) - - providers = [default_address_provider, self.cloud_address_provider] - provider_addresses = get_provider_addresses(providers) - six.assertCountEqual(self, provider_addresses, [Address("10.0.0.1", 5701), - Address("127.0.0.1", 5701), - Address("127.0.0.1", 5701)]) - - addresses = get_possible_addresses(provider_addresses) - six.assertCountEqual(self, addresses, [Address("10.0.0.1", 5701), - Address("127.0.0.1", 5701)]) - - # When given empty addresses and members parameters, get_possible_addresses - # returns _parse_address(DEFAULT_ADDRESS) which returns address list in this test. - # When multiple providers exist, this case could only happens if both - # of the address providers returns empty addresses with their load_addresses methods. - # This case should never happen with cloud address provider but we are - # doing this test to show the behavior. 
- def test_multiple_providers_with_empty_load_addresses(self): - default_address_provider = DefaultAddressProvider(self.network_config) - self.cloud_address_provider.load_addresses = lambda: [] - - providers = [default_address_provider, self.cloud_address_provider] - provider_addresses = get_provider_addresses(providers) - six.assertCountEqual(self, provider_addresses, []) - - addresses = get_possible_addresses(provider_addresses) - six.assertCountEqual(self, addresses, [Address("127.0.0.1", 5701), - Address("127.0.0.1", 5702), - Address("127.0.0.1", 5703)]) diff --git a/tests/discovery/multiple_translators_test.py b/tests/discovery/multiple_translators_test.py deleted file mode 100644 index 974e5dcca1..0000000000 --- a/tests/discovery/multiple_translators_test.py +++ /dev/null @@ -1,14 +0,0 @@ -from unittest import TestCase -from hazelcast.config import ClientConfig -from hazelcast.exception import HazelcastError -from hazelcast.client import HazelcastClient - - -class MultipleTranslatorsTest(TestCase): - def test_multiple_translators(self): - config = ClientConfig() - config.network_config.addresses.append("127.0.0.1:5701") - config.network_config.cloud_config.enabled = True - - with self.assertRaises(HazelcastError): - client = HazelcastClient(config) diff --git a/tests/future_test.py b/tests/future_test.py index e79fa177e6..2111588a23 100644 --- a/tests/future_test.py +++ b/tests/future_test.py @@ -243,7 +243,7 @@ def test_set_exception_with_non_exception(self): def test_callback_throws_exception(self): f = Future() - def invalid_func(): + def invalid_func(_): raise RuntimeError("error!") f.add_done_callback(invalid_func) @@ -253,7 +253,7 @@ def test_continue_with_throws_exception(self): f = Future() e = RuntimeError("error") - def continue_func(f): + def continue_func(_): raise e n = f.continue_with(continue_func) diff --git a/tests/hazelcast_json_value_test.py b/tests/hazelcast_json_value_test.py index 3360fe5619..ebaf019779 100644 --- a/tests/hazelcast_json_value_test.py +++ b/tests/hazelcast_json_value_test.py @@ -3,7 +3,6 @@ from hazelcast.core import HazelcastJsonValue from hazelcast.serialization.predicate import is_greater_than, is_equal_to from tests.base import SingleMemberTestCase -from tests.util import set_attr from unittest import TestCase @@ -38,7 +37,6 @@ def test_hazelcast_json_value_loads(self): self.assertEqual(self.json_obj, json_value.loads()) -@set_attr(category=3.12) class HazelcastJsonValueWithMapTest(SingleMemberTestCase): @classmethod def setUpClass(cls): @@ -46,6 +44,11 @@ def setUpClass(cls): cls.json_str = '{"key": "value"}' cls.json_obj = {"key": "value"} + @classmethod + def configure_client(cls, config): + config.cluster_name = cls.cluster.id + return config + def setUp(self): self.map = self.client.get_map("json-test").blocking() diff --git a/tests/heartbeat_test.py b/tests/heartbeat_test.py index 3002014f67..2383c3c27c 100644 --- a/tests/heartbeat_test.py +++ b/tests/heartbeat_test.py @@ -2,7 +2,7 @@ from hazelcast.core import Address from tests.base import HazelcastTestCase from hazelcast.config import ClientConfig, ClientProperties -from tests.util import configure_logging, open_connection_to_address +from tests.util import configure_logging, open_connection_to_address, wait_for_partition_table class HeartbeatTest(HazelcastTestCase): @@ -19,6 +19,7 @@ def setUp(self): self.cluster = self.create_cluster(self.rc) self.member = self.rc.startMember(self.cluster.id) self.config = ClientConfig() + self.config.cluster_name = self.cluster.id 
self.config.set_property(ClientProperties.HEARTBEAT_INTERVAL.name, 500) self.config.set_property(ClientProperties.HEARTBEAT_TIMEOUT.name, 2000) @@ -29,38 +30,39 @@ def tearDown(self): self.client.shutdown() self.rc.shutdownCluster(self.cluster.id) - def test_heartbeat_stopped(self): + def test_heartbeat_stopped_and_restored(self): + member2 = self.rc.startMember(self.cluster.id) + addr = Address(member2.host, member2.port) + wait_for_partition_table(self.client) + open_connection_to_address(self.client, member2.uuid) def connection_collector(): connections = [] - def collector(c): + def collector(c, *args): connections.append(c) collector.connections = connections return collector - heartbeat_stopped_collector = connection_collector() - heartbeat_restored_collector = connection_collector() + connection_added_collector = connection_collector() + connection_removed_collector = connection_collector() - self.client.heartbeat.add_listener(on_heartbeat_stopped=heartbeat_stopped_collector, - on_heartbeat_restored=heartbeat_restored_collector) + self.client._connection_manager.add_listener(connection_added_collector, connection_removed_collector) - member2 = self.rc.startMember(self.cluster.id) - addr = Address(member2.host, member2.port) - open_connection_to_address(self.client, addr) self.simulate_heartbeat_lost(self.client, addr, 2) def assert_heartbeat_stopped_and_restored(): - self.assertEqual(1, len(heartbeat_stopped_collector.connections)) - self.assertEqual(1, len(heartbeat_restored_collector.connections)) - connection_stopped = heartbeat_stopped_collector.connections[0] - connection_restored = heartbeat_restored_collector.connections[0] - self.assertEqual(connection_stopped._address, (member2.host, member2.port)) - self.assertEqual(connection_restored._address, (member2.host, member2.port)) + self.assertEqual(1, len(connection_added_collector.connections)) + self.assertEqual(1, len(connection_removed_collector.connections)) + stopped_connection = connection_added_collector.connections[0] + restored_connection = connection_removed_collector.connections[0] + self.assertEqual(stopped_connection.connected_address, Address(member2.host, member2.port)) + self.assertEqual(restored_connection.connected_address, Address(member2.host, member2.port)) self.assertTrueEventually(assert_heartbeat_stopped_and_restored) @staticmethod def simulate_heartbeat_lost(client, address, timeout): - client.connection_manager.connections[address].last_read_in_seconds -= timeout + connection = client._connection_manager.get_connection_from_address(address) + connection.last_read_time -= timeout diff --git a/tests/hzrc/RemoteController.py b/tests/hzrc/RemoteController.py index e8267155da..39a82a1d08 100644 --- a/tests/hzrc/RemoteController.py +++ b/tests/hzrc/RemoteController.py @@ -1,15 +1,21 @@ # -# Autogenerated by Thrift Compiler (0.10.0) +# Autogenerated by Thrift Compiler (0.13.0) # # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING # # options string: py:new_style,utf8strings # +from thrift.Thrift import TType, TMessageType, TFrozenDict, TException, TApplicationException +from thrift.protocol.TProtocol import TProtocolException +from thrift.TRecursive import fix_spec + +import sys import logging -from tests.hzrc.ttypes import * +from .ttypes import * from thrift.Thrift import TProcessor from thrift.transport import TTransport +all_structs = [] class Iface(object): @@ -27,6 +33,16 @@ def createCluster(self, hzVersion, xmlconfig): Parameters: - hzVersion - xmlconfig + + """ + pass + + def 
createClusterKeepClusterName(self, hzVersion, xmlconfig): + """ + Parameters: + - hzVersion + - xmlconfig + """ pass @@ -34,6 +50,7 @@ def startMember(self, clusterId): """ Parameters: - clusterId + """ pass @@ -42,6 +59,7 @@ def shutdownMember(self, clusterId, memberId): Parameters: - clusterId - memberId + """ pass @@ -50,6 +68,7 @@ def terminateMember(self, clusterId, memberId): Parameters: - clusterId - memberId + """ pass @@ -58,6 +77,7 @@ def suspendMember(self, clusterId, memberId): Parameters: - clusterId - memberId + """ pass @@ -66,6 +86,7 @@ def resumeMember(self, clusterId, memberId): Parameters: - clusterId - memberId + """ pass @@ -73,6 +94,7 @@ def shutdownCluster(self, clusterId): """ Parameters: - clusterId + """ pass @@ -80,6 +102,7 @@ def terminateCluster(self, clusterId): """ Parameters: - clusterId + """ pass @@ -87,6 +110,7 @@ def splitMemberFromCluster(self, memberId): """ Parameters: - memberId + """ pass @@ -95,6 +119,7 @@ def mergeMemberToCluster(self, clusterId, memberId): Parameters: - clusterId - memberId + """ pass @@ -104,6 +129,7 @@ def executeOnController(self, clusterId, script, lang): - clusterId - script - lang + """ pass @@ -198,6 +224,7 @@ def createCluster(self, hzVersion, xmlconfig): Parameters: - hzVersion - xmlconfig + """ self.send_createCluster(hzVersion, xmlconfig) return self.recv_createCluster() @@ -228,10 +255,47 @@ def recv_createCluster(self): raise result.serverException raise TApplicationException(TApplicationException.MISSING_RESULT, "createCluster failed: unknown result") + def createClusterKeepClusterName(self, hzVersion, xmlconfig): + """ + Parameters: + - hzVersion + - xmlconfig + + """ + self.send_createClusterKeepClusterName(hzVersion, xmlconfig) + return self.recv_createClusterKeepClusterName() + + def send_createClusterKeepClusterName(self, hzVersion, xmlconfig): + self._oprot.writeMessageBegin('createClusterKeepClusterName', TMessageType.CALL, self._seqid) + args = createClusterKeepClusterName_args() + args.hzVersion = hzVersion + args.xmlconfig = xmlconfig + args.write(self._oprot) + self._oprot.writeMessageEnd() + self._oprot.trans.flush() + + def recv_createClusterKeepClusterName(self): + iprot = self._iprot + (fname, mtype, rseqid) = iprot.readMessageBegin() + if mtype == TMessageType.EXCEPTION: + x = TApplicationException() + x.read(iprot) + iprot.readMessageEnd() + raise x + result = createClusterKeepClusterName_result() + result.read(iprot) + iprot.readMessageEnd() + if result.success is not None: + return result.success + if result.serverException is not None: + raise result.serverException + raise TApplicationException(TApplicationException.MISSING_RESULT, "createClusterKeepClusterName failed: unknown result") + def startMember(self, clusterId): """ Parameters: - clusterId + """ self.send_startMember(clusterId) return self.recv_startMember() @@ -266,6 +330,7 @@ def shutdownMember(self, clusterId, memberId): Parameters: - clusterId - memberId + """ self.send_shutdownMember(clusterId, memberId) return self.recv_shutdownMember() @@ -299,6 +364,7 @@ def terminateMember(self, clusterId, memberId): Parameters: - clusterId - memberId + """ self.send_terminateMember(clusterId, memberId) return self.recv_terminateMember() @@ -332,6 +398,7 @@ def suspendMember(self, clusterId, memberId): Parameters: - clusterId - memberId + """ self.send_suspendMember(clusterId, memberId) return self.recv_suspendMember() @@ -365,6 +432,7 @@ def resumeMember(self, clusterId, memberId): Parameters: - clusterId - memberId + """ 
self.send_resumeMember(clusterId, memberId) return self.recv_resumeMember() @@ -397,6 +465,7 @@ def shutdownCluster(self, clusterId): """ Parameters: - clusterId + """ self.send_shutdownCluster(clusterId) return self.recv_shutdownCluster() @@ -428,6 +497,7 @@ def terminateCluster(self, clusterId): """ Parameters: - clusterId + """ self.send_terminateCluster(clusterId) return self.recv_terminateCluster() @@ -459,6 +529,7 @@ def splitMemberFromCluster(self, memberId): """ Parameters: - memberId + """ self.send_splitMemberFromCluster(memberId) return self.recv_splitMemberFromCluster() @@ -491,6 +562,7 @@ def mergeMemberToCluster(self, clusterId, memberId): Parameters: - clusterId - memberId + """ self.send_mergeMemberToCluster(clusterId, memberId) return self.recv_mergeMemberToCluster() @@ -525,6 +597,7 @@ def executeOnController(self, clusterId, script, lang): - clusterId - script - lang + """ self.send_executeOnController(clusterId, script, lang) return self.recv_executeOnController() @@ -563,6 +636,7 @@ def __init__(self, handler): self._processMap["clean"] = Processor.process_clean self._processMap["exit"] = Processor.process_exit self._processMap["createCluster"] = Processor.process_createCluster + self._processMap["createClusterKeepClusterName"] = Processor.process_createClusterKeepClusterName self._processMap["startMember"] = Processor.process_startMember self._processMap["shutdownMember"] = Processor.process_shutdownMember self._processMap["terminateMember"] = Processor.process_terminateMember @@ -573,9 +647,15 @@ def __init__(self, handler): self._processMap["splitMemberFromCluster"] = Processor.process_splitMemberFromCluster self._processMap["mergeMemberToCluster"] = Processor.process_mergeMemberToCluster self._processMap["executeOnController"] = Processor.process_executeOnController + self._on_message_begin = None + + def on_message_begin(self, func): + self._on_message_begin = func def process(self, iprot, oprot): (name, type, seqid) = iprot.readMessageBegin() + if self._on_message_begin: + self._on_message_begin(name, type, seqid) if name not in self._processMap: iprot.skip(TType.STRUCT) iprot.readMessageEnd() @@ -597,11 +677,15 @@ def process_ping(self, seqid, iprot, oprot): try: result.success = self._handler.ping() msg_type = TMessageType.REPLY - except (TTransport.TTransportException, KeyboardInterrupt, SystemExit): + except TTransport.TTransportException: raise - except Exception as ex: + except TApplicationException as ex: + logging.exception('TApplication exception in handler') + msg_type = TMessageType.EXCEPTION + result = ex + except Exception: + logging.exception('Unexpected exception in handler') msg_type = TMessageType.EXCEPTION - logging.exception(ex) result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error') oprot.writeMessageBegin("ping", msg_type, seqid) result.write(oprot) @@ -616,11 +700,15 @@ def process_clean(self, seqid, iprot, oprot): try: result.success = self._handler.clean() msg_type = TMessageType.REPLY - except (TTransport.TTransportException, KeyboardInterrupt, SystemExit): + except TTransport.TTransportException: raise - except Exception as ex: + except TApplicationException as ex: + logging.exception('TApplication exception in handler') + msg_type = TMessageType.EXCEPTION + result = ex + except Exception: + logging.exception('Unexpected exception in handler') msg_type = TMessageType.EXCEPTION - logging.exception(ex) result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error') 
oprot.writeMessageBegin("clean", msg_type, seqid) result.write(oprot) @@ -635,11 +723,15 @@ def process_exit(self, seqid, iprot, oprot): try: result.success = self._handler.exit() msg_type = TMessageType.REPLY - except (TTransport.TTransportException, KeyboardInterrupt, SystemExit): + except TTransport.TTransportException: raise - except Exception as ex: + except TApplicationException as ex: + logging.exception('TApplication exception in handler') + msg_type = TMessageType.EXCEPTION + result = ex + except Exception: + logging.exception('Unexpected exception in handler') msg_type = TMessageType.EXCEPTION - logging.exception(ex) result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error') oprot.writeMessageBegin("exit", msg_type, seqid) result.write(oprot) @@ -654,20 +746,50 @@ def process_createCluster(self, seqid, iprot, oprot): try: result.success = self._handler.createCluster(args.hzVersion, args.xmlconfig) msg_type = TMessageType.REPLY - except (TTransport.TTransportException, KeyboardInterrupt, SystemExit): + except TTransport.TTransportException: raise except ServerException as serverException: msg_type = TMessageType.REPLY result.serverException = serverException - except Exception as ex: + except TApplicationException as ex: + logging.exception('TApplication exception in handler') + msg_type = TMessageType.EXCEPTION + result = ex + except Exception: + logging.exception('Unexpected exception in handler') msg_type = TMessageType.EXCEPTION - logging.exception(ex) result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error') oprot.writeMessageBegin("createCluster", msg_type, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() + def process_createClusterKeepClusterName(self, seqid, iprot, oprot): + args = createClusterKeepClusterName_args() + args.read(iprot) + iprot.readMessageEnd() + result = createClusterKeepClusterName_result() + try: + result.success = self._handler.createClusterKeepClusterName(args.hzVersion, args.xmlconfig) + msg_type = TMessageType.REPLY + except TTransport.TTransportException: + raise + except ServerException as serverException: + msg_type = TMessageType.REPLY + result.serverException = serverException + except TApplicationException as ex: + logging.exception('TApplication exception in handler') + msg_type = TMessageType.EXCEPTION + result = ex + except Exception: + logging.exception('Unexpected exception in handler') + msg_type = TMessageType.EXCEPTION + result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error') + oprot.writeMessageBegin("createClusterKeepClusterName", msg_type, seqid) + result.write(oprot) + oprot.writeMessageEnd() + oprot.trans.flush() + def process_startMember(self, seqid, iprot, oprot): args = startMember_args() args.read(iprot) @@ -676,14 +798,18 @@ def process_startMember(self, seqid, iprot, oprot): try: result.success = self._handler.startMember(args.clusterId) msg_type = TMessageType.REPLY - except (TTransport.TTransportException, KeyboardInterrupt, SystemExit): + except TTransport.TTransportException: raise except ServerException as serverException: msg_type = TMessageType.REPLY result.serverException = serverException - except Exception as ex: + except TApplicationException as ex: + logging.exception('TApplication exception in handler') + msg_type = TMessageType.EXCEPTION + result = ex + except Exception: + logging.exception('Unexpected exception in handler') msg_type = TMessageType.EXCEPTION - logging.exception(ex) result = 
TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error') oprot.writeMessageBegin("startMember", msg_type, seqid) result.write(oprot) @@ -698,11 +824,15 @@ def process_shutdownMember(self, seqid, iprot, oprot): try: result.success = self._handler.shutdownMember(args.clusterId, args.memberId) msg_type = TMessageType.REPLY - except (TTransport.TTransportException, KeyboardInterrupt, SystemExit): + except TTransport.TTransportException: raise - except Exception as ex: + except TApplicationException as ex: + logging.exception('TApplication exception in handler') + msg_type = TMessageType.EXCEPTION + result = ex + except Exception: + logging.exception('Unexpected exception in handler') msg_type = TMessageType.EXCEPTION - logging.exception(ex) result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error') oprot.writeMessageBegin("shutdownMember", msg_type, seqid) result.write(oprot) @@ -717,11 +847,15 @@ def process_terminateMember(self, seqid, iprot, oprot): try: result.success = self._handler.terminateMember(args.clusterId, args.memberId) msg_type = TMessageType.REPLY - except (TTransport.TTransportException, KeyboardInterrupt, SystemExit): + except TTransport.TTransportException: raise - except Exception as ex: + except TApplicationException as ex: + logging.exception('TApplication exception in handler') + msg_type = TMessageType.EXCEPTION + result = ex + except Exception: + logging.exception('Unexpected exception in handler') msg_type = TMessageType.EXCEPTION - logging.exception(ex) result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error') oprot.writeMessageBegin("terminateMember", msg_type, seqid) result.write(oprot) @@ -736,11 +870,15 @@ def process_suspendMember(self, seqid, iprot, oprot): try: result.success = self._handler.suspendMember(args.clusterId, args.memberId) msg_type = TMessageType.REPLY - except (TTransport.TTransportException, KeyboardInterrupt, SystemExit): + except TTransport.TTransportException: raise - except Exception as ex: + except TApplicationException as ex: + logging.exception('TApplication exception in handler') + msg_type = TMessageType.EXCEPTION + result = ex + except Exception: + logging.exception('Unexpected exception in handler') msg_type = TMessageType.EXCEPTION - logging.exception(ex) result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error') oprot.writeMessageBegin("suspendMember", msg_type, seqid) result.write(oprot) @@ -755,11 +893,15 @@ def process_resumeMember(self, seqid, iprot, oprot): try: result.success = self._handler.resumeMember(args.clusterId, args.memberId) msg_type = TMessageType.REPLY - except (TTransport.TTransportException, KeyboardInterrupt, SystemExit): + except TTransport.TTransportException: raise - except Exception as ex: + except TApplicationException as ex: + logging.exception('TApplication exception in handler') + msg_type = TMessageType.EXCEPTION + result = ex + except Exception: + logging.exception('Unexpected exception in handler') msg_type = TMessageType.EXCEPTION - logging.exception(ex) result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error') oprot.writeMessageBegin("resumeMember", msg_type, seqid) result.write(oprot) @@ -774,11 +916,15 @@ def process_shutdownCluster(self, seqid, iprot, oprot): try: result.success = self._handler.shutdownCluster(args.clusterId) msg_type = TMessageType.REPLY - except (TTransport.TTransportException, KeyboardInterrupt, SystemExit): + except 
TTransport.TTransportException: raise - except Exception as ex: + except TApplicationException as ex: + logging.exception('TApplication exception in handler') + msg_type = TMessageType.EXCEPTION + result = ex + except Exception: + logging.exception('Unexpected exception in handler') msg_type = TMessageType.EXCEPTION - logging.exception(ex) result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error') oprot.writeMessageBegin("shutdownCluster", msg_type, seqid) result.write(oprot) @@ -793,11 +939,15 @@ def process_terminateCluster(self, seqid, iprot, oprot): try: result.success = self._handler.terminateCluster(args.clusterId) msg_type = TMessageType.REPLY - except (TTransport.TTransportException, KeyboardInterrupt, SystemExit): + except TTransport.TTransportException: raise - except Exception as ex: + except TApplicationException as ex: + logging.exception('TApplication exception in handler') + msg_type = TMessageType.EXCEPTION + result = ex + except Exception: + logging.exception('Unexpected exception in handler') msg_type = TMessageType.EXCEPTION - logging.exception(ex) result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error') oprot.writeMessageBegin("terminateCluster", msg_type, seqid) result.write(oprot) @@ -812,11 +962,15 @@ def process_splitMemberFromCluster(self, seqid, iprot, oprot): try: result.success = self._handler.splitMemberFromCluster(args.memberId) msg_type = TMessageType.REPLY - except (TTransport.TTransportException, KeyboardInterrupt, SystemExit): + except TTransport.TTransportException: raise - except Exception as ex: + except TApplicationException as ex: + logging.exception('TApplication exception in handler') + msg_type = TMessageType.EXCEPTION + result = ex + except Exception: + logging.exception('Unexpected exception in handler') msg_type = TMessageType.EXCEPTION - logging.exception(ex) result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error') oprot.writeMessageBegin("splitMemberFromCluster", msg_type, seqid) result.write(oprot) @@ -831,11 +985,15 @@ def process_mergeMemberToCluster(self, seqid, iprot, oprot): try: result.success = self._handler.mergeMemberToCluster(args.clusterId, args.memberId) msg_type = TMessageType.REPLY - except (TTransport.TTransportException, KeyboardInterrupt, SystemExit): + except TTransport.TTransportException: raise - except Exception as ex: + except TApplicationException as ex: + logging.exception('TApplication exception in handler') + msg_type = TMessageType.EXCEPTION + result = ex + except Exception: + logging.exception('Unexpected exception in handler') msg_type = TMessageType.EXCEPTION - logging.exception(ex) result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error') oprot.writeMessageBegin("mergeMemberToCluster", msg_type, seqid) result.write(oprot) @@ -850,11 +1008,15 @@ def process_executeOnController(self, seqid, iprot, oprot): try: result.success = self._handler.executeOnController(args.clusterId, args.script, args.lang) msg_type = TMessageType.REPLY - except (TTransport.TTransportException, KeyboardInterrupt, SystemExit): + except TTransport.TTransportException: raise - except Exception as ex: + except TApplicationException as ex: + logging.exception('TApplication exception in handler') + msg_type = TMessageType.EXCEPTION + result = ex + except Exception: + logging.exception('Unexpected exception in handler') msg_type = TMessageType.EXCEPTION - logging.exception(ex) result = 
TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error') oprot.writeMessageBegin("executeOnController", msg_type, seqid) result.write(oprot) @@ -866,12 +1028,10 @@ def process_executeOnController(self, seqid, iprot, oprot): class ping_args(object): - thrift_spec = ( - ) def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -885,7 +1045,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('ping_args') oprot.writeFieldStop() @@ -904,24 +1064,25 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(ping_args) +ping_args.thrift_spec = ( +) class ping_result(object): """ Attributes: - success + """ - thrift_spec = ( - (0, TType.BOOL, 'success', None, None, ), # 0 - ) def __init__(self, success=None,): self.success = success def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -940,7 +1101,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('ping_result') if self.success is not None: @@ -963,16 +1124,18 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(ping_result) +ping_result.thrift_spec = ( + (0, TType.BOOL, 'success', None, None, ), # 0 +) class clean_args(object): - thrift_spec = ( - ) def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -986,7 +1149,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('clean_args') oprot.writeFieldStop() @@ -1005,24 +1168,25 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(clean_args) +clean_args.thrift_spec = ( +) class clean_result(object): """ Attributes: - success + """ - thrift_spec = ( - (0, TType.BOOL, 'success', None, None, ), # 0 - ) def __init__(self, success=None,): self.success = success def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, 
(self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -1041,7 +1205,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('clean_result') if self.success is not None: @@ -1064,16 +1228,18 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(clean_result) +clean_result.thrift_spec = ( + (0, TType.BOOL, 'success', None, None, ), # 0 +) class exit_args(object): - thrift_spec = ( - ) def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -1087,7 +1253,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('exit_args') oprot.writeFieldStop() @@ -1106,24 +1272,25 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(exit_args) +exit_args.thrift_spec = ( +) class exit_result(object): """ Attributes: - success + """ - thrift_spec = ( - (0, TType.BOOL, 'success', None, None, ), # 0 - ) def __init__(self, success=None,): self.success = success def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -1142,7 +1309,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('exit_result') if self.success is not None: @@ -1165,6 +1332,10 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(exit_result) +exit_result.thrift_spec = ( + (0, TType.BOOL, 'success', None, None, ), # 0 +) class createCluster_args(object): @@ -1172,13 +1343,9 @@ class createCluster_args(object): Attributes: - hzVersion - xmlconfig + """ - thrift_spec = ( - None, # 0 - (1, TType.STRING, 'hzVersion', 'UTF8', None, ), # 1 - (2, TType.STRING, 'xmlconfig', 'UTF8', None, ), # 2 - ) def __init__(self, hzVersion=None, xmlconfig=None,): self.hzVersion = hzVersion @@ -1186,7 +1353,7 @@ def __init__(self, hzVersion=None, xmlconfig=None,): def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -1210,7 +1377,7 @@ def 
read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('createCluster_args') if self.hzVersion is not None: @@ -1237,6 +1404,12 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(createCluster_args) +createCluster_args.thrift_spec = ( + None, # 0 + (1, TType.STRING, 'hzVersion', 'UTF8', None, ), # 1 + (2, TType.STRING, 'xmlconfig', 'UTF8', None, ), # 2 +) class createCluster_result(object): @@ -1244,12 +1417,9 @@ class createCluster_result(object): Attributes: - success - serverException + """ - thrift_spec = ( - (0, TType.STRUCT, 'success', (Cluster, Cluster.thrift_spec), None, ), # 0 - (1, TType.STRUCT, 'serverException', (ServerException, ServerException.thrift_spec), None, ), # 1 - ) def __init__(self, success=None, serverException=None,): self.success = success @@ -1257,7 +1427,7 @@ def __init__(self, success=None, serverException=None,): def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -1283,7 +1453,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('createCluster_result') if self.success is not None: @@ -1310,25 +1480,176 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(createCluster_result) +createCluster_result.thrift_spec = ( + (0, TType.STRUCT, 'success', [Cluster, None], None, ), # 0 + (1, TType.STRUCT, 'serverException', [ServerException, None], None, ), # 1 +) + + +class createClusterKeepClusterName_args(object): + """ + Attributes: + - hzVersion + - xmlconfig + + """ + + + def __init__(self, hzVersion=None, xmlconfig=None,): + self.hzVersion = hzVersion + self.xmlconfig = xmlconfig + + def read(self, iprot): + if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) + return + iprot.readStructBegin() + while True: + (fname, ftype, fid) = iprot.readFieldBegin() + if ftype == TType.STOP: + break + if fid == 1: + if ftype == TType.STRING: + self.hzVersion = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString() + else: + iprot.skip(ftype) + elif fid == 2: + if ftype == TType.STRING: + self.xmlconfig = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString() + else: + iprot.skip(ftype) + else: + iprot.skip(ftype) + iprot.readFieldEnd() + iprot.readStructEnd() + + def write(self, oprot): + if oprot._fast_encode is not None and self.thrift_spec is not None: + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) + return + oprot.writeStructBegin('createClusterKeepClusterName_args') + if self.hzVersion is not None: + oprot.writeFieldBegin('hzVersion', TType.STRING, 1) + 
oprot.writeString(self.hzVersion.encode('utf-8') if sys.version_info[0] == 2 else self.hzVersion) + oprot.writeFieldEnd() + if self.xmlconfig is not None: + oprot.writeFieldBegin('xmlconfig', TType.STRING, 2) + oprot.writeString(self.xmlconfig.encode('utf-8') if sys.version_info[0] == 2 else self.xmlconfig) + oprot.writeFieldEnd() + oprot.writeFieldStop() + oprot.writeStructEnd() + + def validate(self): + return + + def __repr__(self): + L = ['%s=%r' % (key, value) + for key, value in self.__dict__.items()] + return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) + + def __eq__(self, other): + return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ + + def __ne__(self, other): + return not (self == other) +all_structs.append(createClusterKeepClusterName_args) +createClusterKeepClusterName_args.thrift_spec = ( + None, # 0 + (1, TType.STRING, 'hzVersion', 'UTF8', None, ), # 1 + (2, TType.STRING, 'xmlconfig', 'UTF8', None, ), # 2 +) + + +class createClusterKeepClusterName_result(object): + """ + Attributes: + - success + - serverException + + """ + + + def __init__(self, success=None, serverException=None,): + self.success = success + self.serverException = serverException + + def read(self, iprot): + if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) + return + iprot.readStructBegin() + while True: + (fname, ftype, fid) = iprot.readFieldBegin() + if ftype == TType.STOP: + break + if fid == 0: + if ftype == TType.STRUCT: + self.success = Cluster() + self.success.read(iprot) + else: + iprot.skip(ftype) + elif fid == 1: + if ftype == TType.STRUCT: + self.serverException = ServerException() + self.serverException.read(iprot) + else: + iprot.skip(ftype) + else: + iprot.skip(ftype) + iprot.readFieldEnd() + iprot.readStructEnd() + + def write(self, oprot): + if oprot._fast_encode is not None and self.thrift_spec is not None: + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) + return + oprot.writeStructBegin('createClusterKeepClusterName_result') + if self.success is not None: + oprot.writeFieldBegin('success', TType.STRUCT, 0) + self.success.write(oprot) + oprot.writeFieldEnd() + if self.serverException is not None: + oprot.writeFieldBegin('serverException', TType.STRUCT, 1) + self.serverException.write(oprot) + oprot.writeFieldEnd() + oprot.writeFieldStop() + oprot.writeStructEnd() + + def validate(self): + return + + def __repr__(self): + L = ['%s=%r' % (key, value) + for key, value in self.__dict__.items()] + return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) + + def __eq__(self, other): + return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ + + def __ne__(self, other): + return not (self == other) +all_structs.append(createClusterKeepClusterName_result) +createClusterKeepClusterName_result.thrift_spec = ( + (0, TType.STRUCT, 'success', [Cluster, None], None, ), # 0 + (1, TType.STRUCT, 'serverException', [ServerException, None], None, ), # 1 +) class startMember_args(object): """ Attributes: - clusterId + """ - thrift_spec = ( - None, # 0 - (1, TType.STRING, 'clusterId', 'UTF8', None, ), # 1 - ) def __init__(self, clusterId=None,): self.clusterId = clusterId def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, 
self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -1347,7 +1668,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('startMember_args') if self.clusterId is not None: @@ -1370,6 +1691,11 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(startMember_args) +startMember_args.thrift_spec = ( + None, # 0 + (1, TType.STRING, 'clusterId', 'UTF8', None, ), # 1 +) class startMember_result(object): @@ -1377,12 +1703,9 @@ class startMember_result(object): Attributes: - success - serverException + """ - thrift_spec = ( - (0, TType.STRUCT, 'success', (Member, Member.thrift_spec), None, ), # 0 - (1, TType.STRUCT, 'serverException', (ServerException, ServerException.thrift_spec), None, ), # 1 - ) def __init__(self, success=None, serverException=None,): self.success = success @@ -1390,7 +1713,7 @@ def __init__(self, success=None, serverException=None,): def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -1416,7 +1739,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('startMember_result') if self.success is not None: @@ -1443,6 +1766,11 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(startMember_result) +startMember_result.thrift_spec = ( + (0, TType.STRUCT, 'success', [Member, None], None, ), # 0 + (1, TType.STRUCT, 'serverException', [ServerException, None], None, ), # 1 +) class shutdownMember_args(object): @@ -1450,13 +1778,9 @@ class shutdownMember_args(object): Attributes: - clusterId - memberId + """ - thrift_spec = ( - None, # 0 - (1, TType.STRING, 'clusterId', 'UTF8', None, ), # 1 - (2, TType.STRING, 'memberId', 'UTF8', None, ), # 2 - ) def __init__(self, clusterId=None, memberId=None,): self.clusterId = clusterId @@ -1464,7 +1788,7 @@ def __init__(self, clusterId=None, memberId=None,): def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -1488,7 +1812,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('shutdownMember_args') if self.clusterId is not None: @@ -1515,24 +1839,28 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) 
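The hunks above and below follow the layout emitted by the Thrift 0.13.0 compiler (the previous files were generated with 0.10.0): thrift_spec moves out of the class body and is assigned at module level, every struct is registered in all_structs, nested struct references are written as two-element lists such as [Cluster, None], the fast-path encode/decode calls pass [self.__class__, self.thrift_spec] instead of a tuple, and a single fix_spec(all_structs) at the end of the module patches the nested specs in. A minimal sketch of that layout, using a hypothetical Ping struct purely for illustration:

    from thrift.Thrift import TType
    from thrift.TRecursive import fix_spec

    all_structs = []


    class Ping(object):
        """Hypothetical struct; the generated classes also carry read/write methods."""

        def __init__(self, message=None):
            self.message = message


    # The spec is attached after the class exists and the struct is registered,
    # so fix_spec() can resolve nested [StructClass, None] references once every
    # class in the module has been defined.
    all_structs.append(Ping)
    Ping.thrift_spec = (
        None,  # 0
        (1, TType.STRING, 'message', 'UTF8', None,),  # 1
    )
    fix_spec(all_structs)
    del all_structs

Besides this relayout, the regenerated service also gains the new createClusterKeepClusterName call, mirrored in tests/hzrc/client.py below.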
+all_structs.append(shutdownMember_args) +shutdownMember_args.thrift_spec = ( + None, # 0 + (1, TType.STRING, 'clusterId', 'UTF8', None, ), # 1 + (2, TType.STRING, 'memberId', 'UTF8', None, ), # 2 +) class shutdownMember_result(object): """ Attributes: - success + """ - thrift_spec = ( - (0, TType.BOOL, 'success', None, None, ), # 0 - ) def __init__(self, success=None,): self.success = success def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -1551,7 +1879,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('shutdownMember_result') if self.success is not None: @@ -1574,6 +1902,10 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(shutdownMember_result) +shutdownMember_result.thrift_spec = ( + (0, TType.BOOL, 'success', None, None, ), # 0 +) class terminateMember_args(object): @@ -1581,13 +1913,9 @@ class terminateMember_args(object): Attributes: - clusterId - memberId + """ - thrift_spec = ( - None, # 0 - (1, TType.STRING, 'clusterId', 'UTF8', None, ), # 1 - (2, TType.STRING, 'memberId', 'UTF8', None, ), # 2 - ) def __init__(self, clusterId=None, memberId=None,): self.clusterId = clusterId @@ -1595,7 +1923,7 @@ def __init__(self, clusterId=None, memberId=None,): def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -1619,7 +1947,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('terminateMember_args') if self.clusterId is not None: @@ -1646,24 +1974,28 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(terminateMember_args) +terminateMember_args.thrift_spec = ( + None, # 0 + (1, TType.STRING, 'clusterId', 'UTF8', None, ), # 1 + (2, TType.STRING, 'memberId', 'UTF8', None, ), # 2 +) class terminateMember_result(object): """ Attributes: - success + """ - thrift_spec = ( - (0, TType.BOOL, 'success', None, None, ), # 0 - ) def __init__(self, success=None,): self.success = success def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -1682,7 +2014,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + 
oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('terminateMember_result') if self.success is not None: @@ -1705,6 +2037,10 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(terminateMember_result) +terminateMember_result.thrift_spec = ( + (0, TType.BOOL, 'success', None, None, ), # 0 +) class suspendMember_args(object): @@ -1712,13 +2048,9 @@ class suspendMember_args(object): Attributes: - clusterId - memberId + """ - thrift_spec = ( - None, # 0 - (1, TType.STRING, 'clusterId', 'UTF8', None, ), # 1 - (2, TType.STRING, 'memberId', 'UTF8', None, ), # 2 - ) def __init__(self, clusterId=None, memberId=None,): self.clusterId = clusterId @@ -1726,7 +2058,7 @@ def __init__(self, clusterId=None, memberId=None,): def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -1750,7 +2082,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('suspendMember_args') if self.clusterId is not None: @@ -1777,24 +2109,28 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(suspendMember_args) +suspendMember_args.thrift_spec = ( + None, # 0 + (1, TType.STRING, 'clusterId', 'UTF8', None, ), # 1 + (2, TType.STRING, 'memberId', 'UTF8', None, ), # 2 +) class suspendMember_result(object): """ Attributes: - success + """ - thrift_spec = ( - (0, TType.BOOL, 'success', None, None, ), # 0 - ) def __init__(self, success=None,): self.success = success def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -1813,7 +2149,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('suspendMember_result') if self.success is not None: @@ -1836,6 +2172,10 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(suspendMember_result) +suspendMember_result.thrift_spec = ( + (0, TType.BOOL, 'success', None, None, ), # 0 +) class resumeMember_args(object): @@ -1843,13 +2183,9 @@ class resumeMember_args(object): Attributes: - clusterId - memberId + """ - thrift_spec = ( - None, # 0 - (1, TType.STRING, 'clusterId', 'UTF8', None, ), # 1 - (2, TType.STRING, 'memberId', 'UTF8', None, ), # 2 - ) def __init__(self, clusterId=None, memberId=None,): self.clusterId = clusterId @@ -1857,7 +2193,7 @@ def __init__(self, clusterId=None, memberId=None,): def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not 
None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -1881,7 +2217,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('resumeMember_args') if self.clusterId is not None: @@ -1908,24 +2244,28 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(resumeMember_args) +resumeMember_args.thrift_spec = ( + None, # 0 + (1, TType.STRING, 'clusterId', 'UTF8', None, ), # 1 + (2, TType.STRING, 'memberId', 'UTF8', None, ), # 2 +) class resumeMember_result(object): """ Attributes: - success + """ - thrift_spec = ( - (0, TType.BOOL, 'success', None, None, ), # 0 - ) def __init__(self, success=None,): self.success = success def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -1944,7 +2284,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('resumeMember_result') if self.success is not None: @@ -1967,25 +2307,26 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(resumeMember_result) +resumeMember_result.thrift_spec = ( + (0, TType.BOOL, 'success', None, None, ), # 0 +) class shutdownCluster_args(object): """ Attributes: - clusterId + """ - thrift_spec = ( - None, # 0 - (1, TType.STRING, 'clusterId', 'UTF8', None, ), # 1 - ) def __init__(self, clusterId=None,): self.clusterId = clusterId def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -2004,7 +2345,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('shutdownCluster_args') if self.clusterId is not None: @@ -2027,24 +2368,27 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(shutdownCluster_args) +shutdownCluster_args.thrift_spec = ( + None, # 0 + (1, TType.STRING, 'clusterId', 'UTF8', None, ), # 1 +) class shutdownCluster_result(object): """ Attributes: - success + """ - thrift_spec = ( - (0, TType.BOOL, 'success', None, None, ), # 0 - ) def __init__(self, success=None,): self.success = success def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - 
iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -2063,7 +2407,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('shutdownCluster_result') if self.success is not None: @@ -2086,25 +2430,26 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(shutdownCluster_result) +shutdownCluster_result.thrift_spec = ( + (0, TType.BOOL, 'success', None, None, ), # 0 +) class terminateCluster_args(object): """ Attributes: - clusterId + """ - thrift_spec = ( - None, # 0 - (1, TType.STRING, 'clusterId', 'UTF8', None, ), # 1 - ) def __init__(self, clusterId=None,): self.clusterId = clusterId def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -2123,7 +2468,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('terminateCluster_args') if self.clusterId is not None: @@ -2146,24 +2491,27 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(terminateCluster_args) +terminateCluster_args.thrift_spec = ( + None, # 0 + (1, TType.STRING, 'clusterId', 'UTF8', None, ), # 1 +) class terminateCluster_result(object): """ Attributes: - success + """ - thrift_spec = ( - (0, TType.BOOL, 'success', None, None, ), # 0 - ) def __init__(self, success=None,): self.success = success def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -2182,7 +2530,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('terminateCluster_result') if self.success is not None: @@ -2205,25 +2553,26 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(terminateCluster_result) +terminateCluster_result.thrift_spec = ( + (0, TType.BOOL, 'success', None, None, ), # 0 +) class splitMemberFromCluster_args(object): """ Attributes: - memberId + """ - thrift_spec = ( - None, # 0 - (1, TType.STRING, 'memberId', 'UTF8', None, ), # 1 - ) def __init__(self, memberId=None,): self.memberId = memberId def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - 
iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -2242,7 +2591,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('splitMemberFromCluster_args') if self.memberId is not None: @@ -2265,24 +2614,27 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(splitMemberFromCluster_args) +splitMemberFromCluster_args.thrift_spec = ( + None, # 0 + (1, TType.STRING, 'memberId', 'UTF8', None, ), # 1 +) class splitMemberFromCluster_result(object): """ Attributes: - success + """ - thrift_spec = ( - (0, TType.STRUCT, 'success', (Cluster, Cluster.thrift_spec), None, ), # 0 - ) def __init__(self, success=None,): self.success = success def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -2302,7 +2654,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('splitMemberFromCluster_result') if self.success is not None: @@ -2325,6 +2677,10 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(splitMemberFromCluster_result) +splitMemberFromCluster_result.thrift_spec = ( + (0, TType.STRUCT, 'success', [Cluster, None], None, ), # 0 +) class mergeMemberToCluster_args(object): @@ -2332,13 +2688,9 @@ class mergeMemberToCluster_args(object): Attributes: - clusterId - memberId + """ - thrift_spec = ( - None, # 0 - (1, TType.STRING, 'clusterId', 'UTF8', None, ), # 1 - (2, TType.STRING, 'memberId', 'UTF8', None, ), # 2 - ) def __init__(self, clusterId=None, memberId=None,): self.clusterId = clusterId @@ -2346,7 +2698,7 @@ def __init__(self, clusterId=None, memberId=None,): def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -2370,7 +2722,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('mergeMemberToCluster_args') if self.clusterId is not None: @@ -2397,24 +2749,28 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(mergeMemberToCluster_args) +mergeMemberToCluster_args.thrift_spec = ( + None, # 0 + (1, TType.STRING, 'clusterId', 'UTF8', None, ), # 1 + (2, TType.STRING, 'memberId', 'UTF8', None, ), # 2 +) class 
mergeMemberToCluster_result(object): """ Attributes: - success + """ - thrift_spec = ( - (0, TType.STRUCT, 'success', (Cluster, Cluster.thrift_spec), None, ), # 0 - ) def __init__(self, success=None,): self.success = success def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -2434,7 +2790,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('mergeMemberToCluster_result') if self.success is not None: @@ -2457,6 +2813,10 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(mergeMemberToCluster_result) +mergeMemberToCluster_result.thrift_spec = ( + (0, TType.STRUCT, 'success', [Cluster, None], None, ), # 0 +) class executeOnController_args(object): @@ -2465,14 +2825,9 @@ class executeOnController_args(object): - clusterId - script - lang + """ - thrift_spec = ( - None, # 0 - (1, TType.STRING, 'clusterId', 'UTF8', None, ), # 1 - (2, TType.STRING, 'script', 'UTF8', None, ), # 2 - (3, TType.I32, 'lang', None, None, ), # 3 - ) def __init__(self, clusterId=None, script=None, lang=None,): self.clusterId = clusterId @@ -2481,7 +2836,7 @@ def __init__(self, clusterId=None, script=None, lang=None,): def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -2510,7 +2865,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('executeOnController_args') if self.clusterId is not None: @@ -2541,24 +2896,29 @@ def __eq__(self, other): def __ne__(self, other): return not (self == other) +all_structs.append(executeOnController_args) +executeOnController_args.thrift_spec = ( + None, # 0 + (1, TType.STRING, 'clusterId', 'UTF8', None, ), # 1 + (2, TType.STRING, 'script', 'UTF8', None, ), # 2 + (3, TType.I32, 'lang', None, None, ), # 3 +) class executeOnController_result(object): """ Attributes: - success + """ - thrift_spec = ( - (0, TType.STRUCT, 'success', (Response, Response.thrift_spec), None, ), # 0 - ) def __init__(self, success=None,): self.success = success def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -2578,7 +2938,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, 
self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('executeOnController_result') if self.success is not None: @@ -2600,4 +2960,10 @@ def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): - return not (self == other) \ No newline at end of file + return not (self == other) +all_structs.append(executeOnController_result) +executeOnController_result.thrift_spec = ( + (0, TType.STRUCT, 'success', [Response, None], None, ), # 0 +) +fix_spec(all_structs) +del all_structs diff --git a/tests/hzrc/__init__.py b/tests/hzrc/__init__.py index e69de29bb2..f8773b31a6 100644 --- a/tests/hzrc/__init__.py +++ b/tests/hzrc/__init__.py @@ -0,0 +1 @@ +__all__ = ['ttypes', 'constants', 'RemoteController'] diff --git a/tests/hzrc/client.py b/tests/hzrc/client.py index 9211676b0c..959a682a1d 100644 --- a/tests/hzrc/client.py +++ b/tests/hzrc/client.py @@ -21,42 +21,51 @@ def __init__(self, host, port): self.remote_controller = RemoteController.Client(protocol) # Connect! transport.open() - except Thrift.TException as tx: - self.logger.warn('%s' % tx.message) + except Thrift.TException: + self.logger.exception('Something went wrong while connecting to remote controller.') - def terminateMember(self, cluster_id, member_id): - return self.remote_controller.terminateMember(cluster_id, member_id) + def ping(self): + return self.remote_controller.ping() - def terminateCluster(self, cluster_id): - return self.remote_controller.terminateCluster(cluster_id) + def clean(self): + return self.remote_controller.clean() + + def exit(self): + self.remote_controller.exit() + self.remote_controller._iprot.trans.close() + + def createCluster(self, hz_version, xml_config): + return self.remote_controller.createCluster(hz_version, xml_config) + + def createClusterKeepClusterName(self, hz_version, xml_config): + return self.remote_controller.createClusterKeepClusterName(hz_version, xml_config) def startMember(self, cluster_id): return self.remote_controller.startMember(cluster_id) - def splitMemberFromCluster(self, member_id): - return self.remote_controller.splitMemberFromCluster(member_id) - def shutdownMember(self, cluster_id, member_id): return self.remote_controller.shutdownMember(cluster_id, member_id) + def terminateMember(self, cluster_id, member_id): + return self.remote_controller.terminateMember(cluster_id, member_id) + + def suspendMember(self, cluster_id, member_id): + return self.remote_controller.suspendMember(cluster_id, member_id) + + def resumeMember(self, cluster_id, member_id): + return self.remote_controller.resumeMember(cluster_id, member_id) + def shutdownCluster(self, cluster_id): return self.remote_controller.shutdownCluster(cluster_id) + def terminateCluster(self, cluster_id): + return self.remote_controller.terminateCluster(cluster_id) + + def splitMemberFromCluster(self, member_id): + return self.remote_controller.splitMemberFromCluster(member_id) + def mergeMemberToCluster(self, cluster_id, member_id): return self.remote_controller.mergeMemberToCluster(cluster_id, member_id) def executeOnController(self, cluster_id, script, lang): return self.remote_controller.executeOnController(cluster_id, script, lang) - - def createCluster(self, hz_version, xml_config): - return self.remote_controller.createCluster(hz_version, xml_config) - - def ping(self): - return self.remote_controller.ping() - - def clean(self): - return self.remote_controller.clean() - - def 
exit(self): - self.remote_controller.exit() - self.remote_controller._iprot.trans.close() \ No newline at end of file diff --git a/tests/hzrc/constants.py b/tests/hzrc/constants.py index 7e23ce44c9..b941bb8a2b 100644 --- a/tests/hzrc/constants.py +++ b/tests/hzrc/constants.py @@ -1,8 +1,14 @@ # -# Autogenerated by Thrift Compiler (0.10.0) +# Autogenerated by Thrift Compiler (0.13.0) # # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING # # options string: py:new_style,utf8strings # +from thrift.Thrift import TType, TMessageType, TFrozenDict, TException, TApplicationException +from thrift.protocol.TProtocol import TProtocolException +from thrift.TRecursive import fix_spec + +import sys +from .ttypes import * diff --git a/tests/hzrc/ttypes.py b/tests/hzrc/ttypes.py index 9a2e12eb01..de7a0463d1 100644 --- a/tests/hzrc/ttypes.py +++ b/tests/hzrc/ttypes.py @@ -1,5 +1,5 @@ # -# Autogenerated by Thrift Compiler (0.10.0) +# Autogenerated by Thrift Compiler (0.13.0) # # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING # @@ -8,9 +8,12 @@ from thrift.Thrift import TType, TMessageType, TFrozenDict, TException, TApplicationException from thrift.protocol.TProtocol import TProtocolException +from thrift.TRecursive import fix_spec + import sys from thrift.transport import TTransport +all_structs = [] class Lang(object): @@ -38,19 +41,16 @@ class Cluster(object): """ Attributes: - id + """ - thrift_spec = ( - None, # 0 - (1, TType.STRING, 'id', 'UTF8', None, ), # 1 - ) def __init__(self, id=None,): self.id = id def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -69,7 +69,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('Cluster') if self.id is not None: @@ -100,14 +100,9 @@ class Member(object): - uuid - host - port + """ - thrift_spec = ( - None, # 0 - (1, TType.STRING, 'uuid', 'UTF8', None, ), # 1 - (2, TType.STRING, 'host', 'UTF8', None, ), # 2 - (3, TType.I32, 'port', None, None, ), # 3 - ) def __init__(self, uuid=None, host=None, port=None,): self.uuid = uuid @@ -116,7 +111,7 @@ def __init__(self, uuid=None, host=None, port=None,): def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -145,7 +140,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('Member') if self.uuid is not None: @@ -184,14 +179,9 @@ class Response(object): - success - message - result + """ - thrift_spec = ( - None, # 0 - (1, TType.BOOL, 'success', None, None, ), # 1 - (2, TType.STRING, 'message', 'UTF8', None, ), # 2 - (3, 
TType.STRING, 'result', 'BINARY', None, ), # 3 - ) def __init__(self, success=None, message=None, result=None,): self.success = success @@ -200,7 +190,7 @@ def __init__(self, success=None, message=None, result=None,): def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -229,7 +219,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('Response') if self.success is not None: @@ -266,19 +256,16 @@ class ServerException(TException): """ Attributes: - message + """ - thrift_spec = ( - None, # 0 - (1, TType.STRING, 'message', 'UTF8', None, ), # 1 - ) def __init__(self, message=None,): self.message = message def read(self, iprot): if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: - iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec]) return iprot.readStructBegin() while True: @@ -297,7 +284,7 @@ def read(self, iprot): def write(self, oprot): if oprot._fast_encode is not None and self.thrift_spec is not None: - oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec])) return oprot.writeStructBegin('ServerException') if self.message is not None: @@ -322,4 +309,30 @@ def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): - return not (self == other) \ No newline at end of file + return not (self == other) +all_structs.append(Cluster) +Cluster.thrift_spec = ( + None, # 0 + (1, TType.STRING, 'id', 'UTF8', None, ), # 1 +) +all_structs.append(Member) +Member.thrift_spec = ( + None, # 0 + (1, TType.STRING, 'uuid', 'UTF8', None, ), # 1 + (2, TType.STRING, 'host', 'UTF8', None, ), # 2 + (3, TType.I32, 'port', None, None, ), # 3 +) +all_structs.append(Response) +Response.thrift_spec = ( + None, # 0 + (1, TType.BOOL, 'success', None, None, ), # 1 + (2, TType.STRING, 'message', 'UTF8', None, ), # 2 + (3, TType.STRING, 'result', 'BINARY', None, ), # 3 +) +all_structs.append(ServerException) +ServerException.thrift_spec = ( + None, # 0 + (1, TType.STRING, 'message', 'UTF8', None, ), # 1 +) +fix_spec(all_structs) +del all_structs diff --git a/tests/invocation_test.py b/tests/invocation_test.py index d0d3dc0597..0bb1838275 100644 --- a/tests/invocation_test.py +++ b/tests/invocation_test.py @@ -1,25 +1,56 @@ import time +import hazelcast from hazelcast.config import ClientProperties -from hazelcast.exception import TimeoutError +from hazelcast.errors import HazelcastTimeoutError from hazelcast.invocation import Invocation -from tests.base import SingleMemberTestCase +from hazelcast.protocol.client_message import OutboundMessage +from tests.base import HazelcastTestCase -class InvocationTest(SingleMemberTestCase): +class InvocationTest(HazelcastTestCase): @classmethod - def configure_client(cls, config): + def setUpClass(cls): + cls.rc = 
cls.create_rc() + cls.cluster = cls.create_cluster(cls.rc, None) + cls.member = cls.cluster.start_member() + + @classmethod + def tearDownClass(cls): + cls.rc.terminateCluster(cls.cluster.id) + cls.rc.exit() + + def setUp(self): + config = hazelcast.ClientConfig() + config.cluster_name = self.cluster.id config.set_property(ClientProperties.INVOCATION_TIMEOUT_SECONDS.name, 1) - return config + self.client = hazelcast.HazelcastClient(config) + + def tearDown(self): + self.client.shutdown() def test_invocation_timeout(self): - invocation_service = self.client.invoker - invocation = Invocation(invocation_service, None, partition_id=-1) + request = OutboundMessage(bytearray(22), True) + invocation_service = self.client._invocation_service + invocation = Invocation(request, partition_id=1) - def mocked_has_partition_id(): + def mock(*_): time.sleep(2) - return True + return False + + invocation_service._invoke_on_partition_owner = mock + invocation_service._invoke_on_random_connection = lambda _: False + + invocation_service.invoke(invocation) + with self.assertRaises(HazelcastTimeoutError): + invocation.future.result() + + def test_invocation_not_timed_out_when_there_is_no_exception(self): + request = OutboundMessage(bytearray(22), True) + invocation_service = self.client._invocation_service + invocation = Invocation(request) + invocation_service.invoke(invocation) - invocation.has_partition_id = mocked_has_partition_id - with self.assertRaises(TimeoutError): - invocation_service.invoke(invocation).result() + time.sleep(2) + self.assertFalse(invocation.future.done()) + self.assertEqual(1, len(invocation_service._pending)) diff --git a/tests/lifecycle_test.py b/tests/lifecycle_test.py index 3a5d4c0025..cbf48b027f 100644 --- a/tests/lifecycle_test.py +++ b/tests/lifecycle_test.py @@ -1,6 +1,5 @@ from hazelcast import ClientConfig -from hazelcast.lifecycle import LIFECYCLE_STATE_SHUTDOWN, LIFECYCLE_STATE_SHUTTING_DOWN, LIFECYCLE_STATE_CONNECTED, \ - LIFECYCLE_STATE_STARTING, LIFECYCLE_STATE_DISCONNECTED +from hazelcast.lifecycle import LifecycleState from tests.base import HazelcastTestCase from tests.util import configure_logging, event_collector @@ -17,46 +16,62 @@ def tearDown(self): self.shutdown_all_clients() self.rc.exit() - def test_lifecycle_listener(self): + def test_lifecycle_listener_receives_events_in_order(self): collector = event_collector() config = ClientConfig() - config.lifecycle_listeners = [collector] + config.cluster_name = self.cluster.id + config.lifecycle_listeners.append(collector) self.cluster.start_member() client = self.create_client(config) client.shutdown() - # noinspection PyUnresolvedReferences self.assertEqual(collector.events, - [LIFECYCLE_STATE_STARTING, LIFECYCLE_STATE_CONNECTED, LIFECYCLE_STATE_SHUTTING_DOWN, - LIFECYCLE_STATE_SHUTDOWN]) + [LifecycleState.STARTING, LifecycleState.STARTED, LifecycleState.CONNECTED, + LifecycleState.SHUTTING_DOWN, LifecycleState.DISCONNECTED, LifecycleState.SHUTDOWN]) + + def test_lifecycle_listener_receives_events_in_order_after_startup(self): + self.cluster.start_member() - def test_lifecycle_listener_disconnected(self): collector = event_collector() - member = self.cluster.start_member() - client = self.create_client() + config = ClientConfig() + config.cluster_name = self.cluster.id + client = self.create_client(config) + client.lifecycle_service.add_listener(collector) + client.shutdown() - client.lifecycle.add_listener(collector) + self.assertEqual(collector.events, + [LifecycleState.SHUTTING_DOWN, 
LifecycleState.DISCONNECTED, LifecycleState.SHUTDOWN]) - member.shutdown() + def test_lifecycle_listener_receives_disconnected_event(self): + member = self.cluster.start_member() - self.assertEqual(collector.events, [LIFECYCLE_STATE_DISCONNECTED]) + collector = event_collector() + config = ClientConfig() + config.cluster_name = self.cluster.id + client = self.create_client(config) + client.lifecycle_service.add_listener(collector) + member.shutdown() + self.assertEqual(collector.events, [LifecycleState.DISCONNECTED]) + client.shutdown() def test_remove_lifecycle_listener(self): collector = event_collector() self.cluster.start_member() - client = self.create_client() - id = client.lifecycle.add_listener(collector) - client.lifecycle.remove_listener(id) + config = ClientConfig() + config.cluster_name = self.cluster.id + client = self.create_client(config) + registration_id = client.lifecycle_service.add_listener(collector) + client.lifecycle_service.remove_listener(registration_id) client.shutdown() - # noinspection PyUnresolvedReferences self.assertEqual(collector.events, []) def test_exception_in_listener(self): - def listener(e): + def listener(_): raise RuntimeError("error") config = ClientConfig() + config.cluster_name = self.cluster.id config.lifecycle_listeners = [listener] self.cluster.start_member() self.create_client(config) diff --git a/tests/listener_test.py b/tests/listener_test.py index 3a9adb730e..96d0416166 100644 --- a/tests/listener_test.py +++ b/tests/listener_test.py @@ -1,5 +1,6 @@ from tests.base import HazelcastTestCase -from tests.util import configure_logging, random_string, event_collector, generate_key_owned_by_instance +from tests.util import configure_logging, random_string, event_collector, generate_key_owned_by_instance, \ + wait_for_partition_table from hazelcast.config import ClientConfig @@ -10,8 +11,8 @@ def setUp(self): self.cluster = self.create_cluster(self.rc, None) self.m1 = self.cluster.start_member() self.m2 = self.cluster.start_member() - self.m3 = self.cluster.start_member() self.client_config = ClientConfig() + self.client_config.cluster_name = self.cluster.id self.collector = event_collector() def tearDown(self): @@ -20,10 +21,11 @@ def tearDown(self): # -------------------------- test_remove_member ----------------------- # def test_smart_listener_remove_member(self): - self.client_config.network_config.smart_routing = True + self.client_config.network.smart_routing = True client = self.create_client(self.client_config) + wait_for_partition_table(client) + key_m1 = generate_key_owned_by_instance(client, self.m1.uuid) map = client.get_map(random_string()).blocking() - key_m1 = generate_key_owned_by_instance(client, self.m1.address) map.put(key_m1, 'value1') map.add_entry_listener(updated_func=self.collector) self.m1.shutdown() @@ -33,54 +35,46 @@ def assert_event(): self.assertEqual(1, len(self.collector.events)) self.assertTrueEventually(assert_event) - def test_non_smart_listener_remove_connected_member(self): - self.client_config.network_config.smart_routing = False + def test_non_smart_listener_remove_member(self): + self.client_config.network.smart_routing = False client = self.create_client(self.client_config) map = client.get_map(random_string()).blocking() map.add_entry_listener(added_func=self.collector) + self.m2.shutdown() + wait_for_partition_table(client) - owner_address = client.cluster.owner_connection_address - - # Test if listener re-registers properly when owner connection is removed. 
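The lifecycle_test.py hunks above capture the client API this change set targets: listeners are managed through client.lifecycle_service, events arrive as LifecycleState members rather than the old LIFECYCLE_STATE_* string constants, and each test now sets config.cluster_name from the test cluster id. A short usage sketch assembled only from names visible in these hunks; it assumes a reachable member and the default "dev" cluster name:

    import hazelcast
    from hazelcast import ClientConfig
    from hazelcast.lifecycle import LifecycleState


    def on_state_change(state):
        # States exercised by the tests above: STARTING, STARTED, CONNECTED,
        # SHUTTING_DOWN, DISCONNECTED, SHUTDOWN.
        print("lifecycle:", state)


    config = ClientConfig()
    config.cluster_name = "dev"  # assumption: the cluster uses the default name
    config.lifecycle_listeners.append(on_state_change)

    client = hazelcast.HazelcastClient(config)

    # Listeners can also be added and removed after startup.
    registration_id = client.lifecycle_service.add_listener(on_state_change)
    client.lifecycle_service.remove_listener(registration_id)

    client.shutdown()

Most of the remaining test hunks apply the same mechanical changes: cluster_name is set on every ClientConfig, and the renamed accessors (lifecycle_service, cluster_service, network, serialization, logger) replace their 3.x counterparts.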
- members = [self.m1, self.m2, self.m3] - for m in members: - if m.address == owner_address: - m.shutdown() - members.remove(m) - - # There are 2 members left. We execute a put operation to each of their partitions - # to test that non-smart listener works in both local and non-local cases. - for m in members: - generated_key = generate_key_owned_by_instance(client, m.address) - map.put(generated_key, 'value') + generated_key = generate_key_owned_by_instance(client, self.m1.uuid) + map.put(generated_key, 'value') def assert_event(): - self.assertEqual(2, len(self.collector.events)) + self.assertEqual(1, len(self.collector.events)) self.assertTrueEventually(assert_event) # -------------------------- test_add_member ----------------------- # def test_smart_listener_add_member(self): - self.client_config.network_config.smart_routing = True + self.client_config.network.smart_routing = True client = self.create_client(self.client_config) map = client.get_map(random_string()).blocking() map.add_entry_listener(added_func=self.collector) - m4 = self.cluster.start_member() - key_m4 = generate_key_owned_by_instance(client, m4.address) - map.put(key_m4, 'value') + m3 = self.cluster.start_member() + wait_for_partition_table(client) + key_m3 = generate_key_owned_by_instance(client, m3.uuid) + map.put(key_m3, 'value') def assert_event(): self.assertEqual(1, len(self.collector.events)) self.assertTrueEventually(assert_event) def test_non_smart_listener_add_member(self): - self.client_config.network_config.smart_routing = True + self.client_config.network.smart_routing = False client = self.create_client(self.client_config) map = client.get_map(random_string()).blocking() map.add_entry_listener(added_func=self.collector) - m4 = self.cluster.start_member() - key_m4 = generate_key_owned_by_instance(client, m4.address) - map.put(key_m4, 'value') + m3 = self.cluster.start_member() + wait_for_partition_table(client) + key_m3 = generate_key_owned_by_instance(client, m3.uuid) + map.put(key_m3, 'value') def assert_event(): self.assertEqual(1, len(self.collector.events)) - self.assertTrueEventually(assert_event) \ No newline at end of file + self.assertTrueEventually(assert_event) diff --git a/tests/logger/logger_test.py b/tests/logger/logger_test.py index 4a7e96797b..48ca91156b 100644 --- a/tests/logger/logger_test.py +++ b/tests/logger/logger_test.py @@ -31,7 +31,8 @@ def test_default_config(self): self.assertIsNone(logger_config.config_file) config = ClientConfig() - config.logger_config = logger_config + config.cluster_name = self.cluster.id + config.logger = logger_config client = HazelcastClient(config) self.assertEqual(logging.INFO, client.logger.level) @@ -68,7 +69,8 @@ def test_non_default_configuration_level(self): self.assertIsNone(logger_config.config_file) config = ClientConfig() - config.logger_config = logger_config + config.cluster_name = self.cluster.id + config.logger = logger_config client = HazelcastClient(config) self.assertEqual(logging.CRITICAL, client.logger.level) @@ -106,7 +108,8 @@ def test_simple_custom_logging_configuration(self): self.assertEqual(config_path, logger_config.config_file) config = ClientConfig() - config.logger_config = logger_config + config.cluster_name = self.cluster.id + config.logger = logger_config client = HazelcastClient(config) self.assertEqual(logging.ERROR, client.logger.getEffectiveLevel()) @@ -135,8 +138,10 @@ def test_simple_custom_logging_configuration(self): client.shutdown() def test_default_configuration_multiple_clients(self): - client1 = 
HazelcastClient() - client2 = HazelcastClient() + config = ClientConfig() + config.cluster_name = self.cluster.id + client1 = HazelcastClient(config) + client2 = HazelcastClient(config) out = StringIO() @@ -156,9 +161,10 @@ def test_default_configuration_multiple_clients(self): def test_same_custom_configuration_file_with_multiple_clients(self): config = ClientConfig() + config.cluster_name = self.cluster.id config_file = get_abs_path(self.CUR_DIR, "simple_config.json") - config.logger_config.configuration_file = config_file + config.logger.configuration_file = config_file client1 = HazelcastClient(config) client2 = HazelcastClient(config) @@ -179,7 +185,9 @@ def test_same_custom_configuration_file_with_multiple_clients(self): client2.shutdown() def test_default_logger_output(self): - client = HazelcastClient() + config = ClientConfig() + config.cluster_name = self.cluster.id + client = HazelcastClient(config) out = StringIO() @@ -204,9 +212,10 @@ def test_default_logger_output(self): def test_custom_configuration_output(self): config = ClientConfig() + config.cluster_name = self.cluster.id config_file = get_abs_path(self.CUR_DIR, "detailed_config.json") - config.logger_config.config_file = config_file + config.logger.config_file = config_file client = HazelcastClient(config) diff --git a/tests/near_cache_test.py b/tests/near_cache_test.py index 8b787d87bd..ef59b4b3ee 100644 --- a/tests/near_cache_test.py +++ b/tests/near_cache_test.py @@ -5,16 +5,13 @@ from hazelcast.config import NearCacheConfig from hazelcast.near_cache import * from hazelcast.serialization import SerializationServiceV1 -from tests.util import random_string -from hazelcast import six +from tests.util import random_string, configure_logging from hazelcast.six.moves import range class NearCacheTestCase(unittest.TestCase): def setUp(self): - logging.basicConfig(format='%(asctime)s%(msecs)03d [%(name)s] %(levelname)s: %(message)s', datefmt="%H:%M%:%S,") - logging.getLogger().setLevel(logging.DEBUG) - + configure_logging() self.service = SerializationServiceV1(serialization_config=SerializationConfig()) def tearDown(self): @@ -39,7 +36,6 @@ def test_near_cache_config(self): def test_DataRecord_expire_time(self): now = current_time() - six.print_(int(now), now) data_rec = DataRecord("key", "value", create_time=now, ttl_seconds=1) sleep(2) self.assertTrue(data_rec.is_expired(max_idle_seconds=1000)) diff --git a/tests/predicate_test.py b/tests/predicate_test.py index 8bbc1aa78f..6d569abbeb 100644 --- a/tests/predicate_test.py +++ b/tests/predicate_test.py @@ -1,12 +1,13 @@ from unittest import TestCase, skip +from hazelcast.config import IndexConfig from hazelcast.serialization.predicate import is_equal_to, and_, is_between, is_less_than, \ is_less_than_or_equal_to, is_greater_than, is_greater_than_or_equal_to, or_, is_not_equal_to, not_, is_like, \ is_ilike, matches_regex, sql, true, false, is_in, is_instance_of from hazelcast.serialization.api import Portable from tests.base import SingleMemberTestCase from tests.serialization.portable_test import InnerPortable, FACTORY_ID -from tests.util import random_string, set_attr +from tests.util import random_string from hazelcast import six from hazelcast.six.moves import range @@ -77,6 +78,11 @@ def test_false(self): class PredicateTest(SingleMemberTestCase): + @classmethod + def configure_client(cls, config): + config.cluster_name = cls.cluster.id + return config + def setUp(self): self.map = self.client.get_map(random_string()).blocking() @@ -219,8 +225,9 @@ def test_false(self): 
class PredicatePortableTest(SingleMemberTestCase): @classmethod def configure_client(cls, config): + config.cluster_name = cls.cluster.id the_factory = {InnerPortable.CLASS_ID: InnerPortable} - config.serialization_config.portable_factories[FACTORY_ID] = the_factory + config.serialization.portable_factories[FACTORY_ID] = the_factory return config def setUp(self): @@ -247,7 +254,6 @@ def test_predicate_portable_key(self): self.assertIn(k, map_keys) -@set_attr(category=3.08) class NestedPredicatePortableTest(SingleMemberTestCase): class Body(Portable): @@ -299,8 +305,9 @@ def __eq__(self, other): @classmethod def configure_client(cls, config): + config.cluster_name = cls.cluster.id factory = {1: NestedPredicatePortableTest.Body, 2: NestedPredicatePortableTest.Limb} - config.serialization_config.portable_factories[FACTORY_ID] = factory + config.serialization.portable_factories[FACTORY_ID] = factory return config def setUp(self): @@ -313,10 +320,12 @@ def tearDown(self): def test_adding_indexes(self): # single-attribute index - self.map.add_index("name", True) + single_index = IndexConfig(attributes=["name"]) + self.map.add_index(single_index) # nested-attribute index - self.map.add_index("limb.name", True) + nested_index = IndexConfig(attributes=["limb.name"]) + self.map.add_index(nested_index) def test_single_attribute_query_portable_predicates(self): predicate = is_equal_to("limb.name", "hand") diff --git a/tests/proxy/atomic_long_test.py b/tests/proxy/atomic_long_test.py deleted file mode 100644 index 8862702ac8..0000000000 --- a/tests/proxy/atomic_long_test.py +++ /dev/null @@ -1,87 +0,0 @@ -from hazelcast.exception import HazelcastSerializationError -from hazelcast.serialization.api import IdentifiedDataSerializable -from tests.base import SingleMemberTestCase -from tests.util import random_string - -FACTORY_ID = 1 - - -class Function(IdentifiedDataSerializable): - CLASS_ID = 1 - - def write_data(self, object_data_output): - pass - - def get_factory_id(self): - return FACTORY_ID - - def get_class_id(self): - return self.CLASS_ID - - -class AtomicLongTest(SingleMemberTestCase): - def setUp(self): - self.atomic_long = self.client.get_atomic_long(random_string()).blocking() - self.reference = object() - - def tearDown(self): - self.atomic_long.destroy() - - def test_add_and_get(self): - self.assertEqual(2, self.atomic_long.add_and_get(2)) - self.assertEqual(4, self.atomic_long.add_and_get(2)) - - def test_alter(self): - # TODO: Function must be defined on the server - with self.assertRaises(HazelcastSerializationError): - self.atomic_long.alter(Function()) - - def test_alter_and_get(self): - # TODO: Function must be defined on the server - with self.assertRaises(HazelcastSerializationError): - self.atomic_long.alter_and_get(Function()) - - def test_apply(self): - # TODO: Function must be defined on the server - with self.assertRaises(HazelcastSerializationError): - self.atomic_long.apply(Function()) - - def test_get_and_add(self): - self.assertEqual(0, self.atomic_long.get_and_add(2)) - self.assertEqual(2, self.atomic_long.get_and_add(2)) - - def test_compare_and_set_when_same(self): - self.assertTrue(self.atomic_long.compare_and_set(0, 2)) - - def test_compare_and_set_when_different(self): - self.assertFalse(self.atomic_long.compare_and_set(2, 3)) - - def test_decrement_and_get(self): - self.assertEqual(-1, self.atomic_long.decrement_and_get()) - self.assertEqual(-2, self.atomic_long.decrement_and_get()) - - def test_get_set(self): - self.assertIsNone(self.atomic_long.set(100)) - 
self.assertEqual(100, self.atomic_long.get()) - - def test_get_and_alter(self): - # TODO: Function must be declared on the server side - with self.assertRaises(HazelcastSerializationError): - self.atomic_long.get_and_alter(Function()) - - def test_get_and_set(self): - self.assertEqual(0, self.atomic_long.get_and_set(100)) - self.assertEqual(100, self.atomic_long.get_and_set(101)) - - def test_increment_and_get(self): - self.assertEqual(1, self.atomic_long.increment_and_get()) - self.assertEqual(2, self.atomic_long.increment_and_get()) - self.assertEqual(3, self.atomic_long.increment_and_get()) - - def test_get_and_increment(self): - self.assertEqual(0, self.atomic_long.get_and_increment()) - self.assertEqual(1, self.atomic_long.get_and_increment()) - self.assertEqual(2, self.atomic_long.get_and_increment()) - - def test_str(self): - self.assertTrue(str(self.atomic_long).startswith("AtomicLong")) diff --git a/tests/proxy/atomic_reference_test.py b/tests/proxy/atomic_reference_test.py deleted file mode 100644 index 411a308f47..0000000000 --- a/tests/proxy/atomic_reference_test.py +++ /dev/null @@ -1,70 +0,0 @@ -from hazelcast.exception import HazelcastSerializationError -from tests.base import SingleMemberTestCase -from tests.proxy.atomic_long_test import Function, FACTORY_ID -from tests.util import random_string - - -class AtomicReferenceTest(SingleMemberTestCase): - def setUp(self): - self.atomic_reference = self.client.get_atomic_reference(random_string()).blocking() - - def tearDown(self): - self.atomic_reference.destroy() - - def test_alter(self): - # TODO: Function must be defined on the server - with self.assertRaises(HazelcastSerializationError): - self.atomic_reference.alter(Function()) - - def test_alter_and_get(self): - # TODO: Function must be defined on the server - with self.assertRaises(HazelcastSerializationError): - self.atomic_reference.alter_and_get(Function()) - - def test_apply(self): - # TODO: Function must be defined on the server - with self.assertRaises(HazelcastSerializationError): - self.atomic_reference.apply(Function()) - - def test_compare_and_set(self): - self.assertTrue(self.atomic_reference.compare_and_set(None, "value")) - self.assertTrue(self.atomic_reference.compare_and_set("value", "new_value")) - - def test_compare_and_set_when_different(self): - self.assertFalse(self.atomic_reference.compare_and_set("value", "new_value")) - - def test_set_get(self): - self.assertIsNone(self.atomic_reference.set("value")) - self.assertEqual(self.atomic_reference.get(), "value") - - def test_get_and_set(self): - self.assertIsNone(self.atomic_reference.get_and_set("value")) - self.assertEqual("value", self.atomic_reference.get_and_set("new_value")) - - def test_set_and_get(self): - self.assertEqual("value", self.atomic_reference.set_and_get("value")) - self.assertEqual("new_value", self.atomic_reference.set_and_get("new_value")) - - def test_is_null_when_null(self): - self.assertTrue(self.atomic_reference.is_null()) - - def test_is_null_when_not_null(self): - self.atomic_reference.set("value") - self.assertFalse(self.atomic_reference.is_null()) - - def test_clear(self): - self.atomic_reference.set("value") - self.atomic_reference.clear() - - self.assertIsNone(self.atomic_reference.get()) - - def test_contains(self): - self.atomic_reference.set("value") - - self.assertTrue(self.atomic_reference.contains("value")) - - def test_contains_when_missing(self): - self.assertFalse(self.atomic_reference.contains("value")) - - def test_str(self): - 
self.assertTrue(str(self.atomic_reference).startswith("AtomicReference")) diff --git a/tests/proxy/count_down_latch_test.py b/tests/proxy/count_down_latch_test.py deleted file mode 100644 index 30d6722fda..0000000000 --- a/tests/proxy/count_down_latch_test.py +++ /dev/null @@ -1,35 +0,0 @@ -import threading -from time import sleep - -from tests.base import SingleMemberTestCase -from tests.util import random_string -from hazelcast import six -from hazelcast.six.moves import range - - -class CountDownLatchTest(SingleMemberTestCase): - def setUp(self): - self.latch = self.client.get_count_down_latch(random_string()).blocking() - - def test_latch(self): - self.latch.try_set_count(20) - - self.assertEqual(self.latch.get_count(), 20) - - def test_run(): - for i in range(0, 20): - self.latch.count_down() - sleep(0.06) - - _thread = threading.Thread(target=test_run) - _thread.start() - - if six.PY2: - six.exec_("""self.assertFalse(self.latch.await(1))""") - six.exec_("""self.assertTrue(self.latch.await(15))""") - else: - self.assertFalse(self.latch.await_latch(1)) - self.assertTrue(self.latch.await_latch(15)) - - def test_str(self): - self.assertTrue(str(self.latch).startswith("CountDownLatch")) diff --git a/tests/proxy/distributed_objects_test.py b/tests/proxy/distributed_objects_test.py index ab5ea6e21c..c59c64e398 100644 --- a/tests/proxy/distributed_objects_test.py +++ b/tests/proxy/distributed_objects_test.py @@ -4,7 +4,7 @@ from hazelcast.proxy import MAP_SERVICE from tests.base import SingleMemberTestCase from tests.util import event_collector -from hazelcast import six +from hazelcast import six, ClientConfig class DistributedObjectsTest(SingleMemberTestCase): @@ -12,6 +12,9 @@ class DistributedObjectsTest(SingleMemberTestCase): def setUpClass(cls): cls.rc = cls.create_rc() cls.cluster = cls.create_cluster(cls.rc, cls.configure_cluster()) + config = ClientConfig() + config.cluster_name = cls.cluster.id + cls.config = config @classmethod def tearDownClass(cls): @@ -19,7 +22,7 @@ def tearDownClass(cls): def setUp(self): self.member = self.cluster.start_member() - self.client = hazelcast.HazelcastClient() + self.client = hazelcast.HazelcastClient(self.config) def tearDown(self): self.client.shutdown() @@ -39,11 +42,12 @@ def test_get_distributed_objects_clears_destroyed_proxies(self): six.assertCountEqual(self, [m], self.client.get_distributed_objects()) - other_client = hazelcast.HazelcastClient() + other_client = hazelcast.HazelcastClient(self.config) other_clients_map = other_client.get_map("map") other_clients_map.destroy() six.assertCountEqual(self, [], self.client.get_distributed_objects()) + other_client.shutdown() def test_add_distributed_object_listener_object_created(self): collector = event_collector() diff --git a/tests/proxy/executor_test.py b/tests/proxy/executor_test.py index 73ac5d41a4..71e9e9b50c 100644 --- a/tests/proxy/executor_test.py +++ b/tests/proxy/executor_test.py @@ -1,55 +1,74 @@ -from hazelcast.exception import HazelcastSerializationError +import os + from hazelcast.serialization.api import IdentifiedDataSerializable from tests.base import SingleMemberTestCase from tests.util import random_string -FACTORY_ID = 1 - -class Task(IdentifiedDataSerializable): - CLASS_ID = 1 +class _AppendTask(IdentifiedDataSerializable): + """Client side version of com.hazelcast.client.test.executor.tasks.AppendCallable""" + def __init__(self, message): + self.message = message def write_data(self, object_data_output): - pass + object_data_output.write_utf(self.message) + + def 
read_data(self, object_data_input): + self.message = object_data_input.read_utf() def get_factory_id(self): - return FACTORY_ID + return 66 def get_class_id(self): - return self.CLASS_ID + return 5 + + +_APPENDAGE = ":CallableResult" # defined on the server side class ExecutorTest(SingleMemberTestCase): + @classmethod + def configure_client(cls, config): + config.cluster_name = cls.cluster.id + return config + + @classmethod + def configure_cluster(cls): + path = os.path.abspath(__file__) + dir_path = os.path.dirname(path) + with open(os.path.join(dir_path, "hazelcast.xml")) as f: + return f.read() + def setUp(self): self.executor = self.client.get_executor(random_string()).blocking() + self.message = random_string() + self.task = _AppendTask(self.message) + + def tearDown(self): + self.executor.shutdown() + self.executor.destroy() def test_execute_on_key_owner(self): - # TODO: Task must be defined on the server - with self.assertRaises(HazelcastSerializationError): - self.executor.execute_on_key_owner("key", Task()) + result = self.executor.execute_on_key_owner("key", self.task) + self.assertEqual(self.message + _APPENDAGE, result) def test_execute_on_member(self): - # TODO: Task must be defined on the server - with self.assertRaises(HazelcastSerializationError): - self.executor.execute_on_member(self.client.cluster.get_member_list()[0], Task()) - + member = self.client.cluster_service.get_members()[0] + result = self.executor.execute_on_member(member, self.task) + self.assertEqual(self.message + _APPENDAGE, result) + def test_execute_on_members(self): - # TODO: Task must be defined on the server - with self.assertRaises(HazelcastSerializationError): - self.executor.execute_on_members(self.client.cluster.get_member_list(), Task()) + members = self.client.cluster_service.get_members() + result = self.executor.execute_on_members(members, self.task) + self.assertEqual([self.message + _APPENDAGE], result) def test_execute_on_all_members(self): - # TODO: Task must be defined on the server - with self.assertRaises(HazelcastSerializationError): - self.executor.execute_on_all_members(Task()) + result = self.executor.execute_on_all_members(self.task) + self.assertEqual([self.message + _APPENDAGE], result) def test_shutdown(self): self.executor.shutdown() self.assertTrue(self.executor.is_shutdown()) - def tearDown(self): - self.executor.shutdown() - self.executor.destroy() - def test_str(self): self.assertTrue(str(self.executor).startswith("Executor")) diff --git a/tests/proxy/flake_id_generator_test.py b/tests/proxy/flake_id_generator_test.py index da7da296a7..ea0cdae98c 100644 --- a/tests/proxy/flake_id_generator_test.py +++ b/tests/proxy/flake_id_generator_test.py @@ -4,13 +4,13 @@ from tests.base import SingleMemberTestCase, HazelcastTestCase from tests.hzrc.ttypes import Lang -from tests.util import configure_logging, set_attr -from hazelcast.config import ClientConfig, FlakeIdGeneratorConfig, MAXIMUM_PREFETCH_COUNT +from tests.util import configure_logging +from hazelcast.config import ClientConfig, FlakeIdGeneratorConfig, _MAXIMUM_PREFETCH_COUNT from hazelcast.client import HazelcastClient from hazelcast.util import to_millis from hazelcast.proxy.flake_id_generator import _IdBatch, _Block, _AutoBatcher from hazelcast.future import ImmediateFuture -from hazelcast.exception import HazelcastError +from hazelcast.errors import HazelcastError FLAKE_ID_STEP = 1 << 16 SHORT_TERM_BATCH_SIZE = 3 @@ -20,7 +20,6 @@ AUTO_BATCHER_BASE = 10 -@set_attr(category=3.10) class 
FlakeIdGeneratorConfigTest(HazelcastTestCase): def setUp(self): self.flake_id_config = FlakeIdGeneratorConfig() @@ -48,13 +47,13 @@ def test_prefetch_count_should_be_positive(self): def test_prefetch_count_max_size(self): with self.assertRaises(ValueError): - self.flake_id_config.prefetch_count = MAXIMUM_PREFETCH_COUNT + 1 + self.flake_id_config.prefetch_count = _MAXIMUM_PREFETCH_COUNT + 1 -@set_attr(category=3.10) class FlakeIdGeneratorTest(SingleMemberTestCase): @classmethod def configure_client(cls, config): + config.cluster_name = cls.cluster.id flake_id_config = FlakeIdGeneratorConfig("short-term") flake_id_config.prefetch_count = SHORT_TERM_BATCH_SIZE flake_id_config.prefetch_validity_in_millis = to_millis(SHORT_TERM_VALIDITY_SECONDS) @@ -137,7 +136,6 @@ def test_ids_are_from_new_batch_after_batch_is_exhausted(self): flake_id_generator.destroy() -@set_attr(category=3.10) class FlakeIdGeneratorDataStructuresTest(HazelcastTestCase): def test_id_batch_as_iterator(self): base = 3 @@ -231,7 +229,6 @@ def _id_batch_supplier(self, batch_size): return ImmediateFuture(_IdBatch(AUTO_BATCHER_BASE, FLAKE_ID_STEP, batch_size)) -@set_attr(category=3.10) class FlakeIdGeneratorIdOutOfRangeTest(HazelcastTestCase): @classmethod def setUpClass(cls): @@ -251,7 +248,8 @@ def test_new_id_with_at_least_one_suitable_member(self): self.assertTrueEventually(lambda: response.success and response.result is not None) config = ClientConfig() - config.network_config.smart_routing = False + config.cluster_name = self.cluster.id + config.network.smart_routing = False client = HazelcastClient(config) generator = client.get_flake_id_generator("test").blocking() @@ -269,7 +267,9 @@ def test_new_id_fails_when_all_members_are_out_of_node_id_range(self): response2 = self._assign_out_of_range_node_id(self.cluster.id, 1) self.assertTrueEventually(lambda: response2.success and response2.result is not None) - client = HazelcastClient() + config = ClientConfig() + config.cluster_name = self.cluster.id + client = HazelcastClient(config) generator = client.get_flake_id_generator("test").blocking() with self.assertRaises(HazelcastError): diff --git a/tests/proxy/hazelcast.xml b/tests/proxy/hazelcast.xml index 1ee10321ee..83049a30f9 100644 --- a/tests/proxy/hazelcast.xml +++ b/tests/proxy/hazelcast.xml @@ -1,14 +1,10 @@ - - - dev - dev-pass - + false - http://localhost:8080/mancenter 5701 @@ -18,18 +14,6 @@ --> 0 - - - 224.7.7.7 - 54327 - - - 127.0.0.1 - - - 127.0.0.1 - - diff --git a/tests/proxy/hazelcast_crdtreplication_delayed.xml b/tests/proxy/hazelcast_crdtreplication_delayed.xml index acca417479..3d13192e64 100644 --- a/tests/proxy/hazelcast_crdtreplication_delayed.xml +++ b/tests/proxy/hazelcast_crdtreplication_delayed.xml @@ -14,9 +14,10 @@ ~ limitations under the License. --> - + 1000000 100000 diff --git a/tests/proxy/hazelcast_litemember.xml b/tests/proxy/hazelcast_litemember.xml index 8b6d61b225..ed54abf5c2 100644 --- a/tests/proxy/hazelcast_litemember.xml +++ b/tests/proxy/hazelcast_litemember.xml @@ -14,8 +14,9 @@ ~ limitations under the License. --> - + \ No newline at end of file diff --git a/tests/proxy/hazelcast_mapstore.xml b/tests/proxy/hazelcast_mapstore.xml index 25110a4334..388a8bc56c 100644 --- a/tests/proxy/hazelcast_mapstore.xml +++ b/tests/proxy/hazelcast_mapstore.xml @@ -14,9 +14,10 @@ ~ limitations under the License. 
--> - + com.hazelcast.client.test.SampleMapStore diff --git a/tests/proxy/id_generator_test.py b/tests/proxy/id_generator_test.py deleted file mode 100644 index 080c8b24a4..0000000000 --- a/tests/proxy/id_generator_test.py +++ /dev/null @@ -1,28 +0,0 @@ -from hazelcast.proxy.id_generator import BLOCK_SIZE -from tests.base import SingleMemberTestCase -from tests.util import random_string - - -class IdGeneratorTest(SingleMemberTestCase): - def setUp(self): - self.id_gen = self.client.get_id_generator(random_string()).blocking() - - def test_create_proxy(self): - self.assertTrue(self.id_gen) - - def test_init(self): - init = self.id_gen.init(10) - self.assertTrue(init) - - def test_new_id(self): - self.id_gen.init(10) - new_id = self.id_gen.new_id() - self.assertEqual(new_id, 11) - - def test_str(self): - self.assertTrue(str(self.id_gen).startswith("IdGenerator")) - - def test_new_block(self): - self.id_gen.init(BLOCK_SIZE - 1) - self.assertEqual(self.id_gen.new_id(), BLOCK_SIZE) - self.assertEqual(self.id_gen.new_id(), BLOCK_SIZE + 1) \ No newline at end of file diff --git a/tests/proxy/list_test.py b/tests/proxy/list_test.py index 1d7d7dd1d7..999c3dc8a8 100644 --- a/tests/proxy/list_test.py +++ b/tests/proxy/list_test.py @@ -8,6 +8,11 @@ class ListTest(SingleMemberTestCase): def setUp(self): self.list = self.client.get_list(random_string()).blocking() + @classmethod + def configure_client(cls, config): + config.cluster_name = cls.cluster.id + return config + def test_add_entry_listener_item_added(self): collector = event_collector() self.list.add_listener(include_value=False, item_added_func=collector) diff --git a/tests/proxy/lock_test.py b/tests/proxy/lock_test.py deleted file mode 100644 index 53f3a988ad..0000000000 --- a/tests/proxy/lock_test.py +++ /dev/null @@ -1,120 +0,0 @@ -import time -from threading import Event -from tests.base import SingleMemberTestCase -from tests.util import random_string, generate_key_owned_by_instance - - -class LockTest(SingleMemberTestCase): - def setUp(self): - self.lock = self.client.get_lock(random_string()).blocking() - - def tearDown(self): - self.lock.destroy() - - def test_lock(self): - self.lock.lock(10) - - e = Event() - - def lock(): - if not self.lock.try_lock(): - e.set() - - self.start_new_thread(lock) - self.assertSetEventually(e) - - def test_lock_with_lease(self): - self.lock.lock(lease_time=1) - - e = Event() - - def lock(): - if self.lock.try_lock(timeout=10): - e.set() - - self.start_new_thread(lock) - self.assertSetEventually(e) - - def test_is_locked_when_not_locked(self): - self.assertFalse(self.lock.is_locked()) - - def test_is_locked_when_locked(self): - self.lock.lock() - self.assertTrue(self.lock.is_locked()) - - def test_is_locked_when_locked_and_unlocked(self): - self.lock.lock() - self.lock.unlock() - self.assertFalse(self.lock.is_locked()) - - def test_is_locked_by_current_thread(self): - self.lock.lock() - self.assertTrue(self.lock.is_locked_by_current_thread()) - - def test_is_locked_by_current_thread_with_another_thread(self): - self.start_new_thread(lambda: self.lock.lock()) - - self.assertFalse(self.lock.is_locked_by_current_thread()) - - def test_get_remaining_lease_time(self): - self.lock.lock(10) - self.assertGreater(self.lock.get_remaining_lease_time(), 9000) - - def test_get_lock_count_single(self): - self.lock.lock() - - self.assertEqual(self.lock.get_lock_count(), 1) - - def test_get_lock_count_when_reentrant(self): - self.lock.lock() - self.lock.lock() - - self.assertEqual(self.lock.get_lock_count(), 2) - - def 
test_force_unlock(self): - t = self.start_new_thread(lambda: self.lock.lock()) - t.join() - - self.lock.force_unlock() - - self.assertFalse(self.lock.is_locked()) - - def test_str(self): - self.assertTrue(str(self.lock).startswith("Lock")) - - def test_key_owner_shutdowns_after_invocation_timeout(self): - key_owner = self.cluster.start_member() - - invocation_timeout_seconds = 1 - self.client.invoker.invocation_timeout = invocation_timeout_seconds - - key = generate_key_owned_by_instance(self.client, key_owner.address) - server_lock = self.client.get_lock(key).blocking() - server_lock.lock() - - e = Event() - - def lock_thread_func(): - lock = self.client.get_lock(key).blocking() - lock.lock() - lock.unlock() - e.set() - - self.start_new_thread(lock_thread_func) - - time.sleep(2 * invocation_timeout_seconds) - key_owner.shutdown() - - partition_id = self.client.partition_service.get_partition_id(key) - while not (self.client.partition_service.get_partition_owner(partition_id) == self.member.address): - time.sleep(0.1) - try: - self.assertTrue(server_lock.is_locked()) - self.assertTrue(server_lock.is_locked_by_current_thread()) - self.assertTrue(server_lock.try_lock()) - server_lock.unlock() - server_lock.unlock() - self.assertSetEventually(e) - finally: - # revert the invocation timeout change for other tests since client instance is only created once. - self.client.invoker.invocation_timeout = self.client.properties.INVOCATION_TIMEOUT_SECONDS.default_value diff --git a/tests/proxy/map_nearcache_test.py b/tests/proxy/map_nearcache_test.py index 3d93ca2b04..08022ee5f8 100644 --- a/tests/proxy/map_nearcache_test.py +++ b/tests/proxy/map_nearcache_test.py @@ -19,6 +19,8 @@ def configure_cluster(cls): @classmethod def configure_client(cls, config): + config.cluster_name = cls.cluster.id + near_cache_config = NearCacheConfig(random_string()) # near_cache_config.time_to_live_seconds = 1000 # near_cache_config.max_idle_seconds = 1000 @@ -26,7 +28,7 @@ def configure_client(cls, config): return super(MapTest, cls).configure_client(config) def setUp(self): - name = list(self.client.config.near_cache_configs.values())[0].name + name = list(self.client.config.near_caches.values())[0].name self.map = self.client.get_map(name).blocking() def tearDown(self): diff --git a/tests/proxy/map_test.py b/tests/proxy/map_test.py index 778112aa23..0d696eaece 100644 --- a/tests/proxy/map_test.py +++ b/tests/proxy/map_test.py @@ -1,11 +1,13 @@ import time import os -from hazelcast.exception import HazelcastError, HazelcastSerializationError + +from hazelcast.config import IndexConfig, INDEX_TYPE +from hazelcast.errors import HazelcastError from hazelcast.proxy.map import EntryEventType from hazelcast.serialization.api import IdentifiedDataSerializable from hazelcast.serialization.predicate import SqlPredicate from tests.base import SingleMemberTestCase -from tests.util import random_string, event_collector, fill_map, set_attr +from tests.util import random_string, event_collector, fill_map from hazelcast import six from hazelcast.six.moves import range @@ -40,8 +42,9 @@ def configure_cluster(cls): @classmethod def configure_client(cls, config): - config.serialization_config.add_data_serializable_factory(EntryProcessor.FACTORY_ID, - {EntryProcessor.CLASS_ID: EntryProcessor}) + config.cluster_name = cls.cluster.id + config.serialization.add_data_serializable_factory(EntryProcessor.FACTORY_ID, + {EntryProcessor.CLASS_ID: EntryProcessor}) return config def setUp(self): @@ -143,7 +146,20 @@ def assert_event(): 
self.assertTrueEventually(assert_event, 5) def test_add_index(self): - self.map.add_index("field", True) + ordered_index = IndexConfig("length", attributes=["this"]) + unordered_index = IndexConfig("length", INDEX_TYPE.HASH, ["this"]) + self.map.add_index(ordered_index) + self.map.add_index(unordered_index) + + def test_add_index_duplicate_fields(self): + config = IndexConfig("length", attributes=["this", "this"]) + with self.assertRaises(ValueError): + self.map.add_index(config) + + def test_add_index_invalid_attribute(self): + config = IndexConfig("length", attributes=["this.x."]) + with self.assertRaises(ValueError): + self.map.add_index(config) def test_clear(self): self._fill_map() @@ -273,15 +289,16 @@ def test_get_entry_view(self): self.assertEqual(entry_view.key, "key") self.assertEqual(entry_view.value, "new_value") + self.assertIsNotNone(entry_view.cost) self.assertIsNotNone(entry_view.creation_time) self.assertIsNotNone(entry_view.expiration_time) self.assertEqual(entry_view.hits, 2) - self.assertEqual(entry_view.version, 1) - self.assertEqual(entry_view.eviction_criteria_number, 0) self.assertIsNotNone(entry_view.last_access_time) self.assertIsNotNone(entry_view.last_stored_time) self.assertIsNotNone(entry_view.last_update_time) + self.assertEqual(entry_view.version, 1) self.assertIsNotNone(entry_view.ttl) + self.assertIsNotNone(entry_view.max_idle) def test_is_empty(self): self.map.put("key", "value") @@ -412,7 +429,7 @@ def test_remove_entry_listener_with_none_id(self): with self.assertRaises(AssertionError) as cm: self.map.remove_entry_listener(None) e = cm.exception - self.assertEqual(e.args[0],"None userRegistrationId is not allowed!") + self.assertEqual(e.args[0], "None user_registration_id is not allowed!") def test_replace(self): self.map.put("key", "value") @@ -437,7 +454,7 @@ def test_set(self): self.map.set("key", "value") self.assertEqual(self.map.get("key"), "value") - + def test_set_ttl(self): self.map.put("key", "value") self.map.set_ttl("key", 0.1) @@ -445,7 +462,7 @@ def test_set_ttl(self): def evicted(): self.assertFalse(self.map.contains_key("key")) - self.assertTrueEventually(evicted, 1) + self.assertTrueEventually(evicted, 5) def test_size(self): self._fill_map() @@ -506,12 +523,16 @@ def test_str(self): def _fill_map(self, count=10): map = {"key-%d" % x: "value-%d" % x for x in range(0, count)} - for k, v in six.iteritems(map): - self.map.put(k, v) + self.map.put_all(map) return map class MapStoreTest(SingleMemberTestCase): + @classmethod + def configure_client(cls, config): + config.cluster_name = cls.cluster.id + return config + @classmethod def configure_cluster(cls): path = os.path.abspath(__file__) @@ -566,7 +587,6 @@ def test_evict_all(self): self.map.evict_all() self.assertEqual(self.map.size(), 0) - @set_attr(category=3.11) def test_add_entry_listener_item_loaded(self): collector = event_collector() self.map.add_entry_listener(include_value=True, loaded_func=collector) diff --git a/tests/proxy/multi_map_test.py b/tests/proxy/multi_map_test.py index f427d7e983..1b3d8bd528 100644 --- a/tests/proxy/multi_map_test.py +++ b/tests/proxy/multi_map_test.py @@ -1,9 +1,8 @@ import time -from unittest import skip import itertools -from hazelcast.exception import HazelcastError +from hazelcast.errors import HazelcastError from hazelcast.proxy.map import EntryEventType from tests.base import SingleMemberTestCase from tests.util import random_string, event_collector @@ -12,6 +11,11 @@ class MultiMapTest(SingleMemberTestCase): + @classmethod + def 
configure_client(cls, config): + config.cluster_name = cls.cluster.id + return config + def setUp(self): self.multi_map = self.client.get_multi_map(random_string()).blocking() diff --git a/tests/proxy/pn_counter_test.py b/tests/proxy/pn_counter_test.py index f5c84d0a0f..73af90690e 100644 --- a/tests/proxy/pn_counter_test.py +++ b/tests/proxy/pn_counter_test.py @@ -1,13 +1,17 @@ import os from tests.base import SingleMemberTestCase, HazelcastTestCase -from tests.util import configure_logging, get_abs_path, set_attr -from hazelcast.exception import ConsistencyLostError, NoDataMemberInClusterError -from hazelcast import HazelcastClient +from tests.util import configure_logging, get_abs_path +from hazelcast.errors import ConsistencyLostError, NoDataMemberInClusterError +from hazelcast import HazelcastClient, ClientConfig -@set_attr(category=3.10) class PNCounterBasicTest(SingleMemberTestCase): + @classmethod + def configure_client(cls, config): + config.cluster_name = cls.cluster.id + return config + def setUp(self): self.pn_counter = self.client.get_pn_counter("pn-counter").blocking() @@ -59,7 +63,6 @@ def _check_pn_counter_method(self, return_value, expected_return_value, expected self.assertEqual(expected_get_value, get_value) -@set_attr(category=3.10) class PNCounterConsistencyTest(HazelcastTestCase): @classmethod def setUpClass(cls): @@ -68,23 +71,24 @@ def setUpClass(cls): def setUp(self): self.rc = self.create_rc() self.cluster = self.create_cluster(self.rc, self._configure_cluster()) - self.member1 = self.cluster.start_member() - self.member2 = self.cluster.start_member() - self.client = HazelcastClient() + self.cluster.start_member() + self.cluster.start_member() + config = ClientConfig() + config.cluster_name = self.cluster.id + self.client = HazelcastClient(config) self.pn_counter = self.client.get_pn_counter("pn-counter").blocking() def tearDown(self): - self.pn_counter.destroy() self.client.shutdown() + self.rc.terminateCluster(self.cluster.id) self.rc.exit() def test_consistency_lost_error_raised_when_target_terminates(self): self.pn_counter.add_and_get(3) replica_address = self.pn_counter._current_target_replica_address - member = self.client.cluster.get_member_by_address(replica_address) - self.rc.terminateMember(self.cluster.id, member.uuid) + self.rc.terminateMember(self.cluster.id, str(replica_address.uuid)) with self.assertRaises(ConsistencyLostError): self.pn_counter.add_and_get(5) @@ -92,9 +96,8 @@ def test_counter_can_continue_session_by_calling_reset(self): self.pn_counter.add_and_get(3) replica_address = self.pn_counter._current_target_replica_address - member = self.client.cluster.get_member_by_address(replica_address) - self.rc.terminateMember(self.cluster.id, member.uuid) + self.rc.terminateMember(self.cluster.id, str(replica_address.uuid)) self.pn_counter.reset() self.pn_counter.add_and_get(5) @@ -104,8 +107,12 @@ def _configure_cluster(self): return f.read() -@set_attr(category=3.10) class PNCounterLiteMemberTest(SingleMemberTestCase): + @classmethod + def configure_client(cls, config): + config.cluster_name = cls.cluster.id + return config + @classmethod def configure_cluster(cls): current_directory = os.path.dirname(__file__) diff --git a/tests/proxy/queue_test.py b/tests/proxy/queue_test.py index e0db4d9ce0..7dc0253d2d 100644 --- a/tests/proxy/queue_test.py +++ b/tests/proxy/queue_test.py @@ -8,6 +8,11 @@ class QueueTest(SingleMemberTestCase): + @classmethod + def configure_client(cls, config): + config.cluster_name = cls.cluster.id + return config + 
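# A minimal standalone sketch of the connection settings these tests keep repeating,
# assuming the 4.0-style client API used above (cluster_name, config.network,
# connection_strategy); "dev" and the address below are illustrative placeholders.
import hazelcast
from hazelcast import ClientConfig

config = ClientConfig()
config.cluster_name = "dev"  # placeholder; must match the members' cluster name
config.network.addresses.append("127.0.0.1:5701")  # placeholder member address
# Fail fast instead of retrying indefinitely when no member is reachable.
config.connection_strategy.connection_retry.cluster_connect_timeout = 5

client = hazelcast.HazelcastClient(config)
print(client.lifecycle_service.is_running())  # True once the client is connected
client.shutdown()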
@classmethod def configure_cluster(cls): path = os.path.abspath(__file__) diff --git a/tests/proxy/replicated_map_test.py b/tests/proxy/replicated_map_test.py index 405d162030..910866186a 100644 --- a/tests/proxy/replicated_map_test.py +++ b/tests/proxy/replicated_map_test.py @@ -9,6 +9,11 @@ class ReplicatedMapTest(SingleMemberTestCase): + @classmethod + def configure_client(cls, config): + config.cluster_name = cls.cluster.id + return config + def setUp(self): self.replicated_map = self.client.get_replicated_map(random_string()).blocking() diff --git a/tests/proxy/ringbuffer_test.py b/tests/proxy/ringbuffer_test.py index c476768c3b..512494c8ee 100644 --- a/tests/proxy/ringbuffer_test.py +++ b/tests/proxy/ringbuffer_test.py @@ -9,6 +9,11 @@ class RingBufferTest(SingleMemberTestCase): + @classmethod + def configure_client(cls, config): + config.cluster_name = cls.cluster.id + return config + @classmethod def configure_cluster(cls): path = os.path.abspath(__file__) diff --git a/tests/proxy/semaphore_test.py b/tests/proxy/semaphore_test.py deleted file mode 100644 index 6d9d1c21cc..0000000000 --- a/tests/proxy/semaphore_test.py +++ /dev/null @@ -1,46 +0,0 @@ -from tests.base import SingleMemberTestCase -from tests.util import random_string - - -class SemaphoreTest(SingleMemberTestCase): - def setUp(self): - self.semaphore = self.client.get_semaphore(random_string()).blocking() - - def test_init(self): - self.semaphore.init(10) - available_permits = self.semaphore.available_permits() - self.assertEqual(available_permits, 10) - - def test_acquire(self): - self.semaphore.init(10) - self.semaphore.acquire(10) - available_permits = self.semaphore.available_permits() - self.assertEqual(available_permits, 0) - - def test_drain(self): - self.semaphore.init(10) - self.semaphore.drain_permits() - available_permits = self.semaphore.available_permits() - self.assertEqual(available_permits, 0) - - def test_reduce_permits(self): - self.semaphore.init(10) - self.semaphore.reduce_permits(10) - available_permits = self.semaphore.available_permits() - self.assertEqual(available_permits, 0) - - def test_release(self): - self.semaphore.init(10) - self.semaphore.acquire(10) - self.semaphore.release(5) - available_permits = self.semaphore.available_permits() - self.assertEqual(available_permits, 5) - - def test_try_acquire(self): - self.semaphore.init(10) - self.semaphore.try_acquire(10) - available_permits = self.semaphore.available_permits() - self.assertEqual(available_permits, 0) - - def test_str(self): - self.assertTrue(str(self.semaphore).startswith("Semaphore")) diff --git a/tests/proxy/set_test.py b/tests/proxy/set_test.py index 1b051e4907..18e93aa2fa 100644 --- a/tests/proxy/set_test.py +++ b/tests/proxy/set_test.py @@ -8,6 +8,11 @@ class SetTest(SingleMemberTestCase): def setUp(self): self.set = self.client.get_set(random_string()).blocking() + @classmethod + def configure_client(cls, config): + config.cluster_name = cls.cluster.id + return config + def test_add_entry_listener_item_added(self): collector = event_collector() self.set.add_listener(include_value=False, item_added_func=collector) diff --git a/tests/proxy/topic_test.py b/tests/proxy/topic_test.py index d3931909bf..d4c90f0d95 100644 --- a/tests/proxy/topic_test.py +++ b/tests/proxy/topic_test.py @@ -3,6 +3,11 @@ class TopicTest(SingleMemberTestCase): + @classmethod + def configure_client(cls, config): + config.cluster_name = cls.cluster.id + return config + def setUp(self): self.topic = self.client.get_topic(random_string()).blocking() diff 
--git a/tests/proxy/transactional_list_test.py b/tests/proxy/transactional_list_test.py index 42c160adea..3ec99ab760 100644 --- a/tests/proxy/transactional_list_test.py +++ b/tests/proxy/transactional_list_test.py @@ -3,6 +3,11 @@ class TransactionalListTest(SingleMemberTestCase): + @classmethod + def configure_client(cls, config): + config.cluster_name = cls.cluster.id + return config + def setUp(self): self.list = self.client.get_list(random_string()).blocking() diff --git a/tests/proxy/transactional_map_test.py b/tests/proxy/transactional_map_test.py index 2c98c72f73..da4263f9bb 100644 --- a/tests/proxy/transactional_map_test.py +++ b/tests/proxy/transactional_map_test.py @@ -5,6 +5,11 @@ class TransactionalMapTest(SingleMemberTestCase): + @classmethod + def configure_client(cls, config): + config.cluster_name = cls.cluster.id + return config + def setUp(self): self.map = self.client.get_map(random_string()).blocking() diff --git a/tests/proxy/transactional_multi_map_test.py b/tests/proxy/transactional_multi_map_test.py index d4dff69af9..5737d59b8b 100644 --- a/tests/proxy/transactional_multi_map_test.py +++ b/tests/proxy/transactional_multi_map_test.py @@ -4,6 +4,11 @@ class TransactionalMultiMapTest(SingleMemberTestCase): + @classmethod + def configure_client(cls, config): + config.cluster_name = cls.cluster.id + return config + def setUp(self): self.multi_map = self.client.get_multi_map(random_string()).blocking() diff --git a/tests/proxy/transactional_queue_test.py b/tests/proxy/transactional_queue_test.py index 3ac8e62ec7..558f41e8d5 100644 --- a/tests/proxy/transactional_queue_test.py +++ b/tests/proxy/transactional_queue_test.py @@ -7,6 +7,11 @@ class TransactionalQueueTest(SingleMemberTestCase): + @classmethod + def configure_client(cls, config): + config.cluster_name = cls.cluster.id + return config + @classmethod def configure_cluster(cls): path = os.path.abspath(__file__) diff --git a/tests/proxy/transactional_set_test.py b/tests/proxy/transactional_set_test.py index 3c09b71d37..407264db92 100644 --- a/tests/proxy/transactional_set_test.py +++ b/tests/proxy/transactional_set_test.py @@ -3,6 +3,11 @@ class TransactionalSetTest(SingleMemberTestCase): + @classmethod + def configure_client(cls, config): + config.cluster_name = cls.cluster.id + return config + def setUp(self): configure_logging() self.set = self.client.get_set(random_string()).blocking() diff --git a/tests/reconnect_test.py b/tests/reconnect_test.py index 4edfb508c2..e6bb973fe5 100644 --- a/tests/reconnect_test.py +++ b/tests/reconnect_test.py @@ -2,8 +2,8 @@ from time import sleep from hazelcast import ClientConfig -from hazelcast.exception import HazelcastError, TargetDisconnectedError -from hazelcast.lifecycle import LIFECYCLE_STATE_DISCONNECTED, LIFECYCLE_STATE_CONNECTED +from hazelcast.errors import HazelcastError, TargetDisconnectedError +from hazelcast.lifecycle import LifecycleState from hazelcast.util import AtomicInteger from tests.base import HazelcastTestCase from tests.util import configure_logging, event_collector @@ -23,11 +23,10 @@ def tearDown(self): def test_start_client_with_no_member(self): config = ClientConfig() - config.network_config.addresses.append("127.0.0.1:5701") - config.network_config.addresses.append("127.0.0.1:5702") - config.network_config.addresses.append("127.0.0.1:5703") - config.network_config.connection_attempt_limit = 2 - config.network_config.connection_attempt_period = 0.1 + config.network.addresses.append("127.0.0.1:5701") + 
config.network.addresses.append("127.0.0.1:5702") + config.network.addresses.append("127.0.0.1:5703") + config.connection_strategy.connection_retry.cluster_connect_timeout = 2 with self.assertRaises(HazelcastError): self.create_client(config) @@ -35,14 +34,16 @@ def test_start_client_before_member(self): t = Thread(target=self.cluster.start_member) t.start() config = ClientConfig() - config.network_config.connection_attempt_limit = 10 + config.cluster_name = self.cluster.id + config.connection_strategy.connection_retry.cluster_connect_timeout = 5 self.create_client(config) t.join() def test_restart_member(self): member = self.cluster.start_member() config = ClientConfig() - config.network_config.connection_attempt_limit = 10 + config.cluster_name = self.cluster.id + config.connection_strategy.connection_retry.cluster_connect_timeout = 5 client = self.create_client(config) state = [None] @@ -50,17 +51,18 @@ def test_restart_member(self): def listener(s): state[0] = s - client.lifecycle.add_listener(listener) + client.lifecycle_service.add_listener(listener) member.shutdown() - self.assertTrueEventually(lambda: self.assertEqual(state[0], LIFECYCLE_STATE_DISCONNECTED)) + self.assertTrueEventually(lambda: self.assertEqual(state[0], LifecycleState.DISCONNECTED)) self.cluster.start_member() - self.assertTrueEventually(lambda: self.assertEqual(state[0], LIFECYCLE_STATE_CONNECTED)) + self.assertTrueEventually(lambda: self.assertEqual(state[0], LifecycleState.CONNECTED)) def test_listener_re_register(self): member = self.cluster.start_member() config = ClientConfig() - config.network_config.connection_attempt_limit = 10 + config.cluster_name = self.cluster.id + config.connection_strategy.connection_retry.cluster_connect_timeout = 5 client = self.create_client(config) map = client.get_map("map") @@ -75,7 +77,7 @@ def test_listener_re_register(self): count = AtomicInteger() def assert_events(): - if client.lifecycle.is_live: + if client.lifecycle_service.is_running(): while True: try: map.put("key-%d" % count.get_and_increment(), "value").result() @@ -91,30 +93,34 @@ def assert_events(): def test_member_list_after_reconnect(self): old_member = self.cluster.start_member() config = ClientConfig() - config.network_config.connection_attempt_limit = 10 + config.cluster_name = self.cluster.id + config.connection_strategy.connection_retry.cluster_connect_timeout = 5 client = self.create_client(config) old_member.shutdown() new_member = self.cluster.start_member() def assert_member_list(): - self.assertEqual(1, client.cluster.size()) - self.assertEqual(new_member.uuid, client.cluster.get_member_list()[0].uuid) + members = client.cluster_service.get_members() + self.assertEqual(1, len(members)) + self.assertEqual(new_member.uuid, str(members[0].uuid)) self.assertTrueEventually(assert_member_list) def test_reconnect_toNewNode_ViaLastMemberList(self): old_member = self.cluster.start_member() config = ClientConfig() - config.network_config.addresses.append("127.0.0.1:5701") - config.network_config.smart_routing = False - config.network_config.connection_attempt_limit = 100 + config.cluster_name = self.cluster.id + config.network.addresses.append("127.0.0.1:5701") + config.network.smart_routing = False + config.connection_strategy.connection_retry.cluster_connect_timeout = 10 client = self.create_client(config) new_member = self.cluster.start_member() old_member.shutdown() def assert_member_list(): - self.assertEqual(1, client.cluster.size()) - self.assertEqual(new_member.uuid, 
client.cluster.get_member_list()[0].uuid) + members = client.cluster_service.get_members() + self.assertEqual(1, len(members)) + self.assertEqual(new_member.uuid, str(members[0].uuid)) self.assertTrueEventually(assert_member_list) diff --git a/tests/serialization/identified_test.py b/tests/serialization/identified_test.py index eaa13967aa..19c1364800 100644 --- a/tests/serialization/identified_test.py +++ b/tests/serialization/identified_test.py @@ -3,7 +3,6 @@ import hazelcast from hazelcast.serialization import SerializationServiceV1 from hazelcast.serialization.api import IdentifiedDataSerializable -from hazelcast.config import ClientProperties FACTORY_ID = 1 @@ -105,14 +104,10 @@ def __eq__(self, other): def create_identified(): - return SerializationV1Identified(99, True, 'c', 11, 1234134, 1341431221, 1.0, 2.0, [1, 2, 3], [True, False, True], - ['a', 'b', 'c'], [1, 2, 3], [4, 2, 3], [11, 2, 3], [1.0, 2.0, 3.0], [11.0, 22.0, 33.0], - "the string text", ["item1", "item2", "item3"]) - -def create_identified_with_bytearray(): - return SerializationV1Identified(99, True, 'c', 11, 1234134, 1341431221, 1.0, 2.0, bytes([1, 2, 3]), [True, False, True], - ['a', 'b', 'c'], [1, 2, 3], [4, 2, 3], [11, 2, 3], [1.0, 2.0, 3.0], [11.0, 22.0, 33.0], - "the string text", ["item1", "item2", "item3"]) + return SerializationV1Identified(99, True, 'c', 11, 1234134, 1341431221, 1.0, 2.0, bytearray([1, 2, 3]), + [True, False, True], ['a', 'b', 'c'], [1, 2, 3], [4, 2, 3], [11, 2, 3], + [1.0, 2.0, 3.0], [11.0, 22.0, 33.0], "the string text", + ["item1", "item2", "item3"]) the_factory = {SerializationV1Identified.CLASS_ID: SerializationV1Identified} @@ -122,26 +117,10 @@ class IdentifiedSerializationTestCase(unittest.TestCase): def test_encode_decode(self): config = hazelcast.ClientConfig() - config.serialization_config.data_serializable_factories[FACTORY_ID] = the_factory - service = SerializationServiceV1(config.serialization_config) + config.serialization.data_serializable_factories[FACTORY_ID] = the_factory + service = SerializationServiceV1(config.serialization) obj = create_identified() data = service.to_data(obj) obj2 = service.to_object(data) self.assertTrue(obj == obj2) - - def test_encode_decode_respect_bytearray_fields(self): - config = hazelcast.ClientConfig() - config.set_property("hazelcast.serialization.input.returns.bytearray", True) - config.serialization_config.data_serializable_factories[FACTORY_ID] = the_factory - service = SerializationServiceV1(config.serialization_config, properties=ClientProperties(config.get_properties())) - obj = create_identified_with_bytearray() - data = service.to_data(obj) - - obj2 = service.to_object(data) - self.assertTrue(obj == obj2) - - service = SerializationServiceV1(config.serialization_config) - - obj2 = service.to_object(data) - self.assertFalse(obj == obj2) diff --git a/tests/serialization/int_serialization_test.py b/tests/serialization/int_serialization_test.py index 367e292302..adf9ed7242 100644 --- a/tests/serialization/int_serialization_test.py +++ b/tests/serialization/int_serialization_test.py @@ -1,7 +1,7 @@ import unittest from hazelcast.config import SerializationConfig, INTEGER_TYPE -from hazelcast.exception import HazelcastSerializationError +from hazelcast.errors import HazelcastSerializationError from hazelcast.serialization.serialization_const import CONSTANT_TYPE_BYTE, CONSTANT_TYPE_SHORT, CONSTANT_TYPE_INTEGER, \ CONSTANT_TYPE_LONG from hazelcast.serialization.service import SerializationServiceV1 diff --git 
a/tests/serialization/portable_test.py b/tests/serialization/portable_test.py index 550579bb5b..7b379049dd 100644 --- a/tests/serialization/portable_test.py +++ b/tests/serialization/portable_test.py @@ -1,13 +1,12 @@ import unittest import hazelcast -from hazelcast.exception import HazelcastSerializationError +from hazelcast.errors import HazelcastSerializationError from hazelcast.serialization import SerializationServiceV1 from hazelcast.serialization.api import Portable from hazelcast.serialization.portable.classdef import ClassDefinitionBuilder from tests.serialization.identified_test import create_identified, SerializationV1Identified from hazelcast import six -from hazelcast.config import ClientProperties if not six.PY2: long = int @@ -220,8 +219,8 @@ def create_portable(): identified = create_identified() inner_portable = InnerPortable("Inner Text", 666) long_var = long("1341431221l") if six.PY2 else 1341431221 - return SerializationV1Portable(99, True, 'c', 11, 1234134, long_var, 1.0, 2.0, [1, 2, 3], [True, False, True], - ['a', 'b', 'c'], + return SerializationV1Portable(99, True, 'c', 11, 1234134, long_var, 1.0, 2.0, bytearray([1, 2, 3]), + [True, False, True], ['a', 'b', 'c'], [1, 2, 3], [4, 2, 3], [11, 2, 3], [1.0, 2.0, 3.0], [11.0, 22.0, 33.0], "the string text", ["item1", "item2", "item3"], inner_portable, @@ -235,8 +234,8 @@ def create_portable(): class PortableSerializationTestCase(unittest.TestCase): def test_encode_decode(self): config = hazelcast.ClientConfig() - config.serialization_config.portable_factories[FACTORY_ID] = the_factory - service = SerializationServiceV1(config.serialization_config) + config.serialization.portable_factories[FACTORY_ID] = the_factory + service = SerializationServiceV1(config.serialization) obj = create_portable() self.assertTrue(obj.inner_portable) @@ -247,9 +246,9 @@ def test_encode_decode(self): def test_encode_decode_2(self): config = hazelcast.ClientConfig() - config.serialization_config.portable_factories[FACTORY_ID] = the_factory - service = SerializationServiceV1(config.serialization_config) - service2 = SerializationServiceV1(config.serialization_config) + config.serialization.portable_factories[FACTORY_ID] = the_factory + service = SerializationServiceV1(config.serialization) + service2 = SerializationServiceV1(config.serialization) obj = create_portable() self.assertTrue(obj.inner_portable) @@ -259,8 +258,8 @@ def test_encode_decode_2(self): def test_portable_context(self): config = hazelcast.ClientConfig() - config.serialization_config.portable_factories[FACTORY_ID] = the_factory - service = SerializationServiceV1(config.serialization_config) + config.serialization.portable_factories[FACTORY_ID] = the_factory + service = SerializationServiceV1(config.serialization) obj = create_portable() self.assertTrue(obj.inner_portable) @@ -271,11 +270,11 @@ def test_portable_context(self): def test_portable_null_fields(self): config = hazelcast.ClientConfig() - config.serialization_config.portable_factories[FACTORY_ID] = the_factory - service = SerializationServiceV1(config.serialization_config) + config.serialization.portable_factories[FACTORY_ID] = the_factory + service = SerializationServiceV1(config.serialization) service.to_data(create_portable()) - service2 = SerializationServiceV1(config.serialization_config) + service2 = SerializationServiceV1(config.serialization) obj = SerializationV1Portable() data = service.to_data(obj) @@ -313,13 +312,13 @@ def test_portable_class_def(self): class_def = builder.build() config = 
hazelcast.ClientConfig() - config.serialization_config.portable_factories[FACTORY_ID] = the_factory + config.serialization.portable_factories[FACTORY_ID] = the_factory - config.serialization_config.class_definitions.add(class_def) - config.serialization_config.class_definitions.add(class_def_inner) + config.serialization.class_definitions.add(class_def) + config.serialization.class_definitions.add(class_def_inner) - service = SerializationServiceV1(config.serialization_config) - service2 = SerializationServiceV1(config.serialization_config) + service = SerializationServiceV1(config.serialization) + service2 = SerializationServiceV1(config.serialization) obj = SerializationV1Portable() data = service.to_data(obj) @@ -328,8 +327,8 @@ def test_portable_class_def(self): def test_portable_read_without_factory(self): config = hazelcast.ClientConfig() - config.serialization_config.portable_factories[FACTORY_ID] = the_factory - service = SerializationServiceV1(config.serialization_config) + config.serialization.portable_factories[FACTORY_ID] = the_factory + service = SerializationServiceV1(config.serialization) service2 = SerializationServiceV1(hazelcast.SerializationConfig()) obj = create_portable() self.assertTrue(obj.inner_portable) diff --git a/tests/serialization/serializers_test.py b/tests/serialization/serializers_test.py index 66995bce02..78e447fc12 100644 --- a/tests/serialization/serializers_test.py +++ b/tests/serialization/serializers_test.py @@ -11,6 +11,11 @@ class SerializersTestCase(SingleMemberTestCase): + @classmethod + def configure_client(cls, config): + config.cluster_name = cls.cluster.id + return config + def setUp(self): config = SerializationConfig() config.default_integer_type = INTEGER_TYPE.BIG_INT diff --git a/tests/serialization/string_test.py b/tests/serialization/string_test.py index f0f251636a..60c4b668b0 100644 --- a/tests/serialization/string_test.py +++ b/tests/serialization/string_test.py @@ -1,6 +1,5 @@ # coding=utf-8 import binascii -import struct import unittest from hazelcast.config import SerializationConfig @@ -18,13 +17,15 @@ TEST_DATA_BYTES_ALL = TEST_DATA_ALL.encode("utf8") -def to_data_byte(inp, length): +def to_data_byte(inp): + encoded_data = inp.encode("utf8") + # 4 byte partition hashcode - 4 byte of type id - 4 byte string length bf = bytearray(12) - struct.pack_into(FMT_BE_INT, bf, 0, 0) - struct.pack_into(FMT_BE_INT, bf, 4, CONSTANT_TYPE_STRING) - struct.pack_into(FMT_BE_INT, bf, 8, length) - return bf + bytearray(inp.encode("utf-8")) + BE_INT.pack_into(bf, 0, 0) + BE_INT.pack_into(bf, 4, CONSTANT_TYPE_STRING) + BE_INT.pack_into(bf, 8, len(encoded_data)) + return bf + encoded_data class StringSerializationTestCase(unittest.TestCase): @@ -32,27 +33,27 @@ def setUp(self): self.service = SerializationServiceV1(serialization_config=SerializationConfig()) def test_ascii_encode(self): - data_byte = to_data_byte(TEST_DATA_ASCII, len(TEST_DATA_ASCII)) + data_byte = to_data_byte(TEST_DATA_ASCII) expected = binascii.hexlify(data_byte) data = self.service.to_data(TEST_DATA_ASCII) actual = binascii.hexlify(data._buffer) self.assertEqual(expected, actual) def test_ascii_decode(self): - data_byte = to_data_byte(TEST_DATA_ASCII, len(TEST_DATA_ASCII)) + data_byte = to_data_byte(TEST_DATA_ASCII) data = Data(buff=data_byte) actual_ascii = self.service.to_object(data) self.assertEqual(TEST_DATA_ASCII, actual_ascii) def test_utf8_encode(self): - data_byte = to_data_byte(TEST_DATA_ALL, len(TEST_DATA_ALL)) + data_byte = to_data_byte(TEST_DATA_ALL) expected = 
binascii.hexlify(data_byte) data = self.service.to_data(TEST_DATA_ALL) actual = binascii.hexlify(data._buffer) self.assertEqual(expected, actual) def test_utf8_decode(self): - data_byte = to_data_byte(TEST_DATA_ALL, len(TEST_DATA_ALL)) + data_byte = to_data_byte(TEST_DATA_ALL) data = Data(buff=data_byte) actual_ascii = self.service.to_object(data) self.assertEqual(TEST_DATA_ALL, actual_ascii) diff --git a/tests/shutdown_test.py b/tests/shutdown_test.py index 36098dd323..932db424cc 100644 --- a/tests/shutdown_test.py +++ b/tests/shutdown_test.py @@ -1,28 +1,57 @@ +import threading + from hazelcast import ClientConfig -from hazelcast.exception import HazelcastClientNotActiveException +from hazelcast.errors import HazelcastClientNotActiveError from tests.base import HazelcastTestCase -from tests.util import configure_logging class ShutdownTest(HazelcastTestCase): rc = None def setUp(self): - configure_logging() self.rc = self.create_rc() self.cluster = self.create_cluster(self.rc) def tearDown(self): self.shutdown_all_clients() + self.rc.terminateCluster(self.cluster.id) self.rc.exit() def test_shutdown_not_hang_on_member_closed(self): config = ClientConfig() + config.cluster_name = self.cluster.id + config.connection_strategy.connection_retry.cluster_connect_timeout = 5 member = self.cluster.start_member() client = self.create_client(config) my_map = client.get_map("test") my_map.put("key", "value").result() member.shutdown() - with self.assertRaises(HazelcastClientNotActiveException): + with self.assertRaises(HazelcastClientNotActiveError): while True: my_map.get("key").result() + + def test_invocations_finalised_when_client_shutdowns(self): + self.cluster.start_member() + config = ClientConfig() + config.cluster_name = self.cluster.id + client = self.create_client(config) + m = client.get_map("test") + m.put("key", "value").result() + + def run(): + for _ in range(1000): + try: + m.get("key").result() + except: + pass + + threads = [] + for _ in range(10): + t = threading.Thread(target=run) + threads.append(t) + t.start() + + client.shutdown() + + for i in range(10): + threads[i].join(5) diff --git a/tests/smart_listener_test.py b/tests/smart_listener_test.py index d4bd41c73a..289515e0b4 100644 --- a/tests/smart_listener_test.py +++ b/tests/smart_listener_test.py @@ -20,7 +20,8 @@ def tearDownClass(cls): def setUp(self): client_config = ClientConfig() - client_config.network_config.smart_routing = True + client_config.cluster_name = self.cluster.id + client_config.network.smart_routing = True self.client = self.create_client(client_config) self.collector = event_collector() diff --git a/tests/ssl/hazelcast-default-ca.xml b/tests/ssl/hazelcast-default-ca.xml index 643f0501f3..7336b24318 100644 --- a/tests/ssl/hazelcast-default-ca.xml +++ b/tests/ssl/hazelcast-default-ca.xml @@ -1,47 +1,8 @@ - - - - - - dev - dev-pass - - http://localhost:8080/mancenter + - 5701 - - - 0 - - - - 224.7.7.7 - 54327 - - - 127.0.0.1 - - - 127.0.0.1 com.hazelcast.nio.ssl.ClasspathSSLContextFactory @@ -54,5 +15,4 @@ - \ No newline at end of file diff --git a/tests/ssl/hazelcast-ma-optional.xml b/tests/ssl/hazelcast-ma-optional.xml index 5d1813f966..e02fecd1b7 100644 --- a/tests/ssl/hazelcast-ma-optional.xml +++ b/tests/ssl/hazelcast-ma-optional.xml @@ -1,47 +1,8 @@ - - - - - - dev - dev-pass - - http://localhost:8080/mancenter + - 5701 - - - 0 - - - - 224.7.7.7 - 54327 - - - 127.0.0.1 - - - 127.0.0.1 com.hazelcast.nio.ssl.ClasspathSSLContextFactory diff --git a/tests/ssl/hazelcast-ma-required.xml 
b/tests/ssl/hazelcast-ma-required.xml index 0441f0099b..fa56145305 100644 --- a/tests/ssl/hazelcast-ma-required.xml +++ b/tests/ssl/hazelcast-ma-required.xml @@ -1,47 +1,8 @@ - - - - - - dev - dev-pass - - http://localhost:8080/mancenter + - 5701 - - - 0 - - - - 224.7.7.7 - 54327 - - - 127.0.0.1 - - - 127.0.0.1 com.hazelcast.nio.ssl.ClasspathSSLContextFactory diff --git a/tests/ssl/hazelcast-ssl.xml b/tests/ssl/hazelcast-ssl.xml index b35c4b980b..0207d58456 100644 --- a/tests/ssl/hazelcast-ssl.xml +++ b/tests/ssl/hazelcast-ssl.xml @@ -1,47 +1,8 @@ - - - - - - dev - dev-pass - - http://localhost:8080/mancenter + - 5701 - - - 0 - - - - 224.7.7.7 - 54327 - - - 127.0.0.1 - - - 127.0.0.1 com.hazelcast.nio.ssl.ClasspathSSLContextFactory diff --git a/tests/ssl/mutual_authentication_test.py b/tests/ssl/mutual_authentication_test.py index da5faaaf2e..5bf98e5107 100644 --- a/tests/ssl/mutual_authentication_test.py +++ b/tests/ssl/mutual_authentication_test.py @@ -3,11 +3,11 @@ from tests.base import HazelcastTestCase from hazelcast.client import HazelcastClient from hazelcast.config import PROTOCOL -from hazelcast.exception import HazelcastError +from hazelcast.errors import HazelcastError from tests.util import get_ssl_config, configure_logging, get_abs_path, set_attr -@set_attr(category=3.08, enterprise=True) +@set_attr(enterprise=True) class MutualAuthenticationTest(HazelcastTestCase): current_directory = os.path.dirname(__file__) rc = None @@ -27,110 +27,110 @@ def tearDown(self): def test_ma_required_client_and_server_authenticated(self): cluster = self.create_cluster(self.rc, self.configure_cluster(True)) - member = cluster.start_member() - client = HazelcastClient(get_ssl_config(True, + cluster.start_member() + client = HazelcastClient(get_ssl_config(cluster.id, True, get_abs_path(self.current_directory, "server1-cert.pem"), get_abs_path(self.current_directory, "client1-cert.pem"), get_abs_path(self.current_directory, "client1-key.pem"), protocol=PROTOCOL.TLSv1)) - self.assertTrue(client.lifecycle.is_live) + self.assertTrue(client.lifecycle_service.is_running()) client.shutdown() def test_ma_required_server_not_authenticated(self): cluster = self.create_cluster(self.rc, self.configure_cluster(True)) - member = cluster.start_member() + cluster.start_member() with self.assertRaises(HazelcastError): - client = HazelcastClient(get_ssl_config(True, - get_abs_path(self.current_directory, "server2-cert.pem"), - get_abs_path(self.current_directory, "client1-cert.pem"), - get_abs_path(self.current_directory, "client1-key.pem"), - protocol=PROTOCOL.TLSv1)) + HazelcastClient(get_ssl_config(cluster.id, True, + get_abs_path(self.current_directory, "server2-cert.pem"), + get_abs_path(self.current_directory, "client1-cert.pem"), + get_abs_path(self.current_directory, "client1-key.pem"), + protocol=PROTOCOL.TLSv1)) def test_ma_required_client_not_authenticated(self): cluster = self.create_cluster(self.rc, self.configure_cluster(True)) - member = cluster.start_member() + cluster.start_member() with self.assertRaises(HazelcastError): - client = HazelcastClient(get_ssl_config(True, - get_abs_path(self.current_directory, "server1-cert.pem"), - get_abs_path(self.current_directory, "client2-cert.pem"), - get_abs_path(self.current_directory, "client2-key.pem"), - protocol=PROTOCOL.TLSv1)) + HazelcastClient(get_ssl_config(cluster.id, True, + get_abs_path(self.current_directory, "server1-cert.pem"), + get_abs_path(self.current_directory, "client2-cert.pem"), + get_abs_path(self.current_directory, "client2-key.pem"), 
+ protocol=PROTOCOL.TLSv1)) def test_ma_required_client_and_server_not_authenticated(self): cluster = self.create_cluster(self.rc, self.configure_cluster(True)) - member = cluster.start_member() + cluster.start_member() with self.assertRaises(HazelcastError): - client = HazelcastClient(get_ssl_config(True, - get_abs_path(self.current_directory, "server2-cert.pem"), - get_abs_path(self.current_directory, "client2-cert.pem"), - get_abs_path(self.current_directory, "client2-key.pem"), - protocol=PROTOCOL.TLSv1)) + HazelcastClient(get_ssl_config(cluster.id, True, + get_abs_path(self.current_directory, "server2-cert.pem"), + get_abs_path(self.current_directory, "client2-cert.pem"), + get_abs_path(self.current_directory, "client2-key.pem"), + protocol=PROTOCOL.TLSv1)) def test_ma_optional_client_and_server_authenticated(self): cluster = self.create_cluster(self.rc, self.configure_cluster(False)) - member = cluster.start_member() - client = HazelcastClient(get_ssl_config(True, + cluster.start_member() + client = HazelcastClient(get_ssl_config(cluster.id, True, get_abs_path(self.current_directory, "server1-cert.pem"), get_abs_path(self.current_directory, "client1-cert.pem"), get_abs_path(self.current_directory, "client1-key.pem"), protocol=PROTOCOL.TLSv1)) - self.assertTrue(client.lifecycle.is_live) + self.assertTrue(client.lifecycle_service.is_running()) client.shutdown() def test_ma_optional_server_not_authenticated(self): cluster = self.create_cluster(self.rc, self.configure_cluster(False)) - member = cluster.start_member() + cluster.start_member() with self.assertRaises(HazelcastError): - client = HazelcastClient(get_ssl_config(True, - get_abs_path(self.current_directory, "server2-cert.pem"), - get_abs_path(self.current_directory, "client1-cert.pem"), - get_abs_path(self.current_directory, "client1-key.pem"), - protocol=PROTOCOL.TLSv1)) + HazelcastClient(get_ssl_config(cluster.id, True, + get_abs_path(self.current_directory, "server2-cert.pem"), + get_abs_path(self.current_directory, "client1-cert.pem"), + get_abs_path(self.current_directory, "client1-key.pem"), + protocol=PROTOCOL.TLSv1)) def test_ma_optional_client_not_authenticated(self): cluster = self.create_cluster(self.rc, self.configure_cluster(False)) - member = cluster.start_member() + cluster.start_member() with self.assertRaises(HazelcastError): - client = HazelcastClient(get_ssl_config(True, - get_abs_path(self.current_directory, "server1-cert.pem"), - get_abs_path(self.current_directory, "client2-cert.pem"), - get_abs_path(self.current_directory, "client2-key.pem"), - protocol=PROTOCOL.TLSv1)) + HazelcastClient(get_ssl_config(cluster.id, True, + get_abs_path(self.current_directory, "server1-cert.pem"), + get_abs_path(self.current_directory, "client2-cert.pem"), + get_abs_path(self.current_directory, "client2-key.pem"), + protocol=PROTOCOL.TLSv1)) def test_ma_optional_client_and_server_not_authenticated(self): cluster = self.create_cluster(self.rc, self.configure_cluster(False)) - member = cluster.start_member() + cluster.start_member() with self.assertRaises(HazelcastError): - client = HazelcastClient(get_ssl_config(True, - get_abs_path(self.current_directory, "server2-cert.pem"), - get_abs_path(self.current_directory, "client2-cert.pem"), - get_abs_path(self.current_directory, "client2-key.pem"), - protocol=PROTOCOL.TLSv1)) + HazelcastClient(get_ssl_config(cluster.id, True, + get_abs_path(self.current_directory, "server2-cert.pem"), + get_abs_path(self.current_directory, "client2-cert.pem"), + 
get_abs_path(self.current_directory, "client2-key.pem"),
+                                           protocol=PROTOCOL.TLSv1))
 
     def test_ma_required_with_no_cert_file(self):
         cluster = self.create_cluster(self.rc, self.configure_cluster(True))
-        member = cluster.start_member()
+        cluster.start_member()
 
         with self.assertRaises(HazelcastError):
-            client = HazelcastClient(get_ssl_config(True, get_abs_path(self.current_directory, "server1-cert.pem"),
-                                                    protocol=PROTOCOL.TLSv1))
+            HazelcastClient(get_ssl_config(cluster.id, True, get_abs_path(self.current_directory, "server1-cert.pem"),
+                                           protocol=PROTOCOL.TLSv1))
 
     def test_ma_optional_with_no_cert_file(self):
         cluster = self.create_cluster(self.rc, self.configure_cluster(False))
-        member = cluster.start_member()
-        client = HazelcastClient(get_ssl_config(True, get_abs_path(self.current_directory, "server1-cert.pem"),
-                                                protocol=PROTOCOL.TLSv1))
-        self.assertTrue(client.lifecycle.is_live)
+        cluster.start_member()
+        client = HazelcastClient(
+            get_ssl_config(cluster.id, True, get_abs_path(self.current_directory, "server1-cert.pem"),
+                           protocol=PROTOCOL.TLSv1))
+        self.assertTrue(client.lifecycle_service.is_running())
         client.shutdown()
 
     def configure_cluster(self, is_ma_required):
         file_path = self.ma_req_xml if is_ma_required else self.ma_opt_xml
         with open(file_path, "r") as f:
             return f.read()
-
diff --git a/tests/ssl/ssl_test.py b/tests/ssl/ssl_test.py
index bf2c4d74cc..b0ab2b0940 100644
--- a/tests/ssl/ssl_test.py
+++ b/tests/ssl/ssl_test.py
@@ -2,7 +2,7 @@
 
 from tests.base import HazelcastTestCase
 from hazelcast.client import HazelcastClient
-from hazelcast.exception import HazelcastError
+from hazelcast.errors import HazelcastError
 from hazelcast.config import PROTOCOL
 from tests.util import get_ssl_config, configure_logging, fill_map, get_abs_path, set_attr
 
@@ -29,15 +29,16 @@ def test_ssl_disabled(self):
         cluster.start_member()
 
         with self.assertRaises(HazelcastError):
-            client = HazelcastClient(get_ssl_config(False))
+            HazelcastClient(get_ssl_config(cluster.id, False))
 
     def test_ssl_enabled_is_client_live(self):
         cluster = self.create_cluster(self.rc, self.configure_cluster(self.hazelcast_ssl_xml))
         cluster.start_member()
 
-        client = HazelcastClient(get_ssl_config(True, get_abs_path(self.current_directory, "server1-cert.pem"),
+        client = HazelcastClient(get_ssl_config(cluster.id, True,
+                                                get_abs_path(self.current_directory, "server1-cert.pem"),
                                                 protocol=PROTOCOL.TLSv1))
-        self.assertTrue(client.lifecycle.is_live)
+        self.assertTrue(client.lifecycle_service.is_running())
         client.shutdown()
 
     def test_ssl_enabled_trust_default_certificates(self):
@@ -45,8 +46,8 @@ def test_ssl_enabled_trust_default_certificates(self):
         cluster = self.create_cluster(self.rc, self.configure_cluster(self.default_ca_xml))
         cluster.start_member()
 
-        client = HazelcastClient(get_ssl_config(True, protocol=PROTOCOL.TLSv1))
-        self.assertTrue(client.lifecycle.is_live)
+        client = HazelcastClient(get_ssl_config(cluster.id, True, protocol=PROTOCOL.TLSv1))
+        self.assertTrue(client.lifecycle_service.is_running())
         client.shutdown()
 
     def test_ssl_enabled_dont_trust_self_signed_certificates(self):
@@ -55,13 +56,14 @@ def test_ssl_enabled_dont_trust_self_signed_certificates(self):
         cluster.start_member()
 
         with self.assertRaises(HazelcastError):
-            client = HazelcastClient(get_ssl_config(True, protocol=PROTOCOL.TLSv1))
+            HazelcastClient(get_ssl_config(cluster.id, True, protocol=PROTOCOL.TLSv1))
 
     def test_ssl_enabled_map_size(self):
         cluster = self.create_cluster(self.rc, self.configure_cluster(self.hazelcast_ssl_xml))
         cluster.start_member()
 
-        client = HazelcastClient(get_ssl_config(True, get_abs_path(self.current_directory, "server1-cert.pem"),
+        client = HazelcastClient(get_ssl_config(cluster.id, True,
+                                                get_abs_path(self.current_directory, "server1-cert.pem"),
                                                 protocol=PROTOCOL.TLSv1))
         test_map = client.get_map("test_map")
         fill_map(test_map, 10)
@@ -72,10 +74,12 @@ def test_ssl_enabled_with_custom_ciphers(self):
         cluster = self.create_cluster(self.rc, self.configure_cluster(self.hazelcast_ssl_xml))
         cluster.start_member()
 
-        client = HazelcastClient(get_ssl_config(True, get_abs_path(self.current_directory, "server1-cert.pem"),
+        client = HazelcastClient(get_ssl_config(cluster.id, True,
+                                                get_abs_path(self.current_directory, "server1-cert.pem"),
                                                 protocol=PROTOCOL.TLSv1,
-                                                ciphers="DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:DHE-RSA-DES-CBC3-SHA:DHE-RSA-DES-CBC3-SHA:DHE-DSS-DES-CBC3-SHA"))
-        self.assertTrue(client.lifecycle.is_live)
+                                                ciphers="DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:DHE-RSA-DES-"
+                                                        "CBC3-SHA:DHE-RSA-DES-CBC3-SHA:DHE-DSS-DES-CBC3-SHA"))
+        self.assertTrue(client.lifecycle_service.is_running())
         client.shutdown()
 
     def test_ssl_enabled_with_invalid_ciphers(self):
@@ -83,10 +87,10 @@ def test_ssl_enabled_with_invalid_ciphers(self):
         cluster.start_member()
 
         with self.assertRaises(HazelcastError):
-            client = HazelcastClient(get_ssl_config(True,
-                                                    get_abs_path(self.current_directory, "server1-cert.pem"),
-                                                    protocol=PROTOCOL.TLSv1,
-                                                    ciphers="INVALID-CIPHER1:INVALID_CIPHER2"))
+            HazelcastClient(get_ssl_config(cluster.id, True,
+                                           get_abs_path(self.current_directory, "server1-cert.pem"),
+                                           protocol=PROTOCOL.TLSv1,
+                                           ciphers="INVALID-CIPHER1:INVALID_CIPHER2"))
 
     def test_ssl_enabled_with_protocol_mismatch(self):
         cluster = self.create_cluster(self.rc, self.configure_cluster(self.hazelcast_ssl_xml))
@@ -94,8 +98,9 @@ def test_ssl_enabled_with_protocol_mismatch(self):
         cluster.start_member()
         # Member configured with TLSv1
         with self.assertRaises(HazelcastError):
-            client = HazelcastClient(get_ssl_config(True, get_abs_path(self.current_directory, "server1-cert.pem"),
-                                                    protocol=PROTOCOL.SSLv3))
+            HazelcastClient(get_ssl_config(cluster.id, True,
+                                           get_abs_path(self.current_directory, "server1-cert.pem"),
+                                           protocol=PROTOCOL.SSLv3))
 
     def configure_cluster(self, filename):
         with open(filename, "r") as f:
diff --git a/tests/statistics_test.py b/tests/statistics_test.py
index 4060fe74ce..007b72f2ed 100644
--- a/tests/statistics_test.py
+++ b/tests/statistics_test.py
@@ -7,12 +7,10 @@
 from hazelcast.config import ClientConfig, ClientProperties, NearCacheConfig
 from hazelcast.version import CLIENT_VERSION, CLIENT_TYPE
 from tests.hzrc.ttypes import Lang
-from tests.util import random_string, set_attr
+from tests.util import random_string
 
 
-@set_attr(category=3.09)
 class StatisticsTest(HazelcastTestCase):
-
     DEFAULT_STATS_PERIOD = 3
     STATS_PERIOD = 1
 
@@ -27,9 +25,11 @@ def tearDownClass(cls):
         cls.rc.exit()
 
     def test_statistics_disabled_by_default(self):
-        client = HazelcastClient()
+        config = ClientConfig()
+        config.cluster_name = self.cluster.id
+        client = HazelcastClient(config)
         time.sleep(2 * self.DEFAULT_STATS_PERIOD)
-        client_uuid = client.cluster.uuid
+        client_uuid = client._connection_manager.client_uuid
 
         response = self._get_client_stats_from_server(client_uuid)
 
@@ -39,10 +39,11 @@ def test_statistics_disabled_with_wrong_value(self):
         config = ClientConfig()
+        config.cluster_name = self.cluster.id
         config.set_property(ClientProperties.STATISTICS_ENABLED.name, "truee")
         config.set_property(ClientProperties.STATISTICS_PERIOD_SECONDS.name, self.STATS_PERIOD)
         client = HazelcastClient(config)
-        client_uuid = client.cluster.uuid
+        client_uuid = client._connection_manager.client_uuid
 
         time.sleep(2 * self.STATS_PERIOD)
         response = self._get_client_stats_from_server(client_uuid)
 
@@ -53,9 +54,10 @@ def test_statistics_disabled_with_wrong_value(self):
 
     def test_statistics_enabled(self):
         config = ClientConfig()
+        config.cluster_name = self.cluster.id
         config.set_property(ClientProperties.STATISTICS_ENABLED.name, True)
         client = HazelcastClient(config)
-        client_uuid = client.cluster.uuid
+        client_uuid = client._connection_manager.client_uuid
 
         time.sleep(2 * self.DEFAULT_STATS_PERIOD)
         self._wait_for_statistics_collection(client_uuid)
 
@@ -67,8 +69,10 @@ def test_statistics_enabled_with_environment_variable(self):
         environ[ClientProperties.STATISTICS_ENABLED.name] = "true"
         environ[ClientProperties.STATISTICS_PERIOD_SECONDS.name] = str(self.STATS_PERIOD)
 
-        client = HazelcastClient()
-        client_uuid = client.cluster.uuid
+        config = ClientConfig()
+        config.cluster_name = self.cluster.id
+        client = HazelcastClient(config)
+        client_uuid = client._connection_manager.client_uuid
 
         time.sleep(2 * self.STATS_PERIOD)
         self._wait_for_statistics_collection(client_uuid)
 
@@ -79,10 +83,11 @@ def test_statistics_enabled_with_environment_variable(self):
 
     def test_statistics_period(self):
         config = ClientConfig()
+        config.cluster_name = self.cluster.id
         config.set_property(ClientProperties.STATISTICS_ENABLED.name, True)
         config.set_property(ClientProperties.STATISTICS_PERIOD_SECONDS.name, self.STATS_PERIOD)
         client = HazelcastClient(config)
-        client_uuid = client.cluster.uuid
+        client_uuid = client._connection_manager.client_uuid
 
         time.sleep(2 * self.STATS_PERIOD)
         response1 = self._wait_for_statistics_collection(client_uuid)
 
@@ -95,10 +100,11 @@ def test_statistics_enabled_with_negative_period(self):
         config = ClientConfig()
+        config.cluster_name = self.cluster.id
         config.set_property(ClientProperties.STATISTICS_ENABLED.name, True)
         config.set_property(ClientProperties.STATISTICS_PERIOD_SECONDS.name, -1 * self.STATS_PERIOD)
         client = HazelcastClient(config)
-        client_uuid = client.cluster.uuid
+        client_uuid = client._connection_manager.client_uuid
 
         time.sleep(2 * self.DEFAULT_STATS_PERIOD)
         self._wait_for_statistics_collection(client_uuid)
 
@@ -107,24 +113,26 @@ def test_statistics_content(self):
         config = ClientConfig()
+        config.cluster_name = self.cluster.id
         config.set_property(ClientProperties.STATISTICS_ENABLED.name, True)
         config.set_property(ClientProperties.STATISTICS_PERIOD_SECONDS.name, self.STATS_PERIOD)
         map_name = random_string()
         near_cache_config = NearCacheConfig(map_name)
-        config.near_cache_configs[map_name] = near_cache_config
+        config.near_caches[map_name] = near_cache_config
 
         client = HazelcastClient(config)
-        client_uuid = client.cluster.uuid
+        client_uuid = client._connection_manager.client_uuid
 
-        test_map = client.get_map(map_name).blocking()
+        client.get_map(map_name).blocking()
 
         time.sleep(2 * self.STATS_PERIOD)
         response = self._wait_for_statistics_collection(client_uuid)
 
         result = response.result.decode("utf-8")
-        local_address = self._get_local_address(client)
+        info = client._internal_cluster_service.get_local_client()
+        local_address = "%s:%s" % (info.address.host, info.address.port)
 
         # Check near cache and client statistics
         self.assertEqual(1, result.count("clientName=" + client.name))
@@ -148,7 +156,7 @@ def test_statistics_content(self):
         # in different platforms. So, first try to get these statistics and then check the
         # response content
-        s = Statistics(client)
+        s = Statistics(client, None, None, None, None, None)
         psutil_stats = s._get_os_and_runtime_stats()
         for stat_name in psutil_stats:
             self.assertEqual(1, result.count(stat_name))
 
@@ -157,18 +165,19 @@ def test_special_characters(self):
         config = ClientConfig()
+        config.cluster_name = self.cluster.id
         config.set_property(ClientProperties.STATISTICS_ENABLED.name, True)
         config.set_property(ClientProperties.STATISTICS_PERIOD_SECONDS.name, self.STATS_PERIOD)
 
         map_name = random_string() + ",t=es\\t"
         near_cache_config = NearCacheConfig(map_name)
-        config.near_cache_configs[map_name] = near_cache_config
+        config.near_caches[map_name] = near_cache_config
 
         client = HazelcastClient(config)
-        client_uuid = client.cluster.uuid
+        client_uuid = client._connection_manager.client_uuid
 
-        test_map = client.get_map(map_name).blocking()
+        client.get_map(map_name).blocking()
 
         time.sleep(2 * self.STATS_PERIOD)
         response = self._wait_for_statistics_collection(client_uuid)
 
@@ -181,16 +190,17 @@ def test_near_cache_stats(self):
         config = ClientConfig()
+        config.cluster_name = self.cluster.id
         config.set_property(ClientProperties.STATISTICS_ENABLED.name, True)
         config.set_property(ClientProperties.STATISTICS_PERIOD_SECONDS.name, self.STATS_PERIOD)
 
         map_name = random_string()
         near_cache_config = NearCacheConfig(map_name)
-        config.near_cache_configs[map_name] = near_cache_config
+        config.near_caches[map_name] = near_cache_config
 
         client = HazelcastClient(config)
-        client_uuid = client.cluster.uuid
+        client_uuid = client._connection_manager.client_uuid
 
         test_map = client.get_map(map_name).blocking()
 
@@ -207,10 +217,10 @@ def test_near_cache_stats(self):
         self.assertEqual(1, result.count("nc." + map_name + ".invalidationRequests=0"))
 
         test_map.put(1, 2) # invalidation request
-        test_map.get(1) # cache miss
-        test_map.get(1) # cache hit
+        test_map.get(1)  # cache miss
+        test_map.get(1)  # cache hit
         test_map.put(1, 3) # invalidation + invalidation request
-        test_map.get(1) # cache miss
+        test_map.get(1)  # cache miss
 
         time.sleep(2 * self.STATS_PERIOD)
         response = self._wait_for_statistics_collection(client_uuid)
 
@@ -227,22 +237,16 @@ def test_near_cache_stats(self):
         client.shutdown()
 
     def _get_client_stats_from_server(self, client_uuid):
-        script = "clients=instance_0.getClientService().getConnectedClients().toArray()\n" \
-                 "for(i=0;i