diff --git a/README.md b/README.md
index 2c96aef58c..fe286eb2fe 100644
--- a/README.md
+++ b/README.md
@@ -13,12 +13,10 @@
* [1.4.2. Configuring Hazelcast Python Client](#142-configuring-hazelcast-python-client)
* [1.4.2.1. Cluster Name Setting](#1421-cluster-name-setting)
* [1.4.2.2. Network Settings](#1422-network-settings)
- * [1.4.3. Client System Properties](#143-client-system-properties)
* [1.5. Basic Usage](#15-basic-usage)
* [1.6. Code Samples](#16-code-samples)
* [2. Features](#2-features)
* [3. Configuration Overview](#3-configuration-overview)
- * [3.1. Configuration Options](#31-configuration-options)
* [4. Serialization](#4-serialization)
* [4.1. IdentifiedDataSerializable Serialization](#41-identifieddataserializable-serialization)
* [4.2. Portable Serialization](#42-portable-serialization)
@@ -59,7 +57,7 @@
* [7.5. Distributed Events](#75-distributed-events)
* [7.5.1. Cluster Events](#751-cluster-events)
* [7.5.1.1. Listening for Member Events](#7511-listening-for-member-events)
- * [7.5.1.2. Listenring for Distributed Object Events](#7512-listening-for-distributed-object-events)
+ * [7.5.1.2. Listening for Distributed Object Events](#7512-listening-for-distributed-object-events)
* [7.5.1.3. Listening for Lifecycle Events](#7513-listening-for-lifecycle-events)
* [7.5.2. Distributed Data Structure Events](#752-distributed-data-structure-events)
* [7.5.2.1. Listening for Map Events](#7521-listening-for-map-events)
@@ -303,27 +301,33 @@ These configuration elements are enough for most connection scenarios. Now we wi
### 1.4.2. Configuring Hazelcast Python Client
-Hazelcast Python client can be configured programmatically.
+To configure your Hazelcast Python client, you need to pass configuration options as keyword arguments to your client
+at startup. The names of the configuration options are similar to those used in the `hazelcast.xml` configuration file on
+the member side, but flatter. It is done this way to make it easier to transfer Hazelcast skills across multiple platforms.
-This section describes some network configuration settings to cover common use cases in connecting the client to a cluster. See the [Configuration Overview section](#3-configuration-overview)
-and the following sections for information about detailed network configurations and/or additional features of Hazelcast Python client configuration.
+This section describes some network configuration settings to cover common use cases in connecting the client to a cluster.
+See the [Configuration Overview section](#3-configuration-overview) and the following sections for information about
+detailed network configurations and/or additional features of Hazelcast Python client configuration.
-An easy way to configure your Hazelcast Python client is to create a `ClientConfig` object and set the appropriate options. Then you can
-supply this object to your client at the startup. This is the programmatic configuration approach.
+```python
+import hazelcast
-Once you imported `hazelcast` to your Python project, you may follow programmatic configuration approach.
+client = hazelcast.HazelcastClient(
+ cluster_members=[
+ "some-ip-address:port"
+ ],
+ cluster_name="name-of-your-cluster",
+)
+```
-You need to create a `ClientConfig` object and adjust its properties. Then you can pass this object to the client when starting it.
+It's also possible to omit the keyword arguments in order to use the default settings.
```python
import hazelcast
-config = hazelcast.ClientConfig()
-client = hazelcast.HazelcastClient(config)
+client = hazelcast.HazelcastClient()
```
----
-
If you run the Hazelcast IMDG members on a different server than the client, you have most probably configured the members' ports and cluster
names as explained in the previous section. If you did, then you need to make certain changes to the network settings of your client.
@@ -333,8 +337,11 @@ names as explained in the previous section. If you did, then you need to make ce
You need to provide the name of the cluster, if it is defined on the server side, to which you want the client to connect.
```python
-config = hazelcast.ClientConfig()
-config.cluster_name = "name of your cluster"
+import hazelcast
+
+client = hazelcast.HazelcastClient(
+ cluster_name="name-of-your-cluster",
+)
```
#### 1.4.2.2. Network Settings
@@ -344,68 +351,23 @@ You need to provide the IP address and port of at least one member in your clust
```python
import hazelcast
-config = hazelcast.ClientConfig()
-config.network.addresses.append("IP-address:port")
-```
-
-### 1.4.3. Client System Properties
-
-While configuring your Python client, you can use various system properties provided by Hazelcast to tune its clients.
-These properties can be set programmatically through `config.set_property` method or by using an environment variable.
-
-The value of the any property will be:
-
-* the programmatically configured value, if programmatically set,
-* the environment variable value, if the environment variable is set,
-* the default value, if none of the above is set.
-
-See the following for an example client system property configuration:
-
-**Programmatically:**
-
- ```python
-from hazelcast.config import ClientProperties
-
-# Sets invocation timeout as 2 seconds
-config.set_property(ClientProperties.INVOCATION_TIMEOUT_SECONDS.name, 2)
-```
-
-or
-
- ```python
-# Sets invocation timeout as 2 seconds
-config.set_property("hazelcast.client.invocation.timeout.seconds", 2)
+client = hazelcast.HazelcastClient(
+ cluster_members=["some-ip-address:port"]
+)
```
- **By using an environment variable:**
-
-```python
-import os
-
-environ = os.environ
-environ[ClientProperties.INVOCATION_TIMEOUT_SECONDS.name] = "2"
-```
-
-If you set a property both programmatically and via an environment variable, the programmatically set value will be used.
-
-See the [complete list](http://hazelcast.github.io/hazelcast-python-client/4.0/hazelcast.config.html#hazelcast.config.ClientProperties) of client system properties, along with their descriptions, which can be used to configure your Hazelcast Python client.
-
## 1.5. Basic Usage
Now that we have a working cluster and we know how to configure both our cluster and client, we can run a simple program to use a
distributed map in the Python client.
-The following example first creates a configuration object and starts a client.
-
```python
import hazelcast
-# We create a config for illustrative purposes.
-# We do not adjust this config. Therefore it has default settings.
-config = hazelcast.ClientConfig()
+# Connect to Hazelcast cluster
+client = hazelcast.HazelcastClient()
-# Client connects to the cluster with the given configuration.
-client = hazelcast.HazelcastClient(config)
+client.shutdown()
```
This should print logs about the cluster members, such as their address, port and UUID, to `stderr`.
@@ -451,7 +413,7 @@ personnel_map.put("Clark", "IT")
print("Added IT personnel. Printing all known personnel")
for person, department in personnel_map.entry_set().result():
- print("{} is in {} department".format(person, department))
+ print("%s is in %s department" % (person, department))
client.shutdown()
```
@@ -482,7 +444,9 @@ personnel_map.put("Faith", "Sales")
print("Added Sales personnel. Printing all known personnel")
for person, department in personnel_map.entry_set().result():
- print("{} is in {} department".format(person, department))
+ print("%s is in %s department" % (person, department))
+
+client.shutdown()
```
**Output**
@@ -497,6 +461,8 @@ Clark is in IT department
Bob is in IT department
```
+> **NOTE: For the sake of brevity we are going to omit boilerplate parts, like `import`s, in the later code snippets. Refer to the [Code Samples section](#16-code-samples) to see samples with the complete code.**
+
You will see that this time we add only the Sales employees, but we get the list of all known employees, including the ones in IT.
That is because our map lives in the cluster and no matter which client we use, we can access the whole map.
@@ -517,14 +483,13 @@ You may also attach a function to the future objects that will be called, with t
For example, the part where we printed the personnel in the above code can be rewritten with a callback attached to the `entry_set()`, as shown below.
```python
-import time
-
def entry_set_cb(future):
for person, department in future.result():
- print("{} is in {} department".format(person, department))
+ print("%s is in %s department" % (person, department))
+
personnel_map.entry_set().add_done_callback(entry_set_cb)
-time.sleep(1) # wait for Future to complete
+time.sleep(1) # wait for Future to complete
```
Asynchronous operations are far more efficient in a single-threaded Python interpreter, but you may want all of your method calls
@@ -543,14 +508,14 @@ over it or attach a callback to it anymore.
```python
for person, department in personnel_map.entry_set():
- print("{} is in {} department".format(person, department))
+ print("%s is in %s department" % (person, department))
```
## 1.6. Code Samples
See the Hazelcast Python [examples](https://github.com/hazelcast/hazelcast-python-client/tree/master/examples) for more code samples.
-You can also see the [latest Hazelcast Python API Documentation](http://hazelcast.github.io/hazelcast-python-client/4.0/index.html) or [global API Documentation page](http://hazelcast.github.io/hazelcast-python-client/).
+You can also see the [API Documentation page](http://hazelcast.github.io/hazelcast-python-client/).
# 2. Features
@@ -603,22 +568,16 @@ Hazelcast Python client supports the following data structures and features:
# 3. Configuration Overview
-This chapter describes the options to configure your Python client.
-
-## 3.1. Configuration Options
-
-You can configure Hazelcast Python client programmatically.
-
-For programmatic configuration of the Hazelcast Python client, just instantiate a `ClientConfig` object and configure the
+To configure the Hazelcast Python client, pass keyword arguments to the client for the
desired aspects. An example is shown below.
```python
-config = hazelcast.ClientConfig()
-config.network.addresses.append("127.0.0.1:5701")
-client = hazelcast.HazelcastClient(config)
+client = hazelcast.HazelcastClient(
+ cluster_members=["127.0.0.1:5701"]
+)
```
-See the `ClientConfig` class documentation at [Hazelcast Python Client API Docs](http://hazelcast.github.io/hazelcast-python-client/4.0/hazelcast.config.html) for details.
+See the docstring of `HazelcastClient` or the API documentation at [Hazelcast Python Client API Docs](http://hazelcast.github.io/hazelcast-python-client/) for details.
# 4. Serialization
@@ -627,7 +586,7 @@ or transmit it through the network. Its main purpose is to save the state of an
The reverse process is called deserialization. Hazelcast offers you its own native serialization methods.
You will see these methods throughout this chapter.
-Hazelcast serializes all your objects before sending them to the server. The `bool`, `int`, `long` (for Python 2), `float`, `str`, `unicode` (for Python 2), `bytearray` and `bytes` types are serialized natively and you cannot override this behavior.
+Hazelcast serializes all your objects before sending them to the server. The `bool`, `int`, `long` (for Python 2), `float`, `str`, `unicode` (for Python 2) and `bytearray` types are serialized natively and you cannot override this behavior.
The following table is the conversion of types for the Java server side.
| Python | Java |
@@ -639,9 +598,8 @@ The following table is the conversion of types for the Java server side.
| str | String |
| unicode | String |
| bytearray | byte[] |
-| bytes | byte[] |
-> Note: A `int` or `long` type is serialized as `Integer` by default. You can configure this behavior using the `SerializationConfig.default_integer_type`.
+> Note: An `int` or `long` type is serialized as `Integer` by default. You can configure this behavior using the `default_int_type` argument.
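+
+For example, assuming the `IntType` enum is available in `hazelcast.config` (check your client version), Python integers could be serialized as Java `Long`s with a configuration like the following sketch:
+
+```python
+import hazelcast
+from hazelcast.config import IntType
+
+# Serialize Python ints as Java Longs instead of Integers
+client = hazelcast.HazelcastClient(
+    default_int_type=IntType.LONG
+)
+```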
Arrays of the above types can be serialized as `boolean[]`, `byte[]`, `short[]`, `int[]`, `float[]`, `double[]`, `long[]` and `string[]` for the Java server side, respectively.
@@ -705,28 +663,35 @@ class Address(IdentifiedDataSerializable):
> **NOTE: Refer to `ObjectDataInput`/`ObjectDataOutput` classes in the `hazelcast.serialization.api` package to understand methods available on the `input`/`output` objects.**
-> **NOTE: For IdentifiedDataSerializable to work in Python client, the class that inherits it should have default valued parameters in its `__init__` method so that an instance of that class can be created without passing any arguments to it.**
-
The IdentifiedDataSerializable uses `get_class_id()` and `get_factory_id()` methods to reconstitute the object.
-To complete the implementation, an `IdentifiedDataSerializable factory` should also be created and registered into the `SerializationConfig` which can be accessed from `config.serialization`.
+To complete the implementation, an `IdentifiedDataSerializable` factory should also be created and registered into the client using the `data_serializable_factories` argument.
A factory is a dictionary that stores class ID and the `IdentifiedDataSerializable` class type pairs as the key and value.
The factory's responsibility is to store the right `IdentifiedDataSerializable` class type for the given class ID.
-A sample `IdentifiedDataSerializable factory` could be created as follows:
+A sample `IdentifiedDataSerializable` factory could be created as follows:
```python
-factory = {1: Address}
+factory = {
+ 1: Address
+}
```
Note that the keys of the dictionary should be the same as the class IDs of their corresponding `IdentifiedDataSerializable` class types.
-The last step is to register the `IdentifiedDataSerializable factory` to the `SerializationConfig`.
+> **NOTE: For IdentifiedDataSerializable to work in Python client, the class that inherits it should have default valued parameters in its `__init__` method
+> so that an instance of that class can be created without passing any arguments to it.**
+
+The last step is to register the `IdentifiedDataSerializable` factory to the client.
```python
-config.serialization.data_serializable_factories[1] = factory
+client = hazelcast.HazelcastClient(
+ data_serializable_factories={
+ 1: factory
+ }
+)
```
-Note that the ID that is passed to the `SerializationConfig` is same as the factory ID that the `Address` class returns.
+Note that the ID that is passed as the key of the factory is the same as the factory ID that the `Address` class returns.
## 4.2. Portable Serialization
@@ -740,7 +705,8 @@ To use it, you need to extend the `Portable` class. Portable serialization has t
In order to support these features, a serialized `Portable` object contains meta information like the version and concrete location of the each field in the binary data.
This way Hazelcast is able to navigate in the binary data and deserialize only the required field without actually deserializing the whole object which improves the query performance.
-With multiversion support, you can have two members each having different versions of the same object; Hazelcast stores both meta information and uses the correct one to serialize and deserialize portable objects depending on the member.
+With multiversion support, you can have two members each having different versions of the same object;
+Hazelcast stores both meta information and uses the correct one to serialize and deserialize portable objects depending on the member.
This is very helpful when you are doing a rolling upgrade without shutting down the cluster.
Also note that portable serialization is totally language independent and is used as the binary protocol between Hazelcast server and clients.
@@ -769,26 +735,33 @@ class Foo(Portable):
> **NOTE: Refer to `PortableReader`/`PortableWriter` classes in the `hazelcast.serialization.api` package to understand methods available on the `reader`/`writer` objects.**
-> **NOTE: For Portable to work in Python client, the class that inherits it should have default valued parameters in its `__init__` method so that an instance of that class can be created without passing any arguments to it.**
+> **NOTE: For Portable to work in Python client, the class that inherits it should have default valued parameters in its `__init__` method
+> so that an instance of that class can be created without passing any arguments to it.**
Similar to `IdentifiedDataSerializable`, a `Portable` class must provide the `get_class_id()` and `get_factory_id()` methods.
The factory dictionary will be used to create the `Portable` object given the class ID.
-A sample `Portable factory` could be created as follows:
+A sample `Portable` factory could be created as follows:
```python
-factory = {1: Foo}
+factory = {
+ 1: Foo
+}
```
Note that the keys of the dictionary should be the same as the class IDs of their corresponding `Portable` class types.
-The last step is to register the `Portable factory` to the `SerializationConfig`.
+The last step is to register the `Portable` factory to the client.
```python
-config.serialization.data_serializable_factories[1] = factory
+client = hazelcast.HazelcastClient(
+ portable_factories={
+ 1: factory
+ }
+)
```
-Note that the ID that is passed to the `SerializationConfig` is same as the factory ID that `Foo` class returns.
+Note that the ID that is passed as the key of the factory is the same as the factory ID that the `Foo` class returns.
### 4.2.1. Versioning for Portable Serialization
@@ -797,10 +770,12 @@ For example, a client may have an older version of a class and the member to whi
Portable serialization supports versioning. It is a global versioning, meaning that all portable classes that are serialized through a member get the globally configured portable version.
-You can declare the version using the `config.serialization.portable_version` option, as shown below.
+You can declare the version using the `portable_version` argument, as shown below.
```python
-config.serialization.portable_version = 0
+client = hazelcast.HazelcastClient(
+ portable_version=1
+)
```
If you update the class by changing the type of one of the fields or by adding a new field, it is a good idea to upgrade the version of the class, rather than sticking to the global version specified in the configuration.
@@ -890,11 +865,15 @@ class MusicianSerializer(StreamSerializer):
```
Note that the serializer `id` must be unique, as Hazelcast will use it to look up the `MusicianSerializer` while it deserializes the object.
-Now the last required step is to register the `MusicianSerializer` to the configuration.
+Now the last required step is to register the `MusicianSerializer` to the client.
```python
-config.serialization.set_custom_serializer(Musician, MusicianSerializer)
+client = hazelcast.HazelcastClient(
+ custom_serializers={
+ Musician: MusicianSerializer
+ }
+)
```
From now on, Hazelcast will use `MusicianSerializer` to serialize `Musician` objects.
@@ -930,8 +909,8 @@ You can query JSON objects in the cluster using the `Predicate`s of your choice.
```python
# Get the objects whose age is less than 6
result = json_map.values(is_less_than_or_equal_to("age", 6))
-print("Retrieved {} values whose age is less than 6.".format(len(result)))
-print("Entry is {}".format(result[0].to_string()))
+print("Retrieved %s values whose age is less than 6." % len(result))
+print("Entry is", result[0].to_string())
```
## 4.5. Global Serialization
@@ -968,24 +947,31 @@ class GlobalSerializer(StreamSerializer):
return some_third_party_serializer.deserialize(input.read_utf())
```
-You should register the global serializer in the configuration.
+You should register the global serializer to the client.
```python
-config.serialization.global_serializer = GlobalSerializer
+client = hazelcast.HazelcastClient(
+ global_serializer=GlobalSerializer
+)
```
# 5. Setting Up Client Network
-Main parts of network related configuration for Hazelcast Python client may be tuned via the `ClientNetworkConfig`.
+Main parts of the network-related configuration for the Hazelcast Python client may be tuned via the arguments described in this section.
Here is an example of configuring the network for Python client.
```python
-config.network.addresses = ["10.1.1.21""10.1.1.22:5703"]
-config.network.smart_routing = True
-config.network.redo_operation = True
-config.network.connection_timeout = 6.0
+client = hazelcast.HazelcastClient(
+ cluster_members=[
+ "10.1.1.21",
+ "10.1.1.22:5703"
+ ],
+ smart_routing=True,
+ redo_operation=False,
+ connection_timeout=6.0
+)
```
## 5.1. Providing Member Addresses
@@ -995,7 +981,12 @@ list to find an alive member. Although it may be enough to give only one address
(since all members communicate with each other), it is recommended that you give the addresses for all the members.
```python
-config.network.addresses = ["10.1.1.23", "10.1.1.22:5703"]
+client = hazelcast.HazelcastClient(
+ cluster_members=[
+ "10.1.1.21",
+ "10.1.1.22:5703"
+ ]
+)
```
If the port part is omitted, then `5701`, `5702` and `5703` will be tried in a random order.
@@ -1009,7 +1000,9 @@ Smart routing defines whether the client mode is smart or unisocket. See the [Py
for the description of smart and unisocket modes.
```python
-config.network.smart_routing = True
+client = hazelcast.HazelcastClient(
+ smart_routing=True,
+)
```
Its default value is `True` (smart client mode).
@@ -1020,7 +1013,9 @@ It enables/disables redo-able operations. While sending the requests to the rela
Read-only operations are retried by default. If you want to enable retry for the other operations, you can set the `redo_operation` to `True`.
```python
-config.network.redo_operation = True
+client = hazelcast.HazelcastClient(
+ redo_operation=False
+)
```
Its default value is `False` (disabled).
@@ -1030,7 +1025,9 @@ Its default value is `False` (disabled).
Connection timeout is the timeout value in seconds for the members to accept the client connection requests.
```python
-config.network.connection_timeout = 6.0
+client = hazelcast.HazelcastClient(
+ connection_timeout=6.0
+)
```
Its default value is `5.0` seconds.
@@ -1047,14 +1044,13 @@ See the [Mutual Authentication section](#813-mutual-authentication).
## 5.6. Enabling Hazelcast Cloud Discovery
Hazelcast Python client can discover and connect to Hazelcast clusters running on [Hazelcast Cloud](https://cloud.hazelcast.com/).
-For this, provide authentication information as `cluster_name`, enable `cloud_config` and set your `discovery_token` as shown below.
-The following is the example configuration.
+For this, provide authentication information as `cluster_name` and enable cloud discovery by setting your `cloud_discovery_token` as shown below.
```python
-config.cluster_name = "hz-cluster"
-
-config.network.cloud.enabled = True
-config.network.cloud.discovery_token = "EXAMPLE_TOKEN"
+client = hazelcast.HazelcastClient(
+ cluster_name="name-of-your-cluster",
+ cloud_discovery_token="discovery-token"
+)
```
If you have enabled encryption for your cluster, you should also enable TLS/SSL configuration for the client to secure communication between your
@@ -1063,13 +1059,13 @@ client and cluster members as described in the [TLS/SSL for Hazelcast Python Cli
# 6. Client Connection Strategy
Hazelcast Python client can be configured to connect to a cluster in an async manner during the client start and reconnecting
-after a cluster disconnect. Both of these options are configured via `ConnectionStrategyConfig`.
+after a cluster disconnect. Both of these options are configured via the arguments described below.
You can configure the client’s starting mode as async or sync using the configuration element `async_start`.
When it is set to `True` (async), the behavior of `hazelcast.HazelcastClient()` call changes.
-It resolves a client instance without waiting to establish a cluster connection.
+It returns a client instance without waiting to establish a cluster connection.
In this case, the client rejects any network dependent operation with `ClientOfflineError` immediately until it connects to the cluster.
-If it is `False`, the call is not resolved and the client is not created until a connection with the cluster is established.
+If it is `False`, the call does not return and the client is not created until a connection with the cluster is established.
Its default value is `False` (sync).
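+
+As a rough sketch (assuming `ClientOfflineError` is importable from `hazelcast.errors`), an async-started client could be used as follows:
+
+```python
+import hazelcast
+from hazelcast.errors import ClientOfflineError
+
+# Returns immediately, possibly before a cluster connection is established
+client = hazelcast.HazelcastClient(async_start=True)
+
+try:
+    my_map = client.get_map("my-map")
+except ClientOfflineError:
+    # The client is not connected to the cluster yet
+    pass
+```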
You can also configure how the client reconnects to the cluster after a disconnection. This is configured using the
@@ -1081,34 +1077,40 @@ configuration element `reconnect_mode`; it has three options:
Its default value is `ON`.
-The example configuration below show how to configure a Node.js client’s starting and reconnecting modes.
+The example configuration below shows how to configure a Python client’s starting and reconnecting modes.
```python
-config.connection_strategy.async_start = False
-config.connection_strategy.reconnect_mode = RECONNECT_MODE.ON
+from hazelcast.config import ReconnectMode
+...
+
+client = hazelcast.HazelcastClient(
+ async_start=False,
+ reconnect_mode=ReconnectMode.ON
+)
```
## 6.1. Configuring Client Connection Retry
When the client is disconnected from the cluster, it searches for new connections to reconnect.
-You can configure the frequency of the reconnection attempts and client shutdown behavior using the `ConnectionRetryConfig`.
+You can configure the frequency of the reconnection attempts and the client shutdown behavior using the arguments below.
```python
-retry_config = config.connection_strategy.connection_retry
-retry_config.initial_backoff = 1
-retry_config.max_backoff = 60
-retry_config.multiplier = 2
-retry_config.cluster_connect_timeout = 50
-retry_config.jitter = 0.2
+client = hazelcast.HazelcastClient(
+ retry_initial_backoff=1,
+ retry_max_backoff=15,
+ retry_multiplier=1.5,
+ retry_jitter=0.2,
+ cluster_connect_timeout=20
+)
```
The following are configuration element descriptions:
-* `initial_backoff`: Specifies how long to wait (backoff), in seconds, after the first failure before retrying. Its default value is `1` s. It must be non-negative.
-* `max_backoff`: Specifies the upper limit for the backoff in seconds. Its default value is `30` s. It must be non-negative.
-* `multiplier`: Factor to multiply the backoff after a failed retry. Its default value is `1`. It must be greater than or equal to `1`.
-* `clusterConnectTimeoutMillis`: Timeout value in seconds for the client to give up to connect to the current cluster. Its default value is `20` s.
-* `jitter`: Specifies by how much to randomize backoffs. Its default value is `0`. It must be in range `0` to `1`.
+* `retry_initial_backoff`: Specifies how long to wait (backoff), in seconds, after the first failure before retrying. Its default value is `1`. It must be non-negative.
+* `retry_max_backoff`: Specifies the upper limit for the backoff in seconds. Its default value is `30`. It must be non-negative.
+* `retry_multiplier`: Factor to multiply the backoff after a failed retry. Its default value is `1`. It must be greater than or equal to `1`.
+* `retry_jitter`: Specifies by how much to randomize backoffs. Its default value is `0`. It must be in range `0` to `1`.
+* `cluster_connect_timeout`: Timeout value in seconds for the client to give up to connect to the current cluster. Its default value is `20`.
A pseudo-code is as follows:
@@ -1136,22 +1138,17 @@ Hazelcast Python client is designed to be fully asynchronous. See the [Basic Usa
If you are ready to go, let's start to use Hazelcast Python client.
-The first step is configuration. See the [Configuration Options section](#31-configuration-options) for details.
+The first step is configuration. See the [Configuration Overview section](#3-configuration-overview) for details.
-The following is an example on how to create a `ClientConfig` object and configure it programmatically:
+The following is an example on how to configure and initialize the `HazelcastClient` to connect to the cluster:
```python
-import hazelcast
-
-config = hazelcast.ClientConfig()
-config.cluster_name = "dev"
-config.network.addresses = ["10.90.0.1"]
-```
-
-The second step is initializing the `HazelcastClient` to be connected to the cluster:
-
-```python
-client = hazelcast.HazelcastClient(config)
+client = hazelcast.HazelcastClient(
+ cluster_name="dev",
+ cluster_members=[
+ "198.51.100.2"
+ ]
+)
```
This client object is your gateway to access all the Hazelcast distributed objects.
@@ -1159,7 +1156,10 @@ This client object is your gateway to access all the Hazelcast distributed objec
Let's create a map and populate it with some data, as shown below.
```python
+# Get a Map called 'my-distributed-map'
customer_map = client.get_map("customers").blocking()
+
+# Write and read some data
customer_map.put("1", "John Stiles")
customer_map.put("2", "Richard Miles")
customer_map.put("3", "Judy Doe")
@@ -1207,7 +1207,7 @@ While sending the requests to the related members, the operations can fail due t
Read-only operations are retried by default. If you want to enable retrying for the other operations, you can set the `redo_operation` to `True`.
See the [Enabling Redo Operation section](#53-enabling-redo-operation).
-You can set a timeout for retrying the operations sent to a member. This can be provided by using the property `hazelcast.client.invocation.timeout.seconds` via `config.set_property` method.
+You can set a timeout for retrying the operations sent to a member. This can be tuned by passing the `invocation_timeout` argument to the client.
The client will retry an operation within this given period, of course, if it is a read-only operation or you enabled the `redo_operation` as stated in the above.
This timeout value is important when there is a failure resulted by either of the following causes:
@@ -1223,13 +1223,9 @@ For example, assume that your client sent a `queue.offer` operation to the membe
Since there will be no response for this operation, you will not know whether it has run on the member or not.
If you enabled `redo_operation`, it means this operation may run again, which may cause two instances of the same object in the queue.
-When invocation is being retried, the client may wait some time before it retries again. This duration can be configured using the following property:
-
- ```python
-config.set_property("hazelcast.client.invocation.retry.pause.millis", 500)
-```
+When invocation is being retried, the client may wait some time before it retries again. This duration can be configured using the `invocation_retry_pause` argument.
-The default retry wait time is `1` second.
+The default retry pause time is `1` second.
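+
+For example, assuming the value is given in seconds like the other timeout arguments, the pause could be shortened to half a second:
+
+```python
+client = hazelcast.HazelcastClient(
+    invocation_retry_pause=0.5
+)
+```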
## 7.4. Using Distributed Data Structures
@@ -1372,10 +1368,10 @@ my_list.add("item1")
my_list.add("item2")
# Remove the first element
-print("Removed: {}".format(my_list.remove_at(0))) # Outputs 'Removed: item1'
+print("Removed:", my_list.remove_at(0)) # Outputs 'Removed: item1'
# There is only one element left
-print("Current size is {}".format(my_list.size())) # Outputs 'Current size is 1'
+print("Current size is", my_list.size()) # Outputs 'Current size is 1'
# Clear the list
my_list.clear()
@@ -1402,10 +1398,10 @@ ringbuffer.add(200)
# We start from the oldest item.
# If you want to start from the next item, call ringbuffer.tail_sequence()+1
sequence = ringbuffer.head_sequence()
-print(ringbuffer.read_one(sequence)) # Outputs '100'
+print(ringbuffer.read_one(sequence)) # Outputs '100'
sequence += 1
-print(ringbuffer.read_one(sequence)) # Outputs '200'
+print(ringbuffer.read_one(sequence)) # Outputs '200'
```
### 7.4.8. Using Topic
@@ -1511,28 +1507,29 @@ For details, see the [FlakeIdGenerator section](https://docs.hazelcast.org/docs/
generator = client.get_flake_id_generator("flake-id-generator").blocking()
# Generate a some unique identifier
-print("ID: {}".format(generator.new_id()))
+print("ID:", generator.new_id())
```
#### 7.4.11.1 Configuring Flake ID Generator
-You may configure `FlakeIdGenerator`s as the following:
+You may configure Flake ID Generators using the `flake_id_generators` argument:
```python
-generator_config = FlakeIdGeneratorConfig()
-generator_config.name = "flake-id-generator"
-generator_config.prefetch_count = 123
-generator_config.prefetch_validity_in_millis = 150000
-config.add_flake_id_generator_config(generator_config)
+client = hazelcast.HazelcastClient(
+ flake_id_generators={
+ "flake-id-generator": {
+ "prefetch_count": 123,
+ "prefetch_validity": 150
+ }
+ }
+)
```
The following are the descriptions of configuration elements and attributes:
-* `name`: Name of your Flake ID Generator.
-* `prefetchCount`: Count of IDs which are pre-fetched on the background when one call to `FlakeIdGenerator.newId()` is made. Its value must be in the range `1` - `100,000`. Its default value is `100`.
-* `prefetchValidityMillis`: Specifies for how long the pre-fetched IDs can be used. After this time elapses, a new batch of IDs are fetched. Time unit is milliseconds. Its default value is `600,000` milliseconds (`10` minutes). The IDs contain a timestamp component, which ensures a rough global ordering of them. If an ID is assigned to an object that was created later, it will be out of order. If ordering is not important, set this value to `0`.
-
-> **NOTE: When you use `default` as the Flake ID Generator configuration key, it has a special meaning. Hazelcast client will use that configuration as the default one for all Flake ID Generators, unless there is a specific configuration for the generator.**
+* Keys of the dictionary: Name of the Flake ID Generator.
+* `prefetch_count`: Count of IDs which are pre-fetched in the background when one call to `generator.new_id()` is made. Its value must be in the range `1` - `100,000`. Its default value is `100`.
+* `prefetch_validity`: Specifies for how long the pre-fetched IDs can be used. After this time elapses, a new batch of IDs is fetched. Time unit is seconds. Its default value is `600` seconds (`10` minutes). The IDs contain a timestamp component, which ensures a rough global ordering of them. If an ID is assigned to an object that was created later, it will be out of order. If ordering is not important, set this value to `0`.
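The interplay of these two settings can be pictured with a stand-alone sketch. This illustrates the batching idea only and is not the client's actual implementation; `IdBatchCache` and `fetch_batch` are hypothetical names:

```python
import time

class IdBatchCache:
    """Illustrative model of prefetching: keep a batch of IDs and fetch a
    fresh one once the batch is exhausted or its validity window passes."""

    def __init__(self, prefetch_count, prefetch_validity, fetch_batch):
        self.prefetch_count = prefetch_count
        self.prefetch_validity = prefetch_validity
        self.fetch_batch = fetch_batch  # callable returning fresh IDs
        self.batch = []
        self.fetched_at = 0.0

    def new_id(self):
        expired = time.time() - self.fetched_at > self.prefetch_validity
        if not self.batch or expired:
            # One round-trip to the "cluster" yields prefetch_count IDs
            self.batch = list(self.fetch_batch(self.prefetch_count))
            self.fetched_at = time.time()
        return self.batch.pop(0)

# A fake "cluster" that hands out consecutive IDs in batches
counter = iter(range(10 ** 6))
cache = IdBatchCache(3, 600, lambda n: (next(counter) for _ in range(n)))
ids = [cache.new_id() for _ in range(5)]  # only two batch fetches happen
```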
## 7.5. Distributed Events
@@ -1560,17 +1557,33 @@ The following is a membership listener registration by using the `add_listener()
```python
def added_listener(member):
- print("Member Added: The address is {}".format(member.address))
+ print("Member Added: The address is", member.address)
+
def removed_listener(member):
- print("Member Removed. The address is {}".format(member.address))
+ print("Member Removed. The address is", member.address)
-client.cluster_service.add_listener(member_added=added_listener, member_removed=removed_listener, fire_for_existing=True)
+
+client.cluster_service.add_listener(
+ member_added=added_listener,
+ member_removed=removed_listener,
+ fire_for_existing=True
+)
```
Also, you can set the `fire_for_existing` flag to `True` to receive the events for the list of available members at the time the
listener is registered.
+Membership listeners can also be added during the client startup using the `membership_listeners` argument.
+
+```python
+client = hazelcast.HazelcastClient(
+ membership_listeners=[
+ (added_listener, removed_listener)
+ ]
+)
+```
+
#### 7.5.1.2. Listening for Distributed Object Events
The events for distributed objects are invoked when they are created and destroyed in the cluster. When an event
@@ -1587,7 +1600,10 @@ The following is example of adding a distributed object listener to a client.
def distributed_object_listener(event):
print("Distributed object event >>>", event.name, event.service_name, event.event_type)
-client.add_distributed_object_listener(listener_func=distributed_object_listener)
+
+client.add_distributed_object_listener(
+ listener_func=distributed_object_listener
+)
map_name = "test_map"
@@ -1609,7 +1625,7 @@ Distributed object event >>> test_map hz:impl:mapService DESTROYED
#### 7.5.1.3. Listening for Lifecycle Events
-The `Lifecycle Listener` notifies for the following events:
+The lifecycle listener is notified for the following events:
* `STARTING`: The client is starting.
* `STARTED`: The client has started.
@@ -1618,14 +1634,18 @@ The `Lifecycle Listener` notifies for the following events:
* `DISCONNECTED`: The client disconnected from a member.
* `SHUTDOWN`: The client has shut down.
-The following is an example of the `Lifecycle listener` that is added to the `ClientConfig` object and its output.
+The following is an example of a lifecycle listener that is added to the client during startup, together with its output.
```python
-config.add_lifecycle_listener(lambda s: print("Lifecycle Event >>> {}".format(s)))
+def lifecycle_listener(state):
+ print("Lifecycle Event >>>", state)
-client = hazelcast.HazelcastClient(config)
-client.shutdown()
+client = hazelcast.HazelcastClient(
+ lifecycle_listeners=[
+ lifecycle_listener
+ ]
+)
```
**Output:**
@@ -1668,6 +1688,12 @@ Sep 03, 2020 05:00:29 PM HazelcastClient
INFO: [4.0.0] [dev] [hz.client_0] Client shutdown.
```
+You can also add lifecycle listeners after client initialization using the `LifecycleService`.
+
+```python
+client.lifecycle_service.add_listener(lifecycle_listener)
+```
+
### 7.5.2. Distributed Data Structure Events
You can add event listeners to the distributed data structures.
@@ -1687,28 +1713,30 @@ You can listen to map-wide or entry-based events by attaching functions to the `
You can also filter the events using `key` or `predicate`. There is also an option called `include_value`. When this option is set to `True`, the event will also include the value.
-An entry-based event is fired after the operations that affect a specific entry. For example, `Map.put()`, `Map.remove()` or `Map.evict()`. An `EntryEvent` object is passed to the listener function.
+An entry-based event is fired after the operations that affect a specific entry. For example, `map.put()`, `map.remove()` or `map.evict()`. An `EntryEvent` object is passed to the listener function.
See the following example.
```python
def added(event):
- print("Entry Added: {}-{}".format(event.key, event.value))
+ print("Entry Added: %s-%s" % (event.key, event.value))
+
customer_map.add_entry_listener(include_value=True, added_func=added)
-customer_map.put("4", "Jane Doe").result() # Outputs 'Entry Added: 4-Jane Doe'
+customer_map.put("4", "Jane Doe")
```
-A map-wide event is fired as a result of a map-wide operation. For example, `Map.clear()` or `Map.evict_all()`. An `EntryEvent` object is passed to the listener function.
+A map-wide event is fired as a result of a map-wide operation. For example, `map.clear()` or `map.evict_all()`. An `EntryEvent` object is passed to the listener function.
See the following example.
```python
def cleared(event):
- print("Map Cleared: {}".format(event.number_of_affected_entries))
+ print("Map Cleared:", event.number_of_affected_entries)
+
customer_map.add_entry_listener(include_value=True, clear_all_func=cleared)
-customer_map.clear().result() # Outputs 'Map Cleared: 4'
+customer_map.clear().result()
```
## 7.6. Distributed Computing
@@ -1854,7 +1882,7 @@ distributed_map = client.get_map("my-distributed-map").blocking()
distributed_map.put("key", "not-processed")
distributed_map.execute_on_key("key", IdentifiedEntryProcessor("processed"))
-print(distributed_map.get("key")) # Outputs 'processed'
+print(distributed_map.get("key")) # Outputs 'processed'
```
## 7.7. Distributed Query
@@ -2142,7 +2170,6 @@ You can configure this using `metadata-policy` element for the map configuration
```
-
## 7.8. Performance
### 7.8.1. Near Cache
@@ -2162,22 +2189,27 @@ Near Cache is highly recommended for maps that are mostly read.
#### 7.8.1.1. Configuring Near Cache
-The following snippet show how a Near Cache is configured in the Python client, presenting all available values for each element:
+The following snippet shows how a Near Cache is configured in the Python client using the `near_caches` argument,
+presenting all available values for each element.
+When an element is missing from the configuration, its default value is used.
```python
-from hazelcast.config import NearCacheConfig, IN_MEMORY_FORMAT, EVICTION_POLICY
+from hazelcast.config import InMemoryFormat, EvictionPolicy
-near_cache_config = NearCacheConfig("mostly-read-map")
-near_cache_config.invalidate_on_change = False
-near_cache_config.time_to_live_seconds = 600
-near_cache_config.max_idle_seconds = 5
-near_cache_config.in_memory_format = IN_MEMORY_FORMAT.OBJECT
-near_cache_config.eviction_policy = EVICTION_POLICY.LRU
-near_cache_config.eviction_max_size = 100
-near_cache_config.eviction_sampling_count = 8
-near_cache_config.eviction_sampling_pool_size = 16
-
-config.add_near_cache_config(near_cache_config)
+client = hazelcast.HazelcastClient(
+ near_caches={
+ "mostly-read-map": {
+ "invalidate_on_change": True,
+ "time_to_live": 60,
+ "max_idle": 30,
+ "in_memory_format": InMemoryFormat.OBJECT,
+ "eviction_policy": EvictionPolicy.LRU,
+ "eviction_max_size": 100,
+ "eviction_sampling_count": 8,
+ "eviction_sampling_pool_size": 16
+ }
+ }
+)
```
Following are the descriptions of all configuration elements:
@@ -2186,12 +2218,12 @@ Following are the descriptions of all configuration elements:
- `BINARY`: Data will be stored in serialized binary format (default value).
- `OBJECT`: Data will be stored in deserialized format.
- `invalidate_on_change`: Specifies whether the cached entries are evicted when the entries are updated or removed. Its default value is `True`.
-- `time_to_live_seconds`: Maximum number of seconds for each entry to stay in the Near Cache. Entries that are older than this period are automatically evicted from the Near Cache. Regardless of the eviction policy used, `time_to_live_seconds` still applies. Any non-negative number can be assigned. Its default value is `None`. `None` means infinite.
-- `max_idle_seconds`: Maximum number of seconds each entry can stay in the Near Cache as untouched (not read). Entries that are not read more than this period are removed from the Near Cache. Any non-negative number can be assigned. Its default value is `None`. `None` means infinite.
+- `time_to_live`: Maximum number of seconds for each entry to stay in the Near Cache. Entries that are older than this period are automatically evicted from the Near Cache. Regardless of the eviction policy used, `time_to_live` still applies. Any non-negative number can be assigned. Its default value is `None`. `None` means infinite.
+- `max_idle`: Maximum number of seconds each entry can stay in the Near Cache untouched (not read). Entries that are not read for more than this period are removed from the Near Cache. Any non-negative number can be assigned. Its default value is `None`. `None` means infinite.
- `eviction_policy`: Eviction policy configuration. Available values are as follows:
- `LRU`: Least Recently Used (default value).
- `LFU`: Least Frequently Used.
- - `NONE`: No items are evicted and the `eviction_max_size` property is ignored. You still can combine it with `time_to_live_seconds` and `max_idle_seconds` to evict items from the Near Cache.
+ - `NONE`: No items are evicted and the `eviction_max_size` property is ignored. You still can combine it with `time_to_live` and `max_idle` to evict items from the Near Cache.
- `RANDOM`: A random item is evicted.
- `eviction_max_size`: Maximum number of entries kept in the memory before eviction kicks in.
- `eviction_sampling_count`: Number of random entries that are evaluated to see if some of them are already expired. If there are expired entries, those are removed and there is no need for eviction.
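The sampling idea can be illustrated with a stand-alone sketch in plain Python; this is not the client's internal code, and `evict_one` is a hypothetical helper:

```python
import random
import time

def evict_one(cache, sampling_count, now=None):
    """Evaluate a random sample of entries and drop any expired ones first.
    Only if none of the sampled entries are expired, evict the least
    recently used entry among the sample.
    `cache` maps key -> (last_access_time, expiry_time)."""
    now = time.time() if now is None else now
    sample = random.sample(list(cache), min(sampling_count, len(cache)))
    expired = [k for k in sample if cache[k][1] <= now]
    if expired:
        for k in expired:
            del cache[k]
        return expired
    # No expired entries in the sample: fall back to LRU eviction
    lru_key = min(sample, key=lambda k: cache[k][0])
    del cache[lru_key]
    return [lru_key]

cache = {"a": (1.0, 100.0), "b": (2.0, 0.5), "c": (3.0, 100.0)}
removed = evict_one(cache, sampling_count=3, now=10.0)  # "b" has expired
```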
@@ -2199,16 +2231,22 @@ Following are the descriptions of all configuration elements:
#### 7.8.1.2. Near Cache Example for Map
-The following is an example configuration for a Near Cache defined in the `mostly-read-map` map. According to this configuration, the entries are stored as `OBJECT`'s in this Near Cache and eviction starts when the count of entries reaches `5000`; entries are evicted based on the `LRU` (Least Recently Used) policy. In addition, when an entry is updated or removed on the member side, it is eventually evicted on the client side.
+The following is an example configuration for a Near Cache defined in the `mostly-read-map` map.
+According to this configuration, the entries are stored in the `OBJECT` in-memory format in this Near Cache and eviction starts when the count of entries reaches `5000`;
+entries are evicted based on the `LRU` (Least Recently Used) policy. In addition, when an entry is updated or removed on the member side,
+it is eventually evicted on the client side.
```python
-near_cache_config = NearCacheConfig("mostly-read-map")
-near_cache_config.invalidate_on_change = True
-near_cache_config.in_memory_format = IN_MEMORY_FORMAT.OBJECT
-near_cache_config.eviction_policy = EVICTION_POLICY.LRU
-near_cache_config.eviction_max_size = 5000
-
-config.add_near_cache_config(near_cache_config)
+client = hazelcast.HazelcastClient(
+ near_caches={
+ "mostly-read-map": {
+ "invalidate_on_change": True,
+ "in_memory_format": InMemoryFormat.OBJECT,
+ "eviction_policy": EvictionPolicy.LRU,
+ "eviction_max_size": 5000,
+ }
+ }
+)
```
#### 7.8.1.3. Near Cache Eviction
@@ -2223,8 +2261,8 @@ Once the eviction is triggered, the configured `eviction_policy` determines whic
Expiration means the eviction of expired records. A record is expired:
-- If it is not touched (accessed/read) for `max_idle_seconds`
-- `time_to_live_seconds` passed since it is put to Near Cache
+- If it is not touched (accessed/read) for `max_idle` seconds
+- If `time_to_live` seconds have passed since it was put into the Near Cache
The actual expiration is performed when a record is accessed: it is checked if the record is expired or not. If it is expired, it is evicted and `KeyError` is raised to the caller.
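The two expiration rules above can be sketched as a small helper; this is an illustration only, and `is_expired` is a hypothetical name, not part of the client API:

```python
import time

def is_expired(created_at, last_read_at, time_to_live, max_idle, now=None):
    """Mirror of the two expiration rules: `None` means "infinite",
    matching the defaults of `time_to_live` and `max_idle`."""
    now = time.time() if now is None else now
    ttl_passed = time_to_live is not None and now - created_at >= time_to_live
    idle_passed = max_idle is not None and now - last_read_at >= max_idle
    return ttl_passed or idle_passed

# Created at t=0, last read at t=50: at t=70 the entry is within max_idle
# but past time_to_live, so it counts as expired
expired = is_expired(0.0, 50.0, time_to_live=60, max_idle=30, now=70.0)
```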
@@ -2239,21 +2277,23 @@ See the [Near Cache Invalidation section](https://docs.hazelcast.org/docs/latest
You can monitor your clients using Hazelcast Management Center.
-As a prerequisite, you need to enable the client statistics before starting your clients. There are two properties related to client statistics:
+As a prerequisite, you need to enable the client statistics before starting your clients. There are two `HazelcastClient` arguments related to client statistics:
-- `hazelcast.client.statistics.enabled`: If set to `True`, it enables collecting the client statistics and sending them to the cluster. When it is `True` you can monitor the clients that are connected to your Hazelcast cluster, using Hazelcast Management Center. Its default value is `False`.
+- `statistics_enabled`: If set to `True`, it enables collecting the client statistics and sending them to the cluster. When it is `True` you can monitor the clients that are connected to your Hazelcast cluster, using Hazelcast Management Center. Its default value is `False`.
-- `hazelcast.client.statistics.period.seconds`: Period in seconds the client statistics are collected and sent to the cluster. Its default value is `3`.
+- `statistics_period`: Period in seconds the client statistics are collected and sent to the cluster. Its default value is `3`.
You can enable client statistics and set a non-default period in seconds as follows:
```python
-config = hazelcast.ClientConfig()
-config.set_property(ClientProperties.STATISTICS_ENABLED.name, True)
-config.set_property(ClientProperties.STATISTICS_PERIOD_SECONDS.name, 4)
+client = hazelcast.HazelcastClient(
+ statistics_enabled=True,
+ statistics_period=4
+)
```
-Hazelcast Python client can collect statistics related to the client and Near Caches without an extra dependency. However, to get the statistics about the runtime and operating system, [psutil](https://pypi.org/project/psutil/) is used as an extra dependency.
+Hazelcast Python client can collect statistics related to the client and Near Caches without an extra dependency.
+However, to get the statistics about the runtime and operating system, [psutil](https://pypi.org/project/psutil/) is used as an extra dependency.
If `psutil` is installed, runtime and operating system statistics will be sent to the cluster along with the statistics related to the client and Near Caches.
If not, only the client and Near Cache statistics will be sent.
@@ -2265,19 +2305,7 @@ If not, only the client and Near Cache statistics will be sent.
pip install hazelcast-python-client[stats]
```
-**From source**This can be done by setting the `hazelcast.client.statistics.enabled` system property to `true` on the **member** as the following:
-
-```xml
-
- ...
-
- true
-
- ...
-
-```
-
-Also, you need to enable the client statistics in the Python client.
+**From source**
```
pip install -e .[stats]
@@ -2285,23 +2313,18 @@ pip install -e .[stats]
After enabling the client statistics, you can monitor your clients using Hazelcast Management Center. Please refer to the [Monitoring Clients section](https://docs.hazelcast.org/docs/management-center/latest/manual/html/index.html#monitoring-clients) in the Hazelcast Management Center Reference Manual for more information on the client statistics.
+> **NOTE: Statistics sent by Hazelcast Python client 4.0 are compatible with Management Center 4.0. Management Center 4.2020.08 and newer versions will be supported in version 4.1 of the client.**
+
### 7.9.2 Logging Configuration
-Hazelcast Python client allows you to configure the logging through the `LoggerConfig` in the `ClientConfig` class.
+Hazelcast Python client allows you to configure the logging through the arguments below.
-`LoggerConfig` contains options that allow you to set the logging level and a custom logging configuration file to the Hazelcast Python client.
+These arguments allow you to set the logging level and a custom logging configuration for the Hazelcast Python client.
-By default, Hazelcast Python client will log to the `sys.stderr` with the `INFO` logging level and `%(asctime)s %(name)s\n%(levelname)s: %(version_message)s %(message)s` format where the `version_message` contains the information about the client version, group name and client name.
+By default, Hazelcast Python client will log to the `sys.stderr` with the `INFO` logging level and `%(asctime)s %(name)s\n%(levelname)s: %(version_message)s %(message)s` format where the `version_message` contains the information about the client version, cluster name and client name.
Below is an example output with the default logging configuration.
-```python
-import hazelcast
-
-client = hazelcast.HazelcastClient()
-client.shutdown()
-```
-
**Output to the `sys.stderr`**
```
Sep 03, 2020 05:41:35 PM HazelcastClient.LifecycleService
@@ -2335,11 +2358,9 @@ Sep 03, 2020 05:41:35 PM HazelcastClient
INFO: [4.0.0] [dev] [hz.client_0] Client shutdown.
```
-Let's go over the `LoggerConfig` options one by one.
-
#### Setting Logging Level
-Although you can not change the logging levels used within the Hazelcast Python client, you can specify a logging level that is used to threshold the logs that are at least as severe as your specified level using `ClientConfig.logger_config.level`.
+Although you cannot change the logging levels used within the Hazelcast Python client, you can use the `logging_level` argument to specify a threshold level; only the logs that are at least as severe as that level are emitted.
Here is the table listing the default logging levels that come with the `logging` module and numeric values that represent their severity:
@@ -2356,18 +2377,19 @@ For example, setting the logging level to `logging.DEBUG` will cause all the log
By default, the logging level is set to `logging.INFO`.
-To turn off the logging, you can set `ClientConfig.logger.level` to a value greater than the numeric value of `logging.CRITICAL`. For example, the configuration below turns off the logging for the Hazelcast Python client.
-
The following is an example of setting the logging level to `logging.DEBUG`:
```python
-config.logger.level = 100 # Any value greater than 50 will turn off the logging
-client = hazelcast.HazelcastClient(config)
+import logging
+
+client = hazelcast.HazelcastClient(
+ logging_level=logging.DEBUG
+)
```
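Assuming the same threshold semantics apply to the `logging_level` argument, logging can be turned off entirely by setting a level greater than the numeric value of `logging.CRITICAL`:

```python
import logging

import hazelcast

# Any value greater than logging.CRITICAL (50) suppresses all client logs
client = hazelcast.HazelcastClient(
    logging_level=logging.CRITICAL + 1
)
```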
#### Setting a Custom Logging Configuration
-`ClientConfig.logger_config.config_file` can be used to configure the logger for the Hazelcast Python client entirely.
+The `logging_config` argument can be used to configure the logger for the Hazelcast Python client entirely.
-When set, this field should contain the absolute path of the JSON file that contains the logging configuration as described in the [Configuration dictionary schema](https://docs.python.org/3/library/logging.config.html#logging-config-dictschema). This file will be read and the contents of it will be directly fed into the `logging.dictConfig` function.
+When set, this argument should contain the logging configuration as described in the [Configuration dictionary schema](https://docs.python.org/3/library/logging.config.html#logging-config-dictschema).
When this argument is set, the `logging_level` argument is simply discarded and this configuration is used.
@@ -2375,11 +2397,30 @@ All Hazelcast Python client related loggers have `HazelcastClient` as their pare
Let's replicate the default configuration used within the Hazelcast client with this configuration method.
-**config.json**
-```json
-{
+**some_package/log.py**
+```python
+import logging
+
+from hazelcast.version import CLIENT_VERSION
+
+class VersionMessageFilter(logging.Filter):
+ def filter(self, record):
+ record.version_message = "[" + CLIENT_VERSION + "]"
+ return True
+
+class HazelcastFormatter(logging.Formatter):
+ def format(self, record):
+ client_name = getattr(record, "client_name", None)
+ cluster_name = getattr(record, "cluster_name", None)
+ if client_name and cluster_name:
+ record.msg = "[" + cluster_name + "] [" + client_name + "] " + record.msg
+ return super(HazelcastFormatter, self).format(record)
+```
+
+```python
+logging_config = {
"version": 1,
- "disable_existing_loggers": false,
+ "disable_existing_loggers": False,
"filters": {
"version_message_filter": {
"()": "some_package.log.VersionMessageFilter"
@@ -2407,36 +2448,10 @@ Let's replicate the default configuration used within the Hazelcast client with
}
}
}
-```
-**some_package/log.py**
-```python
-import logging
-
-from hazelcast.version import CLIENT_VERSION
-
-class VersionMessageFilter(logging.Filter):
- def filter(self, record):
- record.version_message = "[" + CLIENT_VERSION + "]"
- return True
-
-class HazelcastFormatter(logging.Formatter):
- def format(self, record):
- client_name = getattr(record, "client_name", None)
- group_name = getattr(record, "group_name", None)
- if client_name and group_name:
- record.msg = "[" + group_name + "] [" + client_name + "] " + record.msg
- return super(HazelcastFormatter, self).format(record)
-```
-
-**some_package/test.py**
-```python
-import hazelcast
-
-config = hazelcast.ClientConfig()
-config.logger.config_file = "/home/hazelcast/config.json"
-
-client = hazelcast.HazelcastClient(config)
+client = hazelcast.HazelcastClient(
+ logging_config=logging_config
+)
## Some operations
@@ -2457,8 +2472,12 @@ See the [related section](https://docs.hazelcast.org/docs/management-center/late
You can define the client labels using the `labels` config option. See the below example.
```python
-config.labels.add("role admin")
-config.labels.add("region foo")
+client = hazelcast.HazelcastClient(
+ labels=[
+ "role admin",
+ "region foo"
+ ]
+)
```
## 7.11. Defining Client Name
@@ -2470,7 +2489,9 @@ This id is incremented and set by the client, so it may not be unique between di
You can set the client name using the `client_name` configuration element.
```python
-config.client_name = "blue_client_0"
+client = hazelcast.HazelcastClient(
+ client_name="blue_client_0"
+)
```
## 7.12. Configuring Load Balancer
@@ -2486,14 +2507,16 @@ You can use one of them by setting the `load_balancer` config option.
The following are example configurations.
-```javascript
-from hazelcast.cluster import RandomLB
+```python
+from hazelcast.util import RandomLB
-config.load_balancer = RandomLB()
+client = hazelcast.HazelcastClient(
+ load_balancer=RandomLB()
+)
```
You can also provide a custom load balancer implementation to use different load balancing policies.
-To do so, you should provide a class that implements the `AbstractLoadBalancer`s interface or extend the `AbstractLoadBalancer` class for that purpose and provide the load balancer object into the `load_balancer` config option.
+To do so, you should provide a class that implements the `LoadBalancer` interface or extends the `AbstractLoadBalancer` class, and pass an instance of it to the `load_balancer` config option.
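A custom policy might look like the following sketch. It assumes the interface used by the built-in balancers exposes `init(cluster_service)` and `next()`, and that the cluster service provides `get_members()`; check the `LoadBalancer` API before relying on these names:

```python
class FirstMemberLB:
    """Hypothetical balancer that always routes to the first known member."""

    def __init__(self):
        self._members = []

    def init(self, cluster_service):
        # Called once by the client with its cluster service
        self._members = list(cluster_service.get_members())

    def next(self):
        # Return the member to route the next operation to
        return self._members[0] if self._members else None
```

An instance of it would then be passed to the client via `load_balancer=FirstMemberLB()`.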
# 8. Securing Client Connection
@@ -2522,41 +2545,45 @@ TLS/SSL for the Hazelcast Python client can be configured using the `SSLConfig`
Let's first give an example of a sample configuration and then go over the configuration options one by one:
```python
-import hazelcast
-from hazelcast.config import PROTOCOL
+from hazelcast.config import SSLProtocol
-config = hazelcast.ClientConfig()
-config.network.ssl.enabled = True
-config.network.ssl.cafile = "/home/hazelcast/cafile.pem"
-config.network.ssl.certfile = "/home/hazelcast/certfile.pem"
-config.network.ssl.keyfile = "/home/hazelcast/keyfile.pem"
-config.network.ssl.password = "hazelcast"
-config.network.ssl.protocol = PROTOCOL.TLSv1_3
-config.network.ssl.ciphers = "DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA"
+client = hazelcast.HazelcastClient(
+ ssl_enabled=True,
+ ssl_cafile="/home/hazelcast/cafile.pem",
+ ssl_certfile="/home/hazelcast/certfile.pem",
+ ssl_keyfile="/home/hazelcast/keyfile.pem",
+ ssl_password="keyfile-password",
+ ssl_protocol=SSLProtocol.TLSv1_3,
+ ssl_ciphers="DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA"
+)
```
##### Enabling TLS/SSL
-TLS/SSL for the Hazelcast Python client can be enabled/disabled using the `enabled` option. When this option is set to `True`, TLS/SSL will be configured with respect to the other `SSLConfig` options.
-Setting this option to `False` will result in discarding the other `SSLConfig` options.
+TLS/SSL for the Hazelcast Python client can be enabled/disabled using the `ssl_enabled` option. When this option is set to `True`, TLS/SSL will be configured with respect to the other SSL options.
+Setting this option to `False` will result in discarding the other SSL options.
The following is an example configuration:
```python
-config.network.ssl.enabled = True
+client = hazelcast.HazelcastClient(
+ ssl_enabled=True
+)
```
Default value is `False` (disabled).
##### Setting CA File
-Certificates of the Hazelcast members can be validated against `cafile`. This option should point to the absolute path of the concatenated CA certificates in PEM format.
-When SSL is enabled and `cafile` is not set, a set of default CA certificates from default locations will be used.
+Certificates of the Hazelcast members can be validated against `ssl_cafile`. This option should point to the absolute path of the concatenated CA certificates in PEM format.
+When SSL is enabled and `ssl_cafile` is not set, a set of default CA certificates from default locations will be used.
The following is an example configuration:
```python
-config.network.ssl.cafile = "/home/hazelcast/cafile.pem"
+client = hazelcast.HazelcastClient(
+ ssl_cafile="/home/hazelcast/cafile.pem"
+)
```
##### Setting Client Certificate
@@ -2564,19 +2591,21 @@ config.network.ssl.cafile = "/home/hazelcast/cafile.pem"
When mutual authentication is enabled on the member side, clients or other members should also provide a certificate file that identifies themselves.
Then, Hazelcast members can use these certificates to validate the identity of their peers.
-Client certificate can be set using the `certfile`. This option should point to the absolute path of the client certificate in PEM format.
+The client certificate can be set using the `ssl_certfile` option. This option should point to the absolute path of the client certificate in PEM format.
The following is an example configuration:
```python
-config.network.ssl.certfile = "/home/hazelcast/certfile.pem"
+client = hazelcast.HazelcastClient(
+ ssl_certfile="/home/hazelcast/certfile.pem"
+)
```
##### Setting Private Key
-Private key of the `certfile` can be set using the `keyfile`. This option should point to the absolute path of the private key file for the client certificate in the PEM format.
+Private key of the `ssl_certfile` can be set using the `ssl_keyfile`. This option should point to the absolute path of the private key file for the client certificate in the PEM format.
-If this option is not set, private key will be taken from `certfile`. In this case, `certfile` should be in the following format.
+If this option is not set, the private key will be taken from `ssl_certfile`. In this case, `ssl_certfile` should be in the following format:
```
-----BEGIN RSA PRIVATE KEY-----
@@ -2590,52 +2619,56 @@ If this option is not set, private key will be taken from `certfile`. In this ca
The following is an example configuration:
```python
-config.network.ssl.keyfile = "/home/hazelcast/keyfile.pem"
+client = hazelcast.HazelcastClient(
+ ssl_keyfile="/home/hazelcast/keyfile.pem"
+)
```
##### Setting Password of the Private Key
-If the private key is encrypted using a password, `password` will be used to decrypt it. The `password` may be a function to call to get the password.
+If the private key is encrypted using a password, `ssl_password` will be used to decrypt it. The `ssl_password` may be a function to call to get the password.
In that case, it will be called with no arguments, and it should return a string, bytes or bytearray. If the return value is a string it will be encoded as UTF-8 before using it to decrypt the key.
-Alternatively a string, bytes or bytearray value may be supplied directly as the password.
+Alternatively, a string, `bytes` or `bytearray` value may be supplied directly as the password.
The following is an example configuration:
```python
-config.network.ssl.password = "hazelcast"
+client = hazelcast.HazelcastClient(
+ ssl_password="keyfile-password"
+)
```
##### Setting the Protocol
-`protocol` can be used to select the protocol that will be used in the TLS/SSL communication. Hazelcast Python client offers the following protocols:
+`ssl_protocol` can be used to select the protocol that will be used in the TLS/SSL communication. Hazelcast Python client offers the following protocols:
* **SSLv2** : SSL 2.0 Protocol. *RFC 6176 prohibits the usage of SSL 2.0.*
* **SSLv3** : SSL 3.0 Protocol. *RFC 7568 prohibits the usage of SSL 3.0.*
-* **SSL** : Alias for SSL 3.0
* **TLSv1** : TLS 1.0 Protocol described in RFC 2246
* **TLSv1_1** : TLS 1.1 Protocol described in RFC 4346
* **TLSv1_2** : TLS 1.2 Protocol described in RFC 5246
* **TLSv1_3** : TLS 1.3 Protocol described in RFC 8446
-* **TLS** : Alias for TLS 1.2
> Note that TLSv1+ requires at least Python 2.7.9 or Python 3.4 built with OpenSSL 1.0.1+, and TLSv1_3 requires at least Python 2.7.15 or Python 3.7 built with OpenSSL 1.1.1+.
-These protocol versions can be selected using the `hazelcast.config.PROTOCOL` as follows:
+These protocol versions can be selected using the `ssl_protocol` as follows:
```python
-from hazelcast.config import PROTOCOL
+from hazelcast.config import SSLProtocol
-config.network.ssl.protocol = PROTOCOL.TLSv1_3
+client = hazelcast.HazelcastClient(
+ ssl_protocol=SSLProtocol.TLSv1_3
+)
```
> Note that the Hazelcast Python client and the Hazelcast members should have the same protocol version in order for TLS/SSL to work. In case of a protocol mismatch, connection attempts will be refused.
-Default value is `PROTOCOL.TLS` which is an alias for `PROTOCOL.TLSv1_2`.
+Default value is `SSLProtocol.TLSv1_2`.
##### Setting Cipher Suites
-Cipher suites that will be used in the TLS/SSL communication can be set using the `ciphers` option. Cipher suites should be in the
+Cipher suites that will be used in the TLS/SSL communication can be set using the `ssl_ciphers` option. Cipher suites should be in the
OpenSSL cipher list format. More than one cipher suite can be set by separating them with a colon.
The TLS/SSL implementation will honor the cipher suite order, so the Hazelcast Python client will offer the ciphers to the Hazelcast members in the given order.
@@ -2645,7 +2678,9 @@ Note that, when this option is not set, all the available ciphers will be offere
The following is an example configuration:
```python
-config.network.ssl.ciphers = "DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA"
+client = hazelcast.HazelcastClient(
+ ssl_ciphers="DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA"
+)
```
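Only cipher suites that your local OpenSSL build knows about can actually be offered. To see which suites are available on your machine (and therefore which values make sense for `ssl_ciphers`), a quick standard-library sketch:

```python
import ssl

# Build a default client-side context and list the cipher suites it can offer.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
for cipher in context.get_ciphers():
    # Each entry is a dict; "name" is the value usable in an OpenSSL cipher list
    print(cipher["name"], cipher["protocol"])
```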
#### 8.1.3. Mutual Authentication
@@ -2668,7 +2703,7 @@ To enable mutual authentication, firstly, you need to set the following property
You can see the details of setting mutual authentication on the server side in the [Mutual Authentication section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#mutual-authentication) of the Hazelcast IMDG Reference Manual.
-On the client side, you have to provide `SSLConfig.cafile`, `SSLConfig.certfile` and `SSLConfig.keyfile` on top of the other TLS/SSL configurations. See the [TLS/SSL for Hazelcast Python Clients](#812-tlsssl-for-hazelcast-python-clients) for the details of these options.
+On the client side, you have to provide `ssl_cafile`, `ssl_certfile` and `ssl_keyfile` on top of the other TLS/SSL configurations. See the [TLS/SSL for Hazelcast Python Clients](#812-tlsssl-for-hazelcast-python-clients) for the details of these options.
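Putting these options together, a minimal mutual-authentication client might look like the sketch below. It assumes a running cluster with mutual authentication enabled; the certificate and key paths are placeholders for your own files:

```python
import hazelcast

client = hazelcast.HazelcastClient(
    ssl_enabled=True,
    ssl_cafile="/path/to/ca.pem",      # CA certificate used to verify the member
    ssl_certfile="/path/to/cert.pem",  # client certificate presented to the member
    ssl_keyfile="/path/to/key.pem",    # private key for the client certificate
    ssl_password="keyfile-password",   # only needed if the key file is encrypted
)
```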
# 9. Development and Testing
diff --git a/benchmarks/simple_map_nearcache_bench.py b/benchmarks/simple_map_nearcache_bench.py
index d8a68eeb57..7ef9196b41 100644
--- a/benchmarks/simple_map_nearcache_bench.py
+++ b/benchmarks/simple_map_nearcache_bench.py
@@ -5,7 +5,7 @@
import time
import hazelcast
-from hazelcast.config import NearCacheConfig, IN_MEMORY_FORMAT
+from hazelcast.config import NearCacheConfig, InMemoryFormat
from hazelcast import six
from hazelcast.six.moves import range
@@ -32,7 +32,7 @@ def init():
config.network.addresses.append("127.0.0.1")
near_cache_config = NearCacheConfig(MAP_NAME)
- near_cache_config.in_memory_format = IN_MEMORY_FORMAT.OBJECT
+ near_cache_config.in_memory_format = InMemoryFormat.OBJECT
config.add_near_cache_config(near_cache_config)
try:
diff --git a/examples/cloud-discovery/hazelcast_cloud_discovery_example.py b/examples/cloud-discovery/hazelcast_cloud_discovery_example.py
index 25ea54035c..b1e87ddd8b 100644
--- a/examples/cloud-discovery/hazelcast_cloud_discovery_example.py
+++ b/examples/cloud-discovery/hazelcast_cloud_discovery_example.py
@@ -1,24 +1,18 @@
import hazelcast
-config = hazelcast.ClientConfig()
-
-# Set up cluster name for authentication
-config.cluster_name.name = "YOUR_CLUSTER_NAME"
-
-# Enable Hazelcast.Cloud configuration and set the token of your cluster.
-config.network.cloud.enabled = True
-config.network.cloud.discovery_token = "YOUR_CLUSTER_DISCOVERY_TOKEN"
-
-# If you have enabled encryption for your cluster, also configure TLS/SSL for the client.
-# Otherwise, skip this step.
-config.network.ssl.enabled = True
-config.network.ssl.cafile = "/path/to/ca.pem"
-config.network.ssl.certfile = "/path/to/cert.pem"
-config.network.ssl.keyfile = "/path/to/key.pem"
-config.network.ssl.password = "YOUR_KEY_STORE_PASSWORD"
-
-# Start a new Hazelcast client with this configuration.
-client = hazelcast.HazelcastClient(config)
+client = hazelcast.HazelcastClient(
+ # Set up cluster name for authentication
+ cluster_name="YOUR_CLUSTER_NAME",
+ # Set the token of your cloud cluster
+ cloud_discovery_token="YOUR_CLUSTER_DISCOVERY_TOKEN",
+ # If you have enabled encryption for your cluster, also configure TLS/SSL for the client.
+ # Otherwise, skip options below.
+ ssl_enabled=True,
+ ssl_cafile="/path/to/ca.pem",
+ ssl_certfile="/path/to/cert.pem",
+ ssl_keyfile="/path/to/key.pem",
+ ssl_password="YOUR_KEY_STORE_PASSWORD"
+)
my_map = client.get_map("map-on-the-cloud").blocking()
my_map.put("key", "value")
diff --git a/examples/flake-id-generator/flake_id_generator_example.py b/examples/flake-id-generator/flake_id_generator_example.py
index b9e37cc03f..84e61b2b17 100644
--- a/examples/flake-id-generator/flake_id_generator_example.py
+++ b/examples/flake-id-generator/flake_id_generator_example.py
@@ -1,20 +1,15 @@
import hazelcast
-config = hazelcast.ClientConfig()
-flake_id_generator_config = hazelcast.FlakeIdGeneratorConfig()
-
-# Default value is 600000 (10 minutes)
-flake_id_generator_config.prefetch_validity_in_millis = 30000
-
-# Default value is 100
-flake_id_generator_config.prefetch_count = 50
-
-config.add_flake_id_generator_config(flake_id_generator_config)
-client = hazelcast.HazelcastClient(config)
+client = hazelcast.HazelcastClient(flake_id_generators={
+ "id-generator": {
+ "prefetch_count": 50,
+ "prefetch_validity": 30,
+ }
+})
generator = client.get_flake_id_generator("id-generator").blocking()
for _ in range(100):
- print("Id: {}".format(generator.new_id()))
+ print("Id:", generator.new_id())
client.shutdown()
diff --git a/examples/learning-basics/1-configure_client.py b/examples/learning-basics/1-configure_client.py
index 4935e472ea..7f71bcbfc1 100644
--- a/examples/learning-basics/1-configure_client.py
+++ b/examples/learning-basics/1-configure_client.py
@@ -1,16 +1,15 @@
import hazelcast
-# Create configuration for the client
-config = hazelcast.ClientConfig()
-print("Cluster name: {}".format(config.cluster_name))
-
-# Add member's host:port to the configuration.
-# For each member on your Hazelcast cluster, you should add its host:port pair to the configuration.
-config.network.addresses.append("127.0.0.1:5701")
-config.network.addresses.append("127.0.0.1:5702")
-
-# Create a client using the configuration above
-client = hazelcast.HazelcastClient(config)
+# Create a client using the configuration below
+client = hazelcast.HazelcastClient(
+ # Add member's host:port to the configuration.
+ # For each member on your Hazelcast cluster, you should add its host:port pair to the configuration.
+ # If the port is not specified, by default 5701, 5702 and 5703 will be tried.
+ cluster_members=[
+ "127.0.0.1:5701",
+ "127.0.0.1:5702",
+ ]
+)
# Disconnect the client and shutdown
client.shutdown()
diff --git a/examples/learning-basics/2-create_a_map.py b/examples/learning-basics/2-create_a_map.py
index 447051df9e..a459ae04f2 100644
--- a/examples/learning-basics/2-create_a_map.py
+++ b/examples/learning-basics/2-create_a_map.py
@@ -1,15 +1,17 @@
import hazelcast
# Connect
-config = hazelcast.ClientConfig()
-config.network.addresses.append("127.0.0.1:5701")
-client = hazelcast.HazelcastClient(config)
+client = hazelcast.HazelcastClient(
+ cluster_members=[
+ "127.0.0.1:5701"
+ ]
+)
# Get a map that is stored on the server side. We can access it from the client
-greetings_map = client.get_map("greetings-map")
+greetings_map = client.get_map("greetings-map").blocking()
# Map is empty on the first run. It will be non-empty if Hazelcast has data on this map
-print("Map: {}, Size: {}".format(greetings_map.name, greetings_map.size().result()))
+print("Size before:", greetings_map.size())
# Write data to map. If there is a data with the same key already, it will be overwritten
greetings_map.put("English", "hello world")
@@ -19,7 +21,7 @@
greetings_map.put("French", "bonjour monde")
# 5 entries are added to the map. There should be at least 5 entries on the server side
-print("Map: {}, Size: {}".format(greetings_map.name, greetings_map.size().result()))
+print("Size after:", greetings_map.size())
# Shutdown the client
client.shutdown()
diff --git a/examples/learning-basics/3-read_from_a_map.py b/examples/learning-basics/3-read_from_a_map.py
index f0ac79669c..c882040414 100644
--- a/examples/learning-basics/3-read_from_a_map.py
+++ b/examples/learning-basics/3-read_from_a_map.py
@@ -1,19 +1,21 @@
import hazelcast
# Connect
-config = hazelcast.ClientConfig()
-config.network.addresses.append("127.0.0.1:5701")
-client = hazelcast.HazelcastClient(config)
+client = hazelcast.HazelcastClient(
+ cluster_members=[
+ "127.0.0.1:5701"
+ ]
+)
# We can access maps on the server from the client. Let's access the greetings map that we created already
-greetings_map = client.get_map("greetings-map")
+greetings_map = client.get_map("greetings-map").blocking()
-# Get the keys of the map
-keys = greetings_map.key_set().result()
+# Get the entry set of the map
+entry_set = greetings_map.entry_set()
# Print key-value pairs
-for key in keys:
- print("{} -> {}".format(key, greetings_map.get(key).result()))
+for key, value in entry_set:
+ print("%s -> %s" % (key, value))
# Shutdown the client
client.shutdown()
diff --git a/examples/map/map_async_example.py b/examples/map/map_async_example.py
index 8e02bd603b..e007ee62f9 100644
--- a/examples/map/map_async_example.py
+++ b/examples/map/map_async_example.py
@@ -5,15 +5,15 @@
def fill_map(hz_map, count=10):
entries = {"key-" + str(i): "value-" + str(i) for i in range(count)}
- hz_map.put_all(entries)
+ hz_map.put_all(entries).result()
def put_callback(future):
- print("Map put: {}".format(future.result()))
+ print("Map put:", future.result())
def contains_callback(future):
- print("Map contains: {}".format(future.result()))
+ print("Map contains:", future.result())
client = hazelcast.HazelcastClient()
@@ -21,12 +21,12 @@ def contains_callback(future):
my_map = client.get_map("async-map")
fill_map(my_map)
-print("Map size: {}".format(my_map.size().result()))
+print("Map size: %d" % my_map.size().result())
my_map.put("key", "async-value").add_done_callback(put_callback)
key = random.random()
-print("Random key: {}".format(key))
+print("Random key:", key)
my_map.contains_key(key).add_done_callback(contains_callback)
time.sleep(3)
diff --git a/examples/map/map_basic_example.py b/examples/map/map_basic_example.py
index 2d1ea1dd98..676d20764e 100644
--- a/examples/map/map_basic_example.py
+++ b/examples/map/map_basic_example.py
@@ -9,15 +9,15 @@
my_map.put("2", "Paris")
my_map.put("3", "Istanbul")
-print("Entry with key 3: {}".format(my_map.get("3").result()))
+print("Entry with key 3:", my_map.get("3").result())
-print("Map size: {}".format(my_map.size().result()))
+print("Map size:", my_map.size().result())
# Print the map
print("\nIterating over the map: \n")
entries = my_map.entry_set().result()
for key, value in entries:
- print("{} -> {}".format(key, value))
+ print("%s -> %s" % (key, value))
client.shutdown()
diff --git a/examples/map/map_blocking_example.py b/examples/map/map_blocking_example.py
index 97f634ed87..14d522ce87 100644
--- a/examples/map/map_blocking_example.py
+++ b/examples/map/map_blocking_example.py
@@ -1,6 +1,5 @@
import hazelcast
import random
-import time
def fill_map(hz_map, count=10):
@@ -13,20 +12,20 @@ def fill_map(hz_map, count=10):
my_map = client.get_map("sync-map").blocking()
fill_map(my_map)
-print("Map size: {}".format(my_map.size()))
+print("Map size:", my_map.size())
random_key = random.random()
my_map.put(random_key, "value")
-print("Map contains {}: {}".format(random_key, my_map.contains_key(random_key)))
-print("Map size: {}".format(my_map.size()))
+print("Map contains %s: %s" % (random_key, my_map.contains_key(random_key)))
+print("Map size:", my_map.size())
my_map.remove(random_key)
-print("Map contains {}: {}".format(random_key, my_map.contains_key(random_key)))
-print("Map size: {}".format(my_map.size()))
+print("Map contains %s: %s" % (random_key, my_map.contains_key(random_key)))
+print("Map size:", my_map.size())
print("\nIterate over the map\n")
for key, value in my_map.entry_set():
- print("Key: {} -> Value: {}".format(key, value))
+ print("Key: %s -> Value: %s" % (key, value))
client.shutdown()
diff --git a/examples/map/map_listener_example.py b/examples/map/map_listener_example.py
index 96f04d82ef..8fdd8b8aa5 100644
--- a/examples/map/map_listener_example.py
+++ b/examples/map/map_listener_example.py
@@ -4,17 +4,15 @@
def entry_added(event):
- print("Entry added with key: {}, value: {}".format(event.key, event.value))
+ print("Entry added with key: %s, value: %s" % (event.key, event.value))
def entry_removed(event):
- print("Entry removed with key: {}".format(event.key))
+ print("Entry removed with key:", event.key)
def entry_updated(event):
- print("Entry updated with key: {}, old value: {}, new value: {}".format(event.key,
- event.old_value,
- event.value))
+ print("Entry updated with key: %s, old value: %s, new value: %s" % (event.key, event.old_value, event.value))
client = hazelcast.HazelcastClient()
diff --git a/examples/map/map_portable_query_example.py b/examples/map/map_portable_query_example.py
index b919947319..979c36f065 100644
--- a/examples/map/map_portable_query_example.py
+++ b/examples/map/map_portable_query_example.py
@@ -34,11 +34,11 @@ def __eq__(self, other):
return isinstance(other, Employee) and self.name == other.name and self.age == other.age
-config = hazelcast.ClientConfig()
-
-config.serialization.portable_factories[Employee.FACTORY_ID] = {Employee.CLASS_ID: Employee}
-
-client = hazelcast.HazelcastClient(config)
+client = hazelcast.HazelcastClient(portable_factories={
+ Employee.FACTORY_ID: {
+ Employee.CLASS_ID: Employee
+ }
+})
my_map = client.get_map("employee-map")
@@ -46,16 +46,16 @@ def __eq__(self, other):
my_map.put(1, Employee("Jane", 29))
my_map.put(2, Employee("Joe", 30))
-print("Map Size: {}".format(my_map.size().result()))
+print("Map Size:", my_map.size().result())
predicate = sql("age <= 29")
def values_callback(f):
result_set = f.result()
- print("Query Result Size: {}".format(len(result_set)))
+ print("Query Result Size:", len(result_set))
for value in result_set:
- print("value: {}".format(value))
+ print("Value:", value)
my_map.values(predicate).add_done_callback(values_callback)
diff --git a/examples/map/map_portable_versioning_example.py b/examples/map/map_portable_versioning_example.py
index 558d8f038b..3dbcd4ca8d 100644
--- a/examples/map/map_portable_versioning_example.py
+++ b/examples/map/map_portable_versioning_example.py
@@ -125,17 +125,23 @@ def __eq__(self, other):
# Let's now configure 3 clients with 3 different versions of Employee.
-config = hazelcast.ClientConfig()
-config.serialization.portable_factories[Employee.FACTORY_ID] = {Employee.CLASS_ID: Employee}
-client = hazelcast.HazelcastClient(config)
-
-config2 = hazelcast.ClientConfig()
-config2.serialization.portable_factories[Employee2.FACTORY_ID] = {Employee2.CLASS_ID: Employee2}
-client2 = hazelcast.HazelcastClient(config2)
-
-config3 = hazelcast.ClientConfig()
-config3.serialization.portable_factories[Employee3.FACTORY_ID] = {Employee3.CLASS_ID: Employee3}
-client3 = hazelcast.HazelcastClient(config3)
+client = hazelcast.HazelcastClient(portable_factories={
+ Employee.FACTORY_ID: {
+ Employee.CLASS_ID: Employee
+ }
+})
+
+client2 = hazelcast.HazelcastClient(portable_factories={
+ Employee2.FACTORY_ID: {
+ Employee2.CLASS_ID: Employee2
+ }
+})
+
+client3 = hazelcast.HazelcastClient(portable_factories={
+ Employee3.FACTORY_ID: {
+ Employee3.CLASS_ID: Employee3
+ }
+})
# Assume that a member joins a cluster with a newer version of a class.
# If you modified the class by adding a new field, the new member's put operations include that
@@ -147,7 +153,7 @@ def __eq__(self, other):
my_map.put(0, Employee("Jack", 28))
my_map2.put(1, Employee2("Jane", 29, "Josh"))
-print('Map Size: {}'.format(my_map.size()))
+print('Map Size: %s' % my_map.size())
# If this new member tries to get an object that was put from the older members, it
# gets null for the newly added field.
@@ -161,7 +167,7 @@ def __eq__(self, other):
my_map3 = client3.get_map("employee-map").blocking()
my_map3.put(2, Employee3("Joe", "30", "Mary"))
-print('Map Size: {}'.format(my_map.size()))
+print('Map Size: %s' % my_map.size())
# As clients with incompatible versions of the class try to access each other, a HazelcastSerializationError
# is raised (caused by a TypeError).
@@ -169,13 +175,13 @@ def __eq__(self, other):
# Client that has class with int type age field tries to read Employee3 object with String age field.
print(my_map.get(2))
except HazelcastSerializationError as ex:
- print("Failed due to: {}".format(ex))
+ print("Failed due to: %s" % ex)
try:
# Client that has class with String type age field tries to read Employee object with int age field.
print(my_map3.get(0))
except HazelcastSerializationError as ex:
- print("Failed due to: {}".format(ex))
+ print("Failed due to: %s" % ex)
client.shutdown()
client2.shutdown()
diff --git a/examples/map/map_predicate_example.py b/examples/map/map_predicate_example.py
index 17389baffb..b12af7bea7 100644
--- a/examples/map/map_predicate_example.py
+++ b/examples/map/map_predicate_example.py
@@ -4,15 +4,15 @@
client = hazelcast.HazelcastClient()
-predicate_map = client.get_map("predicate-map")
+predicate_map = client.get_map("predicate-map").blocking()
for i in range(10):
predicate_map.put("key" + str(i), i)
predicate = is_between("this", 3, 5)
-entry_set = predicate_map.entry_set(predicate).result()
+entry_set = predicate_map.entry_set(predicate)
for key, value in entry_set:
- print("{} -> {}".format(key, value))
+ print("%s -> %s" % (key, value))
client.shutdown()
diff --git a/examples/monitoring/distributed_object_listener.py b/examples/monitoring/distributed_object_listener.py
index 75deaade75..ccf026d5a1 100644
--- a/examples/monitoring/distributed_object_listener.py
+++ b/examples/monitoring/distributed_object_listener.py
@@ -21,7 +21,7 @@ def distributed_object_listener(event):
# This causes a DESTROYED event
test_map.destroy()
-# Deregister the listener
+# De-register the listener
client.remove_distributed_object_listener(reg_id)
client.shutdown()
diff --git a/examples/monitoring/lifecycle_listener_example.py b/examples/monitoring/lifecycle_listener_example.py
index ab9af95d4d..921bac825f 100644
--- a/examples/monitoring/lifecycle_listener_example.py
+++ b/examples/monitoring/lifecycle_listener_example.py
@@ -2,12 +2,11 @@
def on_state_change(state):
- print("State changed to {}".format(state))
+ print("State changed to", state)
-config = hazelcast.ClientConfig()
-config.add_lifecycle_listener(on_state_change)
-
-client = hazelcast.HazelcastClient(config)
+client = hazelcast.HazelcastClient(lifecycle_listeners=[
+ on_state_change
+])
client.shutdown()
diff --git a/examples/multi-map/multi_map_example.py b/examples/multi-map/multi_map_example.py
index b5dddc71c3..d541b65505 100644
--- a/examples/multi-map/multi_map_example.py
+++ b/examples/multi-map/multi_map_example.py
@@ -2,26 +2,26 @@
client = hazelcast.HazelcastClient()
-multi_map = client.get_multi_map("multi-map")
+multi_map = client.get_multi_map("multi-map").blocking()
multi_map.put("key1", "value1")
multi_map.put("key1", "value2")
multi_map.put("key2", "value3")
multi_map.put("key3", "value4")
-value = multi_map.get("key1").result()
-print("Get: {}".format(value))
+value = multi_map.get("key1")
+print("Get:", value)
-values = multi_map.values().result()
-print("Values: {}".format(values))
+values = multi_map.values()
+print("Values:", values)
-key_set = multi_map.key_set().result()
-print("Key Set: {}".format(key_set))
+key_set = multi_map.key_set()
+print("Key Set:", key_set)
-size = multi_map.size().result()
-print("Size: {}".format(size))
+size = multi_map.size()
+print("Size:", size)
-for key, value in multi_map.entry_set().result():
- print("{} -> {}".format(key, value))
+for key, value in multi_map.entry_set():
+ print("%s -> %s" % (key, value))
client.shutdown()
diff --git a/examples/org-website/custom_serializer_sample.py b/examples/org-website/custom_serializer_sample.py
index 05886a5c9f..3614895ab3 100644
--- a/examples/org-website/custom_serializer_sample.py
+++ b/examples/org-website/custom_serializer_sample.py
@@ -10,14 +10,10 @@ def __init__(self, value=None):
class CustomSerializer(StreamSerializer):
def write(self, out, obj):
- out.write_int(len(obj.value))
- out.write_from(obj.value)
+ out.write_utf(obj.value)
def read(self, inp):
- length = inp.read_int()
- result = bytearray(length)
- inp.read_into(result, 0, length)
- return CustomSerializableType(result.decode("utf-8"))
+ return CustomSerializableType(inp.read_utf())
def get_type_id(self):
return 10
@@ -26,10 +22,10 @@ def destroy(self):
pass
-config = hazelcast.ClientConfig()
-config.serialization.set_custom_serializer(CustomSerializableType, CustomSerializer)
-
# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1
-hz = hazelcast.HazelcastClient(config)
+hz = hazelcast.HazelcastClient(custom_serializers={
+ CustomSerializableType: CustomSerializer
+})
+
# CustomSerializer will serialize/deserialize CustomSerializable objects
hz.shutdown()
diff --git a/examples/org-website/entry_processor_sample.py b/examples/org-website/entry_processor_sample.py
index 02a21c9340..540996c52b 100644
--- a/examples/org-website/entry_processor_sample.py
+++ b/examples/org-website/entry_processor_sample.py
@@ -23,12 +23,12 @@ def get_class_id(self):
# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1
hz = hazelcast.HazelcastClient()
# Get the Distributed Map from Cluster.
-map = hz.get_map("my-distributed-map").blocking()
+my_map = hz.get_map("my-distributed-map").blocking()
# Put the integer value of 0 into the Distributed Map
-map.put("key", 0)
+my_map.put("key", 0)
# Run the IncEntryProcessor class on the Hazelcast Cluster Member holding the key called "key"
-map.execute_on_key("key", IncEntryProcessor())
+my_map.execute_on_key("key", IncEntryProcessor())
# Show that the IncEntryProcessor updated the value.
-print("new value: {}".format(map.get("key")))
+print("new value:", my_map.get("key"))
# Shutdown this Hazelcast Client
hz.shutdown()
diff --git a/examples/org-website/global_serializer_sample.py b/examples/org-website/global_serializer_sample.py
index a69710c47a..82ad38b7d9 100644
--- a/examples/org-website/global_serializer_sample.py
+++ b/examples/org-website/global_serializer_sample.py
@@ -1,6 +1,5 @@
import hazelcast
-from hazelcast import ClientConfig
from hazelcast.serialization.api import StreamSerializer
@@ -20,9 +19,7 @@ def destroy(self):
pass
-config = ClientConfig()
-config.serialization.global_serializer = GlobalSerializer
# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1
-hz = hazelcast.HazelcastClient(config)
+hz = hazelcast.HazelcastClient(global_serializer=GlobalSerializer)
# GlobalSerializer will serialize/deserialize all non-builtin types
hz.shutdown()
diff --git a/examples/org-website/identified_data_serializable_sample.py b/examples/org-website/identified_data_serializable_sample.py
index 4dc2e8545b..2d3205f282 100644
--- a/examples/org-website/identified_data_serializable_sample.py
+++ b/examples/org-website/identified_data_serializable_sample.py
@@ -1,6 +1,5 @@
import hazelcast
-from hazelcast import ClientConfig
from hazelcast.serialization.api import IdentifiedDataSerializable
@@ -27,10 +26,11 @@ def get_class_id(self):
return self.CLASS_ID
-config = ClientConfig()
-my_factory = {Employee.CLASS_ID: Employee}
-config.serialization.add_data_serializable_factory(Employee.FACTORY_ID, my_factory)
# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1
-hz = hazelcast.HazelcastClient(config)
+hz = hazelcast.HazelcastClient(data_serializable_factories={
+ Employee.FACTORY_ID: {
+ Employee.CLASS_ID: Employee
+ }
+})
# Employee can be used here
hz.shutdown()
diff --git a/examples/org-website/list_sample.py b/examples/org-website/list_sample.py
index 50378c2fac..e6618dcbf8 100644
--- a/examples/org-website/list_sample.py
+++ b/examples/org-website/list_sample.py
@@ -3,16 +3,16 @@
# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1
hz = hazelcast.HazelcastClient()
# Get the Distributed List from Cluster.
-list = hz.get_list("my-distributed-list").blocking()
+my_list = hz.get_list("my-distributed-list").blocking()
# Add element to the list
-list.add("item1")
-list.add("item2")
+my_list.add("item1")
+my_list.add("item2")
# Remove the first element
-print("Removed: {}".format(list.remove_at(0)))
+print("Removed:", my_list.remove_at(0))
# There is only one element left
-print("Current size is {}".format(list.size()))
+print("Current size is", my_list.size())
# Clear the list
-list.clear()
+my_list.clear()
# Shutdown this Hazelcast Client
hz.shutdown()
diff --git a/examples/org-website/map_sample.py b/examples/org-website/map_sample.py
index a16d16f03d..cf7c345931 100644
--- a/examples/org-website/map_sample.py
+++ b/examples/org-website/map_sample.py
@@ -3,12 +3,12 @@
# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1
hz = hazelcast.HazelcastClient()
# Get the Distributed Map from Cluster.
-map = hz.get_map("my-distributed-map").blocking()
+my_map = hz.get_map("my-distributed-map").blocking()
# Standard Put and Get
-map.put("key", "value")
-map.get("key")
+my_map.put("key", "value")
+my_map.get("key")
# Concurrent Map methods, optimistic updating
-map.put_if_absent("somekey", "somevalue")
-map.replace_if_same("key", "value", "newvalue")
+my_map.put_if_absent("somekey", "somevalue")
+my_map.replace_if_same("key", "value", "newvalue")
# Shutdown this Hazelcast Client
hz.shutdown()
diff --git a/examples/org-website/portable_serializable_sample.py b/examples/org-website/portable_serializable_sample.py
index 40442cb9af..bacd9baf12 100644
--- a/examples/org-website/portable_serializable_sample.py
+++ b/examples/org-website/portable_serializable_sample.py
@@ -1,6 +1,5 @@
import hazelcast
-from hazelcast import ClientConfig
from hazelcast.serialization.api import Portable
@@ -30,10 +29,11 @@ def get_class_id(self):
return self.CLASS_ID
-config = ClientConfig()
-my_factory = {Customer.CLASS_ID: Customer}
-config.serialization.add_portable_factory(Customer.FACTORY_ID, my_factory)
# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1
-hz = hazelcast.HazelcastClient(config)
+hz = hazelcast.HazelcastClient(portable_factories={
+ Customer.FACTORY_ID: {
+ Customer.CLASS_ID: Customer
+ }
+})
# Customer can be used here
hz.shutdown()
diff --git a/examples/org-website/query_sample.py b/examples/org-website/query_sample.py
index de026737b3..55156c431a 100644
--- a/examples/org-website/query_sample.py
+++ b/examples/org-website/query_sample.py
@@ -1,6 +1,5 @@
import hazelcast
-from hazelcast import ClientConfig
from hazelcast.serialization.api import Portable
from hazelcast.serialization.predicate import sql, and_, is_between, is_equal_to
@@ -40,22 +39,23 @@ def generate_users(users):
users.put("Freddy", User("Freddy", 23, True))
-config = ClientConfig()
-portable_factory = {User.CLASS_ID: User}
-config.serialization.add_portable_factory(User.FACTORY_ID, portable_factory)
# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1
-hz = hazelcast.HazelcastClient(config)
+hz = hazelcast.HazelcastClient(portable_factories={
+ User.FACTORY_ID: {
+ User.CLASS_ID: User
+ }
+})
# Get a Distributed Map called "users"
-users = hz.get_map("users").blocking()
+users_map = hz.get_map("users").blocking()
# Add some users to the Distributed Map
-generate_users(users)
+generate_users(users_map)
# Create a Predicate from a String (a SQL like Where clause)
sql_query = sql("active AND age BETWEEN 18 AND 21")
# Creating the same Predicate as above but with a builder
criteria_query = and_(is_equal_to("active", True), is_between("age", 18, 21))
# Get result collections using the two different Predicates
-result1 = users.values(sql_query)
-result2 = users.values(criteria_query)
+result1 = users_map.values(sql_query)
+result2 = users_map.values(criteria_query)
# Print out the results
print(result1)
print(result2)
diff --git a/examples/org-website/replicated_map_sample.py b/examples/org-website/replicated_map_sample.py
index afac051456..3d6e375577 100644
--- a/examples/org-website/replicated_map_sample.py
+++ b/examples/org-website/replicated_map_sample.py
@@ -3,14 +3,14 @@
# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1
hz = hazelcast.HazelcastClient()
# Get a Replicated Map called "my-replicated-map"
-map = hz.get_replicated_map("my-replicated-map").blocking()
+rmap = hz.get_replicated_map("my-replicated-map").blocking()
# Put and Get a value from the Replicated Map
-replaced_value = map.put("key", "value")
+replaced_value = rmap.put("key", "value")
# key/value replicated to all members
-print("replaced value = {}".format(replaced_value))
+print("replaced value =", replaced_value)
# Will be None as its first update
-value = map.get("key")
+value = rmap.get("key")
# the value is retrieved from a random member in the cluster
-print("value for key = {}".format(value))
+print("value for key =", value)
# Shutdown this Hazelcast Client
hz.shutdown()
diff --git a/examples/org-website/set_sample.py b/examples/org-website/set_sample.py
index 97e80ac5de..17ceababd4 100644
--- a/examples/org-website/set_sample.py
+++ b/examples/org-website/set_sample.py
@@ -3,16 +3,16 @@
# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1
hz = hazelcast.HazelcastClient()
# Get the Distributed Set from Cluster.
-set = hz.get_set("my-distributed-set").blocking()
+my_set = hz.get_set("my-distributed-set").blocking()
# Add items to the set with duplicates
-set.add("item1")
-set.add("item1")
-set.add("item2")
-set.add("item2")
-set.add("item2")
-set.add("item3")
+my_set.add("item1")
+my_set.add("item1")
+my_set.add("item2")
+my_set.add("item2")
+my_set.add("item2")
+my_set.add("item3")
# Get the items. Note that there are no duplicates.
-for item in set.get_all():
+for item in my_set.get_all():
print(item)
# Shutdown this Hazelcast Client
hz.shutdown()
diff --git a/examples/org-website/topic_sample.py b/examples/org-website/topic_sample.py
index 450b7687c9..695b1b5379 100644
--- a/examples/org-website/topic_sample.py
+++ b/examples/org-website/topic_sample.py
@@ -2,7 +2,7 @@
def print_on_message(topic_message):
- print("Got message ", topic_message.message)
+ print("Got message", topic_message.message)
# Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1
diff --git a/examples/pn-counter/pn_counter_example.py b/examples/pn-counter/pn_counter_example.py
index 36eea24e89..717c72388a 100644
--- a/examples/pn-counter/pn_counter_example.py
+++ b/examples/pn-counter/pn_counter_example.py
@@ -4,12 +4,12 @@
pn_counter = client.get_pn_counter("pn-counter").blocking()
-print("Counter is initialized with {}".format(pn_counter.get()))
+print("Counter is initialized with", pn_counter.get())
for i in range(10):
- print("Added {} to the counter. Current value is {}".format(i, pn_counter.add_and_get(i)))
+ print("Added %s to the counter. Current value is %s" % (i, pn_counter.add_and_get(i)))
print("Incremented the counter after getting the current value. "
- "Previous value is {}".format(pn_counter.get_and_increment()))
+ "Previous value is", pn_counter.get_and_increment())
-print("Final value is {}".format(pn_counter.get()))
+print("Final value is", pn_counter.get())
diff --git a/examples/ring-buffer/ring_buffer_example.py b/examples/ring-buffer/ring_buffer_example.py
index b7c5524e88..4f282c5998 100644
--- a/examples/ring-buffer/ring_buffer_example.py
+++ b/examples/ring-buffer/ring_buffer_example.py
@@ -2,13 +2,13 @@
client = hazelcast.HazelcastClient()
-ring_buffer = client.get_ringbuffer("ring-buffer")
-print("Capacity of the ring buffer: {}".format(ring_buffer.capacity().result()))
+rb = client.get_ringbuffer("ring-buffer").blocking()
+print("Capacity of the ring buffer:", rb.capacity())
-sequence = ring_buffer.add("First item").result()
-print("Size: {}".format(ring_buffer.size().result()))
+sequence = rb.add("First item")
+print("Size:", rb.size())
-item = ring_buffer.read_one(sequence).result()
-print("The item at the sequence {} is {}".format(sequence, item))
+item = rb.read_one(sequence)
+print("The item at the sequence %s is %s" % (sequence, item))
client.shutdown()
diff --git a/examples/serialization/custom_serialization_example.py b/examples/serialization/custom_serialization_example.py
index 20cb884e8c..d3aefef0e4 100644
--- a/examples/serialization/custom_serialization_example.py
+++ b/examples/serialization/custom_serialization_example.py
@@ -9,6 +9,9 @@ def __init__(self, hour, minute, second):
self.minute = minute
self.second = second
+ def __repr__(self):
+ return "TimeOfDay(hour=%s, minute=%s, second=%s)" % (self.hour, self.minute, self.second)
+
class CustomSerializer(StreamSerializer):
CUSTOM_SERIALIZER_ID = 4 # Should be greater than 0 and unique to each serializer
@@ -33,16 +36,15 @@ def destroy(self):
pass
-config = hazelcast.ClientConfig()
-config.serialization.set_custom_serializer(type(TimeOfDay), CustomSerializer)
-
-client = hazelcast.HazelcastClient(config)
+client = hazelcast.HazelcastClient(custom_serializers={
+ TimeOfDay: CustomSerializer
+})
-my_map = client.get_map("map")
+my_map = client.get_map("map").blocking()
time_of_day = TimeOfDay(13, 36, 59)
my_map.put("time", time_of_day)
-time = my_map.get("time").result()
-print("Time is {}:{}:{}".format(time.hour, time.minute, time.second))
+time = my_map.get("time")
+print("Time is", time)
client.shutdown()
diff --git a/examples/serialization/global_serialization_example.py b/examples/serialization/global_serialization_example.py
index 411e4f4a00..f19570c097 100644
--- a/examples/serialization/global_serialization_example.py
+++ b/examples/serialization/global_serialization_example.py
@@ -12,6 +12,9 @@ def __init__(self, id, name, colors):
self.name = name
self.colors = colors
+ def __repr__(self):
+ return "ColorGroup(id=%s, name=%s, colors=%s)" % (self.id, self.name, self.colors)
+
class GlobalSerializer(StreamSerializer):
GLOBAL_SERIALIZER_ID = 5 # Should be greater than 0 and unique to each serializer
@@ -34,22 +37,18 @@ def destroy(self):
pass
-config = hazelcast.ClientConfig()
-config.serialization.global_serializer = GlobalSerializer
-
-client = hazelcast.HazelcastClient(config)
+client = hazelcast.HazelcastClient(global_serializer=GlobalSerializer)
-group = ColorGroup(id=1, name="Reds",
+group = ColorGroup(id=1,
+ name="Reds",
colors=["Crimson", "Red", "Ruby", "Maroon"])
-my_map = client.get_map("map")
+my_map = client.get_map("map").blocking()
my_map.put("group1", group)
-color_group = my_map.get("group1").result()
+color_group = my_map.get("group1")
-print("ID: {}\nName: {}\nColor: {}".format(color_group.id,
- color_group.name,
- color_group.colors))
+print("Received:", color_group)
client.shutdown()
diff --git a/examples/serialization/identified_data_serializable_example.py b/examples/serialization/identified_data_serializable_example.py
index 2b16b48ec5..ad5a47c586 100644
--- a/examples/serialization/identified_data_serializable_example.py
+++ b/examples/serialization/identified_data_serializable_example.py
@@ -28,12 +28,15 @@ def get_factory_id(self):
def get_class_id(self):
return self.CLASS_ID
+ def __repr__(self):
+ return "Student(id=%s, name=%s, gpa=%s)" % (self.id, self.name, self.gpa)
-config = hazelcast.ClientConfig()
-factory = {Student.CLASS_ID: Student}
-config.serialization.add_data_serializable_factory(Student.FACTORY_ID, factory)
-client = hazelcast.HazelcastClient(config)
+client = hazelcast.HazelcastClient(data_serializable_factories={
+ Student.FACTORY_ID: {
+ Student.CLASS_ID: Student
+ }
+})
my_map = client.get_map("map")
@@ -43,8 +46,6 @@ def get_class_id(self):
returned_student = my_map.get("student1").result()
-print("ID: {}\nName: {}\nGPA: {}".format(returned_student.id,
- returned_student.name,
- returned_student.gpa))
+print("Received:", returned_student)
client.shutdown()
diff --git a/examples/serialization/portable_example.py b/examples/serialization/portable_example.py
index 2cb9c93c05..3feedff83e 100644
--- a/examples/serialization/portable_example.py
+++ b/examples/serialization/portable_example.py
@@ -28,12 +28,15 @@ def get_factory_id(self):
def get_class_id(self):
return self.CLASS_ID
+ def __repr__(self):
+ return "Engineer(name=%s, age=%s, languages=%s)" % (self.name, self.age, self.languages)
-config = hazelcast.ClientConfig()
-factory = {Engineer.CLASS_ID: Engineer}
-config.serialization.add_portable_factory(Engineer.FACTORY_ID, factory)
-client = hazelcast.HazelcastClient(config)
+client = hazelcast.HazelcastClient(portable_factories={
+ Engineer.FACTORY_ID: {
+ Engineer.CLASS_ID: Engineer
+ }
+})
my_map = client.get_map("map")
@@ -43,8 +46,6 @@ def get_class_id(self):
returned_engineer = my_map.get("engineer1").result()
-print("Name: {}\nAge: {}\nLanguages: {}".format(returned_engineer.name,
- returned_engineer.age,
- returned_engineer.languages))
+print("Received:", returned_engineer)
client.shutdown()
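The two factory examples above register serializable classes as a nested dict: factory ID at the outer level, class ID at the inner level. A standalone sketch of how a deserializer can look a class up in such a registry (the `resolve` helper is hypothetical, not part of the Hazelcast API):

```python
# Hypothetical sketch, not Hazelcast's implementation. The
# data_serializable_factories / portable_factories options above are
# shaped as {factory_id: {class_id: class}}.

class Student(object):
    FACTORY_ID = 1
    CLASS_ID = 1

factories = {
    Student.FACTORY_ID: {
        Student.CLASS_ID: Student,
    }
}

def resolve(factory_id, class_id):
    # Look up the registered class for a (factory_id, class_id) pair,
    # the way a deserializer would after reading the class header.
    factory = factories.get(factory_id)
    if factory is None:
        raise ValueError("No factory registered with id %s" % factory_id)
    clazz = factory.get(class_id)
    if clazz is None:
        raise ValueError("No class registered with id %s" % class_id)
    return clazz

print(resolve(1, 1))  # prints the Student class
```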
diff --git a/examples/set/set_example.py b/examples/set/set_example.py
index daa3a2f1e6..e2806e3a87 100644
--- a/examples/set/set_example.py
+++ b/examples/set/set_example.py
@@ -9,10 +9,10 @@
my_set.add("Item2")
found = my_set.contains("Item2").result()
-print("Set contains Item2: {}".format(found))
+print("Set contains Item2:", found)
items = my_set.get_all().result()
-print("Size of set: {}".format(len(items)))
+print("Size of set:", len(items))
print("\nAll Items:")
for item in items:
diff --git a/examples/ssl/ssl_example.py b/examples/ssl/ssl_example.py
index 3f7b4ddb4f..8728b58424 100644
--- a/examples/ssl/ssl_example.py
+++ b/examples/ssl/ssl_example.py
@@ -1,30 +1,20 @@
-import os
import hazelcast
-from hazelcast.config import PROTOCOL
+from hazelcast.config import SSLProtocol
# Hazelcast server should be started with SSL enabled to use SSLConfig
-config = hazelcast.ClientConfig()
-
-# SSL Config
-ssl_config = hazelcast.SSLConfig()
-ssl_config.enabled = True
-
-# Absolute path of PEM file should be given
-ssl_config.cafile = os.path.abspath("server.pem")
-
-# Select the protocol used in SSL communication. This step is optional. Default is TLSv1_2
-ssl_config.protocol = PROTOCOL.TLSv1_3
-
-config.network.ssl = ssl_config
-
-config.network.addresses.append("foo.bar.com:8888")
# Start a new Hazelcast client with SSL configuration.
-client = hazelcast.HazelcastClient(config)
-
-hz_map = client.get_map("ssl-map")
+client = hazelcast.HazelcastClient(cluster_members=["foo.bar.com:8888"],
+ ssl_enabled=True,
+ # Absolute paths of PEM files must be given
+ ssl_cafile="/path/of/server.pem",
+ # Select the protocol used in SSL communication.
+ # This step is optional. Default is TLSv1_2
+ ssl_protocol=SSLProtocol.TLSv1_3)
+
+hz_map = client.get_map("ssl-map").blocking()
hz_map.put("key", "value")
-print(hz_map.get("key").result())
+print(hz_map.get("key"))
client.shutdown()
diff --git a/examples/ssl/ssl_mutual_authentication_example.py b/examples/ssl/ssl_mutual_authentication_example.py
index 5c91ddc8be..09d8e0f8b7 100644
--- a/examples/ssl/ssl_mutual_authentication_example.py
+++ b/examples/ssl/ssl_mutual_authentication_example.py
@@ -1,38 +1,25 @@
-import os
import hazelcast
-from hazelcast.config import PROTOCOL
+from hazelcast.config import SSLProtocol
# To use SSLConfig with mutual authentication, Hazelcast server should be started with
# SSL and mutual authentication enabled
-config = hazelcast.ClientConfig()
-
-# SSL Config
-ssl_config = hazelcast.SSLConfig()
-ssl_config.enabled = True
-
-# Absolute path of PEM files should be given
-ssl_config.cafile = os.path.abspath("server.pem")
-
-# To use mutual authentication client certificate and private key should be provided
-ssl_config.certfile = os.path.abspath("client.pem")
-ssl_config.keyfile = os.path.abspath("client-key.pem")
-
-# If private key file is encrypted, password is required to decrypt it
-ssl_config.password = "key-file-password"
-
-# Select the protocol used in SSL communication. This step is optional. Default is TLSv1_2
-ssl_config.protocol = PROTOCOL.TLSv1_3
-
-config.network.ssl = ssl_config
-
-config.network.addresses.append("foo.bar.com:8888")
# Start a new Hazelcast client with SSL configuration.
-client = hazelcast.HazelcastClient(config)
-
-hz_map = client.get_map("ssl-map")
+client = hazelcast.HazelcastClient(cluster_members=["foo.bar.com:8888"],
+                                   ssl_enabled=True,
+ # Absolute paths of PEM files must be given
+ ssl_cafile="/path/of/server.pem",
+ ssl_certfile="/path/of/client.pem",
+ ssl_keyfile="/path/of/client-private.pem",
+ # If private key is not password protected, skip the option below.
+ ssl_password="ssl_keyfile_password",
+ # Select the protocol used in SSL communication.
+ # This step is optional. Default is TLSv1_2
+ ssl_protocol=SSLProtocol.TLSv1_3)
+
+hz_map = client.get_map("ssl-map").blocking()
hz_map.put("key", "value")
-print(hz_map.get("key").result())
+print(hz_map.get("key"))
client.shutdown()
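The ssl_* keyword arguments above correspond to concepts in Python's standard ssl module. A hedged sketch of roughly equivalent stdlib setup follows; it is an illustration, not the client's internal code. The paths and password are placeholders, the certificate-loading calls are commented out because the files do not exist here, and `TLSVersion` requires Python 3.7+:

```python
import ssl

# Illustrative only: roughly what the ssl_* options above map to in
# Python's standard ssl module. This is NOT the client's internal code.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)

# ssl_cafile: the CA certificate used to verify the member's certificate.
# context.load_verify_locations(cafile="/path/of/server.pem")

# ssl_certfile / ssl_keyfile / ssl_password: the client-side certificate
# and private key used for mutual authentication.
# context.load_cert_chain(certfile="/path/of/client.pem",
#                         keyfile="/path/of/client-private.pem",
#                         password="ssl_keyfile_password")

# ssl_protocol: pins the minimum TLS version used for the connection.
context.minimum_version = ssl.TLSVersion.TLSv1_2

# PROTOCOL_TLS_CLIENT enables certificate and hostname verification
# by default.
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
```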
diff --git a/examples/topic/topic_example.py b/examples/topic/topic_example.py
index 4290ad1e78..47ff0d3c86 100644
--- a/examples/topic/topic_example.py
+++ b/examples/topic/topic_example.py
@@ -3,8 +3,8 @@
def on_message(event):
- print("Got message: {}".format(event.message))
- print("Publish time: {}\n".format(event.publish_time))
+ print("Got message:", event.message)
+ print("Publish time:", event.publish_time)
client = hazelcast.HazelcastClient()
diff --git a/hazelcast/__init__.py b/hazelcast/__init__.py
index 923daf1629..a9a89be286 100644
--- a/hazelcast/__init__.py
+++ b/hazelcast/__init__.py
@@ -1,5 +1,3 @@
from hazelcast.client import HazelcastClient
-from hazelcast.config import ClientConfig, ClientNetworkConfig, SerializationConfig, SSLConfig, \
- ClientCloudConfig, FlakeIdGeneratorConfig
from hazelcast.version import CLIENT_VERSION_INFO as __version_info__
from hazelcast.version import CLIENT_VERSION as __version__
diff --git a/hazelcast/client.py b/hazelcast/client.py
index 2e10bf85c8..5eb895d7a4 100644
--- a/hazelcast/client.py
+++ b/hazelcast/client.py
@@ -1,11 +1,10 @@
import logging
import logging.config
-import sys
-import json
import threading
-from hazelcast.cluster import ClusterService, RoundRobinLB, _InternalClusterService
-from hazelcast.config import ClientConfig, ClientProperties
+from hazelcast import six
+from hazelcast.cluster import ClusterService, _InternalClusterService
+from hazelcast.config import _Config
from hazelcast.connection import ConnectionManager, DefaultAddressProvider
from hazelcast.core import DistributedObjectInfo, DistributedObjectEvent
from hazelcast.invocation import InvocationService, Invocation
@@ -23,8 +22,8 @@
from hazelcast.serialization import SerializationServiceV1
from hazelcast.statistics import Statistics
from hazelcast.transaction import TWO_PHASE, TransactionManager
-from hazelcast.util import AtomicInteger, DEFAULT_LOGGING
-from hazelcast.discovery import HazelcastCloudAddressProvider, HazelcastCloudDiscovery
+from hazelcast.util import AtomicInteger, DEFAULT_LOGGING, RoundRobinLB
+from hazelcast.discovery import HazelcastCloudAddressProvider
from hazelcast.errors import IllegalStateError
@@ -35,16 +34,16 @@ class HazelcastClient(object):
_CLIENT_ID = AtomicInteger()
logger = logging.getLogger("HazelcastClient")
- def __init__(self, config=None):
+ def __init__(self, **kwargs):
+ config = _Config.from_dict(kwargs)
+ self.config = config
self._context = _ClientContext()
- self.config = config or ClientConfig()
- self.properties = ClientProperties(self.config.get_properties())
- self._id = HazelcastClient._CLIENT_ID.get_and_increment()
- self.name = self._create_client_name()
+ client_id = HazelcastClient._CLIENT_ID.get_and_increment()
+ self.name = self._create_client_name(client_id)
self._init_logger()
self._logger_extras = {"client_name": self.name, "cluster_name": self.config.cluster_name}
self._reactor = AsyncoreReactor(self._logger_extras)
- self._serialization_service = SerializationServiceV1(serialization_config=self.config.serialization)
+ self._serialization_service = SerializationServiceV1(config)
self._near_cache_manager = NearCacheManager(self, self._serialization_service)
self._internal_lifecycle_service = _InternalLifecycleService(self, self._logger_extras)
self.lifecycle_service = LifecycleService(self._internal_lifecycle_service)
@@ -61,7 +60,7 @@ def __init__(self, config=None):
self._invocation_service,
self._near_cache_manager,
self._logger_extras)
- self._load_balancer = self._init_load_balancer(self.config)
+ self._load_balancer = self._init_load_balancer(config)
self._listener_service = ListenerService(self, self._connection_manager,
self._invocation_service,
self._logger_extras)
@@ -91,13 +90,12 @@ def _start(self):
self._internal_lifecycle_service.start()
self._invocation_service.start(self._internal_partition_service, self._connection_manager,
self._listener_service)
- self._load_balancer.init(self.cluster_service, self.config)
+ self._load_balancer.init(self.cluster_service)
membership_listeners = self.config.membership_listeners
self._internal_cluster_service.start(self._connection_manager, membership_listeners)
self._cluster_view_listener.start()
self._connection_manager.start(self._load_balancer)
- connection_strategy = self.config.connection_strategy
- if not connection_strategy.async_start:
+ if not self.config.async_start:
self._internal_cluster_service.wait_initial_member_list_fetched()
self._connection_manager.connect_to_all_cluster_members()
@@ -238,7 +236,7 @@ def add_distributed_object_listener(self, listener_func):
:param listener_func: Function to be called when a distributed object is created or destroyed.
:return: (str), a registration id which is used as a key to remove the listener.
"""
- is_smart = self.config.network.smart_routing
+ is_smart = self.config.smart_routing
request = client_add_distributed_object_listener_codec.encode_request(is_smart)
def handle_distributed_object_event(name, service_name, event_type, source):
@@ -306,53 +304,41 @@ def shutdown(self):
self._internal_lifecycle_service.fire_lifecycle_event(LifecycleState.SHUTDOWN)
def _create_address_provider(self):
- network_config = self.config.network
- address_list_provided = len(network_config.addresses) != 0
- cloud_config = network_config.cloud
- cloud_enabled = cloud_config.enabled or cloud_config.discovery_token != ""
+ config = self.config
+ cluster_members = config.cluster_members
+ address_list_provided = len(cluster_members) > 0
+ cloud_discovery_token = config.cloud_discovery_token
+ cloud_enabled = cloud_discovery_token is not None
if address_list_provided and cloud_enabled:
raise IllegalStateError("Only one discovery method can be enabled at a time. "
"Cluster members given explicitly: %s, Hazelcast Cloud enabled: %s"
% (address_list_provided, cloud_enabled))
- cloud_address_provider = self._init_cloud_address_provider(cloud_config)
- if cloud_address_provider:
- return cloud_address_provider
-
- return DefaultAddressProvider(network_config.addresses)
-
- def _init_cloud_address_provider(self, cloud_config):
- if cloud_config.enabled:
- discovery_token = cloud_config.discovery_token
- host, url = HazelcastCloudDiscovery.get_host_and_url(self.config.get_properties(), discovery_token)
- return HazelcastCloudAddressProvider(host, url, self._get_connection_timeout(), self._logger_extras)
-
- cloud_token = self.properties.get(self.properties.HAZELCAST_CLOUD_DISCOVERY_TOKEN)
- if cloud_token != "":
- host, url = HazelcastCloudDiscovery.get_host_and_url(self.config.get_properties(), cloud_token)
- return HazelcastCloudAddressProvider(host, url, self._get_connection_timeout(), self._logger_extras)
-
- return None
+ if cloud_enabled:
+ connection_timeout = self._get_connection_timeout(config)
+ return HazelcastCloudAddressProvider(cloud_discovery_token, connection_timeout, self._logger_extras)
- def _get_connection_timeout(self):
- network_config = self.config.network
- conn_timeout = network_config.connection_timeout
- return sys.maxsize if conn_timeout == 0 else conn_timeout
-
- def _create_client_name(self):
- if self.config.client_name:
- return self.config.client_name
- return "hz.client_" + str(self._id)
+ return DefaultAddressProvider(cluster_members)
def _init_logger(self):
- logger_config = self.config.logger
- if logger_config.config_file is not None:
- with open(logger_config.config_file, "r") as f:
- json_config = json.loads(f.read())
- logging.config.dictConfig(json_config)
+ config = self.config
+ logging_config = config.logging_config
+ if logging_config:
+ logging.config.dictConfig(logging_config)
else:
logging.config.dictConfig(DEFAULT_LOGGING)
- self.logger.setLevel(logger_config.level)
+ self.logger.setLevel(config.logging_level)
+
+ def _create_client_name(self, client_id):
+ client_name = self.config.client_name
+ if client_name:
+ return client_name
+ return "hz.client_%s" % client_id
+
+ @staticmethod
+ def _get_connection_timeout(config):
+ timeout = config.connection_timeout
+ return six.MAXSIZE if timeout == 0 else timeout
@staticmethod
def _init_load_balancer(config):
diff --git a/hazelcast/cluster.py b/hazelcast/cluster.py
index daff7de985..b8afecf35e 100644
--- a/hazelcast/cluster.py
+++ b/hazelcast/cluster.py
@@ -1,5 +1,4 @@
import logging
-import random
import threading
import uuid
from collections import OrderedDict
@@ -272,75 +271,6 @@ def _create_snapshot(version, member_infos):
return _MemberListSnapshot(version, new_members)
-class AbstractLoadBalancer(object):
- """Load balancer allows you to send operations to one of a number of endpoints (Members).
- It is up to the implementation to use different load balancing policies.
-
- If the client is configured with smart routing,
- only the operations that are not key based will be routed to the endpoint
- returned by the load balancer. If it is not, the load balancer will not be used.
- """
- def __init__(self):
- self._cluster_service = None
- self._members = []
-
- def init(self, cluster_service, config):
- """
- Initializes the load balancer.
-
- :param cluster_service: (:class:`~hazelcast.cluster.ClusterService`), The cluster service to select members from
- :param config: (:class:`~hazelcast.config.ClientConfig`), The client config
- :return:
- """
- self._cluster_service = cluster_service
- cluster_service.add_listener(self._listener, self._listener, True)
-
- def next(self):
- """
- Returns the next member to route to.
- :return: (:class:`~hazelcast.core.Member`), Returns the next member or None if no member is available
- """
- raise NotImplementedError("next")
-
- def _listener(self, _):
- self._members = self._cluster_service.get_members()
-
-
-class RoundRobinLB(AbstractLoadBalancer):
- """A load balancer implementation that relies on using round robin
- to a next member to send a request to.
-
- Round robin is done based on best effort basis, the order of members for concurrent calls to
- the next() is not guaranteed.
- """
-
- def __init__(self):
- super(RoundRobinLB, self).__init__()
- self._idx = 0
-
- def next(self):
- members = self._members
- if not members:
- return None
-
- n = len(members)
- idx = self._idx % n
- self._idx += 1
- return members[idx]
-
-
-class RandomLB(AbstractLoadBalancer):
- """A load balancer that selects a random member to route to.
- """
-
- def next(self):
- members = self._members
- if not members:
- return None
- idx = random.randrange(0, len(members))
- return members[idx]
-
-
class VectorClock(object):
"""
Vector clock consisting of distinct replica logical clocks.
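The `RoundRobinLB` removed here is relocated to `hazelcast.util` (see the changed import in `hazelcast/client.py` above). Its selection logic can be sketched standalone; this simplified version takes a fixed member list instead of wiring into the cluster service:

```python
class RoundRobinLB(object):
    # Standalone sketch of the round-robin logic removed above. The real
    # class (now in hazelcast.util) gets its member list from the cluster
    # service via a membership listener; this simplified version takes a
    # fixed list. Best effort: _idx is not synchronized, matching the
    # original's docstring.
    def __init__(self, members):
        self._members = list(members)
        self._idx = 0

    def next(self):
        members = self._members
        if not members:
            return None
        idx = self._idx % len(members)
        self._idx += 1
        return members[idx]

lb = RoundRobinLB(["member-1", "member-2", "member-3"])
print([lb.next() for _ in range(4)])  # ['member-1', 'member-2', 'member-3', 'member-1']
```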
diff --git a/hazelcast/config.py b/hazelcast/config.py
index 0ebe61d36f..0f7fdc1e10 100644
--- a/hazelcast/config.py
+++ b/hazelcast/config.py
@@ -3,1019 +3,1333 @@
"""
import logging
-import os
import re
-from hazelcast.serialization.api import StreamSerializer
-from hazelcast.util import validate_type, validate_serializer, enum, TimeUnit, check_not_none
+from hazelcast import six
+from hazelcast.errors import InvalidConfigurationError
+from hazelcast.serialization.api import StreamSerializer, IdentifiedDataSerializable, Portable
+from hazelcast.serialization.portable.classdef import ClassDefinition
+from hazelcast.util import check_not_none, with_reversed_items, number_types, LoadBalancer, none_type
-INTEGER_TYPE = enum(VAR=0, BYTE=1, SHORT=2, INT=3, LONG=4, BIG_INT=5)
-"""
-Integer type options that can be used by serialization service.
-
-* VAR : variable size integer (this option can be problematic on static type clients like java or .NET)
-* BYTE: Python int will be interpreted as a single byte int
-* SHORT: Python int will be interpreted as a double byte int
-* INT: Python int will be interpreted as a four byte int
-* LONG: Python int will be interpreted as an eight byte int
-* BIG_INT: Python int will be interpreted as Java BigInteger. This option can handle python long values with "bit_length > 64"
-"""
-EVICTION_POLICY = enum(NONE=0, LRU=1, LFU=2, RANDOM=3)
-"""
-Near Cache eviction policy options
+@with_reversed_items
+class IntType(object):
+ """
+ Integer type options that can be used by serialization service.
+ """
+ VAR = 0
+ """
+ Integer types will be serialized as 8, 16, 32, 64 bit integers
+ or as Java BigInteger according to their value. This option may
+ cause problems when the Python client is used in conjunction with
+ statically typed language clients such as Java or .NET.
+ """
-* NONE : No eviction
-* LRU : Least Recently Used items will be evicted
-* LFU : Least frequently Used items will be evicted
-* RANDOM : Items will be evicted randomly
+ BYTE = 1
+ """
+    Integer types will be serialized as an 8-bit integer (as Java byte).
+ """
-"""
+ SHORT = 2
+ """
+    Integer types will be serialized as a 16-bit integer (as Java short).
+ """
-IN_MEMORY_FORMAT = enum(BINARY=0, OBJECT=1)
-"""
-Near Cache in memory format of the values.
+ INT = 3
+ """
+    Integer types will be serialized as a 32-bit integer (as Java int).
+ """
-* BINARY : Binary format, hazelcast serialized bytearray format
-* OBJECT : The actual objects used
-"""
+ LONG = 4
+ """
+    Integer types will be serialized as a 64-bit integer (as Java long).
+ """
-PROTOCOL = enum(SSLv2=0, SSLv3=1, SSL=2, TLSv1=3, TLSv1_1=4, TLSv1_2=5, TLSv1_3=6, TLS=7)
-"""
-SSL protocol options.
-
-* SSLv2 : SSL 2.O Protocol. RFC 6176 prohibits SSL 2.0. Please use TLSv1+
-* SSLv3 : SSL 3.0 Protocol. RFC 7568 prohibits SSL 3.0. Please use TLSv1+
-* SSL : Alias for SSL 3.0
-* TLSv1 : TLS 1.0 Protocol described in RFC 2246
-* TLSv1_1 : TLS 1.1 Protocol described in RFC 4346
-* TLSv1_2 : TLS 1.2 Protocol described in RFC 5246
-* TLSv1_3 : TLS 1.3 Protocol described in RFC 8446
-* TLS : Alias for TLS 1.2
-* TLSv1+ requires at least Python 2.7.9 or Python 3.4 build with OpenSSL 1.0.1+
-* TLSv1_3 requires at least Python 2.7.15 or Python 3.7 build with OpenSSL 1.1.1+
-"""
+ BIG_INT = 5
+ """
+ Integer types will be serialized as Java BigInteger. This option can
+ handle integer types which are less than -2^63 or greater than or
+    equal to 2^63. However, when this option is set, serializing/deserializing
+ integer types is costly.
+ """
-QUERY_CONSTANTS = enum(KEY_ATTRIBUTE_NAME="__key", THIS_ATTRIBUTE_NAME="this")
-"""
-Contains constants for Query.
-* KEY_ATTRIBUTE_NAME : Attribute name of the key.
-* THIS_ATTRIBUTE_NAME : Attribute name of the "this"
-"""
-UNIQUE_KEY_TRANSFORMATION = enum(OBJECT=0, LONG=1, RAW=2)
-"""
-Defines an assortment of transformations which can be applied to
-BitmapIndexOptions#getUniqueKey() unique key values.
-* OBJECT : Extracted unique key value is interpreted as an object value.
- Non-negative unique ID is assigned to every distinct object value.
-* LONG : Extracted unique key value is interpreted as a whole integer value of byte, short, int or long type.
- The extracted value is upcasted to long (if necessary) and unique non-negative ID is assigned
- to every distinct value.
-* RAW : Extracted unique key value is interpreted as a whole integer value of byte, short, int or long type.
- The extracted value is upcasted to long (if necessary) and the resulting value is used directly as an ID.
-"""
+@with_reversed_items
+class EvictionPolicy(object):
+ """
+ Near Cache eviction policy options.
+ """
-INDEX_TYPE = enum(SORTED=0, HASH=1, BITMAP=2)
-"""
-Type of the index.
-* SORTED : Sorted index. Can be used with equality and range predicates.
-* HASH : Hash index. Can be used with equality predicates.
-* BITMAP : Bitmap index. Can be used with equality predicates.
-"""
+ NONE = 0
+ """
+ No eviction.
+ """
+
+ LRU = 1
+ """
+ Least Recently Used items will be evicted.
+ """
-_DEFAULT_CLUSTER_NAME = "dev"
+ LFU = 2
+ """
+    Least Frequently Used items will be evicted.
+ """
-_DEFAULT_MAX_ENTRY_COUNT = 10000
-_DEFAULT_SAMPLING_COUNT = 8
-_DEFAULT_SAMPLING_POOL_SIZE = 16
+ RANDOM = 3
+ """
+ Items will be evicted randomly.
+ """
-_MAXIMUM_PREFETCH_COUNT = 100000
+@with_reversed_items
+class InMemoryFormat(object):
+ """
+ Near Cache in memory format of the values.
+ """
-class ClientConfig(object):
+ BINARY = 0
+ """
+ As Hazelcast serialized bytearray data.
"""
- The root configuration for hazelcast python client.
- >>> client_config = ClientConfig()
- >>> client = HazelcastClient(client_config)
+ OBJECT = 1
+ """
+ As the actual object.
"""
- def __init__(self):
- self.client_name = None
- """Name of the client"""
-
- self.cluster_name = _DEFAULT_CLUSTER_NAME
- """Name of the cluster to connect to. By default, set to `dev`."""
-
- self.network = ClientNetworkConfig()
- """The network configuration for addresses to connect, smart-routing, socket-options..."""
-
- self.connection_strategy = ConnectionStrategyConfig()
- """Connection strategy config of the client"""
-
- self.serialization = SerializationConfig()
- """Hazelcast serialization configuration"""
-
- self.near_caches = {} # map_name:NearCacheConfig
- """Near Cache configuration which maps "map-name : NearCacheConfig"""
-
- self._properties = {}
- """Config properties"""
-
- self.load_balancer = None
- """Custom load balancer used to distribute the operations to multiple Endpoints."""
- self.membership_listeners = []
- """Membership listeners, an array of tuple (member_added, member_removed, fire_for_existing)"""
+@with_reversed_items
+class SSLProtocol(object):
+ """
+ SSL protocol options.
+
+    TLSv1+ requires at least Python 2.7.9 or Python 3.4 built with OpenSSL 1.0.1+.
+    TLSv1_3 requires at least Python 2.7.15 or Python 3.7 built with OpenSSL 1.1.1+.
+ """
+
+ SSLv2 = 0
+ """
+    SSL 2.0 Protocol. RFC 6176 prohibits SSL 2.0. Please use TLSv1+.
+ """
+
+ SSLv3 = 1
+ """
+ SSL 3.0 Protocol. RFC 7568 prohibits SSL 3.0. Please use TLSv1+.
+ """
+
+ TLSv1 = 2
+ """
+ TLS 1.0 Protocol described in RFC 2246.
+ """
+
+ TLSv1_1 = 3
+ """
+ TLS 1.1 Protocol described in RFC 4346.
+ """
+
+ TLSv1_2 = 4
+ """
+ TLS 1.2 Protocol described in RFC 5246.
+ """
- self.lifecycle_listeners = []
- """ Lifecycle Listeners, an array of Functions of f(state)"""
+ TLSv1_3 = 5
+ """
+ TLS 1.3 Protocol described in RFC 8446.
+ """
- self.flake_id_generators = {}
- """Flake ID generator configuration which maps "config-name" : FlakeIdGeneratorConfig """
- self.logger = LoggerConfig()
- """Logger configuration."""
+@with_reversed_items
+class QueryConstants(object):
+ """
+ Contains constants for Query.
+ """
- self.labels = set()
- """Labels for the client to be sent to the cluster."""
+ KEY_ATTRIBUTE_NAME = "__key"
+ """
+ Attribute name of the key.
+ """
- def add_membership_listener(self, member_added=None, member_removed=None, fire_for_existing=False):
- """
- Helper method for adding membership listeners
+ THIS_ATTRIBUTE_NAME = "this"
+ """
+ Attribute name of the value.
+ """
- :param member_added: (Function), Function to be called when a member is added, in the form of f(member)
- (optional).
- :param member_removed: (Function), Function to be called when a member is removed, in the form of f(member)
- (optional).
- :param fire_for_existing: if True, already existing members will fire member_added event (optional).
- :return: `self` for cascading configuration
- """
- self.membership_listeners.append((member_added, member_removed, fire_for_existing))
- return self
- def add_lifecycle_listener(self, lifecycle_state_changed=None):
- """
- Helper method for adding lifecycle listeners.
+@with_reversed_items
+class UniqueKeyTransformation(object):
+ """
+ Defines an assortment of transformations which can be applied to
+ unique key values.
+ """
- :param lifecycle_state_changed: (Function), Function to be called when lifecycle state is changed (optional).
- In the form of f(state).
- :return: `self` for cascading configuration
- """
- if lifecycle_state_changed:
- self.lifecycle_listeners.append(lifecycle_state_changed)
- return self
+ OBJECT = 0
+ """
+ Extracted unique key value is interpreted as an object value.
+ Non-negative unique ID is assigned to every distinct object value.
+ """
- def add_near_cache_config(self, near_cache_config):
- """
- Helper method to add a new NearCacheConfig.
+ LONG = 1
+ """
+ Extracted unique key value is interpreted as a whole integer value of byte, short, int or long type.
+    The extracted value is upcast to long (if necessary) and a unique non-negative ID is assigned
+ to every distinct value.
+ """
- :param near_cache_config: (NearCacheConfig), the near_cache config to add.
- :return: `self` for cascading configuration.
- """
- self.near_caches[near_cache_config.name] = near_cache_config
- return self
+ RAW = 2
+ """
+ Extracted unique key value is interpreted as a whole integer value of byte, short, int or long type.
+    The extracted value is upcast to long (if necessary) and the resulting value is used directly as an ID.
+ """
- def add_flake_id_generator_config(self, flake_id_generator_config):
- """
- Helper method to add a new FlakeIdGeneratorConfig.
- :param flake_id_generator_config: (FlakeIdGeneratorConfig), the configuration to add
- :return: `self` for cascading configuration
- """
- self.flake_id_generators[flake_id_generator_config.name] = flake_id_generator_config
- return self
+@with_reversed_items
+class IndexType(object):
+ """
+ Type of the index.
+ """
- def get_property_or_default(self, key, default):
- """
- Client property accessor with fallback to default value.
+ SORTED = 0
+ """
+ Sorted index. Can be used with equality and range predicates.
+ """
- :param key: (Object), property key to access.
- :param default: (Object), the default value for fallback.
- :return: (Object), property value if it exist or the default value otherwise.
- """
- try:
- return self._properties[key]
- except KeyError:
- return default
+ HASH = 1
+ """
+ Hash index. Can be used with equality predicates.
+ """
- def get_properties(self):
- """
- Gets the configuration properties.
+ BITMAP = 2
+ """
+ Bitmap index. Can be used with equality predicates.
+ """
- :return: (dict), Client configuration properties.
- """
- return self._properties
- def set_property(self, key, value):
- """
- Sets the value of a named property.
+@with_reversed_items
+class ReconnectMode(object):
+ """
+ Reconnect options.
+ """
- :param key: Property name
- :param value: Value of the property
- :return: `self` for cascading configuration.
- """
- self._properties[key] = value
- return self
+ OFF = 0
+ """
+ Prevent reconnect to cluster after a disconnect.
+ """
+ ON = 1
+ """
+ Reconnect to cluster by blocking invocations.
+ """
-class ClientNetworkConfig(object):
- """
- Network related configuration parameters.
+ ASYNC = 2
+ """
+    Reconnect to cluster without blocking invocations. Invocations will receive a ClientOfflineError.
"""
- def __init__(self):
- self.addresses = []
- """The candidate address list that client will use to establish initial connection
-
- >>> addresses.append("127.0.0.1:5701")
- """
- self.connection_timeout = 5.0
- """
- Socket connection timeout is a float, giving in seconds, or None.
- Setting a timeout of None disables the timeout feature and is equivalent to block the socket until it connects.
- Setting a timeout of zero is the same as disables blocking on connect.
- """
+class BitmapIndexOptions(object):
+ __slots__ = ("_unique_key", "_unique_key_transformation")
- self.socket_options = []
- """
- Array of Unix socket options.
+ def __init__(self, unique_key=None, unique_key_transformation=None):
+ self._unique_key = QueryConstants.KEY_ATTRIBUTE_NAME
+ if unique_key is not None:
+ self.unique_key = unique_key
- Example usage:
+ self._unique_key_transformation = UniqueKeyTransformation.OBJECT
+ if unique_key_transformation is not None:
+ self.unique_key_transformation = unique_key_transformation
- >>> import socket
- >>> client_network_config.socket_options.append(SocketOption(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1))
- >>> client_network_config.socket_options.append(SocketOption(socket.SOL_SOCKET, socket.SO_SNDBUF, 32768))
- >>> client_network_config.socket_options.append(SocketOption(socket.SOL_SOCKET, socket.SO_RCVBUF, 32768))
+ @property
+ def unique_key(self):
+ return self._unique_key
- Please see the Unix manual for level and option. Level and option constant are in python std lib socket module
- """
+ @unique_key.setter
+ def unique_key(self, value):
+ if value in QueryConstants.reverse:
+ self._unique_key = value
+ else:
+ raise TypeError("unique_key must be of type QueryConstants")
+
+ @property
+ def unique_key_transformation(self):
+ return self._unique_key_transformation
+
+ @unique_key_transformation.setter
+ def unique_key_transformation(self, value):
+ if value in UniqueKeyTransformation.reverse:
+ self._unique_key_transformation = value
+ else:
+ raise TypeError("unique_key_transformation must be of type UniqueKeyTransformation")
- self.redo_operation = False
- """
- If true, client will redo the operations that were executing on the server and client lost the connection.
- This can be because of network, or simply because the member died. However it is not clear whether the
- application is performed or not. For idempotent operations this is harmless, but for non idempotent ones
- retrying can cause to undesirable effects. Note that the redo can perform on any member.
- """
+ @classmethod
+ def from_dict(cls, d):
+ options = cls()
+ for k, v in six.iteritems(d):
+ try:
+ options.__setattr__(k, v)
+ except AttributeError:
+ raise InvalidConfigurationError("Unrecognized config option for the bitmap index options: %s" % k)
+ return options
- self.smart_routing = True
- """
- If true, client will route the key based operations to owner of the key at the best effort. Note that it uses a
- cached value of partition count and doesn't guarantee that the operation will always be executed on the owner.
- The cached table is updated every 10 seconds.
- """
+ def __repr__(self):
+ return "BitmapIndexOptions(unique_key=%s, unique_key_transformation=%s)" \
+ % (self.unique_key, self.unique_key_transformation)
- self.ssl = SSLConfig()
- """SSL configurations for the client."""
- self.cloud = ClientCloudConfig()
- """Hazelcast Cloud configuration to let the client connect the cluster via Hazelcast.cloud"""
+class IndexConfig(object):
+ __slots__ = ("_name", "_type", "_attributes", "_bitmap_index_options")
+ def __init__(self, name=None, type=None, attributes=None, bitmap_index_options=None):
+ self._name = name
+ if name is not None:
+ self.name = name
-class SocketOption(object):
- """
- Advanced configuration for fine-tune the TCP options.
- A Socket option represent the unix socket option, that will be passed to python socket.setoption(level,`option, value)`
- See the Unix manual for level and option.
- """
+ self._type = IndexType.SORTED
+ if type is not None:
+ self.type = type
- def __init__(self, level, option, value):
- self.level = level
- """Option level. See the Unix manual for detail."""
+ self._attributes = []
+ if attributes is not None:
+ self.attributes = attributes
- self.option = option
- """The actual socket option. The actual socket option."""
+ self._bitmap_index_options = BitmapIndexOptions()
+ if bitmap_index_options is not None:
+ self.bitmap_index_options = bitmap_index_options
- self.value = value
- """Socket option value. The value argument can either be an integer or a string"""
+ def add_attribute(self, attribute):
+ IndexUtil.validate_attribute(attribute)
+ self.attributes.append(attribute)
+ @property
+ def name(self):
+ return self._name
-class SerializationConfig(object):
- """
- Hazelcast Serialization Service configuration options can be set from this class.
- """
+ @name.setter
+ def name(self, value):
+ if isinstance(value, (six.string_types, none_type)):
+ self._name = value
+ else:
+ raise TypeError("name must be a string or None")
- def __init__(self):
- self.portable_version = 0
- """
- Portable version will be used to differentiate two versions of the same class that have changes on the class,
- like adding/removing a field or changing a type of a field.
- """
+ @property
+ def type(self):
+ return self._type
+
+ @type.setter
+ def type(self, value):
+ if value in IndexType.reverse:
+ self._type = value
+ else:
+ raise TypeError("type must be of type IndexType")
+
+ @property
+ def attributes(self):
+ return self._attributes
- self.data_serializable_factories = {}
- """
- Dictionary of factory-id and corresponding IdentifiedDataserializable factories. A Factory is a simple
- dictionary with entries of class-id : class-constructor-function pairs.
+ @attributes.setter
+ def attributes(self, value):
+ if isinstance(value, list):
+ self._attributes = value
+ else:
+ raise TypeError("attributes must be a list")
+
+ @property
+ def bitmap_index_options(self):
+ return self._bitmap_index_options
+
+ @bitmap_index_options.setter
+ def bitmap_index_options(self, value):
+ if isinstance(value, dict):
+ self._bitmap_index_options = BitmapIndexOptions.from_dict(value)
+ elif isinstance(value, BitmapIndexOptions):
+ # This branch should only be taken by the client protocol
+ self._bitmap_index_options = value
+ else:
+ raise TypeError("bitmap_index_options must be a dict")
+
+ @classmethod
+ def from_dict(cls, d):
+ config = cls()
+ for k, v in six.iteritems(d):
+ if v is not None:
+ try:
+ config.__setattr__(k, v)
+ except AttributeError:
+ raise InvalidConfigurationError("Unrecognized config option for the index config: %s" % k)
+ return config
+
+ def __repr__(self):
+ return "IndexConfig(name=%s, type=%s, attributes=%s, bitmap_index_options=%s)" \
+ % (self.name, self.type, self.attributes, self.bitmap_index_options)
+
+
+class IndexUtil(object):
+ _MAX_ATTRIBUTES = 255
+ """Maximum number of attributes allowed in the index."""
+
+ _THIS_PATTERN = re.compile(r"^this\.")
+ """Pattern to stripe away "this." prefix."""
+
+ @staticmethod
+ def validate_attribute(attribute):
+ check_not_none(attribute, "Attribute name cannot be None")
+
+ stripped_attribute = attribute.strip()
+ if not stripped_attribute:
+ raise ValueError("Attribute name cannot be empty")
+
+ if stripped_attribute.endswith("."):
+ raise ValueError("Attribute name cannot end with dot: %s" % attribute)
+
+ @staticmethod
+ def validate_and_normalize(map_name, index_config):
+ original_attributes = index_config.attributes
+ if not original_attributes:
+ raise ValueError("Index must have at least one attribute: %s" % index_config)
- Example:
+ if len(original_attributes) > IndexUtil._MAX_ATTRIBUTES:
+ raise ValueError(
+ "Index cannot have more than %s attributes %s" % (IndexUtil._MAX_ATTRIBUTES, index_config))
+
+ if index_config.type == IndexType.BITMAP and len(original_attributes) > 1:
+ raise ValueError("Composite bitmap indexes are not supported: %s" % index_config)
+
+ normalized_attributes = []
+ for original_attribute in original_attributes:
+ IndexUtil.validate_attribute(original_attribute)
+
+ original_attribute = original_attribute.strip()
+ normalized_attribute = IndexUtil.canonicalize_attribute(original_attribute)
+
+ try:
+ idx = normalized_attributes.index(normalized_attribute)
+ except ValueError:
+ pass
+ else:
+ duplicate_original_attribute = original_attributes[idx]
+ if duplicate_original_attribute == original_attribute:
+ raise ValueError("Duplicate attribute name [attribute_name=%s, index_config=%s]"
+ % (original_attribute, index_config))
+ else:
+ raise ValueError("Duplicate attribute names [attribute_name1=%s, attribute_name2=%s, "
+ "index_config=%s]"
+ % (duplicate_original_attribute, original_attribute, index_config))
+
+ normalized_attributes.append(normalized_attribute)
+
+ name = index_config.name
+ if name and not name.strip():
+ name = None
+
+ normalized_config = IndexUtil.build_normalized_config(map_name, index_config.type, name,
+ normalized_attributes)
+ if index_config.type == IndexType.BITMAP:
+ unique_key = index_config.bitmap_index_options.unique_key
+ unique_key_transformation = index_config.bitmap_index_options.unique_key_transformation
+ IndexUtil.validate_attribute(unique_key)
+ unique_key = IndexUtil.canonicalize_attribute(unique_key)
+ normalized_config.bitmap_index_options.unique_key = unique_key
+ normalized_config.bitmap_index_options.unique_key_transformation = unique_key_transformation
- >>> my_factory = {MyPersonClass.CLASS_ID : MyPersonClass, MyAddressClass.CLASS_ID : MyAddressClass}
- >>> serialization_config.data_serializable_factories[FACTORY_ID] = my_factory
+ return normalized_config
- """
+ @staticmethod
+ def canonicalize_attribute(attribute):
+ return re.sub(IndexUtil._THIS_PATTERN, "", attribute)
- self.portable_factories = {}
- """
- Dictionary of factory-id and corresponding portable factories. A Factory is a simple dictionary with entries of
- class-id : class-constructor-function pairs.
+ @staticmethod
+ def build_normalized_config(map_name, index_type, index_name, normalized_attributes):
+ new_config = IndexConfig()
+ new_config.type = index_type
- Example:
+ name = map_name + "_" + IndexUtil._index_type_to_name(index_type) if index_name is None else None
+ for normalized_attribute in normalized_attributes:
+ new_config.add_attribute(normalized_attribute)
+ if name:
+ name += "_" + normalized_attribute
- >>> portable_factory = {PortableClass_0.CLASS_ID : PortableClass_0, PortableClass_1.CLASS_ID : PortableClass_1}
- >>> serialization_config.portable_factories[FACTORY_ID] = portable_factory
- """
+ if name:
+ index_name = name
- self.class_definitions = set()
- """
- Set of all Portable class definitions.
- """
+ new_config.name = index_name
+ return new_config
- self.check_class_def_errors = True
- """Configured Portable Class definitions should be validated for errors or not."""
+ @staticmethod
+ def _index_type_to_name(index_type):
+ if index_type == IndexType.SORTED:
+ return "sorted"
+ elif index_type == IndexType.HASH:
+ return "hash"
+ elif index_type == IndexType.BITMAP:
+ return "bitmap"
+ else:
+ raise ValueError("Unsupported index type %s" % index_type)
- self.is_big_endian = True
- """Hazelcast Serialization is big endian or not."""
- self.default_integer_type = INTEGER_TYPE.INT
- """
- Python has variable length int/long type. In order to match this with static fixed length Java server, this option
- defines the length of the int/long.
- One of the values of :const:`INTEGER_TYPE` can be assigned. Please see :const:`INTEGER_TYPE` documentation for details of the options.
- """
+class _Config(object):
+ __slots__ = ("_cluster_members", "_cluster_name", "_client_name",
+ "_connection_timeout", "_socket_options", "_redo_operation",
+ "_smart_routing", "_ssl_enabled", "_ssl_cafile",
+ "_ssl_certfile", "_ssl_keyfile", "_ssl_password",
+ "_ssl_protocol", "_ssl_ciphers", "_cloud_discovery_token",
+ "_async_start", "_reconnect_mode", "_retry_initial_backoff",
+ "_retry_max_backoff", "_retry_jitter", "_retry_multiplier",
+ "_cluster_connect_timeout", "_portable_version", "_data_serializable_factories",
+ "_portable_factories", "_class_definitions", "_check_class_definition_errors",
+ "_is_big_endian", "_default_int_type", "_global_serializer",
+ "_custom_serializers", "_near_caches", "_load_balancer",
+ "_membership_listeners", "_lifecycle_listeners", "_flake_id_generators",
+ "_labels", "_heartbeat_interval", "_heartbeat_timeout",
+ "_invocation_timeout", "_invocation_retry_pause", "_statistics_enabled",
+ "_statistics_period", "_shuffle_member_list", "_logging_config",
+ "_logging_level")
+ def __init__(self):
+ self._cluster_members = []
+ self._cluster_name = "dev"
+ self._client_name = None
+ self._connection_timeout = 5.0
+ self._socket_options = []
+ self._redo_operation = False
+ self._smart_routing = True
+ self._ssl_enabled = False
+ self._ssl_cafile = None
+ self._ssl_certfile = None
+ self._ssl_keyfile = None
+ self._ssl_password = None
+ self._ssl_protocol = SSLProtocol.TLSv1_2
+ self._ssl_ciphers = None
+ self._cloud_discovery_token = None
+ self._async_start = False
+ self._reconnect_mode = ReconnectMode.ON
+ self._retry_initial_backoff = 1.0
+ self._retry_max_backoff = 30.0
+ self._retry_jitter = 0.0
+ self._retry_multiplier = 1.0
+ self._cluster_connect_timeout = 20.0
+ self._portable_version = 0
+ self._data_serializable_factories = {}
+ self._portable_factories = {}
+ self._class_definitions = []
+ self._check_class_definition_errors = True
+ self._is_big_endian = True
+ self._default_int_type = IntType.INT
self._global_serializer = None
self._custom_serializers = {}
+ self._near_caches = {}
+ self._load_balancer = None
+ self._membership_listeners = []
+ self._lifecycle_listeners = []
+ self._flake_id_generators = {}
+ self._labels = []
+ self._heartbeat_interval = 5.0
+ self._heartbeat_timeout = 60.0
+ self._invocation_timeout = 120.0
+ self._invocation_retry_pause = 1.0
+ self._statistics_enabled = False
+ self._statistics_period = 3.0
+ self._shuffle_member_list = True
+ self._logging_config = None
+ self._logging_level = logging.INFO
+
+ @property
+ def cluster_members(self):
+ return self._cluster_members
+
+ @cluster_members.setter
+ def cluster_members(self, value):
+ if isinstance(value, list):
+ for address in value:
+ if not isinstance(address, six.string_types):
+ raise TypeError("cluster_members must be list of strings")
- def add_data_serializable_factory(self, factory_id, factory):
- """
- Helper method for adding IdentifiedDataSerializable factory.
- example:
- >>> my_factory = {MyPersonClass.CLASS_ID : MyPersonClass, MyAddressClass.CLASS_ID : MyAddressClass}
- >>> serialization_config.add_data_serializable_factory(factory_id, my_factory)
-
- :param factory_id: (int), factory-id to register.
- :param factory: (Dictionary), the factory dictionary of class-id:class-constructor-function pairs.
- """
- self.data_serializable_factories[factory_id] = factory
-
- def add_portable_factory(self, factory_id, factory):
- """
- Helper method for adding Portable factory.
- example:
- >>> portable_factory = {PortableClass_0.CLASS_ID : PortableClass_0, PortableClass_1.CLASS_ID : PortableClass_1}
- >>> serialization_config.portable_factories[FACTORY_ID] = portable_factory
-
- :param factory_id: (int), factory-id to register.
- :param factory: (Dictionary), the factory dictionary of class-id:class-constructor-function pairs.
- """
- self.portable_factories[factory_id] = factory
-
- def set_custom_serializer(self, _type, serializer):
- """
- Assign a serializer for the type.
-
- :param _type: (Type), the target type of the serializer
- :param serializer: (Serializer), Custom Serializer constructor function
- """
- validate_type(_type)
- validate_serializer(serializer, StreamSerializer)
- self._custom_serializers[_type] = serializer
+ self._cluster_members = value
+ else:
+ raise TypeError("cluster_members must be a list")
@property
- def custom_serializers(self):
- """
- All custom serializers.
+ def cluster_name(self):
+ return self._cluster_name
- :return: (Dictionary), dictionary of type-custom serializer pairs.
- """
- return self._custom_serializers
+ @cluster_name.setter
+ def cluster_name(self, value):
+ if isinstance(value, six.string_types):
+ self._cluster_name = value
+ else:
+ raise TypeError("cluster_name must be a string")
@property
- def global_serializer(self):
- """
- The Global serializer property for serialization service. The assigned value should be a class constructor
- function. It handles every object if no other serializer found.
+ def client_name(self):
+ return self._client_name
- Global serializers should extend `hazelcast.serializer.api.StreamSerializer`
- """
- return self._global_serializer
+ @client_name.setter
+ def client_name(self, value):
+ if isinstance(value, six.string_types):
+ self._client_name = value
+ else:
+ raise TypeError("client_name must be a string")
- @global_serializer.setter
- def global_serializer(self, global_serializer):
- validate_serializer(global_serializer, StreamSerializer)
- self._global_serializer = global_serializer
+ @property
+ def connection_timeout(self):
+ return self._connection_timeout
+
+ @connection_timeout.setter
+ def connection_timeout(self, value):
+ if isinstance(value, number_types):
+ if value < 0:
+ raise ValueError("connection_timeout must be non-negative")
+ self._connection_timeout = value
+ else:
+ raise TypeError("connection_timeout must be a number")
+ @property
+ def socket_options(self):
+ return self._socket_options
-class NearCacheConfig(object):
- """
- Map Near cache configuration for a specific map by name.
- """
+ @socket_options.setter
+ def socket_options(self, value):
+ if isinstance(value, list):
+ try:
+ for _, _, _ in value:
+ # Must be a tuple of length 3
+ pass
- def __init__(self, name="default"):
- self._name = name
- self.invalidate_on_change = True
- """Should a value is invalidated and removed in case of any map data
- updating operations such as replace, remove etc.
- """
+ self._socket_options = value
+ except ValueError:
+ raise TypeError("socket_options must contain tuples of length 3 as items")
+ else:
+ raise TypeError("socket_options must be a list")
+
+ @property
+ def redo_operation(self):
+ return self._redo_operation
- self._in_memory_format = IN_MEMORY_FORMAT.BINARY
- self._time_to_live_seconds = None
- self._max_idle_seconds = None
- self._eviction_policy = EVICTION_POLICY.LRU
- self._eviction_max_size = _DEFAULT_MAX_ENTRY_COUNT
- self._eviction_sampling_count = _DEFAULT_SAMPLING_COUNT
- self._eviction_sampling_pool_size = _DEFAULT_SAMPLING_POOL_SIZE
+ @redo_operation.setter
+ def redo_operation(self, value):
+ if isinstance(value, bool):
+ self._redo_operation = value
+ else:
+ raise TypeError("redo_operation must be a boolean")
@property
- def name(self):
- """Name of the map that this near cache belong. Cannot be None."""
- return self._name
+ def smart_routing(self):
+ return self._smart_routing
- @name.setter
- def name(self, name):
- if name is None:
- raise ValueError("Name of the map cannot be None")
- self._name = name
+ @smart_routing.setter
+ def smart_routing(self, value):
+ if isinstance(value, bool):
+ self._smart_routing = value
+ else:
+ raise TypeError("smart_routing must be a boolean")
@property
- def in_memory_format(self):
- """Internal representation of the stored data in near cache."""
- return self._in_memory_format
+ def ssl_enabled(self):
+ return self._ssl_enabled
- @in_memory_format.setter
- def in_memory_format(self, in_memory_format=IN_MEMORY_FORMAT.BINARY):
- if in_memory_format not in IN_MEMORY_FORMAT.reverse:
- raise ValueError("Invalid in-memory-format :{}".format(in_memory_format))
- self._in_memory_format = in_memory_format
+ @ssl_enabled.setter
+ def ssl_enabled(self, value):
+ if isinstance(value, bool):
+ self._ssl_enabled = value
+ else:
+ raise TypeError("ssl_enabled must be a boolean")
@property
- def time_to_live_seconds(self):
- """The maximum number of seconds for each entry to stay in the near cache."""
- return self._time_to_live_seconds
+ def ssl_cafile(self):
+ return self._ssl_cafile
- @time_to_live_seconds.setter
- def time_to_live_seconds(self, time_to_live_seconds):
- if time_to_live_seconds < 0:
- raise ValueError("'time_to_live_seconds' cannot be less than 0")
- self._time_to_live_seconds = time_to_live_seconds
+ @ssl_cafile.setter
+ def ssl_cafile(self, value):
+ if isinstance(value, six.string_types):
+ self._ssl_cafile = value
+ else:
+ raise TypeError("ssl_cafile must be a string")
@property
- def max_idle_seconds(self):
- """Maximum number of seconds each entry can stay in the near cache as untouched (not-read)."""
- return self._max_idle_seconds
+ def ssl_certfile(self):
+ return self._ssl_certfile
- @max_idle_seconds.setter
- def max_idle_seconds(self, max_idle_seconds):
- if max_idle_seconds < 0:
- raise ValueError("'max_idle_seconds' cannot be less than 0")
- self._max_idle_seconds = max_idle_seconds
+ @ssl_certfile.setter
+ def ssl_certfile(self, value):
+ if isinstance(value, six.string_types):
+ self._ssl_certfile = value
+ else:
+ raise TypeError("ssl_certfile must be a string")
@property
- def eviction_policy(self):
- """The eviction policy for the near cache"""
- return self._eviction_policy
+ def ssl_keyfile(self):
+ return self._ssl_keyfile
- @eviction_policy.setter
- def eviction_policy(self, eviction_policy):
- if eviction_policy not in EVICTION_POLICY.reverse:
- raise ValueError("Invalid eviction_policy :{}".format(eviction_policy))
- self._eviction_policy = eviction_policy
+ @ssl_keyfile.setter
+ def ssl_keyfile(self, value):
+ if isinstance(value, six.string_types):
+ self._ssl_keyfile = value
+ else:
+ raise TypeError("ssl_keyfile must be a string")
@property
- def eviction_max_size(self):
- """The limit for number of entries until the eviction start."""
- return self._eviction_max_size
+ def ssl_password(self):
+ return self._ssl_password
- @eviction_max_size.setter
- def eviction_max_size(self, eviction_max_size):
- if eviction_max_size < 1:
- raise ValueError("'Eviction-max-size' cannot be less than 1")
- self._eviction_max_size = eviction_max_size
+ @ssl_password.setter
+ def ssl_password(self, value):
+ if isinstance(value, (six.string_types, six.binary_type, bytearray)) or callable(value):
+ self._ssl_password = value
+ else:
+ raise TypeError("ssl_password must be string, bytes, bytearray or callable")
@property
- def eviction_sampling_count(self):
- """The entry count of the samples for the internal eviction sampling algorithm taking samples in each
- operation."""
- return self._eviction_sampling_count
+ def ssl_protocol(self):
+ return self._ssl_protocol
- @eviction_sampling_count.setter
- def eviction_sampling_count(self, eviction_sampling_count):
- if eviction_sampling_count < 1:
- raise ValueError("'eviction_sampling_count' cannot be less than 1")
- self._eviction_sampling_count = eviction_sampling_count
+ @ssl_protocol.setter
+ def ssl_protocol(self, value):
+ if value in SSLProtocol.reverse:
+ self._ssl_protocol = value
+ else:
+ raise TypeError("ssl_protocol must be of type SSLProtocol")
@property
- def eviction_sampling_pool_size(self):
- """The size of the internal eviction sampling algorithm has a pool of best candidates for eviction."""
- return self._eviction_sampling_pool_size
+ def ssl_ciphers(self):
+ return self._ssl_ciphers
- @eviction_sampling_pool_size.setter
- def eviction_sampling_pool_size(self, eviction_sampling_pool_size):
- if eviction_sampling_pool_size < 1:
- raise ValueError("'eviction_sampling_pool_size' cannot be less than 1")
- self._eviction_sampling_pool_size = eviction_sampling_pool_size
+ @ssl_ciphers.setter
+ def ssl_ciphers(self, value):
+ if isinstance(value, six.string_types):
+ self._ssl_ciphers = value
+ else:
+ raise TypeError("ssl_ciphers must be a string")
+ @property
+ def cloud_discovery_token(self):
+ return self._cloud_discovery_token
-RECONNECT_MODE = enum(OFF=0, ON=1, ASYNC=2)
-"""
-* OFF : Prevent reconnect to cluster after a disconnect.
-* ON : Reconnect to cluster by blocking invocations.
-* ASYNC : Reconnect to cluster without blocking invocations. Invocations will receive ClientOfflineError
-"""
+ @cloud_discovery_token.setter
+ def cloud_discovery_token(self, value):
+ if isinstance(value, six.string_types):
+ self._cloud_discovery_token = value
+ else:
+ raise TypeError("cloud_discovery_token must be a string")
+ @property
+ def async_start(self):
+ return self._async_start
-class ConnectionStrategyConfig(object):
- """Connection strategy configuration is used for setting custom strategies and configuring strategy parameters."""
+ @async_start.setter
+ def async_start(self, value):
+ if isinstance(value, bool):
+ self._async_start = value
+ else:
+ raise TypeError("async_start must be a boolean")
- def __init__(self):
- self.async_start = False
- """Enables non-blocking start mode of HazelcastClient. When set to True, the client
- creation will not wait to connect to cluster. The client instance will throw exceptions
- until it connects to cluster and becomes ready. If set to False, HazelcastClient will block
- until a cluster connection established and it is ready to use the client instance.
- By default, set to False.
- """
+ @property
+ def reconnect_mode(self):
+ return self._reconnect_mode
- self.reconnect_mode = RECONNECT_MODE.ON
- """Defines how a client reconnects to cluster after a disconnect."""
+ @reconnect_mode.setter
+ def reconnect_mode(self, value):
+ if value in ReconnectMode.reverse:
+ self._reconnect_mode = value
+ else:
+ raise TypeError("reconnect_mode must be a type of ReconnectMode")
- self.connection_retry = ConnectionRetryConfig()
- """Connection retry config to be used by the client."""
+ @property
+ def retry_initial_backoff(self):
+ return self._retry_initial_backoff
+
+ @retry_initial_backoff.setter
+ def retry_initial_backoff(self, value):
+ if isinstance(value, number_types):
+ if value < 0:
+ raise ValueError("retry_initial_backoff must be non-negative")
+ self._retry_initial_backoff = value
+ else:
+ raise TypeError("retry_initial_backoff must be a number")
+ @property
+ def retry_max_backoff(self):
+ return self._retry_max_backoff
+
+ @retry_max_backoff.setter
+ def retry_max_backoff(self, value):
+ if isinstance(value, number_types):
+ if value < 0:
+ raise ValueError("retry_max_backoff must be non-negative")
+ self._retry_max_backoff = value
+ else:
+ raise TypeError("retry_max_backoff must be a number")
-_DEFAULT_INITIAL_BACKOFF = 1
-_DEFAULT_MAX_BACKOFF = 30
-_DEFAULT_CLUSTER_CONNECT_TIMEOUT = 20
-_DEFAULT_MULTIPLIER = 1
-_DEFAULT_JITTER = 0
+ @property
+ def retry_jitter(self):
+ return self._retry_jitter
+
+ @retry_jitter.setter
+ def retry_jitter(self, value):
+ if isinstance(value, number_types):
+ if value < 0 or value > 1:
+ raise ValueError("retry_jitter must be in range [0.0, 1.0]")
+ self._retry_jitter = value
+ else:
+ raise TypeError("retry_jitter must be a number")
+ @property
+ def retry_multiplier(self):
+ return self._retry_multiplier
+
+ @retry_multiplier.setter
+ def retry_multiplier(self, value):
+ if isinstance(value, number_types):
+ if value < 1:
+ raise ValueError("retry_multiplier must be greater than or equal to 1.0")
+ self._retry_multiplier = value
+ else:
+ raise TypeError("retry_multiplier must be a number")
-class ConnectionRetryConfig(object):
- """Connection retry config controls the period among connection establish retries
- and defines when the client should give up retrying. Supports exponential behaviour
- with jitter for wait periods.
- """
+ @property
+ def cluster_connect_timeout(self):
+ return self._cluster_connect_timeout
+
+ @cluster_connect_timeout.setter
+ def cluster_connect_timeout(self, value):
+ if isinstance(value, number_types):
+ if value < 0:
+ raise ValueError("cluster_connect_timeout must be non-negative")
+ self._cluster_connect_timeout = value
+ else:
+ raise TypeError("cluster_connect_timeout must be a number")
- def __init__(self):
- self.initial_backoff = _DEFAULT_INITIAL_BACKOFF
- """Defines wait period in seconds after the first failure before retrying.
- Must be non-negative. By default, set to 1.
- """
+ @property
+ def portable_version(self):
+ return self._portable_version
+
+ @portable_version.setter
+ def portable_version(self, value):
+ if isinstance(value, number_types):
+ if value < 0:
+ raise ValueError("portable_version must be non-negative")
+ self._portable_version = value
+ else:
+ raise TypeError("portable_version must be a number")
- self.max_backoff = _DEFAULT_MAX_BACKOFF
- """Defines an upper bound for the backoff interval in seconds. Must be non-negative.
- By default, set to 30 seconds.
- """
+ @property
+ def data_serializable_factories(self):
+ return self._data_serializable_factories
- self.cluster_connect_timeout = _DEFAULT_CLUSTER_CONNECT_TIMEOUT
- """Defines timeout value in seconds for the client to give up a connection
- attempt to the cluster. Must be non-negative. By default, set to 20 seconds.
- """
+ @data_serializable_factories.setter
+ def data_serializable_factories(self, value):
+ if isinstance(value, dict):
+ for factory_id, factory in six.iteritems(value):
+ if not isinstance(factory_id, six.integer_types):
+ raise TypeError("Keys of data_serializable_factories must be integers")
- self.multiplier = _DEFAULT_MULTIPLIER
- """Defines the factor with which to multiply backoff after a failed retry.
- Must be greater than or equal to 1. By default, set to 1.
- """
+ if not isinstance(factory, dict):
+ raise TypeError("Values of data_serializable_factories must be dict")
- self.jitter = _DEFAULT_JITTER
- """Defines how much to randomize backoffs. At each iteration the calculated
- back-off is randomized via following method in pseudo-code
- Random(-jitter * current_backoff, jitter * current_backoff).
- Must be in range [0.0, 1.0]. By default, set to `0` (no randomization)."""
+ for class_id, clazz in six.iteritems(factory):
+ if not isinstance(class_id, six.integer_types):
+ raise TypeError("Keys of factories of data_serializable_factories must be integers")
+ if not (isinstance(clazz, type) and issubclass(clazz, IdentifiedDataSerializable)):
+ raise TypeError("Values of factories of data_serializable_factories must be "
+ "subclasses of IdentifiedDataSerializable")
-class SSLConfig(object):
- """
- SSL configuration.
- """
+ self._data_serializable_factories = value
+ else:
+ raise TypeError("data_serializable_factories must be a dict")
- def __init__(self):
- self.enabled = False
- """Enables/disables SSL."""
+ @property
+ def portable_factories(self):
+ return self._portable_factories
- self.cafile = None
- """
- Absolute path of concatenated CA certificates used to validate server's certificates in PEM format.
- When SSL is enabled and cafile is not set, a set of default CA certificates from default locations
- will be used.
- """
+ @portable_factories.setter
+ def portable_factories(self, value):
+ if isinstance(value, dict):
+ for factory_id, factory in six.iteritems(value):
+ if not isinstance(factory_id, six.integer_types):
+ raise TypeError("Keys of portable_factories must be integers")
- self.certfile = None
- """Absolute path of the client certificate in PEM format."""
+ if not isinstance(factory, dict):
+ raise TypeError("Values of portable_factories must be dict")
- self.keyfile = None
- """
- Absolute path of the private key file for the client certificate in the PEM format.
- If this parameter is None, private key will be taken from certfile.
- """
+ for class_id, clazz in six.iteritems(factory):
+ if not isinstance(class_id, six.integer_types):
+ raise TypeError("Keys of factories of portable_factories must be integers")
- self.password = None
- """
- Password for decrypting the keyfile if it is encrypted.
- The password may be a function to call to get the password.
- It will be called with no arguments, and it should return a string, bytes, or bytearray.
- If the return value is a string it will be encoded as UTF-8 before using it to decrypt the key.
- Alternatively a string, bytes, or bytearray value may be supplied directly as the password.
- """
+ if not (isinstance(clazz, type) and issubclass(clazz, Portable)):
+ raise TypeError("Values of factories of portable_factories must be "
+ "subclasses of Portable")
- self.protocol = PROTOCOL.TLS
- """Protocol version used in SSL communication. Default value is TLSv1.2"""
+ self._portable_factories = value
+ else:
+ raise TypeError("portable_factories must be a dict")
- self.ciphers = None
- """
- String in the OpenSSL cipher list format to set the available ciphers for sockets.
- More than one cipher can be set by separating them with a colon.
- """
+ @property
+ def class_definitions(self):
+ return self._class_definitions
+ @class_definitions.setter
+ def class_definitions(self, value):
+ if isinstance(value, list):
+ for cd in value:
+ if not isinstance(cd, ClassDefinition):
+ raise TypeError("class_definitions must contain objects of type ClassDefinition")
-class FlakeIdGeneratorConfig(object):
- """
- FlakeIdGeneratorConfig contains the configuration for the client regarding
- :class:`~hazelcast.proxy.flake_id_generator.FlakeIdGenerator`
- """
+ self._class_definitions = value
+ else:
+ raise TypeError("class_definitions must be a list")
- def __init__(self, name="default"):
- self._name = name
- self._prefetch_count = 100
- self._prefetch_validity_in_millis = 600000
+ @property
+ def check_class_definition_errors(self):
+ return self._check_class_definition_errors
+
+ @check_class_definition_errors.setter
+ def check_class_definition_errors(self, value):
+ if isinstance(value, bool):
+ self._check_class_definition_errors = value
+ else:
+ raise TypeError("check_class_definition_errors must be a boolean")
@property
- def name(self):
- """
- Name of the flake ID generator configuration.
-
- :getter: Returns the configuration name. This can be actual object name or pattern.
- :setter: Sets the name or name pattern for this config. Must not be modified after this
- instance is added to configuration.
- :type: str
- """
- return self._name
+ def is_big_endian(self):
+ return self._is_big_endian
- @name.setter
- def name(self, name):
- self._name = name
+ @is_big_endian.setter
+ def is_big_endian(self, value):
+ if isinstance(value, bool):
+ self._is_big_endian = value
+ else:
+ raise TypeError("is_big_endian must be a boolean")
@property
- def prefetch_count(self):
- """
- Prefetch value count.
-
- :getter: Returns the prefetch count. Prefetch count is in the range 1..100,000.
- :setter: Sets how many IDs are pre-fetched on the background when a new flake ID is requested
- from members. Default is 100.
- Prefetch count should be in the range 1..100,000.
- :type: int
- """
- return self._prefetch_count
+ def default_int_type(self):
+ return self._default_int_type
- @prefetch_count.setter
- def prefetch_count(self, prefetch_count):
- if not (0 < prefetch_count <= _MAXIMUM_PREFETCH_COUNT):
- raise ValueError("Prefetch count must be 1..{}, not {}".format(_MAXIMUM_PREFETCH_COUNT, prefetch_count))
- self._prefetch_count = prefetch_count
+ @default_int_type.setter
+ def default_int_type(self, value):
+ if value in IntType.reverse:
+ self._default_int_type = value
+ else:
+ raise TypeError("default_int_type must be of type IntType")
@property
- def prefetch_validity_in_millis(self):
- """
- Prefetch validity in milliseconds.
+ def global_serializer(self):
+ return self._global_serializer
- :getter: Returns the prefetch validity in milliseconds.
- :setter: Sets for how long the pre-fetched IDs can be used.
- If this time elapses, a new batch of IDs will be fetched.
- Time unit is milliseconds, default is 600,000 (10 minutes).
- The IDs contain timestamp component, which ensures rough global ordering of IDs.
- If an ID is assigned to an object that was created much later, it will be much out of order.
- If you don't care about ordering, set this value to 0.
- :type: int
- """
- return self._prefetch_validity_in_millis
+ @global_serializer.setter
+ def global_serializer(self, value):
+ if isinstance(value, type) and issubclass(value, StreamSerializer):
+ self._global_serializer = value
+ else:
+ raise TypeError("global_serializer must be a subclass of StreamSerializer")
- @prefetch_validity_in_millis.setter
- def prefetch_validity_in_millis(self, prefetch_validity_in_millis):
- self._prefetch_validity_in_millis = prefetch_validity_in_millis
+ @property
+ def custom_serializers(self):
+ return self._custom_serializers
+ @custom_serializers.setter
+ def custom_serializers(self, value):
+ if isinstance(value, dict):
+ for _type, serializer in six.iteritems(value):
+ if not isinstance(_type, type):
+ raise TypeError("Keys of custom_serializers must be types")
-class ClientCloudConfig(object):
- """
- Hazelcast Cloud configuration to let the client connect the cluster via Hazelcast.cloud
- """
+ if not (isinstance(serializer, type) and issubclass(serializer, StreamSerializer)):
+ raise TypeError("Values of custom_serializers must be subclasses of StreamSerializer")
- def __init__(self):
- self.enabled = False
- """Enables/disables cloud config."""
+ self._custom_serializers = value
+ else:
+ raise TypeError("custom_serializers must be a dict")
- self.discovery_token = ""
- """Hazelcast Cloud Discovery token of your cluster."""
+ @property
+ def near_caches(self):
+ return self._near_caches
+ @near_caches.setter
+ def near_caches(self, value):
+ if isinstance(value, dict):
+ configs = dict()
+ for name, config in six.iteritems(value):
+ if not isinstance(name, six.string_types):
+ raise TypeError("Keys of near_caches must be strings")
-class LoggerConfig(object):
- """
- Custom configuration for logging or a logging level for the default
- Hazelcast client logger can be set using this class.
- """
- def __init__(self):
- self.config_file = None
- """
- If the configuration file is set, given configuration file
- will be used instead of the default logger configuration
- with the given log level. This should be the absolute
- path of a JSON file that follows the
- ``Configuration dictionary schema`` described in the logging
- module of the standard library.
- """
-
- self.level = logging.INFO
- """
- Sets the logging level for the default logging
- configuration. To turn off the logging, level
- can be set to a high integer value. If custom
- logging levels are not used, a value greater
- than 50 is enough to turn off the default
- logger.
- """
+ if not isinstance(config, dict):
+ raise TypeError("Values of near_caches must be dict")
+ configs[name] = _NearCacheConfig.from_dict(config)
-class BitmapIndexOptions(object):
- """
- Configures indexing options specific to bitmap indexes
- """
+ self._near_caches = configs
+ else:
+ raise TypeError("near_caches must be a dict")
- def __init__(self, unique_key=QUERY_CONSTANTS.KEY_ATTRIBUTE_NAME,
- unique_key_transformation=UNIQUE_KEY_TRANSFORMATION.OBJECT):
- self.unique_key = unique_key
- """
- Source of values which uniquely identify each entry being inserted into an index.
- """
+ @property
+ def load_balancer(self):
+ return self._load_balancer
- self.unique_key_transformation = unique_key_transformation
- """
- Unique key transformation configured in this index. The transformation is
- applied to every value extracted from unique key attribute
- """
+ @load_balancer.setter
+ def load_balancer(self, value):
+ if isinstance(value, LoadBalancer):
+ self._load_balancer = value
+ else:
+ raise TypeError("load_balancer must be a LoadBalancer")
- def __repr__(self):
- return "BitmapIndexOptions(unique_key=%s, unique_key_transformation=%s)" \
- % (self.unique_key, self.unique_key_transformation)
+ @property
+ def membership_listeners(self):
+ return self._membership_listeners
+ @membership_listeners.setter
+ def membership_listeners(self, value):
+ if isinstance(value, list):
+ try:
+ for item in value:
+ try:
+ added, removed = item
+ except TypeError:
+ raise TypeError("membership_listeners must contain tuples of length 2 as items")
-class IndexConfig(object):
- """
- Configuration of an index. Hazelcast support two types of indexes: sorted index and hash index.
- Sorted indexes could be used with equality and range predicates and have logarithmic search time.
- Hash indexes could be used with equality predicates and have constant search time assuming the hash
- function of the indexed field disperses the elements properly.
- Index could be created on one or more attributes.
- """
+ if not (callable(added) or callable(removed)):
+ raise TypeError("At least one of the listeners in the tuple must be callable")
- def __init__(self, name=None, type=INDEX_TYPE.SORTED, attributes=None, bitmap_index_options=None):
- self.name = name
- """Name of the index"""
+ self._membership_listeners = value
+ except ValueError:
+ raise TypeError("membership_listeners must contain tuples of length 2 as items")
+ else:
+ raise TypeError("membership_listeners must be a list")
- self.type = type
- """Type of the index"""
+ @property
+ def lifecycle_listeners(self):
+ return self._lifecycle_listeners
- self.attributes = attributes or []
- """Indexed attributes"""
+ @lifecycle_listeners.setter
+ def lifecycle_listeners(self, value):
+ if isinstance(value, list):
+ for listener in value:
+ if not callable(listener):
+ raise TypeError("lifecycle_listeners must contain callable items")
- self.bitmap_index_options = bitmap_index_options or BitmapIndexOptions()
- """Bitmap index options"""
+ self._lifecycle_listeners = value
+ else:
+ raise TypeError("lifecycle_listeners must be a list")
- def add_attribute(self, attribute):
- _IndexUtil.validate_attribute(attribute)
- self.attributes.append(attribute)
+ @property
+ def flake_id_generators(self):
+ return self._flake_id_generators
- def __repr__(self):
- return "IndexConfig(name=%s, type=%s, attributes=%s, bitmap_index_options=%s)" \
- % (self.name, self.type, self.attributes, self.bitmap_index_options)
+ @flake_id_generators.setter
+ def flake_id_generators(self, value):
+ if isinstance(value, dict):
+ configs = dict()
+ for name, config in six.iteritems(value):
+ if not isinstance(name, six.string_types):
+ raise TypeError("Keys of flake_id_generators must be strings")
+ if not isinstance(config, dict):
+ raise TypeError("Values of flake_id_generators must be dict")
-class _IndexUtil(object):
- _MAX_ATTRIBUTES = 255
- """Maximum number of attributes allowed in the index."""
+ configs[name] = _FlakeIdGeneratorConfig.from_dict(config)
- _THIS_PATTERN = re.compile(r"^this\.")
- """Pattern to stripe away "this." prefix."""
+ self._flake_id_generators = configs
+ else:
+ raise TypeError("flake_id_generators must be a dict")
- @staticmethod
- def validate_attribute(attribute):
- check_not_none(attribute, "Attribute name cannot be None")
+ @property
+ def labels(self):
+ return self._labels
- stripped_attribute = attribute.strip()
- if not stripped_attribute:
- raise ValueError("Attribute name cannot be empty")
+ @labels.setter
+ def labels(self, value):
+ if isinstance(value, list):
+ for label in value:
+ if not isinstance(label, six.string_types):
+ raise TypeError("labels must be a list of strings")
- if stripped_attribute.endswith("."):
- raise ValueError("Attribute name cannot end with dot: %s" % attribute)
+ self._labels = value
+ else:
+ raise TypeError("labels must be a list")
- @staticmethod
- def validate_and_normalize(map_name, index_config):
- original_attributes = index_config.attributes
- if not original_attributes:
- raise ValueError("Index must have at least one attribute: %s" % index_config)
+ @property
+ def heartbeat_interval(self):
+ return self._heartbeat_interval
+
+ @heartbeat_interval.setter
+ def heartbeat_interval(self, value):
+ if isinstance(value, number_types):
+ if value <= 0:
+ raise ValueError("heartbeat_interval must be positive")
+ self._heartbeat_interval = value
+ else:
+ raise TypeError("heartbeat_interval must be a number")
- if len(original_attributes) > _IndexUtil._MAX_ATTRIBUTES:
- raise ValueError("Index cannot have more than %s attributes %s" % (_IndexUtil._MAX_ATTRIBUTES, index_config))
+ @property
+ def heartbeat_timeout(self):
+ return self._heartbeat_timeout
+
+ @heartbeat_timeout.setter
+ def heartbeat_timeout(self, value):
+ if isinstance(value, number_types):
+ if value <= 0:
+ raise ValueError("heartbeat_timeout must be positive")
+ self._heartbeat_timeout = value
+ else:
+ raise TypeError("heartbeat_timeout must be a number")
- if index_config.type == INDEX_TYPE.BITMAP and len(original_attributes) > 1:
- raise ValueError("Composite bitmap indexes are not supported: %s" % index_config)
+ @property
+ def invocation_timeout(self):
+ return self._invocation_timeout
+
+ @invocation_timeout.setter
+ def invocation_timeout(self, value):
+ if isinstance(value, number_types):
+ if value <= 0:
+ raise ValueError("invocation_timeout must be positive")
+ self._invocation_timeout = value
+ else:
+ raise TypeError("invocation_timeout must be a number")
- normalized_attributes = []
- for original_attribute in original_attributes:
- _IndexUtil.validate_attribute(original_attribute)
+ @property
+ def invocation_retry_pause(self):
+ return self._invocation_retry_pause
+
+ @invocation_retry_pause.setter
+ def invocation_retry_pause(self, value):
+ if isinstance(value, number_types):
+ if value <= 0:
+ raise ValueError("invocation_retry_pause must be positive")
+ self._invocation_retry_pause = value
+ else:
+ raise TypeError("invocation_retry_pause must be a number")
- original_attribute = original_attribute.strip()
- normalized_attribute = _IndexUtil.canonicalize_attribute(original_attribute)
+ @property
+ def statistics_enabled(self):
+ return self._statistics_enabled
- try:
- idx = normalized_attributes.index(normalized_attribute)
- except ValueError:
- pass
- else:
- duplicate_original_attribute = original_attributes[idx]
- if duplicate_original_attribute == original_attribute:
- raise ValueError("Duplicate attribute name [attribute_name=%s, index_config=%s]"
- % (original_attribute, index_config))
- else:
- raise ValueError("Duplicate attribute names [attribute_name1=%s, attribute_name2=%s, "
- "index_config=%s]"
- % (duplicate_original_attribute, original_attribute, index_config))
+ @statistics_enabled.setter
+ def statistics_enabled(self, value):
+ if isinstance(value, bool):
+ self._statistics_enabled = value
+ else:
+ raise TypeError("statistics_enabled must be a boolean")
- normalized_attributes.append(normalized_attribute)
+ @property
+ def statistics_period(self):
+ return self._statistics_period
+
+ @statistics_period.setter
+ def statistics_period(self, value):
+ if isinstance(value, number_types):
+ if value <= 0:
+ raise ValueError("statistics_period must be positive")
+ self._statistics_period = value
+ else:
+ raise TypeError("statistics_period must be a number")
- name = index_config.name
- if name and not name.strip():
- name = None
+ @property
+ def shuffle_member_list(self):
+ return self._shuffle_member_list
- normalized_config = _IndexUtil.build_normalized_config(map_name, index_config.type, name,
- normalized_attributes)
- if index_config.type == INDEX_TYPE.BITMAP:
- unique_key = index_config.bitmap_index_options.unique_key
- unique_key_transformation = index_config.bitmap_index_options.unique_key_transformation
- _IndexUtil.validate_attribute(unique_key)
- unique_key = _IndexUtil.canonicalize_attribute(unique_key)
- normalized_config.bitmap_index_options.unique_key = unique_key
- normalized_config.bitmap_index_options.unique_key_transformation = unique_key_transformation
+ @shuffle_member_list.setter
+ def shuffle_member_list(self, value):
+ if isinstance(value, bool):
+ self._shuffle_member_list = value
+ else:
+ raise TypeError("shuffle_member_list must be a boolean")
- return normalized_config
+ @property
+ def logging_config(self):
+ return self._logging_config
- @staticmethod
- def canonicalize_attribute(attribute):
- return re.sub(_IndexUtil._THIS_PATTERN, "", attribute)
+ @logging_config.setter
+ def logging_config(self, value):
+ if isinstance(value, dict):
+ self._logging_config = value
+ else:
+ raise TypeError("logging_config must be a dict")
- @staticmethod
- def build_normalized_config(map_name, index_type, index_name, normalized_attributes):
- new_config = IndexConfig()
- new_config.type = index_type
+ @property
+ def logging_level(self):
+ return self._logging_level
+
+ @logging_level.setter
+ def logging_level(self, value):
+ if value in {logging.NOTSET, logging.DEBUG, logging.INFO, logging.WARNING,
+ logging.ERROR, logging.CRITICAL}:
+ self._logging_level = value
+ else:
+ raise TypeError("logging_level must be one of the levels defined in the logging module")
- name = map_name + "_" + _IndexUtil._index_type_to_name(index_type) if index_name is None else None
- for normalized_attribute in normalized_attributes:
- new_config.add_attribute(normalized_attribute)
- if name:
- name += "_" + normalized_attribute
+ @classmethod
+ def from_dict(cls, d):
+ config = cls()
+ for k, v in six.iteritems(d):
+ if v is not None:
+ try:
+ config.__setattr__(k, v)
+ except AttributeError:
+ raise InvalidConfigurationError("Unrecognized config option: %s" % k)
+ return config
- if name:
- index_name = name
- new_config.name = index_name
- return new_config
+class _NearCacheConfig(object):
+ __slots__ = ("_invalidate_on_change", "_in_memory_format", "_time_to_live", "_max_idle",
+ "_eviction_policy", "_eviction_max_size", "_eviction_sampling_count",
+ "_eviction_sampling_pool_size")
- @staticmethod
- def _index_type_to_name(index_type):
- if index_type == INDEX_TYPE.SORTED:
- return "sorted"
- elif index_type == INDEX_TYPE.HASH:
- return "hash"
- elif index_type == INDEX_TYPE.BITMAP:
- return "bitmap"
+ def __init__(self):
+ self._invalidate_on_change = True
+ self._in_memory_format = InMemoryFormat.BINARY
+ self._time_to_live = None
+ self._max_idle = None
+ self._eviction_policy = EvictionPolicy.LRU
+ self._eviction_max_size = 10000
+ self._eviction_sampling_count = 8
+ self._eviction_sampling_pool_size = 16
+
+ @property
+ def invalidate_on_change(self):
+ return self._invalidate_on_change
+
+ @invalidate_on_change.setter
+ def invalidate_on_change(self, value):
+ if isinstance(value, bool):
+ self._invalidate_on_change = value
else:
- raise ValueError("Unsupported index type %s" % index_type)
+ raise TypeError("invalidate_on_change must be a boolean")
+ @property
+ def in_memory_format(self):
+ return self._in_memory_format
-class ClientProperty(object):
- """
- Client property holds the name, default value and time unit of Hazelcast client properties.
- Client properties can be set by
+ @in_memory_format.setter
+ def in_memory_format(self, value):
+ if value in InMemoryFormat.reverse:
+ self._in_memory_format = value
+ else:
+ raise TypeError("in_memory_format must be of type InMemoryFormat")
- * Programmatic Configuration
- * Environment variables
- """
+ @property
+ def time_to_live(self):
+ return self._time_to_live
+
+ @time_to_live.setter
+ def time_to_live(self, value):
+ if isinstance(value, number_types):
+ if value < 0:
+ raise ValueError("time_to_live must be non-negative")
+ self._time_to_live = value
+ else:
+ raise TypeError("time_to_live must be a number")
- def __init__(self, name, default_value=None, time_unit=None):
- self.name = name
- self.default_value = default_value
- self.time_unit = time_unit
+ @property
+ def max_idle(self):
+ return self._max_idle
+
+ @max_idle.setter
+ def max_idle(self, value):
+ if isinstance(value, number_types):
+ if value < 0:
+ raise ValueError("max_idle must be non-negative")
+ self._max_idle = value
+ else:
+ raise TypeError("max_idle must be a number")
+ @property
+ def eviction_policy(self):
+ return self._eviction_policy
-class ClientProperties(object):
- HEARTBEAT_INTERVAL = ClientProperty("hazelcast.client.heartbeat.interval", 5000, TimeUnit.MILLISECOND)
- """
- Time interval between the heartbeats sent by the client to the nodes.
- """
+ @eviction_policy.setter
+ def eviction_policy(self, value):
+ if value in EvictionPolicy.reverse:
+ self._eviction_policy = value
+ else:
+ raise TypeError("eviction_policy must be of type EvictionPolicy")
- HEARTBEAT_TIMEOUT = ClientProperty("hazelcast.client.heartbeat.timeout", 60000, TimeUnit.MILLISECOND)
- """
- Client sends heartbeat messages to the members and this is the timeout for this sending operations.
- If there is not any message passing between the client and member within the given time via this property
- in milliseconds, the connection will be closed.
- """
+ @property
+ def eviction_max_size(self):
+ return self._eviction_max_size
- INVOCATION_TIMEOUT_SECONDS = ClientProperty("hazelcast.client.invocation.timeout.seconds", 120, TimeUnit.SECOND)
- """
- When an invocation gets an exception because
-
- * Member throws an exception.
- * Connection between the client and member is closed.
- * Client's heartbeat requests are timed out.
-
- Time passed since invocation started is compared with this property.
- If the time is already passed, then the exception is delegated to the user. If not, the invocation is retried.
- Note that, if invocation gets no exception and it is a long running one, then it will not get any exception,
- no matter how small this timeout is set.
- """
+ @eviction_max_size.setter
+ def eviction_max_size(self, value):
+ if isinstance(value, number_types):
+ if value < 1:
+ raise ValueError("eviction_max_size must be at least 1")
+ self._eviction_max_size = value
+ else:
+ raise TypeError("eviction_max_size must be a number")
- INVOCATION_RETRY_PAUSE_MILLIS = ClientProperty("hazelcast.client.invocation.retry.pause.millis", 1000,
- TimeUnit.MILLISECOND)
- """
- Pause time between each retry cycle of an invocation in milliseconds.
- """
+ @property
+ def eviction_sampling_count(self):
+ return self._eviction_sampling_count
- HAZELCAST_CLOUD_DISCOVERY_TOKEN = ClientProperty("hazelcast.client.cloud.discovery.token", "")
- """
- Token to use when discovering cluster via Hazelcast.cloud.
- """
+ @eviction_sampling_count.setter
+ def eviction_sampling_count(self, value):
+ if isinstance(value, number_types):
+ if value < 1:
+ raise ValueError("eviction_sampling_count must be at least 1")
+ self._eviction_sampling_count = value
+ else:
+ raise TypeError("eviction_sampling_count must be a number")
- STATISTICS_ENABLED = ClientProperty("hazelcast.client.statistics.enabled", False)
- """
- Used to enable the client statistics collection.
- """
+ @property
+ def eviction_sampling_pool_size(self):
+ return self._eviction_sampling_pool_size
- STATISTICS_PERIOD_SECONDS = ClientProperty("hazelcast.client.statistics.period.seconds", 3, TimeUnit.SECOND)
- """
- Period in seconds to collect statistics.
- """
+ @eviction_sampling_pool_size.setter
+ def eviction_sampling_pool_size(self, value):
+ if isinstance(value, number_types):
+ if value < 1:
+ raise ValueError("eviction_sampling_pool_size must be at least 1")
+ self._eviction_sampling_pool_size = value
+ else:
+ raise TypeError("eviction_sampling_pool_size must be a number")
- SHUFFLE_MEMBER_LIST = ClientProperty("hazelcast.client.shuffle.member.list", True)
- """
- Client shuffles the given member list to prevent all clients to connect to the same node when
- this property is set to true. When it is set to false, the client tries to connect to the nodes
- in the given order.
- """
+ @classmethod
+ def from_dict(cls, d):
+ config = cls()
+ for k, v in six.iteritems(d):
+ try:
+ config.__setattr__(k, v)
+ except AttributeError:
+ raise InvalidConfigurationError("Unrecognized config option for the near cache: %s" % k)
+ return config
- def __init__(self, properties):
- self._properties = properties
- def get(self, property):
- """
- Gets the value of the given property. First checks client config properties, then environment variables
- and lastly fall backs to the default value of the property.
+class _FlakeIdGeneratorConfig(object):
+ __slots__ = ("_prefetch_count", "_prefetch_validity")
- :param property: (:class:`~hazelcast.config.ClientProperty`), Property to get value from
- :return: Value of the given property
- """
- value = self._properties.get(property.name, None)
- if value is not None:
- return value
+ def __init__(self):
+ self._prefetch_count = 100
+ self._prefetch_validity = 600
- value = os.getenv(property.name, None)
- if value is not None:
- return value
+ @property
+ def prefetch_count(self):
+ return self._prefetch_count
- return property.default_value
+ @prefetch_count.setter
+ def prefetch_count(self, value):
+ if isinstance(value, number_types):
+ if not (0 < value <= 100000):
+ raise ValueError("prefetch_count must be in range 1 to 100000")
+ self._prefetch_count = value
+ else:
+ raise TypeError("prefetch_count must be a number")
- def get_bool(self, property):
- """
- Gets the value of the given property as boolean.
+ @property
+ def prefetch_validity(self):
+ return self._prefetch_validity
+
+ @prefetch_validity.setter
+ def prefetch_validity(self, value):
+ if isinstance(value, number_types):
+ if value < 0:
+ raise ValueError("prefetch_validity must be non-negative")
+ self._prefetch_validity = value
+ else:
+ raise TypeError("prefetch_validity must be a number")
+
+ @classmethod
+ def from_dict(cls, d):
+ config = cls()
+ for k, v in six.iteritems(d):
+ try:
+ config.__setattr__(k, v)
+ except AttributeError:
+ raise InvalidConfigurationError("Unrecognized config option for the flake id generator: %s" % k)
+ return config
- :param property: (:class:`~hazelcast.config.ClientProperty`), Property to get value from
- :return: (bool), Value of the given property
- """
- value = self.get(property)
- if isinstance(value, bool):
- return value
- return value.lower() == "true"
-
- def get_seconds(self, property):
- """
- Gets the value of the given property in seconds. If the value of the given property is not a number,
- throws TypeError.
-
- :param property: (:class:`~hazelcast.config.ClientProperty`), Property to get seconds from
- :return: (float), Value of the given property in seconds
- """
- return TimeUnit.to_seconds(self.get(property), property.time_unit)
-
- def get_seconds_positive_or_default(self, property):
- """
- Gets the value of the given property in seconds. If the value of the given property is not a number,
- throws TypeError. If the value of the given property in seconds is not positive, tries to
- return the default value in seconds.
-
- :param property: (:class:`~hazelcast.config.ClientProperty`), Property to get seconds from
- :return: (float), Value of the given property in seconds if it is positive.
- Else, value of the default value of given property in seconds.
- """
- seconds = self.get_seconds(property)
- return seconds if seconds > 0 else TimeUnit.to_seconds(property.default_value, property.time_unit)
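The config.py rewrite above replaces the per-feature config classes (`SSLConfig`, `FlakeIdGeneratorConfig`, `ClientProperties`, ...) with flat validating properties plus a `from_dict` classmethod. A minimal self-contained sketch of that pattern follows; the class and option names here are illustrative, not the client's actual API:

```python
class SketchConfig(object):
    """Illustrative sketch of the validating-property + from_dict pattern."""

    __slots__ = ("_heartbeat_interval",)

    def __init__(self):
        self._heartbeat_interval = 5  # illustrative default, in seconds

    @property
    def heartbeat_interval(self):
        return self._heartbeat_interval

    @heartbeat_interval.setter
    def heartbeat_interval(self, value):
        # Type check first, then range check, as in the setters above.
        if isinstance(value, (int, float)):
            if value <= 0:
                raise ValueError("heartbeat_interval must be positive")
            self._heartbeat_interval = value
        else:
            raise TypeError("heartbeat_interval must be a number")

    @classmethod
    def from_dict(cls, d):
        # setattr routes each key through its property setter; because of
        # __slots__, an unknown key raises AttributeError, which is then
        # surfaced as a configuration error.
        config = cls()
        for k, v in d.items():
            try:
                setattr(config, k, v)
            except AttributeError:
                raise ValueError("Unrecognized config option: %s" % k)
        return config
```

Because every option goes through `setattr`, the same validation runs whether the value arrives as a keyword argument or from a nested dict, and misspelled option names fail fast instead of being silently ignored.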
diff --git a/hazelcast/connection.py b/hazelcast/connection.py
index 6d86716524..fea73c9fdb 100644
--- a/hazelcast/connection.py
+++ b/hazelcast/connection.py
@@ -8,7 +8,7 @@
import uuid
from collections import OrderedDict
-from hazelcast.config import RECONNECT_MODE
+from hazelcast.config import ReconnectMode
from hazelcast.core import AddressHelper
from hazelcast.errors import AuthenticationError, TargetDisconnectedError, HazelcastClientNotActiveError, \
InvalidConfigurationError, ClientNotAllowedInClusterError, IllegalStateError, ClientOfflineError
@@ -18,7 +18,7 @@
from hazelcast.protocol.client_message import SIZE_OF_FRAME_LENGTH_AND_FLAGS, Frame, InboundMessage, \
ClientMessageBuilder
from hazelcast.protocol.codec import client_authentication_codec, client_ping_codec
-from hazelcast.util import AtomicInteger, calculate_version, UNKNOWN_VERSION, enum
+from hazelcast.util import AtomicInteger, calculate_version, UNKNOWN_VERSION
from hazelcast.version import CLIENT_TYPE, CLIENT_VERSION, SERIALIZATION_VERSION
from hazelcast import six
@@ -65,8 +65,11 @@ def sleep(self):
return True
-_AuthenticationStatus = enum(AUTHENTICATED=0, CREDENTIALS_FAILED=1,
- SERIALIZATION_VERSION_MISMATCH=2, NOT_ALLOWED_IN_CLUSTER=3)
+class _AuthenticationStatus(object):
+ AUTHENTICATED = 0
+ CREDENTIALS_FAILED = 1
+ SERIALIZATION_VERSION_MISMATCH = 2
+ NOT_ALLOWED_IN_CLUSTER = 3
class ConnectionManager(object):
@@ -92,20 +95,19 @@ def __init__(self, client, reactor, address_provider, lifecycle_service,
self._near_cache_manager = near_cache_manager
self._logger_extras = logger_extras
config = self._client.config
- self._smart_routing_enabled = config.network.smart_routing
+ self._smart_routing_enabled = config.smart_routing
self._wait_strategy = self._init_wait_strategy(config)
- self._reconnect_mode = config.connection_strategy.reconnect_mode
+ self._reconnect_mode = config.reconnect_mode
self._heartbeat_manager = _HeartbeatManager(self, self._client, reactor, invocation_service, logger_extras)
self._connection_listeners = []
self._connect_all_members_timer = None
- self._async_start = config.connection_strategy.async_start
+ self._async_start = config.async_start
self._connect_to_cluster_thread_running = False
self._pending_connections = dict()
- props = self._client.properties
- self._shuffle_member_list = props.get_bool(props.SHUFFLE_MEMBER_LIST)
+ self._shuffle_member_list = config.shuffle_member_list
self._lock = threading.RLock()
self._connection_id_generator = AtomicInteger()
- self._labels = config.labels
+ self._labels = frozenset(config.labels)
self._cluster_id = None
self._load_balancer = None
@@ -221,13 +223,13 @@ def check_invocation_allowed(self):
if self.active_connections:
return
- if self._async_start or self._reconnect_mode == RECONNECT_MODE.ASYNC:
+ if self._async_start or self._reconnect_mode == ReconnectMode.ASYNC:
raise ClientOfflineError()
else:
raise IOError("No connection found to cluster")
def _trigger_cluster_reconnection(self):
- if self._reconnect_mode == RECONNECT_MODE.OFF:
+ if self._reconnect_mode == ReconnectMode.OFF:
self.logger.info("Reconnect mode is OFF. Shutting down the client", extra=self._logger_extras)
self._shutdown_client()
return
@@ -236,9 +238,8 @@ def _trigger_cluster_reconnection(self):
self._start_connect_to_cluster_thread()
def _init_wait_strategy(self, config):
- retry_config = config.connection_strategy.connection_retry
- return _WaitStrategy(retry_config.initial_backoff, retry_config.max_backoff, retry_config.multiplier,
- retry_config.cluster_connect_timeout, retry_config.jitter, self._logger_extras)
+ return _WaitStrategy(config.retry_initial_backoff, config.retry_max_backoff, config.retry_multiplier,
+ config.cluster_connect_timeout, config.retry_jitter, self._logger_extras)
def _start_connect_all_members_timer(self):
connecting_addresses = set()
@@ -362,7 +363,7 @@ def _get_or_connect(self, address):
factory = self._reactor.connection_factory
connection = factory(self, self._connection_id_generator.get_and_increment(),
- translated, self._client.config.network,
+ translated, self._client.config,
self._invocation_service.handle_client_message)
except IOError:
return ImmediateExceptionFuture(sys.exc_info()[1], sys.exc_info()[2])
@@ -499,10 +500,9 @@ def __init__(self, connection_manager, client, reactor, invocation_service, logg
self._reactor = reactor
self._invocation_service = invocation_service
self._logger_extras = logger_extras
-
- props = client.properties
- self._heartbeat_timeout = props.get_seconds_positive_or_default(props.HEARTBEAT_TIMEOUT)
- self._heartbeat_interval = props.get_seconds_positive_or_default(props.HEARTBEAT_INTERVAL)
+ config = client.config
+ self._heartbeat_timeout = config.heartbeat_timeout
+ self._heartbeat_interval = config.heartbeat_interval
def start(self):
"""
@@ -532,10 +532,10 @@ def _check_connection(self, now, connection):
return
if (now - connection.last_read_time) > self._heartbeat_timeout:
- if connection.live:
- self.logger.warning("Heartbeat failed over the connection: %s" % connection, extra=self._logger_extras)
- connection.close("Heartbeat timed out",
- TargetDisconnectedError("Heartbeat timed out to connection %s" % connection))
+ self.logger.warning("Heartbeat failed over the connection: %s" % connection, extra=self._logger_extras)
+ connection.close("Heartbeat timed out",
+ TargetDisconnectedError("Heartbeat timed out to connection %s" % connection))
+ return
if (now - connection.last_write_time) > self._heartbeat_interval:
request = client_ping_codec.encode_request()
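The connection.py and core.py hunks drop the `enum` helper from `hazelcast.util` in favor of plain classes with constants; where a reverse lookup is still needed (e.g. `DistributedObjectEventType.reverse.get(...)` in core.py below), a `with_reversed_items` decorator is applied. Its implementation is not part of this diff; a plausible sketch of what it provides, under that assumption:

```python
def with_reversed_items(cls):
    # Hypothetical sketch: the real decorator lives in hazelcast.util and is
    # not shown in this diff. It attaches a value -> name mapping so call
    # sites can translate raw wire values back to the defined constants.
    cls.reverse = {
        value: name
        for name, value in vars(cls).items()
        if not name.startswith("_")
    }
    return cls


@with_reversed_items
class DistributedObjectEventType(object):
    CREATED = "CREATED"
    DESTROYED = "DESTROYED"
```

With this, `DistributedObjectEventType.reverse.get(event_type, event_type)` resolves a known event string and falls back to the raw value for unrecognized types, matching the default-argument change in the core.py hunk.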
diff --git a/hazelcast/core.py b/hazelcast/core.py
index 7bc6eb9dcd..8a962ed578 100644
--- a/hazelcast/core.py
+++ b/hazelcast/core.py
@@ -3,7 +3,7 @@
from hazelcast import six
from hazelcast import util
-from hazelcast.util import enum
+from hazelcast.util import with_reversed_items
class MemberInfo(object):
@@ -13,7 +13,7 @@ class MemberInfo(object):
Represents a member in the cluster with its address, uuid, lite member status, attributes and version.
"""
- def __init__(self, address, uuid, attributes, lite_member, version, *args):
+ def __init__(self, address, uuid, attributes, lite_member, version, *_):
self.address = address
self.uuid = uuid
self.attributes = attributes
@@ -120,13 +120,21 @@ def __eq__(self, other):
return False
-DistributedObjectEventType = enum(CREATED="CREATED", DESTROYED="DESTROYED")
-"""
-Type of the distributed object event.
+@with_reversed_items
+class DistributedObjectEventType(object):
+ """
+ Type of the distributed object event.
+ """
-* CREATED : DistributedObject is created.
-* DESTROYED: DistributedObject is destroyed.
-"""
+ CREATED = "CREATED"
+ """
+ DistributedObject is created.
+ """
+
+ DESTROYED = "DESTROYED"
+ """
+ DistributedObject is destroyed.
+ """
class DistributedObjectEvent(object):
@@ -137,7 +145,7 @@ class DistributedObjectEvent(object):
def __init__(self, name, service_name, event_type, source):
self.name = name
self.service_name = service_name
- self.event_type = DistributedObjectEventType.reverse.get(event_type, None)
+ self.event_type = DistributedObjectEventType.reverse.get(event_type, event_type)
self.source = source
def __repr__(self):
@@ -214,10 +222,10 @@ def __init__(self, key, value, cost, creation_time, expiration_time, hits, last_
def __repr__(self):
return "SimpleEntryView(key=%s, value=%s, cost=%s, creation_time=%s, " \
"expiration_time=%s, hits=%s, last_access_time=%s, last_stored_time=%s, " \
- "last_update_time=%s, version=%s, eviction_criteria_number=%s, ttl=%s" \
+ "last_update_time=%s, version=%s, ttl=%s, max_idle=%s" \
% (self.key, self.value, self.cost, self.creation_time, self.expiration_time, self.hits,
self.last_access_time, self.last_stored_time, self.last_update_time, self.version,
- self.eviction_criteria_number, self.ttl)
+ self.ttl, self.max_idle)
class MemberSelector(object):
diff --git a/hazelcast/discovery.py b/hazelcast/discovery.py
index 31f4867460..ed1235fb3f 100644
--- a/hazelcast/discovery.py
+++ b/hazelcast/discovery.py
@@ -3,7 +3,6 @@
from hazelcast.errors import HazelcastCertificationError
from hazelcast.core import AddressHelper
-from hazelcast.config import ClientProperty
from hazelcast.six.moves import http_client
try:
@@ -19,8 +18,8 @@ class HazelcastCloudAddressProvider(object):
"""
logger = logging.getLogger("HazelcastClient.HazelcastCloudAddressProvider")
- def __init__(self, host, url, connection_timeout, logger_extras=None):
- self.cloud_discovery = HazelcastCloudDiscovery(host, url, connection_timeout)
+ def __init__(self, token, connection_timeout, logger_extras):
+ self.cloud_discovery = HazelcastCloudDiscovery(token, connection_timeout)
self._private_to_public = dict()
self._logger_extras = logger_extras
@@ -34,8 +33,8 @@ def load_addresses(self):
nodes = self.cloud_discovery.discover_nodes()
# Every private address is primary
return list(nodes.keys()), []
- except Exception as ex:
- self.logger.warning("Failed to load addresses from Hazelcast Cloud: %s" % ex.args[0],
+ except Exception as e:
+ self.logger.warning("Failed to load addresses from Hazelcast Cloud: %s" % e,
extra=self._logger_extras)
return [], []
@@ -63,8 +62,8 @@ def refresh(self):
"""
try:
self._private_to_public = self.cloud_discovery.discover_nodes()
- except Exception as ex:
- self.logger.warning("Failed to load addresses from Hazelcast.cloud: {}".format(ex.args[0]),
+ except Exception as e:
+ self.logger.warning("Failed to load addresses from Hazelcast.cloud: %s" % e,
extra=self._logger_extras)
@@ -73,19 +72,13 @@ class HazelcastCloudDiscovery(object):
Discovery service that discover nodes via Hazelcast.cloud
https://coordinator.hazelcast.cloud/cluster/discovery?token=
"""
+ _CLOUD_URL_BASE = "coordinator.hazelcast.cloud"
_CLOUD_URL_PATH = "/cluster/discovery?token="
_PRIVATE_ADDRESS_PROPERTY = "private-address"
_PUBLIC_ADDRESS_PROPERTY = "public-address"
- CLOUD_URL_BASE_PROPERTY = ClientProperty("hazelcast.client.cloud.url", "https://coordinator.hazelcast.cloud")
- """
- Internal client property to change base url of cloud discovery endpoint.
- Used for testing cloud discovery.
- """
-
- def __init__(self, host, url, connection_timeout):
- self._host = host
- self._url = url
+ def __init__(self, token, connection_timeout):
+ self._url = self._CLOUD_URL_PATH + token
self._connection_timeout = connection_timeout
# Default context operates only on TLSv1+, checks certificates,hostname and validity
self._ctx = ssl.create_default_context()
@@ -97,7 +90,7 @@ def discover_nodes(self):
:return: (dict), Dictionary that maps private addresses to public addresses.
"""
try:
- https_connection = http_client.HTTPSConnection(host=self._host,
+ https_connection = http_client.HTTPSConnection(host=self._CLOUD_URL_BASE,
timeout=self._connection_timeout,
context=self._ctx)
https_connection.request(method="GET", url=self._url, headers={"Accept-Charset": "UTF-8"})
@@ -126,18 +119,3 @@ def _parse_response(self, https_response):
private_to_public_addresses[private_addr] = public_addr
return private_to_public_addresses
-
- @staticmethod
- def get_host_and_url(properties, cloud_token):
- """
- Helper method to get host and url that can be used in HTTPSConnection.
-
- :param properties: Client config properties.
- :param cloud_token: Cloud discovery token.
- :return: Host and URL pair
- """
- host = properties.get(HazelcastCloudDiscovery.CLOUD_URL_BASE_PROPERTY.name,
- HazelcastCloudDiscovery.CLOUD_URL_BASE_PROPERTY.default_value)
- host = host.replace("https://", "")
- host = host.replace("http://", "")
- return host, HazelcastCloudDiscovery._CLOUD_URL_PATH + cloud_token
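For context, the discovery endpoint returns a mapping from private member addresses to public ones. A minimal sketch of that parsing, assuming the JSON shape implied by the two property constants above; the real `_parse_response` also builds `Address` objects:

```python
import json

_PRIVATE_ADDRESS_PROPERTY = "private-address"
_PUBLIC_ADDRESS_PROPERTY = "public-address"


def parse_response(body):
    # Assumed response shape: a JSON list of objects, each carrying the two
    # address properties named above.
    return {
        entry[_PRIVATE_ADDRESS_PROPERTY]: entry[_PUBLIC_ADDRESS_PROPERTY]
        for entry in json.loads(body)
    }


body = '[{"private-address": "10.0.0.1:5701", "public-address": "35.177.0.1:31000"}]'
print(parse_response(body))  # {'10.0.0.1:5701': '35.177.0.1:31000'}
```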
diff --git a/hazelcast/invocation.py b/hazelcast/invocation.py
index 21c67452a0..a9fcdb3ce3 100644
--- a/hazelcast/invocation.py
+++ b/hazelcast/invocation.py
@@ -48,7 +48,7 @@ class InvocationService(object):
def __init__(self, client, reactor, logger_extras):
config = client.config
- if config.network.smart_routing:
+ if config.smart_routing:
self.invoke = self._invoke_smart
else:
self.invoke = self._invoke_non_smart
@@ -62,9 +62,9 @@ def __init__(self, client, reactor, logger_extras):
self._check_invocation_allowed_fn = None
self._pending = {}
self._next_correlation_id = AtomicInteger(1)
- self._is_redo_operation = config.network.redo_operation
- self._invocation_timeout = self._init_invocation_timeout()
- self._invocation_retry_pause = self._init_invocation_retry_pause()
+ self._is_redo_operation = config.redo_operation
+ self._invocation_timeout = config.invocation_timeout
+ self._invocation_retry_pause = config.invocation_retry_pause
self._shutdown = False
def start(self, partition_service, connection_manager, listener_service):
@@ -167,16 +167,6 @@ def _invoke_non_smart(self, invocation):
except Exception as e:
self._handle_exception(invocation, e)
- def _init_invocation_retry_pause(self):
- invocation_retry_pause = self._client.properties.get_seconds_positive_or_default(
- self._client.properties.INVOCATION_RETRY_PAUSE_MILLIS)
- return invocation_retry_pause
-
- def _init_invocation_timeout(self):
- invocation_timeout = self._client.properties.get_seconds_positive_or_default(
- self._client.properties.INVOCATION_TIMEOUT_SECONDS)
- return invocation_timeout
-
def _send(self, invocation, connection):
if self._shutdown:
raise HazelcastClientNotActiveError()
diff --git a/hazelcast/lifecycle.py b/hazelcast/lifecycle.py
index 2b943800bf..a0236c216c 100644
--- a/hazelcast/lifecycle.py
+++ b/hazelcast/lifecycle.py
@@ -2,16 +2,44 @@
import uuid
from hazelcast import six
-from hazelcast.util import create_git_info, enum
+from hazelcast.util import create_git_info, with_reversed_items
-LifecycleState = enum(
- STARTING="STARTING",
- STARTED="STARTED",
- SHUTTING_DOWN="SHUTTING_DOWN",
- SHUTDOWN="SHUTDOWN",
- CONNECTED="CONNECTED",
- DISCONNECTED="DISCONNECTED",
-)
+
+@with_reversed_items
+class LifecycleState(object):
+ """
+ Lifecycle states.
+ """
+
+ STARTING = "STARTING"
+ """
+ The client is starting.
+ """
+
+ STARTED = "STARTED"
+ """
+ The client has started.
+ """
+
+ CONNECTED = "CONNECTED"
+ """
+ The client connected to a member.
+ """
+
+ SHUTTING_DOWN = "SHUTTING_DOWN"
+ """
+ The client is shutting down.
+ """
+
+ DISCONNECTED = "DISCONNECTED"
+ """
+ The client disconnected from a member.
+ """
+
+ SHUTDOWN = "SHUTDOWN"
+ """
+ The client has shut down.
+ """
class LifecycleService(object):
@@ -66,8 +94,10 @@ def __init__(self, client, logger_extras):
self.running = False
self._listeners = {}
- for listener in client.config.lifecycle_listeners:
- self.add_listener(listener)
+ lifecycle_listeners = client.config.lifecycle_listeners
+ if lifecycle_listeners:
+ for listener in lifecycle_listeners:
+ self.add_listener(listener)
self._git_info = create_git_info()
diff --git a/hazelcast/listener.py b/hazelcast/listener.py
index 7569f2acbb..b219e38599 100644
--- a/hazelcast/listener.py
+++ b/hazelcast/listener.py
@@ -38,7 +38,7 @@ def __init__(self, client, connection_manager, invocation_service, logger_extras
self._connection_manager = connection_manager
self._invocation_service = invocation_service
self._logger_extras = logger_extras
- self._is_smart = client.config.network.smart_routing
+ self._is_smart = client.config.smart_routing
self._active_registrations = {} # Dict of user_registration_id, ListenerRegistration
self._registration_lock = threading.RLock()
self._event_handlers = {}
diff --git a/hazelcast/near_cache.py b/hazelcast/near_cache.py
index 8ac097a90d..fa13f8142f 100644
--- a/hazelcast/near_cache.py
+++ b/hazelcast/near_cache.py
@@ -1,13 +1,13 @@
import random
from hazelcast import six
-from hazelcast.config import EVICTION_POLICY, IN_MEMORY_FORMAT
+from hazelcast.config import InMemoryFormat, EvictionPolicy
from hazelcast.util import current_time
from hazelcast.six.moves import range
from sys import getsizeof
-def lru_key_func(x):
+def _lru_key_func(x):
"""
Least Recently Used key function.
@@ -17,7 +17,7 @@ def lru_key_func(x):
return x.last_access_time
-def lfu_key_func(x):
+def _lfu_key_func(x):
"""
Least Frequently Used key function.
@@ -27,18 +27,22 @@ def lfu_key_func(x):
return x.access_hit
-def random_key_func(x):
+def _random_key_func(_):
"""
Random key function.
- :param x: (:class:`~hazelcast.near_cache.DataRecord`)
+ :param _: (:class:`~hazelcast.near_cache.DataRecord`)
:return: (int), 0.
"""
return 0
-eviction_key_func = {EVICTION_POLICY.NONE: None, EVICTION_POLICY.LRU: lru_key_func, EVICTION_POLICY.LFU: lfu_key_func,
- EVICTION_POLICY.RANDOM: random_key_func}
+_eviction_key_func = {
+ EvictionPolicy.NONE: None,
+ EvictionPolicy.LRU: _lru_key_func,
+ EvictionPolicy.LFU: _lfu_key_func,
+ EvictionPolicy.RANDOM: _random_key_func
+}
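The table above maps each eviction policy to a key function, with `None` meaning "never evict". It appears the candidate with the smallest key (oldest access time, fewest hits, or a constant 0 for RANDOM) is the one evicted; a hedged sketch of that selection, using a hypothetical `Record` tuple instead of the real `DataRecord`:

```python
import collections

# Hypothetical record type for illustration only.
Record = collections.namedtuple("Record", ["key", "last_access_time", "access_hit"])


def pick_eviction_candidate(sample, key_func):
    # No key function (EvictionPolicy.NONE) means nothing is ever evicted;
    # otherwise the record with the smallest key is the best candidate.
    if key_func is None:
        return None
    return min(sample, key=key_func)


sample = [Record("a", 100.0, 5), Record("b", 50.0, 9)]
# LRU-style: evict the least recently accessed record.
print(pick_eviction_candidate(sample, lambda r: r.last_access_time).key)  # b
```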
class DataRecord(object):
@@ -66,8 +70,8 @@ def is_expired(self, max_idle_seconds):
(max_idle_seconds is not None and self.last_access_time + max_idle_seconds < now)
def __repr__(self):
- return "DataRecord[key:{}, value:{}, create_time:{}, expiration_time:{}, last_access_time={}, access_hit={}]" \
- .format(self.key, self.value, self.create_time, self.expiration_time, self.last_access_time, self.access_hit)
+ return "DataRecord(key=%s, value=%s, create_time=%s, expiration_time=%s, last_access_time=%s, access_hit=%s)" \
+ % (self.key, self.value, self.create_time, self.expiration_time, self.last_access_time, self.access_hit)
class NearCache(dict):
@@ -75,13 +79,13 @@ class NearCache(dict):
NearCache is a local cache used by :class:`~hazelcast.proxy.map.MapFeatNearCache`.
"""
- def __init__(self, name, serialization_service, in_memory_format, time_to_live_seconds, max_idle_seconds, invalidate_on_change,
+ def __init__(self, name, serialization_service, in_memory_format, time_to_live, max_idle, invalidate_on_change,
eviction_policy, eviction_max_size, eviction_sampling_count=None, eviction_sampling_pool_size=None):
self.name = name
self.serialization_service = serialization_service
self.in_memory_format = in_memory_format
- self.time_to_live_seconds = time_to_live_seconds
- self.max_idle_seconds = max_idle_seconds
+ self.time_to_live = time_to_live
+ self.max_idle = max_idle
self.invalidate_on_change = invalidate_on_change
self.eviction_policy = eviction_policy
self.eviction_max_size = eviction_max_size
@@ -101,7 +105,7 @@ def __init__(self, name, serialization_service, in_memory_format, time_to_live_s
self.eviction_sampling_pool_size = self.eviction_max_size
# internal
- self._key_func = eviction_key_func[self.eviction_policy]
+ self._key_func = _eviction_key_func[self.eviction_policy]
self._eviction_candidates = list()
self._evictions = 0
self._expirations = 0
@@ -133,33 +137,33 @@ def get_statistics(self):
def __setitem__(self, key, value):
self._do_eviction_if_required()
- if self.in_memory_format == IN_MEMORY_FORMAT.BINARY:
+ if self.in_memory_format == InMemoryFormat.BINARY:
value = self.serialization_service.to_data(value)
- elif self.in_memory_format == IN_MEMORY_FORMAT.OBJECT:
+ elif self.in_memory_format == InMemoryFormat.OBJECT:
value = self.serialization_service.to_object(value)
else:
raise ValueError("Invalid in-memory format!!!")
- data_record = DataRecord(key, value, ttl_seconds=self.time_to_live_seconds)
+ data_record = DataRecord(key, value, ttl_seconds=self.time_to_live)
super(NearCache, self).__setitem__(key, data_record)
def __getitem__(self, key):
try:
value_record = super(NearCache, self).__getitem__(key)
- if value_record.is_expired(self.max_idle_seconds):
+ if value_record.is_expired(self.max_idle):
super(NearCache, self).__delitem__(key)
raise KeyError
except KeyError as ke:
self._misses += 1
raise ke
- if self.eviction_policy == EVICTION_POLICY.LRU:
+ if self.eviction_policy == EvictionPolicy.LRU:
value_record.last_access_time = current_time()
- elif self.eviction_policy == EVICTION_POLICY.LFU:
+ elif self.eviction_policy == EvictionPolicy.LFU:
value_record.access_hit += 1
self._hits += 1
return self.serialization_service.to_object(value_record.value) \
- if self.in_memory_format == IN_MEMORY_FORMAT.BINARY else value_record.value
+ if self.in_memory_format == InMemoryFormat.BINARY else value_record.value
def _do_eviction_if_required(self):
if not self._is_eviction_required():
@@ -188,7 +192,7 @@ def _find_new_random_samples(self):
start = self._random_index()
for i in range(start, start + self.eviction_sampling_count):
index = i if i < len(records) else i - len(records)
- if records[index].is_expired(self.max_idle_seconds):
+ if records[index].is_expired(self.max_idle):
self._clean_expired_record(records[index].key)
elif self._is_better_than_worse_entry(records[index]) or len(new_sample_pool) < self.eviction_sampling_pool_size:
new_sample_pool.add(records[index])
@@ -197,7 +201,7 @@ def _find_new_random_samples(self):
def _scan_and_expire_collection(self, records):
new_records = []
for record in records:
- if record.is_expired(self.max_idle_seconds):
+ if record.is_expired(self.max_idle):
self._clean_expired_record(record.key)
else:
new_records.append(record)
@@ -211,7 +215,7 @@ def _is_better_than_worse_entry(self, data_record):
or (self._key_func(data_record) - self._key_func(self._eviction_candidates[-1])) < 0
def _is_eviction_required(self):
- return self.eviction_policy != EVICTION_POLICY.NONE and self.eviction_max_size <= self.__len__()
+ return self.eviction_policy != EvictionPolicy.NONE and self.eviction_max_size <= self.__len__()
def _clean_expired_record(self, key):
try:
@@ -237,7 +241,7 @@ def _invalidate(self, key_data):
self._invalidation_requests += 1
def __repr__(self):
- return "NearCache[len:{}, evicted:{}]".format(self.__len__(), self._evictions)
+ return "NearCache(len=%s, evicted=%s)" % (self.__len__(), self._evictions)
class NearCacheManager(object):
@@ -251,13 +255,13 @@ def get_or_create_near_cache(self, name):
if not near_cache:
near_cache_config = self._client.config.near_caches.get(name, None)
if not near_cache_config:
- raise ValueError("Cannot find a near cache configuration with the name '{}'".format(name))
+ raise ValueError("Cannot find a near cache configuration with the name '%s'" % name)
- near_cache = NearCache(near_cache_config.name,
+ near_cache = NearCache(name,
self._serialization_service,
near_cache_config.in_memory_format,
- near_cache_config.time_to_live_seconds,
- near_cache_config.max_idle_seconds,
+ near_cache_config.time_to_live,
+ near_cache_config.max_idle,
near_cache_config.invalidate_on_change,
near_cache_config.eviction_policy,
near_cache_config.eviction_max_size,
diff --git a/hazelcast/proxy/base.py b/hazelcast/proxy/base.py
index 5c39cf962e..911978488d 100644
--- a/hazelcast/proxy/base.py
+++ b/hazelcast/proxy/base.py
@@ -3,8 +3,8 @@
from hazelcast.future import make_blocking
from hazelcast.invocation import Invocation
from hazelcast.partition import string_partition_strategy
-from hazelcast.util import enum
from hazelcast import six
+from hazelcast.util import with_reversed_items
MAX_SIZE = float('inf')
@@ -30,7 +30,7 @@ def __init__(self, service_name, name, context):
self._register_listener = listener_service.register_listener
self._deregister_listener = listener_service.deregister_listener
self.logger = logging.getLogger("HazelcastClient.%s(%s)" % (type(self).__name__, name))
- self._is_smart = context.config.network.smart_routing
+ self._is_smart = context.config.smart_routing
def destroy(self):
"""
@@ -112,9 +112,78 @@ def __repr__(self):
return '%s(name="%s")' % (type(self).__name__, self.name)
-ItemEventType = enum(added=1, removed=2)
-EntryEventType = enum(added=1, removed=2, updated=4, evicted=8, expired=16, evict_all=32, clear_all=64, merged=128,
- invalidation=256, loaded=512)
+@with_reversed_items
+class ItemEventType(object):
+ """
+ Type of item events.
+ """
+
+ ADDED = 1
+ """
+ Fired when an item is added.
+ """
+
+ REMOVED = 2
+ """
+ Fired when an item is removed.
+ """
+
+
+@with_reversed_items
+class EntryEventType(object):
+ """
+ Type of entry event.
+ """
+
+ ADDED = 1
+ """
+ Fired if an entry is added.
+ """
+
+ REMOVED = 2
+ """
+ Fired if an entry is removed.
+ """
+
+ UPDATED = 4
+ """
+ Fired if an entry is updated.
+ """
+
+ EVICTED = 8
+ """
+ Fired if an entry is evicted.
+ """
+
+ EXPIRED = 16
+ """
+ Fired if an entry is expired.
+ """
+
+ EVICT_ALL = 32
+ """
+ Fired if all entries are evicted.
+ """
+
+ CLEAR_ALL = 64
+ """
+ Fired if all entries are cleared.
+ """
+
+ MERGED = 128
+ """
+ Fired if an entry is merged after a network partition.
+ """
+
+ INVALIDATION = 256
+ """
+ Fired if an entry is invalidated.
+ """
+
+ LOADED = 512
+ """
+ Fired if an entry is loaded.
+ """
class ItemEvent(object):
diff --git a/hazelcast/proxy/flake_id_generator.py b/hazelcast/proxy/flake_id_generator.py
index 624589483b..c0947e09ce 100644
--- a/hazelcast/proxy/flake_id_generator.py
+++ b/hazelcast/proxy/flake_id_generator.py
@@ -3,8 +3,8 @@
import collections
from hazelcast.proxy.base import Proxy, MAX_SIZE
-from hazelcast.config import FlakeIdGeneratorConfig
-from hazelcast.util import current_time_in_millis, TimeUnit, to_millis
+from hazelcast.config import _FlakeIdGeneratorConfig
+from hazelcast.util import TimeUnit, to_millis, current_time
from hazelcast.protocol.codec import flake_id_generator_new_id_batch_codec
from hazelcast.future import ImmediateFuture, Future
@@ -32,8 +32,6 @@ class FlakeIdGenerator(Proxy):
member with join version smaller than 2^16 in the cluster. The remedy is to restart the cluster:
nodeId will be assigned from zero again. Uniqueness after the restart will be preserved thanks to
the timestamp component.
-
- Requires Hazelcast IMDG 3.10
"""
_BITS_NODE_ID = 16
_BITS_SEQUENCE = 6
@@ -43,10 +41,9 @@ def __init__(self, service_name, name, context):
config = context.config.flake_id_generators.get(name, None)
if config is None:
- config = FlakeIdGeneratorConfig()
+ config = _FlakeIdGeneratorConfig()
- self._auto_batcher = _AutoBatcher(config.prefetch_count, config.prefetch_validity_in_millis,
- self._new_id_batch)
+ self._auto_batcher = _AutoBatcher(config.prefetch_count, config.prefetch_validity, self._new_id_batch)
def new_id(self):
"""
@@ -60,8 +57,6 @@ def new_id(self):
:raises HazelcastError: if node ID for all members in the cluster is out of valid range.
See "Node ID overflow" note above.
- :raises UnsupportedOperationError: if the cluster version is below 3.10.
-
:return: (int), new cluster-wide unique ID.
"""
return self._auto_batcher.new_id()
@@ -97,9 +92,9 @@ def handler(message):
class _AutoBatcher(object):
- def __init__(self, batch_size, validity_in_millis, id_generator):
+ def __init__(self, batch_size, validity, id_generator):
self._batch_size = batch_size
- self._validity_in_millis = validity_in_millis
+ self._validity = validity
self._batch_id_supplier = id_generator
self._block = _Block(_IdBatch(0, 0, 0), 0)
self._lock = threading.RLock()
@@ -133,7 +128,7 @@ def _assign_new_block(self, future):
try:
new_batch_required = False
id_batch = future.result()
- block = _Block(id_batch, self._validity_in_millis)
+ block = _Block(id_batch, self._validity)
with self._lock:
while True:
try:
@@ -186,13 +181,13 @@ def __next__(self):
class _Block(object):
- def __init__(self, id_batch, validity_in_millis):
+ def __init__(self, id_batch, validity):
self._id_batch = id_batch
self._iterator = iter(self._id_batch)
- self._invalid_since = validity_in_millis + current_time_in_millis() if validity_in_millis > 0 else MAX_SIZE
+ self._invalid_since = validity + current_time() if validity > 0 else MAX_SIZE
def next_id(self):
- if self._invalid_since <= current_time_in_millis():
+ if self._invalid_since <= current_time():
return None
return next(self._iterator, None)
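The `_Block` change above renames `validity_in_millis` to `validity` and switches the clock from milliseconds to seconds. Self-contained, the new semantics look like this (a non-positive validity still means the block never expires):

```python
import time

MAX_SIZE = float("inf")


class Block(object):
    # `validity` is now expressed in seconds and compared against the
    # seconds-based wall clock directly.
    def __init__(self, ids, validity):
        self._iterator = iter(ids)
        self._invalid_since = validity + time.time() if validity > 0 else MAX_SIZE

    def next_id(self):
        # Once the block has expired, hand out no more ids; otherwise
        # exhaust the prefetched batch, returning None at the end.
        if self._invalid_since <= time.time():
            return None
        return next(self._iterator, None)


block = Block([10, 11], validity=600)
print(block.next_id(), block.next_id(), block.next_id())  # 10 11 None
```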
diff --git a/hazelcast/proxy/list.py b/hazelcast/proxy/list.py
index 4ff65e4565..31144c2ed5 100644
--- a/hazelcast/proxy/list.py
+++ b/hazelcast/proxy/list.py
@@ -110,7 +110,7 @@ def handle_event_item(item, uuid, event_type):
member = self._context.cluster_service.get_member(uuid)
item_event = ItemEvent(self.name, item, event_type, member, self._to_object)
- if event_type == ItemEventType.added:
+ if event_type == ItemEventType.ADDED:
if item_added_func:
item_added_func(item_event)
else:
diff --git a/hazelcast/proxy/map.py b/hazelcast/proxy/map.py
index 47b3fc415b..266b329a6c 100644
--- a/hazelcast/proxy/map.py
+++ b/hazelcast/proxy/map.py
@@ -1,6 +1,6 @@
import itertools
-from hazelcast.config import _IndexUtil
+from hazelcast.config import IndexUtil, IndexType, IndexConfig
from hazelcast.future import combine_futures, ImmediateFuture
from hazelcast.invocation import Invocation
from hazelcast.protocol.codec import map_add_entry_listener_codec, map_add_entry_listener_to_key_codec, \
@@ -82,9 +82,9 @@ def add_entry_listener(self, include_value=False, key=None, predicate=None, adde
.. seealso:: :class:`~hazelcast.serialization.predicate.Predicate` for more info about predicates.
"""
- flags = get_entry_listener_flags(added=added_func, removed=removed_func, updated=updated_func,
- evicted=evicted_func, evict_all=evict_all_func, clear_all=clear_all_func,
- merged=merged_func, expired=expired_func, loaded=loaded_func)
+ flags = get_entry_listener_flags(ADDED=added_func, REMOVED=removed_func, UPDATED=updated_func,
+ EVICTED=evicted_func, EXPIRED=expired_func, EVICT_ALL=evict_all_func,
+ CLEAR_ALL=clear_all_func, MERGED=merged_func, LOADED=loaded_func)
if key and predicate:
codec = map_add_entry_listener_to_key_with_predicate_codec
@@ -107,30 +107,30 @@ def handle_event_entry(key_, value, old_value, merging_value, event_type, uuid,
event = EntryEvent(self._to_object, key_, value, old_value, merging_value,
event_type, uuid, number_of_affected_entries)
- if event.event_type == EntryEventType.added:
+ if event.event_type == EntryEventType.ADDED:
added_func(event)
- elif event.event_type == EntryEventType.removed:
+ elif event.event_type == EntryEventType.REMOVED:
removed_func(event)
- elif event.event_type == EntryEventType.updated:
+ elif event.event_type == EntryEventType.UPDATED:
updated_func(event)
- elif event.event_type == EntryEventType.evicted:
+ elif event.event_type == EntryEventType.EVICTED:
evicted_func(event)
- elif event.event_type == EntryEventType.evict_all:
+ elif event.event_type == EntryEventType.EVICT_ALL:
evict_all_func(event)
- elif event.event_type == EntryEventType.clear_all:
+ elif event.event_type == EntryEventType.CLEAR_ALL:
clear_all_func(event)
- elif event.event_type == EntryEventType.merged:
+ elif event.event_type == EntryEventType.MERGED:
merged_func(event)
- elif event.event_type == EntryEventType.expired:
+ elif event.event_type == EntryEventType.EXPIRED:
expired_func(event)
- elif event.event_type == EntryEventType.loaded:
+ elif event.event_type == EntryEventType.LOADED:
loaded_func(event)
return self._register_listener(request, lambda r: codec.decode_response(r),
lambda reg_id: map_remove_entry_listener_codec.encode_request(self.name, reg_id),
lambda m: codec.handle(m, handle_event_entry))
- def add_index(self, index_config):
+ def add_index(self, attributes=None, index_type=IndexType.SORTED, name=None, bitmap_index_options=None):
"""
Adds an index to this map for the specified entries so that queries can run faster.
@@ -146,8 +146,8 @@ def add_index(self, index_config):
If you query your values mostly based on age and active fields, you should consider indexing these.
>>> employees = self.client.get_map("employees")
- >>> employees.add_index(IndexConfig("age")) # Sorted index for range queries
- >>> employees.add_index(IndexConfig("active", INDEX_TYPE.HASH)) # Hash index for equality predicates
+ >>> employees.add_index(attributes=["age"]) # Sorted index for range queries
+ >>> employees.add_index(attributes=["active"], index_type=IndexType.HASH) # Hash index for equality predicates
Index attribute should either have a getter method or be public.
You should also make sure to add the indexes before adding
@@ -160,10 +160,19 @@ def add_index(self, index_config):
Until the index finishes being created, any searches for the attribute will use a full Map scan,
thus avoiding using a partially built index and returning incorrect results.
- :param index_config: (:class:`~hazelcast.config.IndexConfig`), index config.
- """
- check_not_none(index_config, "Index config cannot be None")
- validated = _IndexUtil.validate_and_normalize(self.name, index_config)
+ :param attributes: (list), list of indexed attributes.
+ :param index_type: (:class:`~hazelcast.config.IndexType`), type of the index
+ :param name: (str), name of the index
+ :param bitmap_index_options: (dict), bitmap index options.
+ """
+ d = {
+ "name": name,
+ "type": index_type,
+ "attributes": attributes,
+ "bitmap_index_options": bitmap_index_options,
+ }
+ config = IndexConfig.from_dict(d)
+ validated = IndexUtil.validate_and_normalize(self.name, config)
request = map_add_index_codec.encode_request(self.name, validated)
return self._invoke(request)
@@ -1070,7 +1079,7 @@ def _on_destroy(self):
def _add_near_cache_invalidation_listener(self):
try:
codec = map_add_near_cache_invalidation_listener_codec
- request = codec.encode_request(self.name, EntryEventType.invalidation, self._is_smart)
+ request = codec.encode_request(self.name, EntryEventType.INVALIDATION, self._is_smart)
self._invalidation_listener_id = self._register_listener(
request, lambda r: codec.decode_response(r),
lambda reg_id: map_remove_entry_listener_codec.encode_request(self.name, reg_id),
diff --git a/hazelcast/proxy/multi_map.py b/hazelcast/proxy/multi_map.py
index 663534b1f7..afd5288eb0 100644
--- a/hazelcast/proxy/multi_map.py
+++ b/hazelcast/proxy/multi_map.py
@@ -39,11 +39,11 @@ def add_entry_listener(self, include_value=False, key=None, added_func=None, rem
def handle_event_entry(key, value, old_value, merging_value, event_type, uuid, number_of_affected_entries):
event = EntryEvent(self._to_object, key, value, old_value, merging_value,
event_type, uuid, number_of_affected_entries)
- if event.event_type == EntryEventType.added and added_func:
+ if event.event_type == EntryEventType.ADDED and added_func:
added_func(event)
- elif event.event_type == EntryEventType.removed and removed_func:
+ elif event.event_type == EntryEventType.REMOVED and removed_func:
removed_func(event)
- elif event.event_type == EntryEventType.clear_all and clear_all_func:
+ elif event.event_type == EntryEventType.CLEAR_ALL and clear_all_func:
clear_all_func(event)
return self._register_listener(
diff --git a/hazelcast/proxy/pn_counter.py b/hazelcast/proxy/pn_counter.py
index bd3d07a252..6f4615fa16 100644
--- a/hazelcast/proxy/pn_counter.py
+++ b/hazelcast/proxy/pn_counter.py
@@ -49,8 +49,6 @@ class PNCounter(Proxy):
The CRDT state is kept entirely on non-lite (data) members. If there
aren't any and the methods here are invoked on a lite member, they will
fail with an NoDataMemberInClusterError.
-
- Requires Hazelcast IMDG 3.10+.
"""
_EMPTY_ADDRESS_LIST = []
@@ -66,7 +64,6 @@ def get(self):
Returns the current value of the counter.
:raises NoDataMemberInClusterError: if the cluster does not contain any data members.
- :raises UnsupportedOperationError: if the cluster version is less than 3.10.
:raises ConsistencyLostError: if the session guarantees have been lost.
:return: (int), the current value of the counter.
@@ -78,7 +75,6 @@ def get_and_add(self, delta):
Adds the given value to the current value and returns the previous value.
:raises NoDataMemberInClusterError: if the cluster does not contain any data members.
- :raises UnsupportedOperationError: if the cluster version is less than 3.10.
:raises ConsistencyLostError: if the session guarantees have been lost.
:param delta: (int), the value to add.
@@ -92,7 +88,6 @@ def add_and_get(self, delta):
Adds the given value to the current value and returns the updated value.
:raises NoDataMemberInClusterError: if the cluster does not contain any data members.
- :raises UnsupportedOperationError: if the cluster version is less than 3.10.
:raises ConsistencyLostError: if the session guarantees have been lost.
:param delta: (int), the value to add.
@@ -106,7 +101,6 @@ def get_and_subtract(self, delta):
Subtracts the given value from the current value and returns the previous value.
:raises NoDataMemberInClusterError: if the cluster does not contain any data members.
- :raises UnsupportedOperationError: if the cluster version is less than 3.10.
:raises ConsistencyLostError: if the session guarantees have been lost.
:param delta: (int), the value to subtract.
@@ -120,7 +114,6 @@ def subtract_and_get(self, delta):
Subtracts the given value from the current value and returns the updated value.
:raises NoDataMemberInClusterError: if the cluster does not contain any data members.
- :raises UnsupportedOperationError: if the cluster version is less than 3.10.
:raises ConsistencyLostError: if the session guarantees have been lost.
:param delta: (int), the value to subtract.
@@ -134,7 +127,6 @@ def get_and_decrement(self):
Decrements the counter value by one and returns the previous value.
:raises NoDataMemberInClusterError: if the cluster does not contain any data members.
- :raises UnsupportedOperationError: if the cluster version is less than 3.10.
:raises ConsistencyLostError: if the session guarantees have been lost.
:return: (int), the previous value.
@@ -147,7 +139,6 @@ def decrement_and_get(self):
Decrements the counter value by one and returns the updated value.
:raises NoDataMemberInClusterError: if the cluster does not contain any data members.
- :raises UnsupportedOperationError: if the cluster version is less than 3.10.
:raises ConsistencyLostError: if the session guarantees have been lost.
:return: (int), the updated value.
@@ -160,7 +151,6 @@ def get_and_increment(self):
Increments the counter value by one and returns the previous value.
:raises NoDataMemberInClusterError: if the cluster does not contain any data members.
- :raises UnsupportedOperationError: if the cluster version is less than 3.10.
:raises ConsistencyLostError: if the session guarantees have been lost.
:return: (int), the previous value.
diff --git a/hazelcast/proxy/queue.py b/hazelcast/proxy/queue.py
index c49587fc91..45080892cf 100644
--- a/hazelcast/proxy/queue.py
+++ b/hazelcast/proxy/queue.py
@@ -83,7 +83,7 @@ def handle_event_item(item, uuid, event_type):
member = self._context.cluster_service.get_member(uuid)
item_event = ItemEvent(self.name, item, event_type, member, self._to_object)
- if event_type == ItemEventType.added:
+ if event_type == ItemEventType.ADDED:
if item_added_func:
item_added_func(item_event)
else:
diff --git a/hazelcast/proxy/replicated_map.py b/hazelcast/proxy/replicated_map.py
index 201b21e67e..022e95a538 100644
--- a/hazelcast/proxy/replicated_map.py
+++ b/hazelcast/proxy/replicated_map.py
@@ -65,15 +65,15 @@ def add_entry_listener(self, key=None, predicate=None, added_func=None, removed_
def handle_event_entry(key, value, old_value, merging_value, event_type, uuid, number_of_affected_entries):
event = EntryEvent(self._to_object, key, value, old_value, merging_value,
event_type, uuid, number_of_affected_entries)
- if event.event_type == EntryEventType.added and added_func:
+ if event.event_type == EntryEventType.ADDED and added_func:
added_func(event)
- elif event.event_type == EntryEventType.removed and removed_func:
+ elif event.event_type == EntryEventType.REMOVED and removed_func:
removed_func(event)
- elif event.event_type == EntryEventType.updated and updated_func:
+ elif event.event_type == EntryEventType.UPDATED and updated_func:
updated_func(event)
- elif event.event_type == EntryEventType.evicted and evicted_func:
+ elif event.event_type == EntryEventType.EVICTED and evicted_func:
evicted_func(event)
- elif event.event_type == EntryEventType.clear_all and clear_all_func:
+ elif event.event_type == EntryEventType.CLEAR_ALL and clear_all_func:
clear_all_func(event)
return self._register_listener(
diff --git a/hazelcast/proxy/set.py b/hazelcast/proxy/set.py
index 0431b65f06..60d0af0d78 100644
--- a/hazelcast/proxy/set.py
+++ b/hazelcast/proxy/set.py
@@ -65,7 +65,7 @@ def handle_event_item(item, uuid, event_type):
member = self._context.cluster_service.get_member(uuid)
item_event = ItemEvent(self.name, item, event_type, member, self._to_object)
- if event_type == ItemEventType.added:
+ if event_type == ItemEventType.ADDED:
if item_added_func:
item_added_func(item_event)
else:
diff --git a/hazelcast/reactor.py b/hazelcast/reactor.py
index e0e0cff3bb..242fee7422 100644
--- a/hazelcast/reactor.py
+++ b/hazelcast/reactor.py
@@ -11,7 +11,7 @@
from functools import total_ordering
from hazelcast import six
-from hazelcast.config import PROTOCOL
+from hazelcast.config import SSLProtocol
from hazelcast.connection import Connection
from hazelcast.core import Address
from hazelcast.errors import HazelcastError
@@ -128,7 +128,7 @@ class AsyncoreConnection(Connection, asyncore.dispatcher):
read_buffer_size = _BUFFER_SIZE
def __init__(self, dispatcher_map, connection_manager, connection_id, address,
- network_config, message_callback, logger_extras):
+ config, message_callback, logger_extras):
asyncore.dispatcher.__init__(self, map=dispatcher_map)
Connection.__init__(self, connection_manager, connection_id, message_callback, logger_extras)
self.connected_address = address
@@ -137,7 +137,7 @@ def __init__(self, dispatcher_map, connection_manager, connection_id, address,
self._write_queue = deque()
self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
- timeout = network_config.connection_timeout
+ timeout = config.connection_timeout
if not timeout:
timeout = six.MAXSIZE
@@ -149,7 +149,7 @@ def __init__(self, dispatcher_map, connection_manager, connection_id, address,
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, _BUFFER_SIZE)
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, _BUFFER_SIZE)
- for socket_option in network_config.socket_options:
+ for socket_option in config.socket_options:
if socket_option.option is socket.SO_RCVBUF:
self.read_buffer_size = socket_option.value
@@ -157,41 +157,40 @@ def __init__(self, dispatcher_map, connection_manager, connection_id, address,
self.connect((address.host, address.port))
- ssl_config = network_config.ssl
- if ssl and ssl_config.enabled:
+ if ssl and config.ssl_enabled:
ssl_context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
- protocol = ssl_config.protocol
+ protocol = config.ssl_protocol
# Use only the configured protocol
try:
- if protocol != PROTOCOL.SSLv2:
+ if protocol != SSLProtocol.SSLv2:
ssl_context.options |= ssl.OP_NO_SSLv2
- if protocol != PROTOCOL.SSLv3 and protocol != PROTOCOL.SSL:
+ if protocol != SSLProtocol.SSLv3:
ssl_context.options |= ssl.OP_NO_SSLv3
- if protocol != PROTOCOL.TLSv1:
+ if protocol != SSLProtocol.TLSv1:
ssl_context.options |= ssl.OP_NO_TLSv1
- if protocol != PROTOCOL.TLSv1_1:
+ if protocol != SSLProtocol.TLSv1_1:
ssl_context.options |= ssl.OP_NO_TLSv1_1
- if protocol != PROTOCOL.TLSv1_2 and protocol != PROTOCOL.TLS:
+ if protocol != SSLProtocol.TLSv1_2:
ssl_context.options |= ssl.OP_NO_TLSv1_2
- if protocol != PROTOCOL.TLSv1_3:
+ if protocol != SSLProtocol.TLSv1_3:
ssl_context.options |= ssl.OP_NO_TLSv1_3
except AttributeError:
pass
ssl_context.verify_mode = ssl.CERT_REQUIRED
- if ssl_config.cafile:
- ssl_context.load_verify_locations(ssl_config.cafile)
+ if config.ssl_cafile:
+ ssl_context.load_verify_locations(config.ssl_cafile)
else:
ssl_context.load_default_certs()
- if ssl_config.certfile:
- ssl_context.load_cert_chain(ssl_config.certfile, ssl_config.keyfile, ssl_config.password)
+ if config.ssl_certfile:
+ ssl_context.load_cert_chain(config.ssl_certfile, config.ssl_keyfile, config.ssl_password)
- if ssl_config.ciphers:
- ssl_context.set_ciphers(ssl_config.ciphers)
+ if config.ssl_ciphers:
+ ssl_context.set_ciphers(config.ssl_ciphers)
self.socket = ssl_context.wrap_socket(self.socket)
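The reworked block above flattens the old `ssl_config.*` attributes into top-level `config.ssl_*` options and restricts the context to the single configured protocol by OR-ing in `OP_NO_*` flags for every other version. The flag logic can be sketched in isolation like this; the `SSLProtocol` stand-in below is hypothetical, only mirroring the constant names referenced in the diff:

```python
import ssl

# Hypothetical stand-in for hazelcast.config.SSLProtocol (illustration only).
class SSLProtocol(object):
    SSLv2, SSLv3, TLSv1, TLSv1_1, TLSv1_2, TLSv1_3 = range(6)

def restrict_to_protocol(protocol):
    """Build an SSLContext that permits only the configured protocol,
    mirroring the option-flag logic in AsyncoreConnection above."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
    try:
        if protocol != SSLProtocol.SSLv2:
            ctx.options |= ssl.OP_NO_SSLv2
        if protocol != SSLProtocol.SSLv3:
            ctx.options |= ssl.OP_NO_SSLv3
        if protocol != SSLProtocol.TLSv1:
            ctx.options |= ssl.OP_NO_TLSv1
        if protocol != SSLProtocol.TLSv1_1:
            ctx.options |= ssl.OP_NO_TLSv1_1
        if protocol != SSLProtocol.TLSv1_2:
            ctx.options |= ssl.OP_NO_TLSv1_2
        if protocol != SSLProtocol.TLSv1_3:
            ctx.options |= ssl.OP_NO_TLSv1_3
    except AttributeError:
        # Older Python builds may lack some OP_NO_* flags; skip silently,
        # as the diff does.
        pass
    return ctx
```

Note the `except AttributeError: pass` guard: it lets the same code run on Python builds whose `ssl` module predates some of the `OP_NO_*` constants.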
diff --git a/hazelcast/serialization/base.py b/hazelcast/serialization/base.py
index 92c62170a7..14f789aefa 100644
--- a/hazelcast/serialization/base.py
+++ b/hazelcast/serialization/base.py
@@ -1,7 +1,7 @@
import sys
from threading import RLock
-from hazelcast.config import INTEGER_TYPE
+from hazelcast.config import IntType
from hazelcast.serialization.api import *
from hazelcast.serialization.data import *
from hazelcast.errors import HazelcastInstanceNotActiveError, HazelcastSerializationError
@@ -88,7 +88,7 @@ def to_object(self, data):
serializer = self._registry.serializer_by_type_id(type_id)
if serializer is None:
if self._active:
- raise HazelcastSerializationError("Missing Serializer for type-id:{}".format(type_id))
+ raise HazelcastSerializationError("Missing Serializer for type-id:%s" % type_id)
else:
raise HazelcastInstanceNotActiveError()
return serializer.read(inp)
@@ -114,7 +114,7 @@ def read_object(self, inp):
serializer = self._registry.serializer_by_type_id(type_id)
if serializer is None:
if self._active:
- raise HazelcastSerializationError("Missing Serializer for type-id:{}".format(type_id))
+ raise HazelcastSerializationError("Missing Serializer for type-id: %s" % type_id)
else:
raise HazelcastInstanceNotActiveError()
return serializer.read(inp)
@@ -142,7 +142,7 @@ def destroy(self):
class SerializerRegistry(object):
- def __init__(self, int_type=INTEGER_TYPE.VAR):
+ def __init__(self, int_type):
self._global_serializer = None
self._portable_serializer = None
self._data_serializer = None
@@ -223,17 +223,17 @@ def lookup_default_serializer(self, obj_type, obj):
type_id = None
# LOCATE NUMERIC TYPES
if obj_type in six.integer_types:
- if self.int_type == INTEGER_TYPE.BYTE:
+ if self.int_type == IntType.BYTE:
type_id = CONSTANT_TYPE_BYTE
- elif self.int_type == INTEGER_TYPE.SHORT:
+ elif self.int_type == IntType.SHORT:
type_id = CONSTANT_TYPE_SHORT
- elif self.int_type == INTEGER_TYPE.INT:
+ elif self.int_type == IntType.INT:
type_id = CONSTANT_TYPE_INTEGER
- elif self.int_type == INTEGER_TYPE.LONG:
+ elif self.int_type == IntType.LONG:
type_id = CONSTANT_TYPE_LONG
- elif self.int_type == INTEGER_TYPE.BIG_INT:
+ elif self.int_type == IntType.BIG_INT:
type_id = JAVA_DEFAULT_TYPE_BIG_INTEGER
- elif self.int_type == INTEGER_TYPE.VAR:
+ elif self.int_type == IntType.VAR:
if MIN_BYTE <= obj <= MAX_BYTE:
type_id = CONSTANT_TYPE_BYTE
elif MIN_SHORT <= obj <= MAX_SHORT:
@@ -279,18 +279,18 @@ def safe_register_serializer(self, stream_serializer, obj_type=None):
with self._registration_lock:
if obj_type is not None:
if obj_type in self._constant_type_dict:
- raise ValueError("[{}] serializer cannot be overridden!".format(obj_type))
+ raise ValueError("[%s] serializer cannot be overridden!" % obj_type)
current = self._type_dict.get(obj_type, None)
if current is not None and current.__class__ != stream_serializer.__class__:
- raise ValueError("Serializer[{}] has been already registered for type: {}"
- .format(current.__class__, obj_type))
+ raise ValueError("Serializer[%s] has been already registered for type: %s"
+ % (current.__class__, obj_type))
else:
self._type_dict[obj_type] = stream_serializer
serializer_type_id = stream_serializer.get_type_id()
current = self._id_dic.get(serializer_type_id, None)
if current is not None and current.__class__ != stream_serializer.__class__:
- raise ValueError("Serializer[{}] has been already registered for type-id: {}"
- .format(current.__class__, serializer_type_id))
+ raise ValueError("Serializer[%s] has been already registered for type-id: %s"
+ % (current.__class__, serializer_type_id))
else:
self._id_dic[serializer_type_id] = stream_serializer
return current is None
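The renamed `IntType` lookup above maps a fixed setting (BYTE/SHORT/INT/LONG/BIG_INT) directly to one serializer type id, while `IntType.VAR` picks the smallest type that fits the actual value. A standalone sketch of the VAR branch, with illustrative range constants standing in for the MIN/MAX bounds referenced in the diff:

```python
# Illustrative bounds, mirroring the MIN_BYTE/MAX_BYTE etc. constants
# used in the VAR branch of lookup_default_serializer above.
MIN_BYTE, MAX_BYTE = -2**7, 2**7 - 1
MIN_SHORT, MAX_SHORT = -2**15, 2**15 - 1
MIN_INT, MAX_INT = -2**31, 2**31 - 1
MIN_LONG, MAX_LONG = -2**63, 2**63 - 1

def var_int_type_id(value):
    """Pick the smallest integer type that can represent the value."""
    if MIN_BYTE <= value <= MAX_BYTE:
        return "BYTE"
    elif MIN_SHORT <= value <= MAX_SHORT:
        return "SHORT"
    elif MIN_INT <= value <= MAX_INT:
        return "INT"
    elif MIN_LONG <= value <= MAX_LONG:
        return "LONG"
    return "BIG_INT"
```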
diff --git a/hazelcast/serialization/input.py b/hazelcast/serialization/input.py
index 133139f1f1..43adcda4bc 100644
--- a/hazelcast/serialization/input.py
+++ b/hazelcast/serialization/input.py
@@ -146,7 +146,7 @@ def _check_available(self, position, size):
if _position < 0:
raise ValueError
if self._size - _position < size:
- raise EOFError("Cannot read {} bytes!".format(size))
+ raise EOFError("Cannot read %s bytes!" % size)
def _read_from_buff(self, fmt, size, position=None):
if position is None:
diff --git a/hazelcast/serialization/portable/classdef.py b/hazelcast/serialization/portable/classdef.py
index de115cbf28..4fc485f656 100644
--- a/hazelcast/serialization/portable/classdef.py
+++ b/hazelcast/serialization/portable/classdef.py
@@ -1,29 +1,30 @@
from hazelcast.errors import HazelcastSerializationError
-from hazelcast.util import enum
from hazelcast import six
-
-FieldType = enum(
- PORTABLE=0,
- BYTE=1,
- BOOLEAN=2,
- CHAR=3,
- SHORT=4,
- INT=5,
- LONG=6,
- FLOAT=7,
- DOUBLE=8,
- UTF=9,
- PORTABLE_ARRAY=10,
- BYTE_ARRAY=11,
- BOOLEAN_ARRAY=12,
- CHAR_ARRAY=13,
- SHORT_ARRAY=14,
- INT_ARRAY=15,
- LONG_ARRAY=16,
- FLOAT_ARRAY=17,
- DOUBLE_ARRAY=18,
- UTF_ARRAY=19
-)
+from hazelcast.util import with_reversed_items
+
+
+@with_reversed_items
+class FieldType(object):
+ PORTABLE = 0
+ BYTE = 1
+ BOOLEAN = 2
+ CHAR = 3
+ SHORT = 4
+ INT = 5
+ LONG = 6
+ FLOAT = 7
+ DOUBLE = 8
+ UTF = 9
+ PORTABLE_ARRAY = 10
+ BYTE_ARRAY = 11
+ BOOLEAN_ARRAY = 12
+ CHAR_ARRAY = 13
+ SHORT_ARRAY = 14
+ INT_ARRAY = 15
+ LONG_ARRAY = 16
+ FLOAT_ARRAY = 17
+ DOUBLE_ARRAY = 18
+ UTF_ARRAY = 19
class FieldDefinition(object):
@@ -38,15 +39,18 @@ def __init__(self, index, field_name, field_type, version, factory_id=0, class_i
def __eq__(self, other):
return isinstance(other, self.__class__) \
and (self.index, self.field_name, self.field_type, self.version, self.factory_id, self.class_id) == \
- (other.index, other.field_name, other.field_type, other.version, other.factory_id, other.class_id)
+ (other.index, other.field_name, other.field_type, other.version, other.factory_id, other.class_id)
+
+ def __ne__(self, other):
+ return not self.__eq__(other)
def __repr__(self):
- return "FieldDefinition[ ix:{}, name:{}, type:{}, version:{}, fid:{}, cid:{}]".format(self.index,
- self.field_name,
- self.field_type,
- self.version,
- self.factory_id,
- self.class_id)
+ return "FieldDefinition(ix=%s, name=%s, type=%s, version=%s, fid=%s, cid=%s)" % (self.index,
+ self.field_name,
+ self.field_type,
+ self.version,
+ self.factory_id,
+ self.class_id)
class ClassDefinition(object):
@@ -67,7 +71,7 @@ def get_field(self, field_name_or_index):
for field in six.itervalues(self.field_defs):
if field.index == index:
return field
- raise IndexError("Index is out of bound. Index: {} and size: {}".format(index, count))
+ raise IndexError("Index is out of bound. Index: %s and size: %s" % (index, count))
else:
return self.field_defs.get(field_name_or_index, None)
@@ -81,13 +85,13 @@ def get_field_type(self, field_name):
fd = self.get_field(field_name)
if fd:
return fd.field_type
- raise ValueError("Unknown field: {}".format(field_name))
+ raise ValueError("Unknown field: %s" % field_name)
def get_field_class_id(self, field_name):
fd = self.get_field(field_name)
if fd:
return fd.class_id
- raise ValueError("Unknown field: {}".format(field_name))
+ raise ValueError("Unknown field: %s" % field_name)
def get_field_count(self):
return len(self.field_defs)
@@ -98,16 +102,16 @@ def set_version_if_not_set(self, version):
def __eq__(self, other):
return isinstance(other, self.__class__) and (self.factory_id, self.class_id, self.version, self.field_defs) == \
- (other.factory_id, other.class_id, other.version, other.field_defs)
+ (other.factory_id, other.class_id, other.version, other.field_defs)
def __ne__(self, other):
return not self.__eq__(other)
def __repr__(self):
- return "fid:{}, cid:{}, v:{}, fields:{}".format(self.factory_id, self.class_id, self.version, self.field_defs)
+ return "fid:%s, cid:%s, v:%s, fields:%s" % (self.factory_id, self.class_id, self.version, self.field_defs)
def __hash__(self):
- return id(self)//16
+ return hash((self.factory_id, self.class_id, self.version))
class ClassDefinitionBuilder(object):
@@ -228,4 +232,4 @@ def _add_field_by_type(self, field_name, field_type, version, factory_id=0, clas
def _check(self):
if self._done:
- raise HazelcastSerializationError("ClassDefinition is already built for {}".format(self.class_id))
+ raise HazelcastSerializationError("ClassDefinition is already built for %s" % self.class_id)
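The `__hash__` change in this hunk (from `id(self)//16` to hashing the identity triple) matters because equal objects must hash equally for sets and dicts to deduplicate them. A minimal stand-in mirroring `ClassDefinition`'s identity fields shows the effect; the class here is illustrative, not the real one:

```python
# Minimal stand-in for ClassDefinition's equality/hash contract.
class ClassDef(object):
    def __init__(self, factory_id, class_id, version):
        self.factory_id = factory_id
        self.class_id = class_id
        self.version = version

    def __eq__(self, other):
        return (self.factory_id, self.class_id, self.version) == \
               (other.factory_id, other.class_id, other.version)

    def __ne__(self, other):
        return not self.__eq__(other)

    def __hash__(self):
        # Hash the same triple __eq__ compares, as the diff now does.
        return hash((self.factory_id, self.class_id, self.version))

a = ClassDef(1, 2, 0)
b = ClassDef(1, 2, 0)
assert a == b
assert len({a, b}) == 1  # id()-based hashing would have kept 2 entries
```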
diff --git a/hazelcast/serialization/portable/context.py b/hazelcast/serialization/portable/context.py
index f693c4cef3..3213b6e8be 100644
--- a/hazelcast/serialization/portable/context.py
+++ b/hazelcast/serialization/portable/context.py
@@ -114,7 +114,7 @@ def set_class_version(self, class_id, version):
try:
current_version = self._current_class_versions[class_id]
if current_version != version:
- raise ValueError("Class-id: {} is already registered!".format(class_id))
+ raise ValueError("Class-id: %s is already registered!" % class_id)
except KeyError:
self._current_class_versions[class_id] = version
@@ -126,7 +126,7 @@ def register(self, class_def):
if class_def is None:
return None
if class_def.factory_id != self._factory_id:
- raise HazelcastSerializationError("Invalid factory-id! {} -> {}".format(self._factory_id, class_def))
+ raise HazelcastSerializationError("Invalid factory-id! %s -> %s" % (self._factory_id, class_def))
if isinstance(class_def, ClassDefinition):
class_def.set_version_if_not_set(self._portable_version)
combined_key = (class_def.class_id, class_def.version)
@@ -136,8 +136,8 @@ def register(self, class_def):
current_class_def = self._versioned_definitions[combined_key]
if isinstance(current_class_def, ClassDefinition):
if current_class_def != class_def:
- raise HazelcastSerializationError("Incompatible class-definitions with same class-id: {} vs {}"
- .format(class_def, current_class_def))
+ raise HazelcastSerializationError("Incompatible class-definitions with same class-id: %s vs %s"
+ % (class_def, current_class_def))
return current_class_def
self._versioned_definitions[combined_key] = class_def
return class_def
diff --git a/hazelcast/serialization/portable/reader.py b/hazelcast/serialization/portable/reader.py
index 9960afc82c..5c600ae00b 100644
--- a/hazelcast/serialization/portable/reader.py
+++ b/hazelcast/serialization/portable/reader.py
@@ -18,7 +18,7 @@ def __init__(self, portable_serializer, data_input, class_def):
except Exception:
raise HazelcastSerializationError()
if field_count != class_def.get_field_count():
- raise ValueError("Field count({}) in stream does not match! {}".format(field_count, class_def))
+ raise ValueError("Field count(%s) in stream does not match! %s" % (field_count, class_def))
self._offset = data_input.position()
self._raw = False
@@ -85,7 +85,7 @@ def read_portable(self, field_name):
if fd is None:
raise self._create_unknown_field_exception(field_name)
if fd.field_type != FieldType.PORTABLE:
- raise HazelcastSerializationError("Not a Portable field: {}".format(field_name))
+ raise HazelcastSerializationError("Not a Portable field: %s" % field_name)
pos = self._read_position_by_field_def(fd)
self._in.set_position(pos)
@@ -191,7 +191,7 @@ def read_portable_array(self, field_name):
if fd is None:
raise self._create_unknown_field_exception(field_name)
if fd.field_type != FieldType.PORTABLE_ARRAY:
- raise HazelcastSerializationError("Not a portable array field: {}".format(field_name))
+ raise HazelcastSerializationError("Not a portable array field: %s" % field_name)
pos = self._read_position_by_field_def(fd)
self._in.set_position(pos)
@@ -232,7 +232,7 @@ def _read_position(self, field_name, field_type):
if fd is None:
return self._read_nested_position(field_name, field_type)
if fd.field_type != field_type:
- raise HazelcastSerializationError("Not a '{}' field: {}".format(field_type, field_name))
+ raise HazelcastSerializationError("Not a '%s' field: %s" % (field_type, field_name))
return self._read_position_by_field_def(fd)
def _read_nested_position(self, field_name, field_type):
@@ -250,18 +250,18 @@ def _read_nested_position(self, field_name, field_type):
self._in.set_position(pos)
is_none = self._in.read_boolean()
if is_none:
- raise ValueError("Parent field is null: ".format(field_names[i]))
+ raise ValueError("Parent field is null: %s" % field_names[i])
_reader = self._portable_serializer.create_default_reader(self._in)
if fd is None:
raise self._create_unknown_field_exception(field_name)
if fd.field_type != field_type:
- raise HazelcastSerializationError("Not a '{}' field: {}".format(field_type, field_name))
+ raise HazelcastSerializationError("Not a '%s' field: %s" % (field_type, field_name))
return _reader._read_position_by_field_def(fd)
raise self._create_unknown_field_exception(field_name)
def _create_unknown_field_exception(self, field_name):
- return HazelcastSerializationError("Unknown field name: '{}' for ClassDefinition[ id: {}, version: {} ]"
- .format(field_name, self._class_def.class_id, self._class_def.version))
+ return HazelcastSerializationError("Unknown field name: '%s' for ClassDefinition(id=%s, version=%s)"
+ % (field_name, self._class_def.class_id, self._class_def.version))
def _read_position_by_field_def(self, fd):
pos = self._in.read_int(self._offset + fd.index * bits.INT_SIZE_IN_BYTES)
@@ -272,9 +272,9 @@ def _read_position_by_field_def(self, fd):
def _check_factory_and_class(field_def, factory_id, class_id):
if factory_id != field_def.factory_id:
- raise ValueError("Invalid factoryId! Expected: {}, Current: {}".format(factory_id, field_def.factory_id))
+ raise ValueError("Invalid factoryId! Expected: %s, Current: %s" % (factory_id, field_def.factory_id))
if class_id != field_def.class_id:
- raise ValueError("Invalid classId! Expected: {}, Current: {}".format(class_id, field_def.class_id))
+ raise ValueError("Invalid classId! Expected: %s, Current: %s" % (class_id, field_def.class_id))
class MorphingPortableReader(DefaultPortableReader):
@@ -470,5 +470,5 @@ def validate_type_compatibility(self, field_def, expected_type):
raise self.create_incompatible_class_change_error(field_def, expected_type)
def create_incompatible_class_change_error(self, field_def, expected_type):
- return TypeError("Incompatible to read {} from {} while reading field :{} on {}"
- .format(expected_type, field_def.field_type, field_def.field_name, self._class_def))
+ return TypeError("Incompatible to read %s from %s while reading field: %s on %s"
+ % (expected_type, field_def.field_type, field_def.field_name, self._class_def))
diff --git a/hazelcast/serialization/portable/serializer.py b/hazelcast/serialization/portable/serializer.py
index 34a3033bdc..70775c787b 100644
--- a/hazelcast/serialization/portable/serializer.py
+++ b/hazelcast/serialization/portable/serializer.py
@@ -54,11 +54,11 @@ def create_new_portable_instance(self, factory_id, class_id):
try:
portable_factory = self._portable_factories[factory_id]
except KeyError:
- raise HazelcastSerializationError("Could not find portable_factory for factory-id: {}".format(factory_id))
+ raise HazelcastSerializationError("Could not find portable_factory for factory-id: %s" % factory_id)
portable = portable_factory[class_id]
if portable is None:
- raise HazelcastSerializationError("Could not create Portable for class-id: {}".format(class_id))
+ raise HazelcastSerializationError("Could not create Portable for class-id: %s" % class_id)
return portable()
def create_reader(self, inp, factory_id, class_id, version, portable_version):
diff --git a/hazelcast/serialization/portable/writer.py b/hazelcast/serialization/portable/writer.py
index a759d9d1e7..cdcb80d5ec 100644
--- a/hazelcast/serialization/portable/writer.py
+++ b/hazelcast/serialization/portable/writer.py
@@ -146,13 +146,13 @@ def _set_position(self, field_name, field_type):
raise HazelcastSerializationError("Cannot write Portable fields after get_raw_data_output() is called!")
fd = self._class_def.get_field(field_name)
if fd is None:
- raise HazelcastSerializationError("Invalid field name:'{}' for ClassDefinition(id:{} , version:{} )"
- .format(field_name, self._class_def.class_id, self._class_def.version))
+ raise HazelcastSerializationError("Invalid field name:'%s' for ClassDefinition(id:%s , version:%s )"
+ % (field_name, self._class_def.class_id, self._class_def.version))
if field_name not in self._writen_fields:
self._write_field_def(fd.index, field_name, field_type)
self._writen_fields.add(field_name)
else:
- raise HazelcastSerializationError("Field '{}' has already been written!".format(field_name))
+ raise HazelcastSerializationError("Field '%s' has already been written!" % field_name)
return fd
def _write_field_def(self, index, field_name, field_type):
@@ -172,12 +172,12 @@ def end(self):
def _check_portable_attributes(field_def, portable):
if field_def.factory_id != portable.get_factory_id():
raise HazelcastSerializationError("Wrong Portable type! Generic portable types are not supported! "
- "Expected factory-id: {}, Actual factory-id: {}"
- .format(field_def.factory_id, portable.get_factory_id()))
+ "Expected factory-id: %s, Actual factory-id: %s"
+ % (field_def.factory_id, portable.get_factory_id()))
if field_def.class_id != portable.get_class_id():
raise HazelcastSerializationError("Wrong Portable type! Generic portable types are not supported! "
- "Expected class-id: {}, Actual class-id: {}"
- .format(field_def.class_id, portable.get_class_id()))
+ "Expected class-id: %s, Actual class-id: %s"
+ % (field_def.class_id, portable.get_class_id()))
class ClassDefinitionWriter(PortableWriter):
diff --git a/hazelcast/serialization/serializer.py b/hazelcast/serialization/serializer.py
index d8cac6e579..41363b7a67 100644
--- a/hazelcast/serialization/serializer.py
+++ b/hazelcast/serialization/serializer.py
@@ -371,12 +371,11 @@ def read(self, inp):
factory = self._factories.get(factory_id, None)
if factory is None:
raise HazelcastSerializationError(
- "No DataSerializerFactory registered for namespace: {}".format(factory_id))
+ "No DataSerializerFactory registered for namespace: %s" % factory_id)
identified = factory.get(class_id, None)
if identified is None:
raise HazelcastSerializationError(
- "{} is not be able to create an instance for id: {} on factoryId: {}".format(factory, class_id,
- factory_id))
+ "%s is not be able to create an instance for id: %s on factoryId: %s" % (factory, class_id, factory_id))
instance = identified()
instance.read_data(inp)
return instance
diff --git a/hazelcast/serialization/service.py b/hazelcast/serialization/service.py
index c3f55a031d..0cdd96b53b 100644
--- a/hazelcast/serialization/service.py
+++ b/hazelcast/serialization/service.py
@@ -18,27 +18,28 @@ def default_partition_strategy(key):
class SerializationServiceV1(BaseSerializationService):
- def __init__(self, serialization_config, version=1, global_partition_strategy=default_partition_strategy,
+ def __init__(self, config, version=1,
+ global_partition_strategy=default_partition_strategy,
output_buffer_size=DEFAULT_OUT_BUFFER_SIZE):
super(SerializationServiceV1, self).__init__(version, global_partition_strategy, output_buffer_size,
- serialization_config.is_big_endian,
- serialization_config.default_integer_type)
- self._portable_context = PortableContext(self, serialization_config.portable_version)
- self.register_class_definitions(serialization_config.class_definitions, serialization_config.check_class_def_errors)
- self._registry._portable_serializer = PortableSerializer(self._portable_context, serialization_config.portable_factories)
+ config.is_big_endian,
+ config.default_int_type)
+ self._portable_context = PortableContext(self, config.portable_version)
+ self.register_class_definitions(config.class_definitions, config.check_class_definition_errors)
+ self._registry._portable_serializer = PortableSerializer(self._portable_context, config.portable_factories)
# merge configured factories with built in ones
factories = {}
- factories.update(serialization_config.data_serializable_factories)
+ factories.update(config.data_serializable_factories)
self._registry._data_serializer = IdentifiedDataSerializer(factories)
self._register_constant_serializers()
# Register Custom Serializers
- for _type, custom_serializer in six.iteritems(serialization_config.custom_serializers):
+ for _type, custom_serializer in six.iteritems(config.custom_serializers):
self._registry.safe_register_serializer(custom_serializer(), _type)
# Register Global Serializer
- global_serializer = serialization_config.global_serializer
+ global_serializer = config.global_serializer
if global_serializer:
self._registry._global_serializer = global_serializer()
@@ -80,7 +81,7 @@ def register_class_definitions(self, class_definitions, check_error):
class_defs = dict()
for cd in class_definitions:
if cd in class_defs:
- raise HazelcastSerializationError("Duplicate registration found for class-id:{}".format(cd.class_id))
+ raise HazelcastSerializationError("Duplicate registration found for class-id: %s" % cd.class_id)
class_defs[cd.class_id] = cd
for cd in class_definitions:
self.register_class_definition(cd, class_defs, check_error)
@@ -96,5 +97,5 @@ def register_class_definition(self, cd, class_defs, check_error):
self._portable_context.register_class_definition(nested_cd)
elif check_error:
raise HazelcastSerializationError(
- "Could not find registered ClassDefinition for class-id:{}".format(fd.class_id))
+ "Could not find registered ClassDefinition for class-id: %s" % fd.class_id)
self._portable_context.register_class_definition(cd)
diff --git a/hazelcast/six.py b/hazelcast/six.py
index a04af41b9f..241d95cee6 100644
--- a/hazelcast/six.py
+++ b/hazelcast/six.py
@@ -902,7 +902,6 @@ def ensure_text(s, encoding='utf-8', errors='strict'):
raise TypeError("not expecting type '%s'" % type(s))
-
def python_2_unicode_compatible(klass):
"""
A decorator that defines __unicode__ and __str__ methods under Python 2.
@@ -943,4 +942,4 @@ def python_2_unicode_compatible(klass):
break
del i, importer
# Finally, add the importer to the meta path import hook.
-sys.meta_path.append(_importer)
\ No newline at end of file
+sys.meta_path.append(_importer)
diff --git a/hazelcast/statistics.py b/hazelcast/statistics.py
index 5d108f346d..d260f35f76 100644
--- a/hazelcast/statistics.py
+++ b/hazelcast/statistics.py
@@ -4,7 +4,6 @@
from hazelcast.invocation import Invocation
from hazelcast.protocol.codec import client_statistics_codec
from hazelcast.util import current_time_in_millis, to_millis, to_nanos, current_time
-from hazelcast.config import ClientProperties
from hazelcast.version import CLIENT_VERSION, CLIENT_TYPE
from hazelcast import six
@@ -32,7 +31,9 @@ def __init__(self, client, reactor, connection_manager, invocation_service, near
self._invocation_service = invocation_service
self._near_cache_manager = near_cache_manager
self._logger_extras = logger_extras
- self._enabled = client.properties.get_bool(ClientProperties.STATISTICS_ENABLED)
+ config = client.config
+ self._enabled = config.statistics_enabled
+ self._period = config.statistics_period
self._statistics_timer = None
self._failed_gauges = set()
@@ -40,27 +41,16 @@ def start(self):
if not self._enabled:
return
- period = self._client.properties.get_seconds(ClientProperties.STATISTICS_PERIOD_SECONDS)
- if period <= 0:
- default_period = self._client.properties.get_seconds_positive_or_default(
- ClientProperties.STATISTICS_PERIOD_SECONDS)
-
- self.logger.warning("Provided client statistics {} cannot be less than or equal to 0. "
- "You provided {} as the configuration. Client will use the default value "
- "{} instead.".format(ClientProperties.STATISTICS_PERIOD_SECONDS.name,
- period, default_period), extra=self._logger_extras)
- period = default_period
-
def _statistics_task():
if not self._client.lifecycle_service.is_running():
return
self._send_statistics()
- self._statistics_timer = self._reactor.add_timer(period, _statistics_task)
+ self._statistics_timer = self._reactor.add_timer(self._period, _statistics_task)
- self._statistics_timer = self._reactor.add_timer(period, _statistics_task)
+ self._statistics_timer = self._reactor.add_timer(self._period, _statistics_task)
- self.logger.info("Client statistics enabled with the period of {} seconds.".format(period),
+ self.logger.info("Client statistics enabled with the period of %s seconds." % self._period,
extra=self._logger_extras)
def shutdown(self):
@@ -192,12 +182,12 @@ def safe_wrapper(self, psutil_stats, probe_name, *args):
try:
stat = func(self, psutil_stats, probe_name, *args)
except AttributeError as ae:
- self.logger.debug("Unable to register psutil method used for the probe {}. "
- "Cause: {}".format(probe_name, ae), extra=self._logger_extras)
+ self.logger.debug("Unable to register psutil method used for the probe %s. "
+ "Cause: %s" % (probe_name, ae), extra=self._logger_extras)
self._failed_gauges.add(probe_name)
return
except Exception as ex:
- self.logger.warning("Failed to access the probe {}. Cause: {}".format(probe_name, ex),
+ self.logger.warning("Failed to access the probe %s. Cause: %s" % (probe_name, ex),
extra=self._logger_extras)
stat = self._DEFAULT_PROBE_VALUE
diff --git a/hazelcast/util.py b/hazelcast/util.py
index ce108bf9d8..3eb0e700d7 100644
--- a/hazelcast/util.py
+++ b/hazelcast/util.py
@@ -1,3 +1,4 @@
+import random
import threading
import time
import logging
@@ -110,7 +111,7 @@ def validate_type(_type):
:param _type: (Type), the type to be validated.
"""
if not isinstance(_type, type):
- raise ValueError("Serializer should be an instance of {}".format(_type.__name__))
+ raise ValueError("Serializer should be an instance of %s" % _type.__name__)
def validate_serializer(serializer, _type):
@@ -121,7 +122,7 @@ def validate_serializer(serializer, _type):
:param _type: (Type), type to be used for serializer validation.
"""
if not issubclass(serializer, _type):
- raise ValueError("Serializer should be an instance of {}".format(_type.__name__))
+ raise ValueError("Serializer should be an instance of %s" % _type.__name__)
class AtomicInteger(object):
@@ -144,16 +145,6 @@ def get_and_increment(self):
return res
-def enum(**enums):
- """
- Utility method for defining enums.
- :param enums: Parameters of enumeration.
- :return: (Enum), the created enumerations.
- """
- enums['reverse'] = dict((value, key) for key, value in six.iteritems(enums))
- return type('Enum', (), enums)
-
-
class ImmutableLazyDataList(Sequence):
def __init__(self, list_data, to_object):
super(ImmutableLazyDataList, self).__init__()
@@ -327,3 +318,92 @@ def to_signed(unsigned, bit_len):
if unsigned & (1 << (bit_len - 1)):
return unsigned | ~mask
return unsigned & mask
+
+
+def with_reversed_items(cls):
+ reversed_mappings = {}
+ for attr_name, attr_value in six.iteritems(vars(cls)):
+ if not (attr_name.startswith("_") or callable(getattr(cls, attr_name))):
+ reversed_mappings[attr_value] = attr_name
+
+ class ClsWithReversedItems(cls):
+ reverse = reversed_mappings
+
+ return ClsWithReversedItems
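The decorator above attaches a value-to-name `reverse` mapping to an enum-like class. A minimal self-contained sketch of the same idea, written without `six` so it runs on Python 3 alone (the `Color` class is a hypothetical example, not part of the client):

```python
def with_reversed_items(cls):
    # Build a value -> name mapping from the class's public, non-callable attributes.
    reversed_mappings = {}
    for attr_name, attr_value in vars(cls).items():
        if not (attr_name.startswith("_") or callable(attr_value)):
            reversed_mappings[attr_value] = attr_name

    # Subclass so the original attributes stay intact and gain a `reverse` dict.
    class ClsWithReversedItems(cls):
        reverse = reversed_mappings

    return ClsWithReversedItems


@with_reversed_items
class Color:
    RED = 0
    GREEN = 1


print(Color.reverse[1])  # GREEN
```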
+
+
+number_types = (six.integer_types, float)
+none_type = type(None)
+
+
+class LoadBalancer(object):
+ """Load balancer allows you to send operations to one of a number of endpoints (Members).
+ It is up to the implementation to use different load balancing policies.
+
+ If the client is configured with smart routing,
+ only the operations that are not key based will be routed to the endpoint
+ returned by the load balancer. If it is not, the load balancer will not be used.
+ """
+ def init(self, cluster_service):
+ """
+ Initializes the load balancer.
+
+ :param cluster_service: (:class:`~hazelcast.cluster.ClusterService`), The cluster service to select members from
+ """
+ raise NotImplementedError("init")
+
+ def next(self):
+ """
+ Returns the next member to route to.
+
+ :return: (:class:`~hazelcast.core.Member`), the next member, or None if no member is available.
+ """
+ raise NotImplementedError("next")
+
+
+class _AbstractLoadBalancer(LoadBalancer):
+
+ def __init__(self):
+ self._cluster_service = None
+ self._members = []
+
+ def init(self, cluster_service):
+ self._cluster_service = cluster_service
+ cluster_service.add_listener(self._listener, self._listener, True)
+
+ def _listener(self, _):
+ self._members = self._cluster_service.get_members()
+
+
+class RoundRobinLB(_AbstractLoadBalancer):
+ """A load balancer implementation that relies on using round robin
+ to a next member to send a request to.
+
+ Round robin is done based on best effort basis, the order of members for concurrent calls to
+ the next() is not guaranteed.
+ """
+
+ def __init__(self):
+ super(RoundRobinLB, self).__init__()
+ self._idx = 0
+
+ def next(self):
+ members = self._members
+ if not members:
+ return None
+
+ n = len(members)
+ idx = self._idx % n
+ self._idx += 1
+ return members[idx]
+
+
+class RandomLB(_AbstractLoadBalancer):
+ """A load balancer that selects a random member to route to.
+ """
+
+ def next(self):
+ members = self._members
+ if not members:
+ return None
+ idx = random.randrange(0, len(members))
+ return members[idx]
diff --git a/tests/base.py b/tests/base.py
index 6fb7bdf9e8..eda0980bc5 100644
--- a/tests/base.py
+++ b/tests/base.py
@@ -44,7 +44,7 @@ def create_cluster(cls, rc, config=None):
return _Cluster(rc, rc.createCluster(None, config))
def create_client(self, config=None):
- client = hazelcast.HazelcastClient(config)
+ client = hazelcast.HazelcastClient(**config)
self.clients.append(client)
return client
@@ -100,12 +100,11 @@ class SingleMemberTestCase(HazelcastTestCase):
@classmethod
def setUpClass(cls):
- configure_logging()
cls.rc = cls.create_rc()
cls.cluster = cls.create_cluster(cls.rc, cls.configure_cluster())
cls.member = cls.cluster.start_member()
- cls.client = hazelcast.HazelcastClient(cls.configure_client(hazelcast.ClientConfig()))
+ cls.client = hazelcast.HazelcastClient(**cls.configure_client(dict()))
@classmethod
def tearDownClass(cls):
diff --git a/tests/client_test.py b/tests/client_test.py
index 1361f48931..bacf8a1d3e 100644
--- a/tests/client_test.py
+++ b/tests/client_test.py
@@ -1,7 +1,6 @@
import time
from tests.base import HazelcastTestCase
-from hazelcast.config import ClientConfig, ClientProperties
from hazelcast.client import HazelcastClient
from hazelcast.lifecycle import LifecycleState
from tests.hzrc.ttypes import Lang
@@ -9,10 +8,6 @@
class ClientTest(HazelcastTestCase):
- @classmethod
- def setUpClass(cls):
- configure_logging()
-
def test_client_only_listens(self):
rc = self.create_rc()
client_heartbeat_seconds = 8
@@ -22,17 +17,13 @@ def test_client_only_listens(self):
xsi:schemaLocation="http://www.hazelcast.com/schema/config
http://www.hazelcast.com/schema/config/hazelcast-config-4.0.xsd">
- {}
+ %s
- """.format(client_heartbeat_seconds)
+ """ % client_heartbeat_seconds
cluster = self.create_cluster(rc, cluster_config)
cluster.start_member()
- config = ClientConfig()
- config.cluster_name = cluster.id
- config.set_property(ClientProperties.HEARTBEAT_INTERVAL.name, 1000)
-
- client1 = HazelcastClient(config)
+ client1 = HazelcastClient(cluster_name=cluster.id, heartbeat_interval=1)
def lifecycle_event_collector():
events = []
@@ -48,14 +39,12 @@ def event_collector(e):
collector = lifecycle_event_collector()
client1.lifecycle_service.add_listener(collector)
- config2 = ClientConfig()
- config2.cluster_name = cluster.id
- client2 = HazelcastClient(config2)
+ client2 = HazelcastClient(cluster_name=cluster.id)
key = "topic-name"
topic = client1.get_topic(key)
- def message_listener(e):
+ def message_listener(_):
pass
topic.add_listener(message_listener)
@@ -90,17 +79,18 @@ def tearDown(self):
self.shutdown_all_clients()
def test_default_config(self):
- config = ClientConfig()
- config.cluster_name = self.cluster.id
-
- self.create_client(config)
+ self.create_client({
+ "cluster_name": self.cluster.id
+ })
self.assertIsNone(self.get_labels_from_member())
def test_provided_labels_are_received(self):
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.labels.add("test-label")
- self.create_client(config)
+ self.create_client({
+ "cluster_name": self.cluster.id,
+ "labels": [
+ "test-label",
+ ]
+ })
self.assertEqual(b"test-label", self.get_labels_from_member())
def get_labels_from_member(self):
diff --git a/tests/cluster_test.py b/tests/cluster_test.py
index 8ac88bf46c..1cba494c1a 100644
--- a/tests/cluster_test.py
+++ b/tests/cluster_test.py
@@ -1,26 +1,21 @@
import unittest
-from hazelcast import ClientConfig, HazelcastClient, six
-from hazelcast.cluster import RandomLB, RoundRobinLB
+from hazelcast import HazelcastClient, six
+from hazelcast.util import RandomLB, RoundRobinLB
from tests.base import HazelcastTestCase
-from tests.util import configure_logging
class ClusterTest(HazelcastTestCase):
rc = None
- @classmethod
- def setUpClass(cls):
- configure_logging()
-
def setUp(self):
self.rc = self.create_rc()
self.cluster = self.create_cluster(self.rc)
def create_config(self):
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- return config
+ return {
+ "cluster_name": self.cluster.id,
+ }
def tearDown(self):
self.shutdown_all_clients()
@@ -34,7 +29,9 @@ def member_added(m):
events.append(m)
config = self.create_config()
- config.membership_listeners.append((member_added, None))
+ config["membership_listeners"] = [
+ (member_added, None)
+ ]
member = self.cluster.start_member()
@@ -109,7 +106,9 @@ def listener(_):
raise RuntimeError("error")
config = self.create_config()
- config.membership_listeners.append((listener, listener))
+ config["membership_listeners"] = [
+ (listener, listener)
+ ]
self.cluster.start_member()
self.create_client(config)
@@ -144,26 +143,26 @@ class LoadBalancersTest(unittest.TestCase):
def test_random_lb_with_no_members(self):
cluster = _MockClusterService([])
lb = RandomLB()
- lb.init(cluster, None)
+ lb.init(cluster)
self.assertIsNone(lb.next())
def test_round_robin_lb_with_no_members(self):
cluster = _MockClusterService([])
lb = RoundRobinLB()
- lb.init(cluster, None)
+ lb.init(cluster)
self.assertIsNone(lb.next())
def test_random_lb_with_members(self):
cluster = _MockClusterService([0, 1, 2])
lb = RandomLB()
- lb.init(cluster, None)
+ lb.init(cluster)
for _ in range(10):
self.assertTrue(0 <= lb.next() <= 2)
def test_round_robin_lb_with_members(self):
cluster = _MockClusterService([0, 1, 2])
lb = RoundRobinLB()
- lb.init(cluster, None)
+ lb.init(cluster)
for i in range(10):
self.assertEqual(i % 3, lb.next())
@@ -171,7 +170,6 @@ def test_round_robin_lb_with_members(self):
class LoadBalancersWithRealClusterTest(HazelcastTestCase):
@classmethod
def setUpClass(cls):
- configure_logging()
cls.rc = cls.create_rc()
cls.cluster = cls.create_cluster(cls.rc, None)
cls.member1 = cls.cluster.start_member()
@@ -184,10 +182,7 @@ def tearDownClass(cls):
cls.rc.exit()
def test_random_load_balancer(self):
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.load_balancer = RandomLB()
- client = HazelcastClient(config)
+ client = HazelcastClient(cluster_name=self.cluster.id, load_balancer=RandomLB())
self.assertTrue(client.lifecycle_service.is_running())
lb = client._load_balancer
@@ -200,10 +195,7 @@ def test_random_load_balancer(self):
client.shutdown()
def test_round_robin_load_balancer(self):
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.load_balancer = RoundRobinLB()
- client = HazelcastClient(config)
+ client = HazelcastClient(cluster_name=self.cluster.id, load_balancer=RoundRobinLB())
self.assertTrue(client.lifecycle_service.is_running())
lb = client._load_balancer
diff --git a/tests/config_test.py b/tests/config_test.py
new file mode 100644
index 0000000000..c66ef0fcf3
--- /dev/null
+++ b/tests/config_test.py
@@ -0,0 +1,795 @@
+import logging
+import unittest
+
+from hazelcast.config import _Config, SSLProtocol, ReconnectMode, IntType, InMemoryFormat, EvictionPolicy,\
+ IndexConfig, IndexType, UniqueKeyTransformation, QueryConstants
+from hazelcast.errors import InvalidConfigurationError
+from hazelcast.serialization.api import IdentifiedDataSerializable, Portable, StreamSerializer
+from hazelcast.serialization.portable.classdef import ClassDefinition
+from hazelcast.util import RandomLB
+
+
+class ConfigTest(unittest.TestCase):
+ def setUp(self):
+ self.config = _Config()
+
+ def test_from_dict_defaults(self):
+ config = _Config.from_dict({})
+ for item in self.config.__slots__:
+ self.assertEqual(getattr(self.config, item), getattr(config, item))
+
+ def test_from_dict_with_a_few_changes(self):
+ config = _Config.from_dict({"client_name": "hazel", "cluster_name": "cast"})
+ for item in self.config.__slots__:
+ if item == "_client_name" or item == "_cluster_name":
+ continue
+ self.assertEqual(getattr(self.config, item), getattr(config, item))
+
+ self.assertEqual("hazel", config.client_name)
+ self.assertEqual("cast", config.cluster_name)
+
+ def test_from_dict_skip_none_item(self):
+ config = _Config.from_dict({"cluster_name": None, "cluster_members": None})
+ for item in self.config.__slots__:
+ self.assertEqual(getattr(self.config, item), getattr(config, item))
+
+ def test_from_dict_with_invalid_elements(self):
+ with self.assertRaises(InvalidConfigurationError):
+ _Config.from_dict({"invalid_elem": False})
+
+ def test_cluster_members(self):
+ config = self.config
+ self.assertEqual([], config.cluster_members)
+
+ with self.assertRaises(TypeError):
+ config.cluster_members = [1]
+
+ with self.assertRaises(TypeError):
+ config.cluster_members = 1
+
+ addresses = ["localhost", "10.162.1.1:5701"]
+ config.cluster_members = addresses
+ self.assertEqual(addresses, config.cluster_members)
+
+ def test_cluster_name(self):
+ config = self.config
+ self.assertEqual("dev", config.cluster_name)
+
+ with self.assertRaises(TypeError):
+ config.cluster_name = 1
+
+ config.cluster_name = "xyz"
+ self.assertEqual("xyz", config.cluster_name)
+
+ def test_client_name(self):
+ config = self.config
+ self.assertIsNone(config.client_name)
+
+ with self.assertRaises(TypeError):
+ config.client_name = None
+
+ config.client_name = "xyz"
+ self.assertEqual("xyz", config.client_name)
+
+ def test_connection_timeout(self):
+ config = self.config
+ self.assertEqual(5.0, config.connection_timeout)
+
+ with self.assertRaises(ValueError):
+ config.connection_timeout = -1
+
+ with self.assertRaises(TypeError):
+ config.connection_timeout = "1"
+
+ config.connection_timeout = 3
+ self.assertEqual(3, config.connection_timeout)
+
+ def test_socket_options(self):
+ config = self.config
+ self.assertEqual([], config.socket_options)
+
+ with self.assertRaises(TypeError):
+ config.socket_options = [(1, 2, 3), (1, 2)]
+
+ with self.assertRaises(TypeError):
+ config.socket_options = (1, 2, 3)
+
+ options = [(1, 2, 3), [4, 5, 6]]
+ config.socket_options = options
+ self.assertEqual(options, config.socket_options)
+
+ def test_redo_operation(self):
+ config = self.config
+ self.assertFalse(config.redo_operation)
+
+ with self.assertRaises(TypeError):
+ config.redo_operation = "false"
+
+ config.redo_operation = True
+ self.assertTrue(config.redo_operation)
+
+ def test_smart_routing(self):
+ config = self.config
+ self.assertTrue(config.smart_routing)
+
+ with self.assertRaises(TypeError):
+ config.smart_routing = None
+
+ config.smart_routing = False
+ self.assertFalse(config.smart_routing)
+
+ def test_ssl_enabled(self):
+ config = self.config
+ self.assertFalse(config.ssl_enabled)
+
+ with self.assertRaises(TypeError):
+ config.ssl_enabled = 123
+
+ config.ssl_enabled = True
+ self.assertTrue(config.ssl_enabled)
+
+ def test_ssl_cafile(self):
+ config = self.config
+ self.assertIsNone(config.ssl_cafile)
+
+ with self.assertRaises(TypeError):
+ config.ssl_cafile = False
+
+ config.ssl_cafile = "/path"
+ self.assertEqual("/path", config.ssl_cafile)
+
+ def test_ssl_certfile(self):
+ config = self.config
+ self.assertIsNone(config.ssl_certfile)
+
+ with self.assertRaises(TypeError):
+ config.ssl_certfile = None
+
+ config.ssl_certfile = "/path"
+ self.assertEqual("/path", config.ssl_certfile)
+
+ def test_ssl_keyfile(self):
+ config = self.config
+ self.assertIsNone(config.ssl_keyfile)
+
+ with self.assertRaises(TypeError):
+ config.ssl_keyfile = None
+
+ config.ssl_keyfile = "/path"
+ self.assertEqual("/path", config.ssl_keyfile)
+
+ def test_ssl_password(self):
+ config = self.config
+ self.assertIsNone(config.ssl_password)
+
+ with self.assertRaises(TypeError):
+ config.ssl_password = 123
+
+ config.ssl_password = "123"
+ self.assertEqual("123", config.ssl_password)
+
+ config.ssl_password = b"qwe"
+ self.assertEqual(b"qwe", config.ssl_password)
+
+ config.ssl_password = bytearray([1, 2, 3])
+ self.assertEqual(bytearray([1, 2, 3]), config.ssl_password)
+
+ config.ssl_password = lambda: "123"
+ self.assertEqual("123", config.ssl_password())
+
+ def test_ssl_protocol(self):
+ config = self.config
+ self.assertEqual(SSLProtocol.TLSv1_2, config.ssl_protocol)
+
+ with self.assertRaises(TypeError):
+ config.ssl_protocol = "123"
+
+ config.ssl_protocol = SSLProtocol.TLSv1_3
+ self.assertEqual(SSLProtocol.TLSv1_3, config.ssl_protocol)
+
+ config.ssl_protocol = 0
+ self.assertEqual(0, config.ssl_protocol)
+
+ def test_ssl_ciphers(self):
+ config = self.config
+ self.assertIsNone(config.ssl_ciphers)
+
+ with self.assertRaises(TypeError):
+ config.ssl_ciphers = 123
+
+ config.ssl_ciphers = "123"
+ self.assertEqual("123", config.ssl_ciphers)
+
+ def test_cloud_discovery_token(self):
+ config = self.config
+ self.assertIsNone(config.cloud_discovery_token)
+
+ with self.assertRaises(TypeError):
+ config.cloud_discovery_token = 123
+
+ config.cloud_discovery_token = "TOKEN"
+ self.assertEqual("TOKEN", config.cloud_discovery_token)
+
+ def test_async_start(self):
+ config = self.config
+ self.assertFalse(config.async_start)
+
+ with self.assertRaises(TypeError):
+ config.async_start = "false"
+
+ config.async_start = True
+ self.assertTrue(config.async_start)
+
+ def test_reconnect_mode(self):
+ config = self.config
+ self.assertEqual(ReconnectMode.ON, config.reconnect_mode)
+
+ with self.assertRaises(TypeError):
+ config.reconnect_mode = None
+
+ config.reconnect_mode = ReconnectMode.ASYNC
+ self.assertEqual(ReconnectMode.ASYNC, config.reconnect_mode)
+
+ config.reconnect_mode = 0
+ self.assertEqual(0, config.reconnect_mode)
+
+ def test_retry_initial_backoff(self):
+ config = self.config
+ self.assertEqual(1, config.retry_initial_backoff)
+
+ with self.assertRaises(ValueError):
+ config.retry_initial_backoff = -100
+
+ with self.assertRaises(TypeError):
+ config.retry_initial_backoff = "123"
+
+ config.retry_initial_backoff = 3.5
+ self.assertEqual(3.5, config.retry_initial_backoff)
+
+ def test_retry_max_backoff(self):
+ config = self.config
+ self.assertEqual(30, config.retry_max_backoff)
+
+ with self.assertRaises(ValueError):
+ config.retry_max_backoff = -10
+
+ with self.assertRaises(TypeError):
+ config.retry_max_backoff = None
+
+ config.retry_max_backoff = 0
+ self.assertEqual(0, config.retry_max_backoff)
+
+ def test_retry_jitter(self):
+ config = self.config
+ self.assertEqual(0, config.retry_jitter)
+
+ with self.assertRaises(ValueError):
+ config.retry_jitter = -1
+
+ with self.assertRaises(ValueError):
+ config.retry_jitter = 1.1
+
+ with self.assertRaises(TypeError):
+ config.retry_jitter = "123"
+
+ config.retry_jitter = 0.5
+ self.assertEqual(0.5, config.retry_jitter)
+
+ def test_retry_multiplier(self):
+ config = self.config
+ self.assertEqual(1, config.retry_multiplier)
+
+ with self.assertRaises(ValueError):
+ config.retry_multiplier = 0.5
+
+ with self.assertRaises(TypeError):
+ config.retry_multiplier = None
+
+ config.retry_multiplier = 1.5
+ self.assertEqual(1.5, config.retry_multiplier)
+
+ def test_cluster_connect_timeout(self):
+ config = self.config
+ self.assertEqual(20, config.cluster_connect_timeout)
+
+ with self.assertRaises(ValueError):
+ config.cluster_connect_timeout = -1
+
+ with self.assertRaises(TypeError):
+ config.cluster_connect_timeout = ""
+
+ config.cluster_connect_timeout = 20
+ self.assertEqual(20, config.cluster_connect_timeout)
+
+ def test_portable_version(self):
+ config = self.config
+ self.assertEqual(0, config.portable_version)
+
+ with self.assertRaises(ValueError):
+ config.portable_version = -1
+
+ with self.assertRaises(TypeError):
+ config.portable_version = None
+
+ config.portable_version = 2
+ self.assertEqual(2, config.portable_version)
+
+ def test_data_serializable_factories(self):
+ config = self.config
+ self.assertEqual({}, config.data_serializable_factories)
+
+ invalid_configs = [
+ {"123": 1},
+ {123: "123"},
+ {123: {"123": 1}},
+ {123: {123: "123"}},
+ {123: {123: str}},
+ 123,
+ ]
+
+ for invalid_config in invalid_configs:
+ with self.assertRaises(TypeError):
+ config.data_serializable_factories = invalid_config
+
+ factories = {1: {
+ 2: IdentifiedDataSerializable
+ }}
+
+ config.data_serializable_factories = factories
+ self.assertEqual(factories, config.data_serializable_factories)
+
+ def test_data_portable_factories(self):
+ config = self.config
+ self.assertEqual({}, config.portable_factories)
+
+ invalid_configs = [
+ {"123": 1},
+ {123: "123"},
+ {123: {"123": 1}},
+ {123: {123: "123"}},
+ {123: {123: str}},
+ 123,
+ ]
+
+ for invalid_config in invalid_configs:
+ with self.assertRaises(TypeError):
+ config.portable_factories = invalid_config
+
+ factories = {1: {
+ 2: Portable
+ }}
+
+ config.portable_factories = factories
+ self.assertEqual(factories, config.portable_factories)
+
+ def test_class_definitions(self):
+ config = self.config
+ self.assertEqual([], config.class_definitions)
+
+ with self.assertRaises(TypeError):
+ config.class_definitions = [123]
+
+ with self.assertRaises(TypeError):
+ config.class_definitions = None
+
+ cds = [ClassDefinition(1, 2, 3)]
+ config.class_definitions = cds
+ self.assertEqual(cds, config.class_definitions)
+
+ def test_check_class_definition_errors(self):
+ config = self.config
+ self.assertTrue(config.check_class_definition_errors)
+
+ with self.assertRaises(TypeError):
+ config.check_class_definition_errors = None
+
+ config.check_class_definition_errors = False
+ self.assertFalse(config.check_class_definition_errors)
+
+ def test_is_big_endian(self):
+ config = self.config
+ self.assertTrue(config.is_big_endian)
+
+ with self.assertRaises(TypeError):
+ config.is_big_endian = None
+
+ config.is_big_endian = False
+ self.assertFalse(config.is_big_endian)
+
+ def test_default_int_type(self):
+ config = self.config
+ self.assertEqual(IntType.INT, config.default_int_type)
+
+ with self.assertRaises(TypeError):
+ config.default_int_type = None
+
+ config.default_int_type = IntType.BIG_INT
+ self.assertEqual(IntType.BIG_INT, config.default_int_type)
+
+ config.default_int_type = 0
+ self.assertEqual(0, config.default_int_type)
+
+ def test_global_serializer(self):
+ config = self.config
+ self.assertIsNone(config.global_serializer)
+
+ with self.assertRaises(TypeError):
+ config.global_serializer = "123"
+
+ with self.assertRaises(TypeError):
+ config.global_serializer = str
+
+ config.global_serializer = StreamSerializer
+ self.assertEqual(StreamSerializer, config.global_serializer)
+
+ def test_custom_serializers(self):
+ config = self.config
+ self.assertEqual({}, config.custom_serializers)
+
+ invalid_configs = [
+ {1: "123"},
+ {str: "123"},
+ {str: int},
+ None,
+ ]
+
+ for invalid_config in invalid_configs:
+ with self.assertRaises(TypeError):
+ config.custom_serializers = invalid_config
+
+ serializers = {
+ int: StreamSerializer
+ }
+ config.custom_serializers = serializers
+ self.assertEqual(serializers, config.custom_serializers)
+
+ def test_near_caches_invalid_configs(self):
+ config = self.config
+ self.assertEqual({}, config.near_caches)
+
+ invalid_configs = [
+ ({123: "123"}, TypeError),
+ ({"123": 123}, TypeError),
+ (None, TypeError),
+ ({"x": {"invalidate_on_change": None}}, TypeError),
+ ({"x": {"in_memory_format": None}}, TypeError),
+ ({"x": {"time_to_live": None}}, TypeError),
+ ({"x": {"time_to_live": -1}}, ValueError),
+ ({"x": {"max_idle": None}}, TypeError),
+ ({"x": {"max_idle": -12}}, ValueError),
+ ({"x": {"eviction_policy": None}}, TypeError),
+ ({"x": {"eviction_max_size": None}}, TypeError),
+ ({"x": {"eviction_max_size": 0}}, ValueError),
+ ({"x": {"eviction_sampling_count": None}}, TypeError),
+ ({"x": {"eviction_sampling_count": 0}}, ValueError),
+ ({"x": {"eviction_sampling_pool_size": None}}, TypeError),
+ ({"x": {"eviction_sampling_pool_size": -10}}, ValueError),
+ ({"x": {"invalid_option": -10}}, InvalidConfigurationError),
+ ]
+
+ for c, e in invalid_configs:
+ with self.assertRaises(e):
+ config.near_caches = c
+
+ def test_near_caches_defaults(self):
+ config = self.config
+ config.near_caches = {"a": {}}
+ nc_config = config.near_caches["a"]
+ self.assertTrue(nc_config.invalidate_on_change)
+ self.assertEqual(InMemoryFormat.BINARY, nc_config.in_memory_format)
+ self.assertIsNone(nc_config.time_to_live)
+ self.assertIsNone(nc_config.max_idle)
+ self.assertEqual(EvictionPolicy.LRU, nc_config.eviction_policy)
+ self.assertEqual(10000, nc_config.eviction_max_size)
+ self.assertEqual(8, nc_config.eviction_sampling_count)
+ self.assertEqual(16, nc_config.eviction_sampling_pool_size)
+
+ def test_near_caches_with_a_few_changes(self):
+ config = self.config
+ config.near_caches = {"a": {"invalidate_on_change": False, "time_to_live": 10}}
+ nc_config = config.near_caches["a"]
+ self.assertFalse(nc_config.invalidate_on_change)
+ self.assertEqual(InMemoryFormat.BINARY, nc_config.in_memory_format)
+ self.assertEqual(10, nc_config.time_to_live)
+ self.assertIsNone(nc_config.max_idle)
+ self.assertEqual(EvictionPolicy.LRU, nc_config.eviction_policy)
+ self.assertEqual(10000, nc_config.eviction_max_size)
+ self.assertEqual(8, nc_config.eviction_sampling_count)
+ self.assertEqual(16, nc_config.eviction_sampling_pool_size)
+
+ def test_near_caches(self):
+ config = self.config
+ config.near_caches = {"a": {
+ "invalidate_on_change": False,
+ "in_memory_format": InMemoryFormat.OBJECT,
+ "time_to_live": 100,
+ "max_idle": 200,
+ "eviction_policy": EvictionPolicy.RANDOM,
+ "eviction_max_size": 1000,
+ "eviction_sampling_count": 20,
+ "eviction_sampling_pool_size": 15,
+ }}
+ nc_config = config.near_caches["a"]
+ self.assertFalse(nc_config.invalidate_on_change)
+ self.assertEqual(InMemoryFormat.OBJECT, nc_config.in_memory_format)
+ self.assertEqual(100, nc_config.time_to_live)
+ self.assertEqual(200, nc_config.max_idle)
+ self.assertEqual(EvictionPolicy.RANDOM, nc_config.eviction_policy)
+ self.assertEqual(1000, nc_config.eviction_max_size)
+ self.assertEqual(20, nc_config.eviction_sampling_count)
+ self.assertEqual(15, nc_config.eviction_sampling_pool_size)
+
+ def test_load_balancer(self):
+ config = self.config
+ self.assertIsNone(config.load_balancer)
+
+ with self.assertRaises(TypeError):
+ config.load_balancer = None
+
+ lb = RandomLB()
+ config.load_balancer = lb
+ self.assertEqual(lb, config.load_balancer)
+
+ def test_membership_listeners(self):
+ config = self.config
+ self.assertEqual([], config.membership_listeners)
+
+ with self.assertRaises(TypeError):
+ config.membership_listeners = [(None, None)]
+
+ with self.assertRaises(TypeError):
+ config.membership_listeners = [None]
+
+ with self.assertRaises(TypeError):
+ config.membership_listeners = [(1, 2, 3)]
+
+ with self.assertRaises(TypeError):
+ config.membership_listeners = None
+
+ config.membership_listeners = [(None, lambda x: x)]
+ added, removed = config.membership_listeners[0]
+ self.assertIsNone(added)
+ self.assertEqual("x", removed("x"))
+
+ config.membership_listeners = [(lambda x: x, None)]
+ added, removed = config.membership_listeners[0]
+ self.assertEqual("x", added("x"))
+ self.assertIsNone(removed)
+
+ config.membership_listeners = [(lambda x: x, lambda x: x)]
+ added, removed = config.membership_listeners[0]
+ self.assertEqual("x", added("x"))
+ self.assertEqual("x", removed("x"))
+
+ def test_lifecycle_listeners(self):
+ config = self.config
+ self.assertEqual([], config.lifecycle_listeners)
+
+ with self.assertRaises(TypeError):
+ config.lifecycle_listeners = [None]
+
+ with self.assertRaises(TypeError):
+ config.lifecycle_listeners = None
+
+ config.lifecycle_listeners = [lambda x: x]
+ cb = config.lifecycle_listeners[0]
+ self.assertEqual("x", cb("x"))
+
+ def test_flake_id_generators_invalid_configs(self):
+ config = self.config
+ self.assertEqual({}, config.flake_id_generators)
+
+ invalid_configs = [
+ ({123: "123"}, TypeError),
+ ({"123": 123}, TypeError),
+ (None, TypeError),
+ ({"x": {"prefetch_count": None}}, TypeError),
+ ({"x": {"prefetch_count": -1}}, ValueError),
+ ({"x": {"prefetch_count": 999999}}, ValueError),
+ ({"x": {"prefetch_validity": None}}, TypeError),
+ ({"x": {"prefetch_validity": -1}}, ValueError),
+ ({"x": {"invalid_option": -10}}, InvalidConfigurationError),
+ ]
+
+ for c, e in invalid_configs:
+ with self.assertRaises(e):
+ config.flake_id_generators = c
+
+ def test_flake_id_generators_defaults(self):
+ config = self.config
+ config.flake_id_generators = {"a": {}}
+ fig_config = config.flake_id_generators["a"]
+ self.assertEqual(100, fig_config.prefetch_count)
+ self.assertEqual(600, fig_config.prefetch_validity)
+
+ def test_flake_id_generators_with_a_few_changes(self):
+ config = self.config
+ config.flake_id_generators = {"a": {"prefetch_validity": 10}}
+ fig_config = config.flake_id_generators["a"]
+ self.assertEqual(100, fig_config.prefetch_count)
+ self.assertEqual(10, fig_config.prefetch_validity)
+
+ def test_flake_id_generators(self):
+ config = self.config
+ config.flake_id_generators = {"a": {
+ "prefetch_count": 20,
+ "prefetch_validity": 30,
+ }}
+ fig_config = config.flake_id_generators["a"]
+ self.assertEqual(20, fig_config.prefetch_count)
+ self.assertEqual(30, fig_config.prefetch_validity)
+
+ def test_labels(self):
+ config = self.config
+ self.assertEqual([], config.labels)
+
+ with self.assertRaises(TypeError):
+ config.labels = ["123", None]
+
+ with self.assertRaises(TypeError):
+ config.labels = None
+
+ labels = ["123", "345", "qwe"]
+ config.labels = labels
+ self.assertEqual(labels, config.labels)
+
+ def test_heartbeat_interval(self):
+ config = self.config
+ self.assertEqual(5, config.heartbeat_interval)
+
+ with self.assertRaises(ValueError):
+ config.heartbeat_interval = -1
+
+ with self.assertRaises(TypeError):
+ config.heartbeat_interval = None
+
+ config.heartbeat_interval = 20
+ self.assertEqual(20, config.heartbeat_interval)
+
+ def test_heartbeat_timeout(self):
+ config = self.config
+ self.assertEqual(60, config.heartbeat_timeout)
+
+ with self.assertRaises(ValueError):
+ config.heartbeat_timeout = 0
+
+ with self.assertRaises(TypeError):
+ config.heartbeat_timeout = None
+
+ config.heartbeat_timeout = 100
+ self.assertEqual(100, config.heartbeat_timeout)
+
+ def test_invocation_timeout(self):
+ config = self.config
+ self.assertEqual(120, config.invocation_timeout)
+
+ with self.assertRaises(ValueError):
+ config.invocation_timeout = 0
+
+ with self.assertRaises(TypeError):
+ config.invocation_timeout = None
+
+ config.invocation_timeout = 10
+ self.assertEqual(10, config.invocation_timeout)
+
+ def test_invocation_retry_pause(self):
+ config = self.config
+ self.assertEqual(1, config.invocation_retry_pause)
+
+ with self.assertRaises(ValueError):
+ config.invocation_retry_pause = -1
+
+ with self.assertRaises(TypeError):
+ config.invocation_retry_pause = None
+
+ config.invocation_retry_pause = 11
+ self.assertEqual(11, config.invocation_retry_pause)
+
+ def test_statistics_enabled(self):
+ config = self.config
+ self.assertFalse(config.statistics_enabled)
+
+ with self.assertRaises(TypeError):
+ config.statistics_enabled = None
+
+ config.statistics_enabled = True
+ self.assertTrue(config.statistics_enabled)
+
+ def test_statistics_period(self):
+ config = self.config
+ self.assertEqual(3, config.statistics_period)
+
+ with self.assertRaises(ValueError):
+ config.statistics_period = -1
+
+ with self.assertRaises(TypeError):
+ config.statistics_period = None
+
+ config.statistics_period = 5.5
+ self.assertEqual(5.5, config.statistics_period)
+
+ def test_shuffle_member_list(self):
+ config = self.config
+ self.assertTrue(config.shuffle_member_list)
+
+ with self.assertRaises(TypeError):
+ config.shuffle_member_list = None
+
+ config.shuffle_member_list = False
+ self.assertFalse(config.shuffle_member_list)
+
+ def test_logging_config(self):
+ config = self.config
+ self.assertIsNone(config.logging_config)
+
+ with self.assertRaises(TypeError):
+ config.logging_config = None
+
+ config.logging_config = {}
+ self.assertEqual({}, config.logging_config)
+
+ def test_logging_level(self):
+ config = self.config
+ self.assertEqual(logging.INFO, config.logging_level)
+
+ with self.assertRaises(TypeError):
+ config.logging_level = None
+
+ config.logging_level = logging.DEBUG
+ self.assertEqual(logging.DEBUG, config.logging_level)
+
+
+class IndexConfigTest(unittest.TestCase):
+ def test_defaults(self):
+ config = IndexConfig()
+ self.assertIsNone(config.name)
+ self.assertEqual(IndexType.SORTED, config.type)
+ self.assertEqual([], config.attributes)
+ self.assertEqual("__key", config.bitmap_index_options.unique_key)
+ self.assertEqual(UniqueKeyTransformation.OBJECT, config.bitmap_index_options.unique_key_transformation)
+
+ def test_add_attributes(self):
+ config = IndexConfig()
+
+ invalid_attributes = [
+ (None, AssertionError),
+ (" ", ValueError),
+ ("x.", ValueError),
+ (" x.x.", ValueError)
+ ]
+
+ for attr, error in invalid_attributes:
+ with self.assertRaises(error):
+ config.add_attribute(attr)
+
+ config.add_attribute("x.y")
+ config.add_attribute("x.y.z")
+ self.assertEqual(["x.y", "x.y.z"], config.attributes)
+
+ def test_with_changes(self):
+ name = "name"
+ idx_type = IndexType.HASH
+ attributes = ["attr", "attr.nested"]
+ bio = {
+ "unique_key": QueryConstants.THIS_ATTRIBUTE_NAME,
+ "unique_key_transformation": UniqueKeyTransformation.RAW
+ }
+ config = IndexConfig(name, idx_type, attributes, bio)
+
+ self.assertEqual(name, config.name)
+ self.assertEqual(idx_type, config.type)
+ self.assertEqual(attributes, config.attributes)
+ self.assertEqual(bio["unique_key"], config.bitmap_index_options.unique_key)
+ self.assertEqual(bio["unique_key_transformation"], config.bitmap_index_options.unique_key_transformation)
+
+ def test_bitmap_index_options(self):
+ invalid_options = [
+ ({"unique_key": None}, TypeError),
+ ({"unique_key_transformation": None}, TypeError),
+ ({"invalid_config": None}, InvalidConfigurationError),
+ ([], TypeError),
+ ]
+
+ for o, e in invalid_options:
+ with self.assertRaises(e):
+ IndexConfig(bitmap_index_options=o)
diff --git a/tests/connection_strategy_test.py b/tests/connection_strategy_test.py
index f194880ab2..7a248a5750 100644
--- a/tests/connection_strategy_test.py
+++ b/tests/connection_strategy_test.py
@@ -1,15 +1,14 @@
-from hazelcast import ClientConfig, HazelcastClient, six
-from hazelcast.config import RECONNECT_MODE
+from hazelcast import HazelcastClient, six
+from hazelcast.config import ReconnectMode
from hazelcast.errors import ClientOfflineError, HazelcastClientNotActiveError
from hazelcast.lifecycle import LifecycleState
from tests.base import HazelcastTestCase
-from tests.util import random_string, configure_logging
+from tests.util import random_string
class ConnectionStrategyTest(HazelcastTestCase):
@classmethod
def setUpClass(cls):
- configure_logging()
cls.rc = cls.create_rc()
@classmethod
@@ -30,17 +29,13 @@ def tearDown(self):
self.cluster = None
def test_async_start_with_no_cluster(self):
- config = ClientConfig()
- config.connection_strategy.async_start = True
- self.client = HazelcastClient(config)
+ self.client = HazelcastClient(async_start=True)
with self.assertRaises(ClientOfflineError):
self.client.get_map(random_string())
def test_async_start_with_no_cluster_throws_after_shutdown(self):
- config = ClientConfig()
- config.connection_strategy.async_start = True
- self.client = HazelcastClient(config)
+ self.client = HazelcastClient(async_start=True)
self.client.shutdown()
with self.assertRaises(HazelcastClientNotActiveError):
@@ -49,10 +44,6 @@ def test_async_start_with_no_cluster_throws_after_shutdown(self):
def test_async_start(self):
self.cluster = self.rc.createCluster(None, None)
self.rc.startMember(self.cluster.id)
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.network.addresses.append("localhost:5701")
- config.connection_strategy.async_start = True
def collector():
events = []
@@ -64,8 +55,11 @@ def on_state_change(event):
on_state_change.events = events
return on_state_change
event_collector = collector()
- config.add_lifecycle_listener(event_collector)
- self.client = HazelcastClient(config)
+
+ self.client = HazelcastClient(cluster_name=self.cluster.id,
+ cluster_members=["localhost:5701"],
+ async_start=True,
+ lifecycle_listeners=[event_collector])
self.assertTrueEventually(lambda: self.assertEqual(1, len(event_collector.events)))
self.client.get_map(random_string())
@@ -73,11 +67,6 @@ def on_state_change(event):
def test_off_reconnect_mode(self):
self.cluster = self.rc.createCluster(None, None)
member = self.rc.startMember(self.cluster.id)
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.network.addresses.append("localhost:5701")
- config.connection_strategy.reconnect_mode = RECONNECT_MODE.OFF
- config.connection_strategy.connection_retry.cluster_connect_timeout = six.MAXSIZE
def collector():
events = []
@@ -89,8 +78,12 @@ def on_state_change(event):
on_state_change.events = events
return on_state_change
event_collector = collector()
- config.add_lifecycle_listener(event_collector)
- self.client = HazelcastClient(config)
+
+ self.client = HazelcastClient(cluster_members=["localhost:5701"],
+ cluster_name=self.cluster.id,
+ reconnect_mode=ReconnectMode.OFF,
+ cluster_connect_timeout=six.MAXSIZE,
+ lifecycle_listeners=[event_collector])
m = self.client.get_map(random_string()).blocking()
# no exception at this point
m.put(1, 1)
@@ -103,11 +96,6 @@ def on_state_change(event):
def test_async_reconnect_mode(self):
self.cluster = self.rc.createCluster(None, None)
member = self.rc.startMember(self.cluster.id)
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.network.addresses.append("localhost:5701")
- config.connection_strategy.reconnect_mode = RECONNECT_MODE.ASYNC
- config.connection_strategy.connection_retry.cluster_connect_timeout = six.MAXSIZE
def collector(event_type):
events = []
@@ -119,8 +107,12 @@ def on_state_change(event):
on_state_change.events = events
return on_state_change
disconnected_collector = collector(LifecycleState.DISCONNECTED)
- config.add_lifecycle_listener(disconnected_collector)
- self.client = HazelcastClient(config)
+
+ self.client = HazelcastClient(cluster_members=["localhost:5701"],
+ cluster_name=self.cluster.id,
+ reconnect_mode=ReconnectMode.ASYNC,
+ cluster_connect_timeout=six.MAXSIZE,
+ lifecycle_listeners=[disconnected_collector])
m = self.client.get_map(random_string()).blocking()
# no exception at this point
m.put(1, 1)
@@ -139,9 +131,7 @@ def on_state_change(event):
m.put(1, 1)
def test_async_start_with_partition_specific_proxies(self):
- config = ClientConfig()
- config.connection_strategy.async_start = True
- self.client = HazelcastClient(config)
+ self.client = HazelcastClient(async_start=True)
with self.assertRaises(ClientOfflineError):
self.client.get_list(random_string())
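The tests above all switch from a mutable `ClientConfig` object to plain keyword arguments. A minimal sketch of that pattern, outside any diff context: collecting the options in a dict and unpacking it with `**` lets several clients share one configuration, which is exactly what the migrated tests do. The `describe` helper here is a hypothetical stand-in for `HazelcastClient(**config)`, used only to show the unpacking.

```python
# Sketch of the kwargs-based configuration style used in the migrated tests.
# Collecting options in a plain dict makes one configuration reusable
# across clients via ** unpacking.
config = {
    "cluster_name": "dev",
    "cluster_members": ["localhost:5701"],
    "async_start": True,
}

def describe(**kwargs):
    # Hypothetical stand-in for HazelcastClient(**config);
    # it just echoes the option names it received.
    return sorted(kwargs)

assert describe(**config) == ["async_start", "cluster_members", "cluster_name"]
```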
diff --git a/tests/discovery/address_provider_test.py b/tests/discovery/address_provider_test.py
index 5d8b30514f..1b7efd482c 100644
--- a/tests/discovery/address_provider_test.py
+++ b/tests/discovery/address_provider_test.py
@@ -1,7 +1,6 @@
from unittest import TestCase
from hazelcast.connection import DefaultAddressProvider
from hazelcast.discovery import HazelcastCloudAddressProvider
-from hazelcast.config import ClientConfig
from hazelcast import HazelcastClient
from hazelcast.errors import IllegalStateError
@@ -17,20 +16,13 @@ def test_default_config(self):
self.assertTrue(isinstance(client._address_provider, DefaultAddressProvider))
def test_with_nonempty_network_config_addresses(self):
- config = ClientConfig()
- config.network.addresses.append("127.0.0.1:5701")
- client = _TestClient(config)
+ client = _TestClient(cluster_members=["127.0.0.1:5701"])
self.assertTrue(isinstance(client._address_provider, DefaultAddressProvider))
def test_enabled_cloud_config(self):
- config = ClientConfig()
- config.network.cloud.enabled = True
- client = _TestClient(config)
+ client = _TestClient(cloud_discovery_token="TOKEN")
self.assertTrue(isinstance(client._address_provider, HazelcastCloudAddressProvider))
def test_multiple_providers(self):
- config = ClientConfig()
- config.network.cloud.enabled = True
- config.network.addresses.append("127.0.0.1")
with self.assertRaises(IllegalStateError):
- _TestClient(config)
+ _TestClient(cluster_members=["127.0.0.1"], cloud_discovery_token="TOKEN")
diff --git a/tests/discovery/hazelcast_cloud_config_test.py b/tests/discovery/hazelcast_cloud_config_test.py
deleted file mode 100644
index 6353f54d20..0000000000
--- a/tests/discovery/hazelcast_cloud_config_test.py
+++ /dev/null
@@ -1,65 +0,0 @@
-from unittest import TestCase
-from hazelcast.client import HazelcastClient, ClientProperties
-from hazelcast.config import ClientConfig, ClientCloudConfig
-from hazelcast.discovery import HazelcastCloudDiscovery
-from hazelcast.errors import IllegalStateError
-
-
-class HazelcastCloudConfigTest(TestCase):
-
- def setUp(self):
- self.token = "TOKEN"
- self.config = ClientConfig()
-
- def test_cloud_config_defaults(self):
- cloud_config = self.config.network.cloud
- self.assertEqual(False, cloud_config.enabled)
- self.assertEqual("", cloud_config.discovery_token)
-
- def test_cloud_config(self):
- cloud_config = ClientCloudConfig()
- cloud_config.enabled = True
- cloud_config.discovery_token = self.token
- self.config.network.cloud = cloud_config
- self.assertEqual(True, self.config.network.cloud.enabled)
- self.assertEqual(self.token, self.config.network.cloud.discovery_token)
-
- def test_cloud_config_with_property(self):
- self.config.set_property(ClientProperties.HAZELCAST_CLOUD_DISCOVERY_TOKEN.name, self.token)
- token = self.config.get_property_or_default(ClientProperties.HAZELCAST_CLOUD_DISCOVERY_TOKEN.name,
- ClientProperties.HAZELCAST_CLOUD_DISCOVERY_TOKEN.default_value)
- self.assertEqual(self.token, token)
-
- def test_cloud_config_with_property_and_client_configuration(self):
- self.config.network.cloud.enabled = True
- self.config.connection_strategy.connection_retry.cluster_connect_timeout = 2
- self.config.set_property(ClientProperties.HAZELCAST_CLOUD_DISCOVERY_TOKEN.name, self.token)
- with self.assertRaises(IllegalStateError):
- HazelcastClient(self.config)
-
- def test_custom_cloud_url(self):
- self.config.set_property(ClientProperties.HAZELCAST_CLOUD_DISCOVERY_TOKEN.name, self.token)
- self.config.set_property(HazelcastCloudDiscovery.CLOUD_URL_BASE_PROPERTY.name, "dev.hazelcast.cloud")
- host, url = HazelcastCloudDiscovery.get_host_and_url(self.config._properties, self.token)
- self.assertEqual("dev.hazelcast.cloud", host)
- self.assertEqual("/cluster/discovery?token=TOKEN", url)
-
- def test_custom_cloud_url_with_https(self):
- self.config.set_property(ClientProperties.HAZELCAST_CLOUD_DISCOVERY_TOKEN.name, self.token)
- self.config.set_property(HazelcastCloudDiscovery.CLOUD_URL_BASE_PROPERTY.name, "https://dev.hazelcast.cloud")
- host, url = HazelcastCloudDiscovery.get_host_and_url(self.config._properties, self.token)
- self.assertEqual("dev.hazelcast.cloud", host)
- self.assertEqual("/cluster/discovery?token=TOKEN", url)
-
- def test_custom_url_with_http(self):
- self.config.set_property(ClientProperties.HAZELCAST_CLOUD_DISCOVERY_TOKEN.name, self.token)
- self.config.set_property(HazelcastCloudDiscovery.CLOUD_URL_BASE_PROPERTY.name, "http://dev.hazelcast.cloud")
- host, url = HazelcastCloudDiscovery.get_host_and_url(self.config._properties, self.token)
- self.assertEqual("dev.hazelcast.cloud", host)
- self.assertEqual("/cluster/discovery?token=TOKEN", url)
-
- def test_default_cloud_url(self):
- self.config.set_property(ClientProperties.HAZELCAST_CLOUD_DISCOVERY_TOKEN.name, self.token)
- host, url = HazelcastCloudDiscovery.get_host_and_url(self.config._properties, self.token)
- self.assertEqual("coordinator.hazelcast.cloud", host)
- self.assertEqual("/cluster/discovery?token=TOKEN", url)
diff --git a/tests/discovery/hazelcast_cloud_discovery_test.py b/tests/discovery/hazelcast_cloud_discovery_test.py
index 38192ab08e..e36333daa2 100644
--- a/tests/discovery/hazelcast_cloud_discovery_test.py
+++ b/tests/discovery/hazelcast_cloud_discovery_test.py
@@ -8,7 +8,6 @@
from hazelcast.core import Address
from hazelcast.errors import HazelcastCertificationError
from hazelcast.discovery import HazelcastCloudDiscovery
-from hazelcast.config import ClientConfig
from hazelcast.client import HazelcastClient
from tests.util import get_abs_path
@@ -101,54 +100,57 @@ def tearDownClass(cls):
cls.server.close_server()
def test_found_response(self):
- discovery = HazelcastCloudDiscovery(*get_params(HOST, self.server.port, CLOUD_URL, TOKEN))
+ discovery = create_discovery(HOST, self.server.port, CLOUD_URL, TOKEN)
discovery._ctx = self.ctx
addresses = discovery.discover_nodes()
six.assertCountEqual(self, ADDRESSES, addresses)
def test_private_link_response(self):
- discovery = HazelcastCloudDiscovery(*get_params(HOST, self.server.port, CLOUD_URL, PRIVATE_LINK_TOKEN))
+ discovery = create_discovery(HOST, self.server.port, CLOUD_URL, PRIVATE_LINK_TOKEN)
discovery._ctx = self.ctx
addresses = discovery.discover_nodes()
six.assertCountEqual(self, PRIVATE_LINK_ADDRESSES, addresses)
def test_not_found_response(self):
- discovery = HazelcastCloudDiscovery(*get_params(HOST, self.server.port, CLOUD_URL, "INVALID_TOKEN"))
+ discovery = create_discovery(HOST, self.server.port, CLOUD_URL, "INVALID_TOKEN")
discovery._ctx = self.ctx
with self.assertRaises(IOError):
discovery.discover_nodes()
def test_invalid_url(self):
- discovery = HazelcastCloudDiscovery(*get_params(HOST, self.server.port, "/INVALID_URL", ""))
+ discovery = create_discovery(HOST, self.server.port, "/INVALID_URL", "")
discovery._ctx = self.ctx
with self.assertRaises(IOError):
discovery.discover_nodes()
def test_invalid_certificates(self):
- discovery = HazelcastCloudDiscovery(*get_params(HOST, self.server.port, CLOUD_URL, TOKEN))
+ discovery = create_discovery(HOST, self.server.port, CLOUD_URL, TOKEN)
with self.assertRaises(HazelcastCertificationError):
discovery.discover_nodes()
def test_client_with_cloud_discovery(self):
- config = ClientConfig()
- config.network.cloud.enabled = True
- config.network.cloud.discovery_token = TOKEN
- config.set_property(HazelcastCloudDiscovery.CLOUD_URL_BASE_PROPERTY.name, HOST + ":" + str(self.server.port))
- client = TestClient(config)
- client._address_provider.cloud_discovery._ctx = self.ctx
- private_addresses, secondaries = client._address_provider.load_addresses()
- six.assertCountEqual(self, list(ADDRESSES.keys()), private_addresses)
- six.assertCountEqual(self, secondaries, [])
- for private_address in private_addresses:
- translated_address = client._address_provider.translate(private_address)
- self.assertEqual(ADDRESSES[private_address], translated_address)
+ old = HazelcastCloudDiscovery._CLOUD_URL_BASE
+ try:
+ HazelcastCloudDiscovery._CLOUD_URL_BASE = "%s:%s" % (HOST, self.server.port)
+ client = TestClient(cloud_discovery_token=TOKEN)
+ client._address_provider.cloud_discovery._ctx = self.ctx
+ private_addresses, secondaries = client._address_provider.load_addresses()
+ six.assertCountEqual(self, list(ADDRESSES.keys()), private_addresses)
+ six.assertCountEqual(self, secondaries, [])
+ for private_address in private_addresses:
+ translated_address = client._address_provider.translate(private_address)
+ self.assertEqual(ADDRESSES[private_address], translated_address)
+ finally:
+ HazelcastCloudDiscovery._CLOUD_URL_BASE = old
-
-def get_params(host, port, url, token, timeout=5.0):
- return host + ":" + str(port), url + token, timeout
+def create_discovery(host, port, url, token, timeout=5.0):
+ discovery = HazelcastCloudDiscovery(token, timeout)
+ discovery._CLOUD_URL_BASE = "%s:%s" % (host, port)
+ discovery._CLOUD_URL_PATH = url
+ return discovery
diff --git a/tests/discovery/hazelcast_cloud_provider_test.py b/tests/discovery/hazelcast_cloud_provider_test.py
index 7e43963a69..c25c7ca781 100644
--- a/tests/discovery/hazelcast_cloud_provider_test.py
+++ b/tests/discovery/hazelcast_cloud_provider_test.py
@@ -17,9 +17,9 @@ def setUp(self):
self.expected_addresses[Address("10.0.0.1", 5702)] = Address("198.51.100.1", 5702)
self.expected_addresses[Address("10.0.0.2", 5701)] = Address("198.51.100.2", 5701)
self.expected_addresses[self.private_address] = self.public_address
- self.cloud_discovery = HazelcastCloudDiscovery("", "", 0)
+ self.cloud_discovery = HazelcastCloudDiscovery("", 0)
self.cloud_discovery.discover_nodes = lambda: self.expected_addresses
- self.provider = HazelcastCloudAddressProvider("", "", 0)
+ self.provider = HazelcastCloudAddressProvider("", 0, None)
self.provider.cloud_discovery = self.cloud_discovery
def test_load_addresses(self):
@@ -58,9 +58,9 @@ def test_translate_when_not_found(self):
self.assertIsNone(actual)
def test_refresh_with_exception(self):
- cloud_discovery = HazelcastCloudDiscovery("", "", 0)
+ cloud_discovery = HazelcastCloudDiscovery("", 0)
cloud_discovery.discover_nodes = self.mock_discover_nodes_with_exception
- provider = HazelcastCloudAddressProvider("", "", 0)
+ provider = HazelcastCloudAddressProvider("", 0, None)
provider.cloud_discovery = cloud_discovery
provider.refresh()
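The provider tests above exercise private-to-public address translation. The core idea can be sketched independently of the client: keep a mapping discovered from the cloud service and return the public address for a known private one, or `None` otherwise. This is a simplified illustration, not the client's actual `HazelcastCloudAddressProvider` implementation.

```python
# Simplified sketch of private-to-public address translation
# (an assumption based on the tests, not the real provider code).
private_to_public = {
    ("10.0.0.1", 5701): ("198.51.100.1", 5701),
    ("10.0.0.2", 5701): ("198.51.100.2", 5701),
}

def translate(address):
    # Known private address -> its public counterpart; unknown -> None.
    return private_to_public.get(address)

assert translate(("10.0.0.1", 5701)) == ("198.51.100.1", 5701)
assert translate(("192.168.0.1", 5701)) is None
```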
diff --git a/tests/hazelcast_json_value_test.py b/tests/hazelcast_json_value_test.py
index ebaf019779..7048e15af3 100644
--- a/tests/hazelcast_json_value_test.py
+++ b/tests/hazelcast_json_value_test.py
@@ -46,7 +46,7 @@ def setUpClass(cls):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
+ config["cluster_name"] = cls.cluster.id
return config
def setUp(self):
diff --git a/tests/heartbeat_test.py b/tests/heartbeat_test.py
index 2383c3c27c..9dbdf549ef 100644
--- a/tests/heartbeat_test.py
+++ b/tests/heartbeat_test.py
@@ -1,14 +1,12 @@
from hazelcast import HazelcastClient
from hazelcast.core import Address
from tests.base import HazelcastTestCase
-from hazelcast.config import ClientConfig, ClientProperties
-from tests.util import configure_logging, open_connection_to_address, wait_for_partition_table
+from tests.util import open_connection_to_address, wait_for_partition_table
class HeartbeatTest(HazelcastTestCase):
@classmethod
def setUpClass(cls):
- configure_logging()
cls.rc = cls.create_rc()
@classmethod
@@ -18,13 +16,9 @@ def tearDownClass(cls):
def setUp(self):
self.cluster = self.create_cluster(self.rc)
self.member = self.rc.startMember(self.cluster.id)
- self.config = ClientConfig()
- self.config.cluster_name = self.cluster.id
-
- self.config.set_property(ClientProperties.HEARTBEAT_INTERVAL.name, 500)
- self.config.set_property(ClientProperties.HEARTBEAT_TIMEOUT.name, 2000)
-
- self.client = HazelcastClient(self.config)
+ self.client = HazelcastClient(cluster_name=self.cluster.id,
+ heartbeat_interval=0.5,
+ heartbeat_timeout=2)
def tearDown(self):
self.client.shutdown()
@@ -39,7 +33,7 @@ def test_heartbeat_stopped_and_restored(self):
def connection_collector():
connections = []
- def collector(c, *args):
+ def collector(c, *_):
connections.append(c)
collector.connections = connections
diff --git a/tests/invocation_test.py b/tests/invocation_test.py
index 0bb1838275..cd2056c223 100644
--- a/tests/invocation_test.py
+++ b/tests/invocation_test.py
@@ -1,7 +1,6 @@
import time
import hazelcast
-from hazelcast.config import ClientProperties
from hazelcast.errors import HazelcastTimeoutError
from hazelcast.invocation import Invocation
from hazelcast.protocol.client_message import OutboundMessage
@@ -21,14 +20,16 @@ def tearDownClass(cls):
cls.rc.exit()
def setUp(self):
- config = hazelcast.ClientConfig()
- config.cluster_name = self.cluster.id
- config.set_property(ClientProperties.INVOCATION_TIMEOUT_SECONDS.name, 1)
- self.client = hazelcast.HazelcastClient(config)
+ self.client = hazelcast.HazelcastClient(cluster_name=self.cluster.id, invocation_timeout=1)
def tearDown(self):
self.client.shutdown()
+ @classmethod
+ def configure_client(cls, config):
+ config["cluster_name"] = cls.cluster.id
+ config["invocation_timeout"] = 1
+ return config
+
def test_invocation_timeout(self):
request = OutboundMessage(bytearray(22), True)
invocation_service = self.client._invocation_service
diff --git a/tests/lifecycle_test.py b/tests/lifecycle_test.py
index cbf48b027f..746bfb6510 100644
--- a/tests/lifecycle_test.py
+++ b/tests/lifecycle_test.py
@@ -1,14 +1,12 @@
-from hazelcast import ClientConfig
from hazelcast.lifecycle import LifecycleState
from tests.base import HazelcastTestCase
-from tests.util import configure_logging, event_collector
+from tests.util import event_collector
class LifecycleTest(HazelcastTestCase):
rc = None
def setUp(self):
- configure_logging()
self.rc = self.create_rc()
self.cluster = self.create_cluster(self.rc)
@@ -18,11 +16,13 @@ def tearDown(self):
def test_lifecycle_listener_receives_events_in_order(self):
collector = event_collector()
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.lifecycle_listeners.append(collector)
self.cluster.start_member()
- client = self.create_client(config)
+ client = self.create_client({
+ "cluster_name": self.cluster.id,
+ "lifecycle_listeners": [
+ collector,
+ ]
+ })
client.shutdown()
self.assertEqual(collector.events,
@@ -33,9 +33,9 @@ def test_lifecycle_listener_receives_events_in_order_after_startup(self):
self.cluster.start_member()
collector = event_collector()
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- client = self.create_client(config)
+ client = self.create_client({
+ "cluster_name": self.cluster.id,
+ })
client.lifecycle_service.add_listener(collector)
client.shutdown()
@@ -46,9 +46,9 @@ def test_lifecycle_listener_receives_disconnected_event(self):
member = self.cluster.start_member()
collector = event_collector()
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- client = self.create_client(config)
+ client = self.create_client({
+ "cluster_name": self.cluster.id,
+ })
client.lifecycle_service.add_listener(collector)
member.shutdown()
self.assertEqual(collector.events, [LifecycleState.DISCONNECTED])
@@ -58,9 +58,9 @@ def test_remove_lifecycle_listener(self):
collector = event_collector()
self.cluster.start_member()
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- client = self.create_client(config)
+ client = self.create_client({
+ "cluster_name": self.cluster.id,
+ })
registration_id = client.lifecycle_service.add_listener(collector)
client.lifecycle_service.remove_listener(registration_id)
client.shutdown()
@@ -70,8 +70,10 @@ def test_remove_lifecycle_listener(self):
def test_exception_in_listener(self):
def listener(_):
raise RuntimeError("error")
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.lifecycle_listeners = [listener]
self.cluster.start_member()
- self.create_client(config)
+ self.create_client({
+ "cluster_name": self.cluster.id,
+ "lifecycle_listeners": [
+ listener,
+ ],
+ })
diff --git a/tests/listener_test.py b/tests/listener_test.py
index 96d0416166..0b0638909e 100644
--- a/tests/listener_test.py
+++ b/tests/listener_test.py
@@ -1,18 +1,16 @@
from tests.base import HazelcastTestCase
-from tests.util import configure_logging, random_string, event_collector, generate_key_owned_by_instance, \
- wait_for_partition_table
-from hazelcast.config import ClientConfig
+from tests.util import random_string, event_collector, generate_key_owned_by_instance, wait_for_partition_table
class ListenerTest(HazelcastTestCase):
def setUp(self):
- configure_logging()
self.rc = self.create_rc()
self.cluster = self.create_cluster(self.rc, None)
self.m1 = self.cluster.start_member()
self.m2 = self.cluster.start_member()
- self.client_config = ClientConfig()
- self.client_config.cluster_name = self.cluster.id
+ self.client_config = {
+ "cluster_name": self.cluster.id,
+ }
self.collector = event_collector()
def tearDown(self):
@@ -21,7 +19,7 @@ def tearDown(self):
# -------------------------- test_remove_member ----------------------- #
def test_smart_listener_remove_member(self):
- self.client_config.network.smart_routing = True
+ self.client_config["smart_routing"] = True
client = self.create_client(self.client_config)
wait_for_partition_table(client)
key_m1 = generate_key_owned_by_instance(client, self.m1.uuid)
@@ -36,7 +34,7 @@ def assert_event():
self.assertTrueEventually(assert_event)
def test_non_smart_listener_remove_member(self):
- self.client_config.network.smart_routing = False
+ self.client_config["smart_routing"] = False
client = self.create_client(self.client_config)
map = client.get_map(random_string()).blocking()
map.add_entry_listener(added_func=self.collector)
@@ -52,7 +50,7 @@ def assert_event():
# -------------------------- test_add_member ----------------------- #
def test_smart_listener_add_member(self):
- self.client_config.network.smart_routing = True
+ self.client_config["smart_routing"] = True
client = self.create_client(self.client_config)
map = client.get_map(random_string()).blocking()
map.add_entry_listener(added_func=self.collector)
@@ -66,7 +64,7 @@ def assert_event():
self.assertTrueEventually(assert_event)
def test_non_smart_listener_add_member(self):
- self.client_config.network.smart_routing = False
+ self.client_config["smart_routing"] = False
client = self.create_client(self.client_config)
map = client.get_map(random_string()).blocking()
map.add_entry_listener(added_func=self.collector)
diff --git a/tests/logger/logger_test.py b/tests/logger/logger_test.py
index 48ca91156b..f78957f36b 100644
--- a/tests/logger/logger_test.py
+++ b/tests/logger/logger_test.py
@@ -1,10 +1,10 @@
import datetime
+import json
import logging
import os
from inspect import currentframe, getframeinfo
from hazelcast import HazelcastClient
-from hazelcast.config import LoggerConfig, ClientConfig
from hazelcast.six import StringIO
from hazelcast.version import CLIENT_VERSION
from tests.util import get_abs_path
@@ -25,16 +25,8 @@ def tearDownClass(cls):
cls.rc.exit()
def test_default_config(self):
- logger_config = LoggerConfig()
- self.assertEqual(logging.INFO, logger_config.level)
- self.assertIsNone(logger_config.config_file)
-
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.logger = logger_config
-
- client = HazelcastClient(config)
+ client = HazelcastClient(cluster_name=self.cluster.id)
self.assertEqual(logging.INFO, client.logger.level)
self.assertTrue(client.logger.isEnabledFor(logging.INFO))
self.assertTrue(client.logger.isEnabledFor(logging.WARNING))
@@ -61,18 +53,8 @@ def test_default_config(self):
client.shutdown()
def test_non_default_configuration_level(self):
- logger_config = LoggerConfig()
-
- logger_config.level = logging.CRITICAL
-
- self.assertEqual(logging.CRITICAL, logger_config.level)
- self.assertIsNone(logger_config.config_file)
-
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.logger = logger_config
- client = HazelcastClient(config)
+ client = HazelcastClient(cluster_name=self.cluster.id, logging_level=logging.CRITICAL)
self.assertEqual(logging.CRITICAL, client.logger.level)
self.assertFalse(client.logger.isEnabledFor(logging.INFO))
self.assertFalse(client.logger.isEnabledFor(logging.WARNING))
@@ -99,19 +81,13 @@ def test_non_default_configuration_level(self):
client.shutdown()
def test_simple_custom_logging_configuration(self):
- logger_config = LoggerConfig()
-
# Outputs to stdout with the level of error
config_path = get_abs_path(self.CUR_DIR, "simple_config.json")
- logger_config.config_file = config_path
-
- self.assertEqual(config_path, logger_config.config_file)
-
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.logger = logger_config
- client = HazelcastClient(config)
+ with open(config_path, "r") as f:
+ logging_config_data = f.read()
+ logging_config = json.loads(logging_config_data)
+ client = HazelcastClient(cluster_name=self.cluster.id, logging_config=logging_config)
self.assertEqual(logging.ERROR, client.logger.getEffectiveLevel())
self.assertFalse(client.logger.isEnabledFor(logging.INFO))
self.assertFalse(client.logger.isEnabledFor(logging.WARNING))
@@ -138,10 +114,11 @@ def test_simple_custom_logging_configuration(self):
client.shutdown()
def test_default_configuration_multiple_clients(self):
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- client1 = HazelcastClient(config)
- client2 = HazelcastClient(config)
+ config = {
+ "cluster_name": self.cluster.id
+ }
+ client1 = HazelcastClient(**config)
+ client2 = HazelcastClient(**config)
out = StringIO()
@@ -160,14 +137,18 @@ def test_default_configuration_multiple_clients(self):
client2.shutdown()
def test_same_custom_configuration_file_with_multiple_clients(self):
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config_file = get_abs_path(self.CUR_DIR, "simple_config.json")
- config.logger.configuration_file = config_file
- client1 = HazelcastClient(config)
- client2 = HazelcastClient(config)
+ config_path = get_abs_path(self.CUR_DIR, "simple_config.json")
+ with open(config_path, "r") as f:
+ logging_config_data = f.read()
+ logging_config = json.loads(logging_config_data)
+ config = {
+ "cluster_name": self.cluster.id,
+ "logging_config": logging_config,
+ }
+ client1 = HazelcastClient(**config)
+ client2 = HazelcastClient(**config)
out = StringIO()
@@ -185,9 +166,7 @@ def test_same_custom_configuration_file_with_multiple_clients(self):
client2.shutdown()
def test_default_logger_output(self):
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- client = HazelcastClient(config)
+ client = HazelcastClient(cluster_name=self.cluster.id)
out = StringIO()
@@ -211,13 +190,12 @@ def test_default_logger_output(self):
client.shutdown()
def test_custom_configuration_output(self):
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config_file = get_abs_path(self.CUR_DIR, "detailed_config.json")
-
- config.logger.config_file = config_file
- client = HazelcastClient(config)
+ config_path = get_abs_path(self.CUR_DIR, "detailed_config.json")
+ with open(config_path, "r") as f:
+ logging_config_data = f.read()
+ logging_config = json.loads(logging_config_data)
+ client = HazelcastClient(cluster_name=self.cluster.id, logging_config=logging_config)
std_out = StringIO()
std_err = StringIO()
diff --git a/tests/near_cache_test.py b/tests/near_cache_test.py
index ef59b4b3ee..3a6462eced 100644
--- a/tests/near_cache_test.py
+++ b/tests/near_cache_test.py
@@ -1,39 +1,19 @@
import unittest
from time import sleep
-from hazelcast import SerializationConfig
-from hazelcast.config import NearCacheConfig
+from hazelcast.config import _Config
from hazelcast.near_cache import *
from hazelcast.serialization import SerializationServiceV1
-from tests.util import random_string, configure_logging
from hazelcast.six.moves import range
class NearCacheTestCase(unittest.TestCase):
def setUp(self):
- configure_logging()
- self.service = SerializationServiceV1(serialization_config=SerializationConfig())
+ self.service = SerializationServiceV1(_Config())
def tearDown(self):
self.service.destroy()
- def test_near_cache_config(self):
- config = NearCacheConfig(random_string())
- with self.assertRaises(ValueError):
- config.in_memory_format = 100
-
- with self.assertRaises(ValueError):
- config.eviction_policy = 100
-
- with self.assertRaises(ValueError):
- config.time_to_live_seconds = -1
-
- with self.assertRaises(ValueError):
- config.max_idle_seconds = -1
-
- with self.assertRaises(ValueError):
- config.eviction_max_size = 0
-
def test_DataRecord_expire_time(self):
now = current_time()
data_rec = DataRecord("key", "value", create_time=now, ttl_seconds=1)
@@ -47,13 +27,13 @@ def test_DataRecord_max_idle_seconds(self):
self.assertTrue(data_rec.is_expired(max_idle_seconds=1))
def test_put_get_data(self):
- near_cache = self.create_near_cache(self.service, IN_MEMORY_FORMAT.BINARY, 1000, 1000, EVICTION_POLICY.LRU, 1000)
+ near_cache = self.create_near_cache(self.service, InMemoryFormat.BINARY, 1000, 1000, EvictionPolicy.LRU, 1000)
key_data = self.service.to_data("key")
near_cache[key_data] = "value"
self.assertEqual("value", near_cache[key_data])
def test_put_get(self):
- near_cache = self.create_near_cache(self.service, IN_MEMORY_FORMAT.OBJECT, 1000, 1000, EVICTION_POLICY.LRU, 1000)
+ near_cache = self.create_near_cache(self.service, InMemoryFormat.OBJECT, 1000, 1000, EvictionPolicy.LRU, 1000)
for i in range(0, 10000):
key = "key-{}".format(i)
value = "value-{}".format(i)
@@ -63,7 +43,7 @@ def test_put_get(self):
self.assertGreaterEqual(near_cache.eviction_max_size * 1.1, near_cache.__len__())
def test_expiry_time(self):
- near_cache = self.create_near_cache(self.service, IN_MEMORY_FORMAT.OBJECT, 1, 1000, EVICTION_POLICY.LRU, 1000)
+ near_cache = self.create_near_cache(self.service, InMemoryFormat.OBJECT, 1, 1000, EvictionPolicy.LRU, 1000)
for i in range(0, 1000):
key = "key-{}".format(i)
value = "value-{}".format(i)
@@ -79,7 +59,7 @@ def test_expiry_time(self):
self.assertGreater(expire, 8)
def test_max_idle_time(self):
- near_cache = self.create_near_cache(self.service, IN_MEMORY_FORMAT.OBJECT, 1000, 2, EVICTION_POLICY.LRU, 1000)
+ near_cache = self.create_near_cache(self.service, InMemoryFormat.OBJECT, 1000, 2, EvictionPolicy.LRU, 1000)
for i in range(0, 1000):
key = "key-{}".format(i)
value = "value-{}".format(i)
@@ -92,7 +72,7 @@ def test_max_idle_time(self):
self.assertEqual(expire, near_cache.eviction_sampling_count)
def test_LRU_time(self):
- near_cache = self.create_near_cache(self.service, IN_MEMORY_FORMAT.OBJECT, 1000, 1000, EVICTION_POLICY.LRU, 10000, 16, 16)
+ near_cache = self.create_near_cache(self.service, InMemoryFormat.OBJECT, 1000, 1000, EvictionPolicy.LRU, 10000, 16, 16)
for i in range(0, 10000):
key = "key-{}".format(i)
value = "value-{}".format(i)
@@ -108,7 +88,7 @@ def test_LRU_time(self):
self.assertLess(evict, 10000)
def test_LRU_time_with_update(self):
- near_cache = self.create_near_cache(self.service, IN_MEMORY_FORMAT.OBJECT, 1000, 1000, EVICTION_POLICY.LRU, 10, 10, 10)
+ near_cache = self.create_near_cache(self.service, InMemoryFormat.OBJECT, 1000, 1000, EvictionPolicy.LRU, 10, 10, 10)
for i in range(0, 10):
key = "key-{}".format(i)
value = "value-{}".format(i)
@@ -124,7 +104,7 @@ def test_LRU_time_with_update(self):
val = near_cache["key-9"]
def test_LFU_time(self):
- near_cache = self.create_near_cache(self.service, IN_MEMORY_FORMAT.BINARY, 1000, 1000, EVICTION_POLICY.LFU, 1000)
+ near_cache = self.create_near_cache(self.service, InMemoryFormat.BINARY, 1000, 1000, EvictionPolicy.LFU, 1000)
for i in range(0, 1000):
key = "key-{}".format(i)
value = "value-{}".format(i)
@@ -141,7 +121,7 @@ def test_LFU_time(self):
self.assertLess(evict, 1000)
def test_RANDOM_time(self):
- near_cache = self.create_near_cache(self.service, IN_MEMORY_FORMAT.BINARY, 1000, 1000, EVICTION_POLICY.LFU, 1000)
+ near_cache = self.create_near_cache(self.service, InMemoryFormat.BINARY, 1000, 1000, EvictionPolicy.RANDOM, 1000)
for i in range(0, 2000):
key = "key-{}".format(i)
value = "value-{}".format(i)
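The near-cache hunks above track 4.0's rename of the screaming-snake enum containers (`IN_MEMORY_FORMAT`, `EVICTION_POLICY`) to PascalCase (`InMemoryFormat`, `EvictionPolicy`). A minimal sketch of the naming change, using stand-in `enum` classes rather than the real `hazelcast.config` types (the member values here are assumptions):

```python
from enum import Enum

# Stand-ins for hazelcast.config.InMemoryFormat / EvictionPolicy
# (hypothetical values; the real classes live in hazelcast.config).
class InMemoryFormat(Enum):
    BINARY = 0
    OBJECT = 1

class EvictionPolicy(Enum):
    NONE = 0
    LRU = 1
    LFU = 2
    RANDOM = 3

# Call sites migrate by swapping the container name only.
fmt = InMemoryFormat.BINARY
policy = EvictionPolicy.LRU
```

Only the container name changes at call sites; member names (`BINARY`, `LRU`, ...) carry over unchanged, which is why these hunks are one-line substitutions.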
diff --git a/tests/predicate_test.py b/tests/predicate_test.py
index 6d569abbeb..0f2a9cba29 100644
--- a/tests/predicate_test.py
+++ b/tests/predicate_test.py
@@ -1,6 +1,5 @@
from unittest import TestCase, skip
-from hazelcast.config import IndexConfig
from hazelcast.serialization.predicate import is_equal_to, and_, is_between, is_less_than, \
is_less_than_or_equal_to, is_greater_than, is_greater_than_or_equal_to, or_, is_not_equal_to, not_, is_like, \
is_ilike, matches_regex, sql, true, false, is_in, is_instance_of
@@ -80,7 +79,7 @@ def test_false(self):
class PredicateTest(SingleMemberTestCase):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
+ config["cluster_name"] = cls.cluster.id
return config
def setUp(self):
@@ -90,10 +89,9 @@ def tearDown(self):
self.map.destroy()
def _fill_map(self, count=10):
- map = {"key-%d" % x: "value-%d" % x for x in range(0, count)}
- for k, v in six.iteritems(map):
- self.map.put(k, v)
- return map
+ m = {"key-%d" % x: "value-%d" % x for x in range(0, count)}
+ self.map.put_all(m)
+ return m
def _fill_map_numeric(self, count=100):
for n in range(0, count):
@@ -102,7 +100,7 @@ def _fill_map_numeric(self, count=100):
def test_key_set(self):
self._fill_map()
key_set = self.map.key_set()
- key_set_list = list(key_set)
+ list(key_set)
key_set_list = list(key_set)
assert key_set_list[0]
@@ -208,26 +206,25 @@ def test_instance_of(self):
six.assertCountEqual(self, self.map.key_set(predicate), ["key-1"])
def test_true(self):
- map = self._fill_map()
-
+ m = self._fill_map()
predicate = true()
-
- six.assertCountEqual(self, self.map.key_set(predicate), list(map.keys()))
+ six.assertCountEqual(self, self.map.key_set(predicate), list(m.keys()))
def test_false(self):
- map = self._fill_map()
-
+ self._fill_map()
predicate = false()
-
six.assertCountEqual(self, self.map.key_set(predicate), [])
class PredicatePortableTest(SingleMemberTestCase):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
- the_factory = {InnerPortable.CLASS_ID: InnerPortable}
- config.serialization.portable_factories[FACTORY_ID] = the_factory
+ config["cluster_name"] = cls.cluster.id
+ config["portable_factories"] = {
+ FACTORY_ID: {
+ InnerPortable.CLASS_ID: InnerPortable
+ }
+ }
return config
def setUp(self):
@@ -237,10 +234,9 @@ def tearDown(self):
self.map.destroy()
def _fill_map(self, count=1000):
- map = {InnerPortable("key-%d" % x, x): InnerPortable("value-%d" % x, x) for x in range(0, count)}
- for k, v in six.iteritems(map):
- self.map.put(k, v)
- return map
+ m = {InnerPortable("key-%d" % x, x): InnerPortable("value-%d" % x, x) for x in range(0, count)}
+ self.map.put_all(m)
+ return m
def test_predicate_portable_key(self):
_map = self._fill_map()
@@ -305,9 +301,13 @@ def __eq__(self, other):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
- factory = {1: NestedPredicatePortableTest.Body, 2: NestedPredicatePortableTest.Limb}
- config.serialization.portable_factories[FACTORY_ID] = factory
+ config["cluster_name"] = cls.cluster.id
+ config["portable_factories"] = {
+ FACTORY_ID: {
+ 1: NestedPredicatePortableTest.Body,
+ 2: NestedPredicatePortableTest.Limb,
+ },
+ }
return config
def setUp(self):
@@ -320,12 +320,10 @@ def tearDown(self):
def test_adding_indexes(self):
# single-attribute index
- single_index = IndexConfig(attributes=["name"])
- self.map.add_index(single_index)
+ self.map.add_index(attributes=["name"])
# nested-attribute index
- nested_index = IndexConfig(attributes=["limb.name"])
- self.map.add_index(nested_index)
+ self.map.add_index(attributes=["limb.name"])
def test_single_attribute_query_portable_predicates(self):
predicate = is_equal_to("limb.name", "hand")
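The `predicate_test.py` hunks above replace attribute-style `ClientConfig` mutation with a flat dict of keyword arguments, and nest portable factories as `{factory_id: {class_id: class}}`. A self-contained sketch of that shape, with `FACTORY_ID`, `InnerPortable`, and the cluster name as stand-ins for the test module's real definitions:

```python
# Stand-ins for the test module's names; values are illustrative only.
FACTORY_ID = 1

class InnerPortable:
    CLASS_ID = 1

# Flat 4.0-style configuration dict, as built in configure_client above.
config = {
    "cluster_name": "my-cluster",
    "portable_factories": {
        FACTORY_ID: {InnerPortable.CLASS_ID: InnerPortable},
    },
}

# The dict is then splatted into the constructor:
#   client = hazelcast.HazelcastClient(**config)
factory = config["portable_factories"][FACTORY_ID]
```

This is why the hunks change `config.cluster_name = ...` to `config["cluster_name"] = ...`: the `config` object handed to `configure_client` is now a plain dict.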
diff --git a/tests/property_tests.py b/tests/property_tests.py
deleted file mode 100644
index 90e9c29f84..0000000000
--- a/tests/property_tests.py
+++ /dev/null
@@ -1,105 +0,0 @@
-import os
-
-from hazelcast.util import TimeUnit
-from hazelcast.config import ClientProperty, ClientProperties, ClientConfig
-from unittest import TestCase
-
-
-class PropertyTest(TestCase):
- def test_client_property_defaults(self):
- prop = ClientProperty("name")
- self.assertEqual("name", prop.name)
- self.assertIsNone(prop.default_value)
- self.assertIsNone(prop.time_unit)
-
- def test_client_property(self):
- prop = ClientProperty("name", 0, TimeUnit.SECOND)
- self.assertEqual("name", prop.name)
- self.assertEqual(0, prop.default_value)
- self.assertEqual(TimeUnit.SECOND, prop.time_unit)
-
- def test_client_properties_with_config(self):
- config = ClientConfig()
- prop = ClientProperty("key")
- config.set_property(prop.name, "value")
-
- props = ClientProperties(config.get_properties())
- self.assertEqual("value", props.get(prop))
-
- def test_client_properties_with_default_value(self):
- config = ClientConfig()
- prop = ClientProperty("key", "def-value")
-
- props = ClientProperties(config.get_properties())
- self.assertEqual("def-value", props.get(prop))
-
- def test_client_properties_with_config_and_default_value(self):
- config = ClientConfig()
- prop = ClientProperty("key", "def-value")
- config.set_property(prop.name, "value")
-
- props = ClientProperties(config.get_properties())
- self.assertEqual("value", props.get(prop))
-
- def test_client_properties_with_environment_variable(self):
- environ = os.environ
- environ[ClientProperties.HEARTBEAT_INTERVAL.name] = "3000"
-
- props = ClientProperties(dict())
- self.assertEqual("3000", props.get(ClientProperties.HEARTBEAT_INTERVAL))
- os.unsetenv(ClientProperties.HEARTBEAT_INTERVAL.name)
-
- def test_client_properties_with_config_default_value_and_environment_variable(self):
- environ = os.environ
- prop = ClientProperties.HEARTBEAT_INTERVAL
- environ[prop.name] = "1000"
-
- config = ClientConfig()
- config.set_property(prop.name, 2000)
-
- props = ClientProperties(config.get_properties())
- self.assertEqual(2, props.get_seconds(prop))
- os.unsetenv(prop.name)
-
- def test_client_properties_get_second(self):
- config = ClientConfig()
- prop = ClientProperty("test", time_unit=TimeUnit.MILLISECOND)
- config.set_property(prop.name, 1000)
-
- props = ClientProperties(config.get_properties())
- self.assertEqual(1, props.get_seconds(prop))
-
- def test_client_properties_get_second_unsupported_type(self):
- config = ClientConfig()
- prop = ClientProperty("test", "value", TimeUnit.SECOND)
- config.set_property(prop.name, None)
-
- props = ClientProperties(config.get_properties())
- with self.assertRaises(ValueError):
- props.get_seconds(prop)
-
- def test_client_properties_get_second_positive(self):
- config = ClientConfig()
- prop = ClientProperty("test", 1000, TimeUnit.MILLISECOND)
- config.set_property(prop.name, -1000)
-
- props = ClientProperties(config.get_properties())
- self.assertEqual(1, props.get_seconds_positive_or_default(prop))
-
- def test_client_properties_get_second_positive_unsupported_type(self):
- config = ClientConfig()
- prop = ClientProperty("test", "value", TimeUnit.MILLISECOND)
- config.set_property(prop.name, None)
-
- props = ClientProperties(config.get_properties())
- with self.assertRaises(ValueError):
- props.get_seconds_positive_or_default(prop)
-
- def test_client_properties_set_false_when_default_is_true(self):
- config = ClientConfig()
- prop = ClientProperty("test", True)
- config.set_property(prop.name, False)
-
- props = ClientProperties(config.get_properties())
-
- self.assertFalse(props.get(prop))
diff --git a/tests/proxy/distributed_objects_test.py b/tests/proxy/distributed_objects_test.py
index c59c64e398..ae57672bea 100644
--- a/tests/proxy/distributed_objects_test.py
+++ b/tests/proxy/distributed_objects_test.py
@@ -4,7 +4,7 @@
from hazelcast.proxy import MAP_SERVICE
from tests.base import SingleMemberTestCase
from tests.util import event_collector
-from hazelcast import six, ClientConfig
+from hazelcast import six
class DistributedObjectsTest(SingleMemberTestCase):
@@ -12,9 +12,9 @@ class DistributedObjectsTest(SingleMemberTestCase):
def setUpClass(cls):
cls.rc = cls.create_rc()
cls.cluster = cls.create_cluster(cls.rc, cls.configure_cluster())
- config = ClientConfig()
- config.cluster_name = cls.cluster.id
- cls.config = config
+ cls.config = {
+ "cluster_name": cls.cluster.id
+ }
@classmethod
def tearDownClass(cls):
@@ -22,7 +22,7 @@ def tearDownClass(cls):
def setUp(self):
self.member = self.cluster.start_member()
- self.client = hazelcast.HazelcastClient(self.config)
+ self.client = hazelcast.HazelcastClient(**self.config)
def tearDown(self):
self.client.shutdown()
@@ -42,7 +42,7 @@ def test_get_distributed_objects_clears_destroyed_proxies(self):
six.assertCountEqual(self, [m], self.client.get_distributed_objects())
- other_client = hazelcast.HazelcastClient(self.config)
+ other_client = hazelcast.HazelcastClient(**self.config)
other_clients_map = other_client.get_map("map")
other_clients_map.destroy()
diff --git a/tests/proxy/executor_test.py b/tests/proxy/executor_test.py
index 71e9e9b50c..a4bb0f2400 100644
--- a/tests/proxy/executor_test.py
+++ b/tests/proxy/executor_test.py
@@ -29,7 +29,7 @@ def get_class_id(self):
class ExecutorTest(SingleMemberTestCase):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
+ config["cluster_name"] = cls.cluster.id
return config
@classmethod
diff --git a/tests/proxy/flake_id_generator_test.py b/tests/proxy/flake_id_generator_test.py
index ea0cdae98c..59361ac328 100644
--- a/tests/proxy/flake_id_generator_test.py
+++ b/tests/proxy/flake_id_generator_test.py
@@ -5,9 +5,7 @@
from tests.base import SingleMemberTestCase, HazelcastTestCase
from tests.hzrc.ttypes import Lang
from tests.util import configure_logging
-from hazelcast.config import ClientConfig, FlakeIdGeneratorConfig, _MAXIMUM_PREFETCH_COUNT
from hazelcast.client import HazelcastClient
-from hazelcast.util import to_millis
from hazelcast.proxy.flake_id_generator import _IdBatch, _Block, _AutoBatcher
from hazelcast.future import ImmediateFuture
from hazelcast.errors import HazelcastError
@@ -20,44 +18,16 @@
AUTO_BATCHER_BASE = 10
-class FlakeIdGeneratorConfigTest(HazelcastTestCase):
- def setUp(self):
- self.flake_id_config = FlakeIdGeneratorConfig()
-
- def test_default_configuration(self):
- self.assertEqual("default", self.flake_id_config.name)
- self.assertEqual(100, self.flake_id_config.prefetch_count)
- self.assertEqual(600000, self.flake_id_config.prefetch_validity_in_millis)
-
- def test_custom_configuration(self):
- self.flake_id_config.name = "test"
- self.flake_id_config.prefetch_count = 333
- self.flake_id_config.prefetch_validity_in_millis = 3333
-
- self.assertEqual("test", self.flake_id_config.name)
- self.assertEqual(333, self.flake_id_config.prefetch_count)
- self.assertEqual(3333, self.flake_id_config.prefetch_validity_in_millis)
-
- def test_prefetch_count_should_be_positive(self):
- with self.assertRaises(ValueError):
- self.flake_id_config.prefetch_count = 0
-
- with self.assertRaises(ValueError):
- self.flake_id_config.prefetch_count = -1
-
- def test_prefetch_count_max_size(self):
- with self.assertRaises(ValueError):
- self.flake_id_config.prefetch_count = _MAXIMUM_PREFETCH_COUNT + 1
-
-
class FlakeIdGeneratorTest(SingleMemberTestCase):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
- flake_id_config = FlakeIdGeneratorConfig("short-term")
- flake_id_config.prefetch_count = SHORT_TERM_BATCH_SIZE
- flake_id_config.prefetch_validity_in_millis = to_millis(SHORT_TERM_VALIDITY_SECONDS)
- config.add_flake_id_generator_config(flake_id_config)
+ config["cluster_name"] = cls.cluster.id
+ config["flake_id_generators"] = {
+ "short-term": {
+ "prefetch_count": SHORT_TERM_BATCH_SIZE,
+ "prefetch_validity": SHORT_TERM_VALIDITY_SECONDS,
+ }
+ }
return config
def setUp(self):
@@ -178,15 +148,15 @@ def test_block(self):
def test_block_after_validity_period(self):
id_batch = _IdBatch(-1, -2, 2)
- block = _Block(id_batch, 1)
+ block = _Block(id_batch, 0.1)
time.sleep(0.5)
- self.assertTrueEventually(lambda: block.next_id() is None)
+ self.assertIsNone(block.next_id())
def test_block_with_batch_exhaustion(self):
id_batch = _IdBatch(100, 10000, 0)
- block = _Block(id_batch, 1000)
+ block = _Block(id_batch, 1)
self.assertIsNone(block.next_id())
@@ -247,10 +217,7 @@ def test_new_id_with_at_least_one_suitable_member(self):
response = self._assign_out_of_range_node_id(self.cluster.id, random.randint(0, 1))
self.assertTrueEventually(lambda: response.success and response.result is not None)
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.network.smart_routing = False
- client = HazelcastClient(config)
+ client = HazelcastClient(cluster_name=self.cluster.id, smart_routing=False)
generator = client.get_flake_id_generator("test").blocking()
@@ -267,9 +234,7 @@ def test_new_id_fails_when_all_members_are_out_of_node_id_range(self):
response2 = self._assign_out_of_range_node_id(self.cluster.id, 1)
self.assertTrueEventually(lambda: response2.success and response2.result is not None)
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- client = HazelcastClient(config)
+ client = HazelcastClient(cluster_name=self.cluster.id)
generator = client.get_flake_id_generator("test").blocking()
with self.assertRaises(HazelcastError):
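The flake-ID hunks above drop `FlakeIdGeneratorConfig` in favor of a per-generator dict keyed by generator name; note that `prefetch_validity` now takes seconds directly, where the removed 3.x code needed `to_millis(SHORT_TERM_VALIDITY_SECONDS)` for `prefetch_validity_in_millis`. A sketch of the new shape (the constant values are assumptions, not the test's real ones):

```python
# Assumed stand-in values; the test defines its own constants.
SHORT_TERM_BATCH_SIZE = 3
SHORT_TERM_VALIDITY_SECONDS = 3

# Flat 4.0-style configuration; splatted into HazelcastClient(**config).
config = {
    "cluster_name": "my-cluster",
    "flake_id_generators": {
        "short-term": {
            "prefetch_count": SHORT_TERM_BATCH_SIZE,
            # Seconds in 4.0, not milliseconds.
            "prefetch_validity": SHORT_TERM_VALIDITY_SECONDS,
        }
    },
}

short_term = config["flake_id_generators"]["short-term"]
```

Simple cases collapse to direct keyword arguments, as in the later hunk: `HazelcastClient(cluster_name=self.cluster.id, smart_routing=False)`.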
diff --git a/tests/proxy/list_test.py b/tests/proxy/list_test.py
index 999c3dc8a8..b56e03de5b 100644
--- a/tests/proxy/list_test.py
+++ b/tests/proxy/list_test.py
@@ -10,7 +10,7 @@ def setUp(self):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
+ config["cluster_name"] = cls.cluster.id
return config
def test_add_entry_listener_item_added(self):
@@ -22,7 +22,7 @@ def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
self.assertEqual(event.item, None)
- self.assertEqual(event.event_type, ItemEventType.added)
+ self.assertEqual(event.event_type, ItemEventType.ADDED)
self.assertTrueEventually(assert_event, 5)
@@ -35,7 +35,7 @@ def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
self.assertEqual(event.item, 'item-value')
- self.assertEqual(event.event_type, ItemEventType.added)
+ self.assertEqual(event.event_type, ItemEventType.ADDED)
self.assertTrueEventually(assert_event, 5)
@@ -49,7 +49,7 @@ def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
self.assertEqual(event.item, None)
- self.assertEqual(event.event_type, ItemEventType.removed)
+ self.assertEqual(event.event_type, ItemEventType.REMOVED)
self.assertTrueEventually(assert_event, 5)
@@ -63,7 +63,7 @@ def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
self.assertEqual(event.item, 'item-value')
- self.assertEqual(event.event_type, ItemEventType.removed)
+ self.assertEqual(event.event_type, ItemEventType.REMOVED)
self.assertTrueEventually(assert_event, 5)
@@ -78,7 +78,7 @@ def assert_event():
if len(collector.events) > 0:
event = collector.events[0]
self.assertEqual(event.item, None)
- self.assertEqual(event.event_type, ItemEventType.added)
+ self.assertEqual(event.event_type, ItemEventType.ADDED)
self.assertTrueEventually(assert_event, 5)
diff --git a/tests/proxy/map_nearcache_test.py b/tests/proxy/map_nearcache_test.py
index 08022ee5f8..b230d67127 100644
--- a/tests/proxy/map_nearcache_test.py
+++ b/tests/proxy/map_nearcache_test.py
@@ -2,7 +2,6 @@
from tests.hzrc.ttypes import Lang
-from hazelcast.config import NearCacheConfig
from tests.base import SingleMemberTestCase
from tests.util import random_string
from hazelcast.six.moves import range
@@ -19,16 +18,14 @@ def configure_cluster(cls):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
-
- near_cache_config = NearCacheConfig(random_string())
- # near_cache_config.time_to_live_seconds = 1000
- # near_cache_config.max_idle_seconds = 1000
- config.add_near_cache_config(near_cache_config)
- return super(MapTest, cls).configure_client(config)
+ config["cluster_name"] = cls.cluster.id
+ config["near_caches"] = {
+ random_string(): {}
+ }
+ return config
def setUp(self):
- name = list(self.client.config.near_caches.values())[0].name
+ name = list(self.client.config.near_caches.keys())[0]
self.map = self.client.get_map(name).blocking()
def tearDown(self):
diff --git a/tests/proxy/map_test.py b/tests/proxy/map_test.py
index 0d696eaece..64ffe7f7c0 100644
--- a/tests/proxy/map_test.py
+++ b/tests/proxy/map_test.py
@@ -1,7 +1,7 @@
import time
import os
-from hazelcast.config import IndexConfig, INDEX_TYPE
+from hazelcast.config import IndexType
from hazelcast.errors import HazelcastError
from hazelcast.proxy.map import EntryEventType
from hazelcast.serialization.api import IdentifiedDataSerializable
@@ -42,9 +42,12 @@ def configure_cluster(cls):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
- config.serialization.add_data_serializable_factory(EntryProcessor.FACTORY_ID,
- {EntryProcessor.CLASS_ID: EntryProcessor})
+ config["cluster_name"] = cls.cluster.id
+ config["data_serializable_factories"] = {
+ EntryProcessor.FACTORY_ID: {
+ EntryProcessor.CLASS_ID: EntryProcessor
+ }
+ }
return config
def setUp(self):
@@ -61,7 +64,7 @@ def test_add_entry_listener_item_added(self):
def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
- self.assertEntryEvent(event, key='key', event_type=EntryEventType.added, value='value')
+ self.assertEntryEvent(event, key='key', event_type=EntryEventType.ADDED, value='value')
self.assertTrueEventually(assert_event, 5)
@@ -74,7 +77,7 @@ def test_add_entry_listener_item_removed(self):
def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
- self.assertEntryEvent(event, key='key', event_type=EntryEventType.removed, old_value='value')
+ self.assertEntryEvent(event, key='key', event_type=EntryEventType.REMOVED, old_value='value')
self.assertTrueEventually(assert_event, 5)
@@ -87,7 +90,7 @@ def test_add_entry_listener_item_updated(self):
def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
- self.assertEntryEvent(event, key='key', event_type=EntryEventType.updated, old_value='value',
+ self.assertEntryEvent(event, key='key', event_type=EntryEventType.UPDATED, old_value='value',
value='new_value')
self.assertTrueEventually(assert_event, 5)
@@ -100,7 +103,7 @@ def test_add_entry_listener_item_expired(self):
def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
- self.assertEntryEvent(event, key='key', event_type=EntryEventType.expired, old_value='value')
+ self.assertEntryEvent(event, key='key', event_type=EntryEventType.EXPIRED, old_value='value')
self.assertTrueEventually(assert_event, 10)
@@ -113,7 +116,7 @@ def test_add_entry_listener_with_key(self):
def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
- self.assertEntryEvent(event, key='key1', event_type=EntryEventType.added, value='value1')
+ self.assertEntryEvent(event, key='key1', event_type=EntryEventType.ADDED, value='value1')
self.assertTrueEventually(assert_event, 5)
@@ -126,13 +129,14 @@ def test_add_entry_listener_with_predicate(self):
def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
- self.assertEntryEvent(event, key='key1', event_type=EntryEventType.added, value='value1')
+ self.assertEntryEvent(event, key='key1', event_type=EntryEventType.ADDED, value='value1')
self.assertTrueEventually(assert_event, 5)
def test_add_entry_listener_with_key_and_predicate(self):
collector = event_collector()
- self.map.add_entry_listener(key='key1', predicate=SqlPredicate("this == value3"), include_value=True, added_func=collector)
+ self.map.add_entry_listener(key='key1', predicate=SqlPredicate("this == value3"),
+ include_value=True, added_func=collector)
self.map.put('key2', 'value2')
self.map.put('key1', 'value1')
self.map.remove('key1')
@@ -141,25 +145,24 @@ def test_add_entry_listener_with_key_and_predicate(self):
def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
- self.assertEntryEvent(event, key='key1', event_type=EntryEventType.added, value='value3')
+ self.assertEntryEvent(event, key='key1', event_type=EntryEventType.ADDED, value='value3')
self.assertTrueEventually(assert_event, 5)
def test_add_index(self):
- ordered_index = IndexConfig("length", attributes=["this"])
- unordered_index = IndexConfig("length", INDEX_TYPE.HASH, ["this"])
- self.map.add_index(ordered_index)
- self.map.add_index(unordered_index)
+ self.map.add_index(attributes=["this"])
+ self.map.add_index(attributes=["this"], index_type=IndexType.HASH)
+ self.map.add_index(attributes=["this"], index_type=IndexType.BITMAP, bitmap_index_options={
+ "unique_key": "this",
+ })
def test_add_index_duplicate_fields(self):
- config = IndexConfig("length", attributes=["this", "this"])
with self.assertRaises(ValueError):
- self.map.add_index(config)
+ self.map.add_index(attributes=["this", "this"])
def test_add_index_invalid_attribute(self):
- config = IndexConfig("length", attributes=["this.x."])
with self.assertRaises(ValueError):
- self.map.add_index(config)
+ self.map.add_index(attributes=["this.x."])
def test_clear(self):
self._fill_map()
@@ -214,8 +217,8 @@ def test_evict_all(self):
self.assertEqual(self.map.size(), 0)
def test_execute_on_entries(self):
- map = self._fill_map()
- expected_entry_set = [(key, "processed") for key in map]
+ m = self._fill_map()
+ expected_entry_set = [(key, "processed") for key in m]
values = self.map.execute_on_entries(EntryProcessor("processed"))
@@ -223,9 +226,9 @@ def test_execute_on_entries(self):
six.assertCountEqual(self, expected_entry_set, values)
def test_execute_on_entries_with_predicate(self):
- map = self._fill_map()
- expected_entry_set = [(key, "processed") if key < "key-5" else (key, map[key]) for key in map]
- expected_values = [(key, "processed") for key in map if key < "key-5"]
+ m = self._fill_map()
+ expected_entry_set = [(key, "processed") if key < "key-5" else (key, m[key]) for key in m]
+ expected_values = [(key, "processed") for key in m if key < "key-5"]
values = self.map.execute_on_entries(EntryProcessor("processed"), SqlPredicate("__key < 'key-5'"))
@@ -240,17 +243,17 @@ def test_execute_on_key(self):
self.assertEqual("processed", value)
def test_execute_on_keys(self):
- map = self._fill_map()
- expected_entry_set = [(key, "processed") for key in map]
+ m = self._fill_map()
+ expected_entry_set = [(key, "processed") for key in m]
- values = self.map.execute_on_keys(list(map.keys()), EntryProcessor("processed"))
+ values = self.map.execute_on_keys(list(m.keys()), EntryProcessor("processed"))
six.assertCountEqual(self, expected_entry_set, self.map.entry_set())
six.assertCountEqual(self, expected_entry_set, values)
def test_execute_on_keys_with_empty_key_list(self):
- map = self._fill_map()
- expected_entry_set = [(key, map[key]) for key in map]
+ m = self._fill_map()
+ expected_entry_set = [(key, m[key]) for key in m]
values = self.map.execute_on_keys([], EntryProcessor("processed"))
@@ -328,18 +331,6 @@ def test_key_set_with_predicate(self):
self.assertEqual(self.map.key_set(SqlPredicate("this == 'value-1'")), ["key-1"])
- def test_load_all(self):
- keys = list(self._fill_map().keys())
- # TODO: needs map store configuration
- with self.assertRaises(HazelcastError):
- self.map.load_all()
-
- def test_load_all_with_keys(self):
- keys = list(self._fill_map().keys())
- # TODO: needs map store configuration
- with self.assertRaises(HazelcastError):
- self.map.load_all(["key-1", "key-2"])
-
def test_lock(self):
self.map.put("key", "value")
@@ -349,12 +340,12 @@ def test_lock(self):
self.assertFalse(self.map.try_put("key", "new_value", timeout=0.01))
def test_put_all(self):
- map = {"key-%d" % x: "value-%d" % x for x in range(0, 1000)}
- self.map.put_all(map)
+ m = {"key-%d" % x: "value-%d" % x for x in range(0, 1000)}
+ self.map.put_all(m)
entries = self.map.entry_set()
- six.assertCountEqual(self, entries, six.iteritems(map))
+ six.assertCountEqual(self, entries, six.iteritems(m))
def test_put_all_when_no_keys(self):
self.assertIsNone(self.map.put_all({}))
@@ -415,11 +406,11 @@ def test_remove_if_same_when_different(self):
def test_remove_entry_listener(self):
collector = event_collector()
- id = self.map.add_entry_listener(added_func=collector)
+ reg_id = self.map.add_entry_listener(added_func=collector)
self.map.put('key', 'value')
self.assertTrueEventually(lambda: self.assertEqual(len(collector.events), 1))
- self.map.remove_entry_listener(id)
+ self.map.remove_entry_listener(reg_id)
self.map.put('key2', 'value')
time.sleep(1)
@@ -522,15 +513,15 @@ def test_str(self):
self.assertTrue(str(self.map).startswith("Map"))
def _fill_map(self, count=10):
- map = {"key-%d" % x: "value-%d" % x for x in range(0, count)}
- self.map.put_all(map)
- return map
+ m = {"key-%d" % x: "value-%d" % x for x in range(0, count)}
+ self.map.put_all(m)
+ return m
class MapStoreTest(SingleMemberTestCase):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
+ config["cluster_name"] = cls.cluster.id
return config
@classmethod
@@ -597,6 +588,6 @@ def test_add_entry_listener_item_loaded(self):
def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
- self.assertEntryEvent(event, key='key', value='value', event_type=EntryEventType.loaded)
+ self.assertEntryEvent(event, key='key', value='value', event_type=EntryEventType.LOADED)
self.assertTrueEventually(assert_event, 10)
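The `map_test.py` hunks above move `Map.add_index` from taking an `IndexConfig` object to keyword arguments (`attributes`, `index_type`, `bitmap_index_options`), with client-side validation replacing config-object construction. A stand-in sketch of the two validation failures the tests assert on; the real checks live inside the client, so this only mirrors the observable behavior, not the actual implementation:

```python
def check_index_attributes(attributes):
    # Mirrors the two ValueError cases in test_add_index_duplicate_fields
    # and test_add_index_invalid_attribute (stand-in logic, not the client's).
    seen = set()
    for attr in attributes:
        if attr.endswith("."):
            raise ValueError("attribute must not end with a dot: %s" % attr)
        if attr in seen:
            raise ValueError("duplicate attribute: %s" % attr)
        seen.add(attr)

check_index_attributes(["this"])        # fine: single attribute
check_index_attributes(["limb.name"])   # fine: nested attribute
```

Calling it with `["this", "this"]` or `["this.x."]` raises `ValueError`, matching the `assertRaises(ValueError)` blocks in the rewritten tests.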
diff --git a/tests/proxy/multi_map_test.py b/tests/proxy/multi_map_test.py
index 1b3d8bd528..da3475468f 100644
--- a/tests/proxy/multi_map_test.py
+++ b/tests/proxy/multi_map_test.py
@@ -13,7 +13,7 @@
class MultiMapTest(SingleMemberTestCase):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
+ config["cluster_name"] = cls.cluster.id
return config
def setUp(self):
@@ -30,7 +30,7 @@ def test_add_entry_listener_item_added(self):
def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
- self.assertEntryEvent(event, key='key', event_type=EntryEventType.added, value='value')
+ self.assertEntryEvent(event, key='key', event_type=EntryEventType.ADDED, value='value')
self.assertTrueEventually(assert_event, 5)
@@ -43,7 +43,7 @@ def test_add_entry_listener_item_removed(self):
def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
- self.assertEntryEvent(event, key='key', event_type=EntryEventType.removed, old_value='value')
+ self.assertEntryEvent(event, key='key', event_type=EntryEventType.REMOVED, old_value='value')
self.assertTrueEventually(assert_event, 5)
@@ -56,7 +56,7 @@ def test_add_entry_listener_clear_all(self):
def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
- self.assertEntryEvent(event, event_type=EntryEventType.clear_all, number_of_affected_entries=1)
+ self.assertEntryEvent(event, event_type=EntryEventType.CLEAR_ALL, number_of_affected_entries=1)
self.assertTrueEventually(assert_event, 5)
@@ -69,7 +69,7 @@ def test_add_entry_listener_with_key(self):
def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
- self.assertEntryEvent(event, key='key1', event_type=EntryEventType.added, value='value1')
+ self.assertEntryEvent(event, key='key1', event_type=EntryEventType.ADDED, value='value1')
self.assertTrueEventually(assert_event, 5)
diff --git a/tests/proxy/pn_counter_test.py b/tests/proxy/pn_counter_test.py
index 73af90690e..21bc772a20 100644
--- a/tests/proxy/pn_counter_test.py
+++ b/tests/proxy/pn_counter_test.py
@@ -1,15 +1,15 @@
import os
from tests.base import SingleMemberTestCase, HazelcastTestCase
-from tests.util import configure_logging, get_abs_path
+from tests.util import get_abs_path
from hazelcast.errors import ConsistencyLostError, NoDataMemberInClusterError
-from hazelcast import HazelcastClient, ClientConfig
+from hazelcast import HazelcastClient
class PNCounterBasicTest(SingleMemberTestCase):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
+ config["cluster_name"] = cls.cluster.id
return config
def setUp(self):
@@ -64,18 +64,12 @@ def _check_pn_counter_method(self, return_value, expected_return_value, expected
class PNCounterConsistencyTest(HazelcastTestCase):
- @classmethod
- def setUpClass(cls):
- configure_logging()
-
def setUp(self):
self.rc = self.create_rc()
self.cluster = self.create_cluster(self.rc, self._configure_cluster())
self.cluster.start_member()
self.cluster.start_member()
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- self.client = HazelcastClient(config)
+ self.client = HazelcastClient(cluster_name=self.cluster.id)
self.pn_counter = self.client.get_pn_counter("pn-counter").blocking()
def tearDown(self):
@@ -110,7 +104,7 @@ def _configure_cluster(self):
class PNCounterLiteMemberTest(SingleMemberTestCase):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
+ config["cluster_name"] = cls.cluster.id
return config
@classmethod
diff --git a/tests/proxy/queue_test.py b/tests/proxy/queue_test.py
index 7dc0253d2d..d8da2115f1 100644
--- a/tests/proxy/queue_test.py
+++ b/tests/proxy/queue_test.py
@@ -10,7 +10,7 @@
class QueueTest(SingleMemberTestCase):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
+ config["cluster_name"] = cls.cluster.id
return config
@classmethod
@@ -32,7 +32,7 @@ def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
self.assertEqual(event.item, None)
- self.assertEqual(event.event_type, ItemEventType.added)
+ self.assertEqual(event.event_type, ItemEventType.ADDED)
self.assertTrueEventually(assert_event, 5)
@@ -45,7 +45,7 @@ def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
self.assertEqual(event.item, 'item-value')
- self.assertEqual(event.event_type, ItemEventType.added)
+ self.assertEqual(event.event_type, ItemEventType.ADDED)
self.assertTrueEventually(assert_event, 5)
@@ -59,7 +59,7 @@ def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
self.assertEqual(event.item, None)
- self.assertEqual(event.event_type, ItemEventType.removed)
+ self.assertEqual(event.event_type, ItemEventType.REMOVED)
self.assertTrueEventually(assert_event, 5)
@@ -73,7 +73,7 @@ def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
self.assertEqual(event.item, 'item-value')
- self.assertEqual(event.event_type, ItemEventType.removed)
+ self.assertEqual(event.event_type, ItemEventType.REMOVED)
self.assertTrueEventually(assert_event, 5)
diff --git a/tests/proxy/replicated_map_test.py b/tests/proxy/replicated_map_test.py
index 910866186a..3b38995556 100644
--- a/tests/proxy/replicated_map_test.py
+++ b/tests/proxy/replicated_map_test.py
@@ -11,7 +11,7 @@
class ReplicatedMapTest(SingleMemberTestCase):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
+ config["cluster_name"] = cls.cluster.id
return config
def setUp(self):
@@ -28,7 +28,7 @@ def test_add_entry_listener_item_added(self):
def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
- self.assertEntryEvent(event, key='key', event_type=EntryEventType.added, value='value')
+ self.assertEntryEvent(event, key='key', event_type=EntryEventType.ADDED, value='value')
self.assertTrueEventually(assert_event, 5)
@@ -41,7 +41,7 @@ def test_add_entry_listener_item_removed(self):
def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
- self.assertEntryEvent(event, key='key', event_type=EntryEventType.removed, old_value='value')
+ self.assertEntryEvent(event, key='key', event_type=EntryEventType.REMOVED, old_value='value')
self.assertTrueEventually(assert_event, 5)
@@ -54,7 +54,7 @@ def test_add_entry_listener_item_updated(self):
def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
- self.assertEntryEvent(event, key='key', event_type=EntryEventType.updated, old_value='value',
+ self.assertEntryEvent(event, key='key', event_type=EntryEventType.UPDATED, old_value='value',
value='new_value')
self.assertTrueEventually(assert_event, 5)
@@ -67,7 +67,7 @@ def test_add_entry_listener_item_evicted(self):
def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
- self.assertEntryEvent(event, key='key', event_type=EntryEventType.evicted, old_value='value')
+ self.assertEntryEvent(event, key='key', event_type=EntryEventType.EVICTED, old_value='value')
self.assertTrueEventually(assert_event, 10)
@@ -80,7 +80,7 @@ def test_add_entry_listener_with_key(self):
def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
- self.assertEntryEvent(event, key='key1', event_type=EntryEventType.added, value='value1')
+ self.assertEntryEvent(event, key='key1', event_type=EntryEventType.ADDED, value='value1')
self.assertTrueEventually(assert_event, 5)
@@ -93,7 +93,7 @@ def test_add_entry_listener_with_predicate(self):
def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
- self.assertEntryEvent(event, key='key1', event_type=EntryEventType.added, value='value1')
+ self.assertEntryEvent(event, key='key1', event_type=EntryEventType.ADDED, value='value1')
self.assertTrueEventually(assert_event, 5)
@@ -108,7 +108,7 @@ def test_add_entry_listener_with_key_and_predicate(self):
def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
- self.assertEntryEvent(event, key='key1', event_type=EntryEventType.added, value='value3')
+ self.assertEntryEvent(event, key='key1', event_type=EntryEventType.ADDED, value='value3')
self.assertTrueEventually(assert_event, 5)
@@ -121,7 +121,7 @@ def test_add_entry_listener_clear_all(self):
def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
- self.assertEntryEvent(event, event_type=EntryEventType.clear_all, number_of_affected_entries=1)
+ self.assertEntryEvent(event, event_type=EntryEventType.CLEAR_ALL, number_of_affected_entries=1)
self.assertTrueEventually(assert_event, 5)
diff --git a/tests/proxy/ringbuffer_test.py b/tests/proxy/ringbuffer_test.py
index 512494c8ee..1ea5bee296 100644
--- a/tests/proxy/ringbuffer_test.py
+++ b/tests/proxy/ringbuffer_test.py
@@ -11,7 +11,7 @@
class RingBufferTest(SingleMemberTestCase):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
+ config["cluster_name"] = cls.cluster.id
return config
@classmethod
diff --git a/tests/proxy/set_test.py b/tests/proxy/set_test.py
index 18e93aa2fa..41b8ef5ca8 100644
--- a/tests/proxy/set_test.py
+++ b/tests/proxy/set_test.py
@@ -10,7 +10,7 @@ def setUp(self):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
+ config["cluster_name"] = cls.cluster.id
return config
def test_add_entry_listener_item_added(self):
@@ -22,7 +22,7 @@ def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
self.assertEqual(event.item, None)
- self.assertEqual(event.event_type, ItemEventType.added)
+ self.assertEqual(event.event_type, ItemEventType.ADDED)
self.assertTrueEventually(assert_event, 5)
@@ -35,7 +35,7 @@ def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
self.assertEqual(event.item, 'item-value')
- self.assertEqual(event.event_type, ItemEventType.added)
+ self.assertEqual(event.event_type, ItemEventType.ADDED)
self.assertTrueEventually(assert_event, 5)
@@ -49,7 +49,7 @@ def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
self.assertEqual(event.item, None)
- self.assertEqual(event.event_type, ItemEventType.removed)
+ self.assertEqual(event.event_type, ItemEventType.REMOVED)
self.assertTrueEventually(assert_event, 5)
@@ -63,7 +63,7 @@ def assert_event():
self.assertEqual(len(collector.events), 1)
event = collector.events[0]
self.assertEqual(event.item, 'item-value')
- self.assertEqual(event.event_type, ItemEventType.removed)
+ self.assertEqual(event.event_type, ItemEventType.REMOVED)
self.assertTrueEventually(assert_event, 5)
@@ -78,7 +78,7 @@ def assert_event():
if len(collector.events) > 0:
event = collector.events[0]
self.assertEqual(event.item, None)
- self.assertEqual(event.event_type, ItemEventType.added)
+ self.assertEqual(event.event_type, ItemEventType.ADDED)
self.assertTrueEventually(assert_event, 5)
diff --git a/tests/proxy/topic_test.py b/tests/proxy/topic_test.py
index d4c90f0d95..7edbd9d86b 100644
--- a/tests/proxy/topic_test.py
+++ b/tests/proxy/topic_test.py
@@ -5,15 +5,18 @@
class TopicTest(SingleMemberTestCase):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
+ config["cluster_name"] = cls.cluster.id
return config
def setUp(self):
self.topic = self.client.get_topic(random_string()).blocking()
+ def tearDown(self):
+ self.topic.destroy()
+
def test_add_listener(self):
collector = event_collector()
- reg_id = self.topic.add_listener(on_message=collector)
+ self.topic.add_listener(on_message=collector)
self.topic.publish('item-value')
def assert_event():
diff --git a/tests/proxy/transactional_list_test.py b/tests/proxy/transactional_list_test.py
index 3ec99ab760..18173eba87 100644
--- a/tests/proxy/transactional_list_test.py
+++ b/tests/proxy/transactional_list_test.py
@@ -5,7 +5,7 @@
class TransactionalListTest(SingleMemberTestCase):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
+ config["cluster_name"] = cls.cluster.id
return config
def setUp(self):
diff --git a/tests/proxy/transactional_map_test.py b/tests/proxy/transactional_map_test.py
index da4263f9bb..0e7c4704f1 100644
--- a/tests/proxy/transactional_map_test.py
+++ b/tests/proxy/transactional_map_test.py
@@ -7,7 +7,7 @@
class TransactionalMapTest(SingleMemberTestCase):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
+ config["cluster_name"] = cls.cluster.id
return config
def setUp(self):
diff --git a/tests/proxy/transactional_multi_map_test.py b/tests/proxy/transactional_multi_map_test.py
index 5737d59b8b..fbc2ef9da9 100644
--- a/tests/proxy/transactional_multi_map_test.py
+++ b/tests/proxy/transactional_multi_map_test.py
@@ -6,7 +6,7 @@
class TransactionalMultiMapTest(SingleMemberTestCase):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
+ config["cluster_name"] = cls.cluster.id
return config
def setUp(self):
diff --git a/tests/proxy/transactional_queue_test.py b/tests/proxy/transactional_queue_test.py
index 558f41e8d5..c33f696c2a 100644
--- a/tests/proxy/transactional_queue_test.py
+++ b/tests/proxy/transactional_queue_test.py
@@ -9,7 +9,7 @@
class TransactionalQueueTest(SingleMemberTestCase):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
+ config["cluster_name"] = cls.cluster.id
return config
@classmethod
diff --git a/tests/proxy/transactional_set_test.py b/tests/proxy/transactional_set_test.py
index 407264db92..3a6cbba1ca 100644
--- a/tests/proxy/transactional_set_test.py
+++ b/tests/proxy/transactional_set_test.py
@@ -5,7 +5,7 @@
class TransactionalSetTest(SingleMemberTestCase):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
+ config["cluster_name"] = cls.cluster.id
return config
def setUp(self):
diff --git a/tests/reconnect_test.py b/tests/reconnect_test.py
index e6bb973fe5..08b25031f2 100644
--- a/tests/reconnect_test.py
+++ b/tests/reconnect_test.py
@@ -1,19 +1,18 @@
+import time
from threading import Thread
from time import sleep
-from hazelcast import ClientConfig
from hazelcast.errors import HazelcastError, TargetDisconnectedError
from hazelcast.lifecycle import LifecycleState
from hazelcast.util import AtomicInteger
from tests.base import HazelcastTestCase
-from tests.util import configure_logging, event_collector
+from tests.util import event_collector
class ReconnectTest(HazelcastTestCase):
rc = None
def setUp(self):
- configure_logging()
self.rc = self.create_rc()
self.cluster = self.create_cluster(self.rc)
@@ -22,29 +21,35 @@ def tearDown(self):
self.rc.exit()
def test_start_client_with_no_member(self):
- config = ClientConfig()
- config.network.addresses.append("127.0.0.1:5701")
- config.network.addresses.append("127.0.0.1:5702")
- config.network.addresses.append("127.0.0.1:5703")
- config.connection_strategy.connection_retry.cluster_connect_timeout = 2
with self.assertRaises(HazelcastError):
- self.create_client(config)
+ self.create_client({
+ "cluster_members": [
+ "127.0.0.1:5701",
+ "127.0.0.1:5702",
+ "127.0.0.1:5703",
+ ],
+ "cluster_connect_timeout": 2.0,
+ })
def test_start_client_before_member(self):
- t = Thread(target=self.cluster.start_member)
+ def run():
+ time.sleep(1.0)
+ self.cluster.start_member()
+
+ t = Thread(target=run)
t.start()
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.connection_strategy.connection_retry.cluster_connect_timeout = 5
- self.create_client(config)
+ self.create_client({
+ "cluster_name": self.cluster.id,
+ "cluster_connect_timeout": 5.0,
+ })
t.join()
def test_restart_member(self):
member = self.cluster.start_member()
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.connection_strategy.connection_retry.cluster_connect_timeout = 5
- client = self.create_client(config)
+ client = self.create_client({
+ "cluster_name": self.cluster.id,
+ "cluster_connect_timeout": 5.0,
+ })
state = [None]
@@ -60,10 +65,10 @@ def listener(s):
def test_listener_re_register(self):
member = self.cluster.start_member()
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.connection_strategy.connection_retry.cluster_connect_timeout = 5
- client = self.create_client(config)
+ client = self.create_client({
+ "cluster_name": self.cluster.id,
+ "cluster_connect_timeout": 5.0,
+ })
map = client.get_map("map")
@@ -92,10 +97,10 @@ def assert_events():
def test_member_list_after_reconnect(self):
old_member = self.cluster.start_member()
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.connection_strategy.connection_retry.cluster_connect_timeout = 5
- client = self.create_client(config)
+ client = self.create_client({
+ "cluster_name": self.cluster.id,
+ "cluster_connect_timeout": 5.0,
+ })
old_member.shutdown()
new_member = self.cluster.start_member()
@@ -109,12 +114,14 @@ def assert_member_list():
def test_reconnect_toNewNode_ViaLastMemberList(self):
old_member = self.cluster.start_member()
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.network.addresses.append("127.0.0.1:5701")
- config.network.smart_routing = False
- config.connection_strategy.connection_retry.cluster_connect_timeout = 10
- client = self.create_client(config)
+ client = self.create_client({
+ "cluster_name": self.cluster.id,
+ "cluster_members": [
+ "127.0.0.1:5701",
+ ],
+ "smart_routing": False,
+ "cluster_connect_timeout": 10.0,
+ })
new_member = self.cluster.start_member()
old_member.shutdown()
diff --git a/tests/serialization/custom_global_serialization_test.py b/tests/serialization/custom_global_serialization_test.py
index 56193d023a..33674bf946 100644
--- a/tests/serialization/custom_global_serialization_test.py
+++ b/tests/serialization/custom_global_serialization_test.py
@@ -1,6 +1,6 @@
import unittest
-from hazelcast.config import SerializationConfig
+from hazelcast.config import _Config
from hazelcast.serialization.api import StreamSerializer
from hazelcast.serialization.service import SerializationServiceV1
from hazelcast.six.moves import cPickle
@@ -88,10 +88,10 @@ def destroy(self):
class CustomSerializationTestCase(unittest.TestCase):
def test_global_encode_decode(self):
- config = SerializationConfig()
+ config = _Config()
config.global_serializer = TestGlobalSerializer
- service = SerializationServiceV1(serialization_config=config)
+ service = SerializationServiceV1(config)
obj = CustomClass("uid", "some name", "description text")
data = service.to_data(obj)
@@ -100,10 +100,12 @@ def test_global_encode_decode(self):
self.assertEqual("GLOBAL", obj2.source)
def test_custom_serializer(self):
- config = SerializationConfig()
- config.set_custom_serializer(CustomClass, CustomSerializer)
+ config = _Config()
+ config.custom_serializers = {
+ CustomClass: CustomSerializer
+ }
- service = SerializationServiceV1(serialization_config=config)
+ service = SerializationServiceV1(config)
obj = CustomClass("uid", "some name", "description text")
data = service.to_data(obj)
@@ -112,11 +114,13 @@ def test_custom_serializer(self):
self.assertEqual("CUSTOM", obj2.source)
def test_global_custom_serializer(self):
- config = SerializationConfig()
- config.set_custom_serializer(CustomClass, CustomSerializer)
+ config = _Config()
+ config.custom_serializers = {
+ CustomClass: CustomSerializer
+ }
config.global_serializer = TestGlobalSerializer
- service = SerializationServiceV1(serialization_config=config)
+ service = SerializationServiceV1(config)
obj = CustomClass("uid", "some name", "description text")
data = service.to_data(obj)
@@ -125,9 +129,11 @@ def test_global_custom_serializer(self):
self.assertEqual("CUSTOM", obj2.source)
def test_double_register_custom_serializer(self):
- config = SerializationConfig()
- config.set_custom_serializer(CustomClass, CustomSerializer)
- service = SerializationServiceV1(serialization_config=config)
+ config = _Config()
+ config.custom_serializers = {
+ CustomClass: CustomSerializer
+ }
+ service = SerializationServiceV1(config)
with self.assertRaises(ValueError):
service._registry.safe_register_serializer(TheOtherCustomSerializer, CustomClass)
diff --git a/tests/serialization/identified_test.py b/tests/serialization/identified_test.py
index 19c1364800..fc370a98d8 100644
--- a/tests/serialization/identified_test.py
+++ b/tests/serialization/identified_test.py
@@ -1,6 +1,6 @@
import unittest
-import hazelcast
+from hazelcast.config import _Config
from hazelcast.serialization import SerializationServiceV1
from hazelcast.serialization.api import IdentifiedDataSerializable
@@ -116,9 +116,11 @@ def create_identified():
class IdentifiedSerializationTestCase(unittest.TestCase):
def test_encode_decode(self):
- config = hazelcast.ClientConfig()
- config.serialization.data_serializable_factories[FACTORY_ID] = the_factory
- service = SerializationServiceV1(config.serialization)
+ config = _Config()
+ config.data_serializable_factories = {
+ FACTORY_ID: the_factory
+ }
+ service = SerializationServiceV1(config)
obj = create_identified()
data = service.to_data(obj)
diff --git a/tests/serialization/int_serialization_test.py b/tests/serialization/int_serialization_test.py
index adf9ed7242..2279e0230e 100644
--- a/tests/serialization/int_serialization_test.py
+++ b/tests/serialization/int_serialization_test.py
@@ -1,6 +1,6 @@
import unittest
-from hazelcast.config import SerializationConfig, INTEGER_TYPE
+from hazelcast.config import IntType, _Config
from hazelcast.errors import HazelcastSerializationError
from hazelcast.serialization.serialization_const import CONSTANT_TYPE_BYTE, CONSTANT_TYPE_SHORT, CONSTANT_TYPE_INTEGER, \
CONSTANT_TYPE_LONG
@@ -15,9 +15,9 @@
class IntegerTestCase(unittest.TestCase):
def test_dynamic_case(self):
- config = SerializationConfig()
- config.default_integer_type = INTEGER_TYPE.VAR
- service = SerializationServiceV1(serialization_config=config)
+ config = _Config()
+ config.default_int_type = IntType.VAR
+ service = SerializationServiceV1(config)
d1 = service.to_data(byte_val)
d2 = service.to_data(short_val)
@@ -38,9 +38,9 @@ def test_dynamic_case(self):
self.assertEqual(v4, long_val)
def test_byte_case(self):
- config = SerializationConfig()
- config.default_integer_type = INTEGER_TYPE.BYTE
- service = SerializationServiceV1(serialization_config=config)
+ config = _Config()
+ config.default_int_type = IntType.BYTE
+ service = SerializationServiceV1(config)
d1 = service.to_data(byte_val)
v1 = service.to_object(d1)
@@ -51,9 +51,9 @@ def test_byte_case(self):
service.to_data(big_int)
def test_short_case(self):
- config = SerializationConfig()
- config.default_integer_type = INTEGER_TYPE.SHORT
- service = SerializationServiceV1(serialization_config=config)
+ config = _Config()
+ config.default_int_type = IntType.SHORT
+ service = SerializationServiceV1(config)
d1 = service.to_data(byte_val)
d2 = service.to_data(short_val)
@@ -68,9 +68,9 @@ def test_short_case(self):
service.to_data(big_int)
def test_int_case(self):
- config = SerializationConfig()
- config.default_integer_type = INTEGER_TYPE.INT
- service = SerializationServiceV1(serialization_config=config)
+ config = _Config()
+ config.default_int_type = IntType.INT
+ service = SerializationServiceV1(config)
d1 = service.to_data(byte_val)
d2 = service.to_data(short_val)
@@ -89,9 +89,9 @@ def test_int_case(self):
service.to_data(big_int)
def test_long_case(self):
- config = SerializationConfig()
- config.default_integer_type = INTEGER_TYPE.LONG
- service = SerializationServiceV1(serialization_config=config)
+ config = _Config()
+ config.default_int_type = IntType.LONG
+ service = SerializationServiceV1(config)
d1 = service.to_data(byte_val)
d2 = service.to_data(short_val)
diff --git a/tests/serialization/morphing_portable_test.py b/tests/serialization/morphing_portable_test.py
index f3f7dffbd6..1f9c4ddc19 100644
--- a/tests/serialization/morphing_portable_test.py
+++ b/tests/serialization/morphing_portable_test.py
@@ -1,6 +1,6 @@
import unittest
-from hazelcast import SerializationConfig
+from hazelcast.config import _Config
from hazelcast.serialization import SerializationServiceV1
from tests.serialization.portable_test import create_portable, SerializationV1Portable, InnerPortable, FACTORY_ID
from hazelcast import six
@@ -28,14 +28,18 @@ def get_class_version(self):
class MorphingPortableTestCase(unittest.TestCase):
def setUp(self):
- config1 = SerializationConfig()
- config1.add_portable_factory(FACTORY_ID, the_factory_1)
-
- config2 = SerializationConfig()
- config2.add_portable_factory(FACTORY_ID, the_factory_2)
-
- self.service1 = SerializationServiceV1(serialization_config=config1)
- self.service2 = SerializationServiceV1(serialization_config=config2)
+ config1 = _Config()
+ config1.portable_factories = {
+ FACTORY_ID: the_factory_1
+ }
+
+ config2 = _Config()
+ config2.portable_factories = {
+ FACTORY_ID: the_factory_2
+ }
+
+ self.service1 = SerializationServiceV1(config1)
+ self.service2 = SerializationServiceV1(config2)
base_portable = create_portable()
data = self.service1.to_data(base_portable)
diff --git a/tests/serialization/portable_test.py b/tests/serialization/portable_test.py
index 7b379049dd..ba121eed2a 100644
--- a/tests/serialization/portable_test.py
+++ b/tests/serialization/portable_test.py
@@ -1,6 +1,6 @@
import unittest
-import hazelcast
+from hazelcast.config import _Config
from hazelcast.errors import HazelcastSerializationError
from hazelcast.serialization import SerializationServiceV1
from hazelcast.serialization.api import Portable
@@ -233,9 +233,11 @@ def create_portable():
class PortableSerializationTestCase(unittest.TestCase):
def test_encode_decode(self):
- config = hazelcast.ClientConfig()
- config.serialization.portable_factories[FACTORY_ID] = the_factory
- service = SerializationServiceV1(config.serialization)
+ config = _Config()
+ config.portable_factories = {
+ FACTORY_ID: the_factory
+ }
+ service = SerializationServiceV1(config)
obj = create_portable()
self.assertTrue(obj.inner_portable)
@@ -245,10 +247,12 @@ def test_encode_decode(self):
self.assertEqual(obj.inner_portable.param_int, obj2.nested_field)
def test_encode_decode_2(self):
- config = hazelcast.ClientConfig()
- config.serialization.portable_factories[FACTORY_ID] = the_factory
- service = SerializationServiceV1(config.serialization)
- service2 = SerializationServiceV1(config.serialization)
+ config = _Config()
+ config.portable_factories = {
+ FACTORY_ID: the_factory
+ }
+ service = SerializationServiceV1(config)
+ service2 = SerializationServiceV1(config)
obj = create_portable()
self.assertTrue(obj.inner_portable)
@@ -257,9 +261,11 @@ def test_encode_decode_2(self):
self.assertTrue(obj == obj2)
def test_portable_context(self):
- config = hazelcast.ClientConfig()
- config.serialization.portable_factories[FACTORY_ID] = the_factory
- service = SerializationServiceV1(config.serialization)
+ config = _Config()
+ config.portable_factories = {
+ FACTORY_ID: the_factory
+ }
+ service = SerializationServiceV1(config)
obj = create_portable()
self.assertTrue(obj.inner_portable)
@@ -269,12 +275,14 @@ def test_portable_context(self):
self.assertTrue(class_definition is not None)
def test_portable_null_fields(self):
- config = hazelcast.ClientConfig()
- config.serialization.portable_factories[FACTORY_ID] = the_factory
- service = SerializationServiceV1(config.serialization)
+ config = _Config()
+ config.portable_factories = {
+ FACTORY_ID: the_factory
+ }
+ service = SerializationServiceV1(config)
service.to_data(create_portable())
- service2 = SerializationServiceV1(config.serialization)
+ service2 = SerializationServiceV1(config)
obj = SerializationV1Portable()
data = service.to_data(obj)
@@ -311,14 +319,17 @@ def test_portable_class_def(self):
builder.add_portable_array_field("ap", class_def_inner)
class_def = builder.build()
- config = hazelcast.ClientConfig()
- config.serialization.portable_factories[FACTORY_ID] = the_factory
-
- config.serialization.class_definitions.add(class_def)
- config.serialization.class_definitions.add(class_def_inner)
-
- service = SerializationServiceV1(config.serialization)
- service2 = SerializationServiceV1(config.serialization)
+ config = _Config()
+ config.portable_factories = {
+ FACTORY_ID: the_factory
+ }
+ config.class_definitions = [
+ class_def,
+ class_def_inner,
+ ]
+
+ service = SerializationServiceV1(config)
+ service2 = SerializationServiceV1(config)
obj = SerializationV1Portable()
data = service.to_data(obj)
@@ -326,10 +337,12 @@ def test_portable_class_def(self):
self.assertTrue(obj == obj2)
def test_portable_read_without_factory(self):
- config = hazelcast.ClientConfig()
- config.serialization.portable_factories[FACTORY_ID] = the_factory
- service = SerializationServiceV1(config.serialization)
- service2 = SerializationServiceV1(hazelcast.SerializationConfig())
+ config = _Config()
+ config.portable_factories = {
+ FACTORY_ID: the_factory
+ }
+ service = SerializationServiceV1(config)
+ service2 = SerializationServiceV1(_Config())
obj = create_portable()
self.assertTrue(obj.inner_portable)
@@ -338,13 +351,17 @@ def test_portable_read_without_factory(self):
service2.to_object(data)
def test_nested_portable_serialization(self):
- serialization_config = hazelcast.SerializationConfig()
- serialization_config.portable_version = 6
-
- serialization_config.portable_factories[1] = {1: Parent, 2: Child}
-
- ss1 = SerializationServiceV1(serialization_config)
- ss2 = SerializationServiceV1(serialization_config)
+ config = _Config()
+ config.portable_version = 6
+ config.portable_factories = {
+ 1: {
+ 1: Parent,
+ 2: Child,
+ }
+ }
+
+ ss1 = SerializationServiceV1(config)
+ ss2 = SerializationServiceV1(config)
ss2.to_data(Child("Joe"))
diff --git a/tests/serialization/serialization_test.py b/tests/serialization/serialization_test.py
index c0be951333..d03f365cc0 100644
--- a/tests/serialization/serialization_test.py
+++ b/tests/serialization/serialization_test.py
@@ -1,6 +1,6 @@
import unittest
-from hazelcast.config import SerializationConfig
+from hazelcast.config import _Config
from hazelcast.core import Address
from hazelcast.serialization.data import Data
from hazelcast.serialization.service import SerializationServiceV1
@@ -9,7 +9,7 @@
class SerializationTestCase(unittest.TestCase):
def setUp(self):
- self.service = SerializationServiceV1(serialization_config=SerializationConfig())
+ self.service = SerializationServiceV1(_Config())
def tearDown(self):
self.service.destroy()
diff --git a/tests/serialization/serializers_test.py b/tests/serialization/serializers_test.py
index 7d78701a7e..a89558c366 100644
--- a/tests/serialization/serializers_test.py
+++ b/tests/serialization/serializers_test.py
@@ -5,7 +5,7 @@
from hazelcast import six
from hazelcast.core import HazelcastJsonValue
-from hazelcast.config import SerializationConfig, INTEGER_TYPE
+from hazelcast.config import IntType, _Config
from hazelcast.errors import HazelcastSerializationError
from hazelcast.serialization import BE_INT, MAX_BYTE, MAX_SHORT, MAX_INT, MAX_LONG
from hazelcast.serialization.predicate import *
@@ -31,7 +31,7 @@ def __ne__(self, other):
class SerializersTest(unittest.TestCase):
def setUp(self):
- self.service = SerializationServiceV1(SerializationConfig())
+ self.service = SerializationServiceV1(_Config())
def tearDown(self):
self.service.destroy()
@@ -121,7 +121,7 @@ def validate_predicate(self, predicate):
class SerializersLiveTest(SingleMemberTestCase):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
+ config["cluster_name"] = cls.cluster.id
return config
def setUp(self):
@@ -152,8 +152,8 @@ def set_on_server(self, obj):
return response.success
def replace_serialization_service(self, integer_type):
- config = SerializationConfig()
- config.default_integer_type = integer_type
+ config = _Config()
+ config.default_int_type = integer_type
new_service = SerializationServiceV1(config)
self.map._wrapped._to_data = new_service.to_data
self.map._wrapped._to_object = new_service.to_object
@@ -166,7 +166,7 @@ def test_bool(self):
self.assertEqual(value, response)
def test_byte(self):
- self.replace_serialization_service(INTEGER_TYPE.BYTE)
+ self.replace_serialization_service(IntType.BYTE)
value = (1 << 7) - 1
self.map.set("key", value)
self.assertEqual(value, self.map.get("key"))
@@ -174,7 +174,7 @@ def test_byte(self):
self.assertEqual(value, response)
def test_short(self):
- self.replace_serialization_service(INTEGER_TYPE.SHORT)
+ self.replace_serialization_service(IntType.SHORT)
value = -1 * (1 << 15)
self.map.set("key", value)
self.assertEqual(value, self.map.get("key"))
@@ -189,7 +189,7 @@ def test_int(self):
self.assertEqual(value, response)
def test_long(self):
- self.replace_serialization_service(INTEGER_TYPE.LONG)
+ self.replace_serialization_service(IntType.LONG)
value = -1 * (1 << 63)
self.map.set("key", value)
self.assertEqual(value, self.map.get("key"))
@@ -260,7 +260,7 @@ def test_datetime(self):
self.assertTrue(response.startswith(value.strftime("%a %b %d %H:%M:%S")))
def test_big_integer(self):
- self.replace_serialization_service(INTEGER_TYPE.BIG_INT)
+ self.replace_serialization_service(IntType.BIG_INT)
value = 1 << 128
self.map.set("key", value)
self.assertEqual(value, self.map.get("key"))
@@ -268,7 +268,7 @@ def test_big_integer(self):
self.assertEqual(value, response)
def test_variable_integer(self):
- self.replace_serialization_service(INTEGER_TYPE.VAR)
+ self.replace_serialization_service(IntType.VAR)
value = MAX_BYTE
self.map.set("key", value)
self.assertEqual(value, self.map.get("key"))
diff --git a/tests/serialization/string_test.py b/tests/serialization/string_test.py
index 60c4b668b0..5fe0cef9c8 100644
--- a/tests/serialization/string_test.py
+++ b/tests/serialization/string_test.py
@@ -2,7 +2,7 @@
import binascii
import unittest
-from hazelcast.config import SerializationConfig
+from hazelcast.config import _Config
from hazelcast.serialization.bits import *
from hazelcast.serialization.data import Data
from hazelcast.serialization.serialization_const import CONSTANT_TYPE_STRING
@@ -30,7 +30,7 @@ def to_data_byte(inp):
class StringSerializationTestCase(unittest.TestCase):
def setUp(self):
- self.service = SerializationServiceV1(serialization_config=SerializationConfig())
+ self.service = SerializationServiceV1(_Config())
def test_ascii_encode(self):
data_byte = to_data_byte(TEST_DATA_ASCII)
diff --git a/tests/shutdown_test.py b/tests/shutdown_test.py
index 932db424cc..fe34c01bb6 100644
--- a/tests/shutdown_test.py
+++ b/tests/shutdown_test.py
@@ -1,6 +1,5 @@
import threading
-from hazelcast import ClientConfig
from hazelcast.errors import HazelcastClientNotActiveError
from tests.base import HazelcastTestCase
@@ -18,11 +17,11 @@ def tearDown(self):
self.rc.exit()
def test_shutdown_not_hang_on_member_closed(self):
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.connection_strategy.connection_retry.cluster_connect_timeout = 5
member = self.cluster.start_member()
- client = self.create_client(config)
+ client = self.create_client({
+ "cluster_name": self.cluster.id,
+ "cluster_connect_timeout": 5.0,
+ })
my_map = client.get_map("test")
my_map.put("key", "value").result()
member.shutdown()
@@ -32,9 +31,9 @@ def test_shutdown_not_hang_on_member_closed(self):
def test_invocations_finalised_when_client_shutdowns(self):
self.cluster.start_member()
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- client = self.create_client(config)
+ client = self.create_client({
+ "cluster_name": self.cluster.id,
+ })
m = client.get_map("test")
m.put("key", "value").result()
diff --git a/tests/smart_listener_test.py b/tests/smart_listener_test.py
index 289515e0b4..215761b920 100644
--- a/tests/smart_listener_test.py
+++ b/tests/smart_listener_test.py
@@ -1,15 +1,13 @@
from tests.base import HazelcastTestCase
-from tests.util import configure_logging, random_string, event_collector
-from hazelcast.config import ClientConfig
+from tests.util import random_string, event_collector
from time import sleep
class SmartListenerTest(HazelcastTestCase):
@classmethod
def setUpClass(cls):
- configure_logging()
cls.rc = cls.create_rc()
- cls.cluster = cls.create_cluster(cls.rc, None) # Default config
+ cls.cluster = cls.create_cluster(cls.rc, None)
cls.m1 = cls.cluster.start_member()
cls.m2 = cls.cluster.start_member()
cls.m3 = cls.cluster.start_member()
@@ -19,10 +17,10 @@ def tearDownClass(cls):
cls.rc.exit()
def setUp(self):
- client_config = ClientConfig()
- client_config.cluster_name = self.cluster.id
- client_config.network.smart_routing = True
- self.client = self.create_client(client_config)
+ self.client = self.create_client({
+ "cluster_name": self.cluster.id,
+ "smart_routing": True,
+ })
self.collector = event_collector()
def tearDown(self):
diff --git a/tests/ssl/mutual_authentication_test.py b/tests/ssl/mutual_authentication_test.py
index 5bf98e5107..b7b37fe2df 100644
--- a/tests/ssl/mutual_authentication_test.py
+++ b/tests/ssl/mutual_authentication_test.py
@@ -2,9 +2,9 @@
from tests.base import HazelcastTestCase
from hazelcast.client import HazelcastClient
-from hazelcast.config import PROTOCOL
+from hazelcast.config import SSLProtocol
from hazelcast.errors import HazelcastError
-from tests.util import get_ssl_config, configure_logging, get_abs_path, set_attr
+from tests.util import get_ssl_config, get_abs_path, set_attr
@set_attr(enterprise=True)
@@ -15,10 +15,6 @@ class MutualAuthenticationTest(HazelcastTestCase):
ma_req_xml = get_abs_path(current_directory, "hazelcast-ma-required.xml")
ma_opt_xml = get_abs_path(current_directory, "hazelcast-ma-optional.xml")
- @classmethod
- def setUpClass(cls):
- configure_logging()
-
def setUp(self):
self.rc = self.create_rc()
@@ -28,11 +24,11 @@ def tearDown(self):
def test_ma_required_client_and_server_authenticated(self):
cluster = self.create_cluster(self.rc, self.configure_cluster(True))
cluster.start_member()
- client = HazelcastClient(get_ssl_config(cluster.id, True,
- get_abs_path(self.current_directory, "server1-cert.pem"),
- get_abs_path(self.current_directory, "client1-cert.pem"),
- get_abs_path(self.current_directory, "client1-key.pem"),
- protocol=PROTOCOL.TLSv1))
+ client = HazelcastClient(**get_ssl_config(cluster.id, True,
+ get_abs_path(self.current_directory, "server1-cert.pem"),
+ get_abs_path(self.current_directory, "client1-cert.pem"),
+ get_abs_path(self.current_directory, "client1-key.pem"),
+ protocol=SSLProtocol.TLSv1))
self.assertTrue(client.lifecycle_service.is_running())
client.shutdown()
@@ -41,42 +37,42 @@ def test_ma_required_server_not_authenticated(self):
cluster.start_member()
with self.assertRaises(HazelcastError):
- HazelcastClient(get_ssl_config(cluster.id, True,
- get_abs_path(self.current_directory, "server2-cert.pem"),
- get_abs_path(self.current_directory, "client1-cert.pem"),
- get_abs_path(self.current_directory, "client1-key.pem"),
- protocol=PROTOCOL.TLSv1))
+ HazelcastClient(**get_ssl_config(cluster.id, True,
+ get_abs_path(self.current_directory, "server2-cert.pem"),
+ get_abs_path(self.current_directory, "client1-cert.pem"),
+ get_abs_path(self.current_directory, "client1-key.pem"),
+ protocol=SSLProtocol.TLSv1))
def test_ma_required_client_not_authenticated(self):
cluster = self.create_cluster(self.rc, self.configure_cluster(True))
cluster.start_member()
with self.assertRaises(HazelcastError):
- HazelcastClient(get_ssl_config(cluster.id, True,
- get_abs_path(self.current_directory, "server1-cert.pem"),
- get_abs_path(self.current_directory, "client2-cert.pem"),
- get_abs_path(self.current_directory, "client2-key.pem"),
- protocol=PROTOCOL.TLSv1))
+ HazelcastClient(**get_ssl_config(cluster.id, True,
+ get_abs_path(self.current_directory, "server1-cert.pem"),
+ get_abs_path(self.current_directory, "client2-cert.pem"),
+ get_abs_path(self.current_directory, "client2-key.pem"),
+ protocol=SSLProtocol.TLSv1))
def test_ma_required_client_and_server_not_authenticated(self):
cluster = self.create_cluster(self.rc, self.configure_cluster(True))
cluster.start_member()
with self.assertRaises(HazelcastError):
- HazelcastClient(get_ssl_config(cluster.id, True,
- get_abs_path(self.current_directory, "server2-cert.pem"),
- get_abs_path(self.current_directory, "client2-cert.pem"),
- get_abs_path(self.current_directory, "client2-key.pem"),
- protocol=PROTOCOL.TLSv1))
+ HazelcastClient(**get_ssl_config(cluster.id, True,
+ get_abs_path(self.current_directory, "server2-cert.pem"),
+ get_abs_path(self.current_directory, "client2-cert.pem"),
+ get_abs_path(self.current_directory, "client2-key.pem"),
+ protocol=SSLProtocol.TLSv1))
def test_ma_optional_client_and_server_authenticated(self):
cluster = self.create_cluster(self.rc, self.configure_cluster(False))
cluster.start_member()
- client = HazelcastClient(get_ssl_config(cluster.id, True,
- get_abs_path(self.current_directory, "server1-cert.pem"),
- get_abs_path(self.current_directory, "client1-cert.pem"),
- get_abs_path(self.current_directory, "client1-key.pem"),
- protocol=PROTOCOL.TLSv1))
+ client = HazelcastClient(**get_ssl_config(cluster.id, True,
+ get_abs_path(self.current_directory, "server1-cert.pem"),
+ get_abs_path(self.current_directory, "client1-cert.pem"),
+ get_abs_path(self.current_directory, "client1-key.pem"),
+ protocol=SSLProtocol.TLSv1))
self.assertTrue(client.lifecycle_service.is_running())
client.shutdown()
@@ -85,48 +81,49 @@ def test_ma_optional_server_not_authenticated(self):
cluster.start_member()
with self.assertRaises(HazelcastError):
- HazelcastClient(get_ssl_config(cluster.id, True,
- get_abs_path(self.current_directory, "server2-cert.pem"),
- get_abs_path(self.current_directory, "client1-cert.pem"),
- get_abs_path(self.current_directory, "client1-key.pem"),
- protocol=PROTOCOL.TLSv1))
+ HazelcastClient(**get_ssl_config(cluster.id, True,
+ get_abs_path(self.current_directory, "server2-cert.pem"),
+ get_abs_path(self.current_directory, "client1-cert.pem"),
+ get_abs_path(self.current_directory, "client1-key.pem"),
+ protocol=SSLProtocol.TLSv1))
def test_ma_optional_client_not_authenticated(self):
cluster = self.create_cluster(self.rc, self.configure_cluster(False))
cluster.start_member()
with self.assertRaises(HazelcastError):
- HazelcastClient(get_ssl_config(cluster.id, True,
- get_abs_path(self.current_directory, "server1-cert.pem"),
- get_abs_path(self.current_directory, "client2-cert.pem"),
- get_abs_path(self.current_directory, "client2-key.pem"),
- protocol=PROTOCOL.TLSv1))
+ HazelcastClient(**get_ssl_config(cluster.id, True,
+ get_abs_path(self.current_directory, "server1-cert.pem"),
+ get_abs_path(self.current_directory, "client2-cert.pem"),
+ get_abs_path(self.current_directory, "client2-key.pem"),
+ protocol=SSLProtocol.TLSv1))
def test_ma_optional_client_and_server_not_authenticated(self):
cluster = self.create_cluster(self.rc, self.configure_cluster(False))
cluster.start_member()
with self.assertRaises(HazelcastError):
- HazelcastClient(get_ssl_config(cluster.id, True,
- get_abs_path(self.current_directory, "server2-cert.pem"),
- get_abs_path(self.current_directory, "client2-cert.pem"),
- get_abs_path(self.current_directory, "client2-key.pem"),
- protocol=PROTOCOL.TLSv1))
+ HazelcastClient(**get_ssl_config(cluster.id, True,
+ get_abs_path(self.current_directory, "server2-cert.pem"),
+ get_abs_path(self.current_directory, "client2-cert.pem"),
+ get_abs_path(self.current_directory, "client2-key.pem"),
+ protocol=SSLProtocol.TLSv1))
def test_ma_required_with_no_cert_file(self):
cluster = self.create_cluster(self.rc, self.configure_cluster(True))
cluster.start_member()
with self.assertRaises(HazelcastError):
- HazelcastClient(get_ssl_config(cluster.id, True, get_abs_path(self.current_directory, "server1-cert.pem"),
- protocol=PROTOCOL.TLSv1))
+ HazelcastClient(**get_ssl_config(cluster.id, True,
+ get_abs_path(self.current_directory, "server1-cert.pem"),
+ protocol=SSLProtocol.TLSv1))
def test_ma_optional_with_no_cert_file(self):
cluster = self.create_cluster(self.rc, self.configure_cluster(False))
cluster.start_member()
- client = HazelcastClient(
- get_ssl_config(cluster.id, True, get_abs_path(self.current_directory, "server1-cert.pem"),
- protocol=PROTOCOL.TLSv1))
+ client = HazelcastClient(**get_ssl_config(cluster.id, True,
+ get_abs_path(self.current_directory, "server1-cert.pem"),
+ protocol=SSLProtocol.TLSv1))
self.assertTrue(client.lifecycle_service.is_running())
client.shutdown()
diff --git a/tests/ssl/ssl_test.py b/tests/ssl/ssl_test.py
index b0ab2b0940..1103d6178e 100644
--- a/tests/ssl/ssl_test.py
+++ b/tests/ssl/ssl_test.py
@@ -3,7 +3,7 @@
from tests.base import HazelcastTestCase
from hazelcast.client import HazelcastClient
from hazelcast.errors import HazelcastError
-from hazelcast.config import PROTOCOL
+from hazelcast.config import SSLProtocol
from tests.util import get_ssl_config, configure_logging, fill_map, get_abs_path, set_attr
@@ -29,15 +29,15 @@ def test_ssl_disabled(self):
cluster.start_member()
with self.assertRaises(HazelcastError):
- HazelcastClient(get_ssl_config(cluster.id, False))
+ HazelcastClient(**get_ssl_config(cluster.id, False))
def test_ssl_enabled_is_client_live(self):
cluster = self.create_cluster(self.rc, self.configure_cluster(self.hazelcast_ssl_xml))
cluster.start_member()
- client = HazelcastClient(get_ssl_config(cluster.id, True,
- get_abs_path(self.current_directory, "server1-cert.pem"),
- protocol=PROTOCOL.TLSv1))
+ client = HazelcastClient(**get_ssl_config(cluster.id, True,
+ get_abs_path(self.current_directory, "server1-cert.pem"),
+ protocol=SSLProtocol.TLSv1))
self.assertTrue(client.lifecycle_service.is_running())
client.shutdown()
@@ -46,7 +46,8 @@ def test_ssl_enabled_trust_default_certificates(self):
cluster = self.create_cluster(self.rc, self.configure_cluster(self.default_ca_xml))
cluster.start_member()
- client = HazelcastClient(get_ssl_config(cluster.id, True, protocol=PROTOCOL.TLSv1))
+ client = HazelcastClient(**get_ssl_config(cluster.id, True,
+ protocol=SSLProtocol.TLSv1))
self.assertTrue(client.lifecycle_service.is_running())
client.shutdown()
@@ -56,15 +57,16 @@ def test_ssl_enabled_dont_trust_self_signed_certificates(self):
cluster.start_member()
with self.assertRaises(HazelcastError):
- HazelcastClient(get_ssl_config(cluster.id, True, protocol=PROTOCOL.TLSv1))
+ HazelcastClient(**get_ssl_config(cluster.id, True,
+ protocol=SSLProtocol.TLSv1))
def test_ssl_enabled_map_size(self):
cluster = self.create_cluster(self.rc, self.configure_cluster(self.hazelcast_ssl_xml))
cluster.start_member()
- client = HazelcastClient(get_ssl_config(cluster.id, True,
- get_abs_path(self.current_directory, "server1-cert.pem"),
- protocol=PROTOCOL.TLSv1))
+ client = HazelcastClient(**get_ssl_config(cluster.id, True,
+ get_abs_path(self.current_directory, "server1-cert.pem"),
+ protocol=SSLProtocol.TLSv1))
test_map = client.get_map("test_map")
fill_map(test_map, 10)
self.assertEqual(test_map.size().result(), 10)
@@ -74,11 +76,11 @@ def test_ssl_enabled_with_custom_ciphers(self):
cluster = self.create_cluster(self.rc, self.configure_cluster(self.hazelcast_ssl_xml))
cluster.start_member()
- client = HazelcastClient(get_ssl_config(cluster.id, True,
- get_abs_path(self.current_directory, "server1-cert.pem"),
- protocol=PROTOCOL.TLSv1,
- ciphers="DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:DHE-RSA-DES-"
- "CBC3-SHA:DHE-RSA-DES-CBC3-SHA:DHE-DSS-DES-CBC3-SHA"))
+ client = HazelcastClient(**get_ssl_config(cluster.id, True,
+ get_abs_path(self.current_directory, "server1-cert.pem"),
+ protocol=SSLProtocol.TLSv1,
+ ciphers="DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:DHE-RSA-DES-"
+ "CBC3-SHA:DHE-RSA-DES-CBC3-SHA:DHE-DSS-DES-CBC3-SHA"))
self.assertTrue(client.lifecycle_service.is_running())
client.shutdown()
@@ -87,10 +89,10 @@ def test_ssl_enabled_with_invalid_ciphers(self):
cluster.start_member()
with self.assertRaises(HazelcastError):
- HazelcastClient(get_ssl_config(cluster.id, True,
- get_abs_path(self.current_directory, "server1-cert.pem"),
- protocol=PROTOCOL.TLSv1,
- ciphers="INVALID-CIPHER1:INVALID_CIPHER2"))
+ HazelcastClient(**get_ssl_config(cluster.id, True,
+ get_abs_path(self.current_directory, "server1-cert.pem"),
+ protocol=SSLProtocol.TLSv1,
+ ciphers="INVALID-CIPHER1:INVALID_CIPHER2"))
def test_ssl_enabled_with_protocol_mismatch(self):
cluster = self.create_cluster(self.rc, self.configure_cluster(self.hazelcast_ssl_xml))
@@ -98,9 +100,9 @@ def test_ssl_enabled_with_protocol_mismatch(self):
# Member configured with TLSv1
with self.assertRaises(HazelcastError):
- HazelcastClient(get_ssl_config(cluster.id, True,
- get_abs_path(self.current_directory, "server1-cert.pem"),
- protocol=PROTOCOL.SSLv3))
+ HazelcastClient(**get_ssl_config(cluster.id, True,
+ get_abs_path(self.current_directory, "server1-cert.pem"),
+ protocol=SSLProtocol.SSLv3))
def configure_cluster(self, filename):
with open(filename, "r") as f:
diff --git a/tests/statistics_test.py b/tests/statistics_test.py
index 007b72f2ed..f5c9e80837 100644
--- a/tests/statistics_test.py
+++ b/tests/statistics_test.py
@@ -1,10 +1,8 @@
import time
-import os
from tests.base import HazelcastTestCase
from hazelcast.statistics import Statistics
from hazelcast.client import HazelcastClient
-from hazelcast.config import ClientConfig, ClientProperties, NearCacheConfig
from hazelcast.version import CLIENT_VERSION, CLIENT_TYPE
from tests.hzrc.ttypes import Lang
from tests.util import random_string
@@ -25,9 +23,7 @@ def tearDownClass(cls):
cls.rc.exit()
def test_statistics_disabled_by_default(self):
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- client = HazelcastClient(config)
+ client = HazelcastClient(cluster_name=self.cluster.id)
time.sleep(2 * self.DEFAULT_STATS_PERIOD)
client_uuid = client._connection_manager.client_uuid
@@ -37,26 +33,8 @@ def test_statistics_disabled_by_default(self):
self.assertIsNone(response.result)
client.shutdown()
- def test_statistics_disabled_with_wrong_value(self):
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.set_property(ClientProperties.STATISTICS_ENABLED.name, "truee")
- config.set_property(ClientProperties.STATISTICS_PERIOD_SECONDS.name, self.STATS_PERIOD)
- client = HazelcastClient(config)
- client_uuid = client._connection_manager.client_uuid
-
- time.sleep(2 * self.STATS_PERIOD)
- response = self._get_client_stats_from_server(client_uuid)
-
- self.assertTrue(response.success)
- self.assertIsNone(response.result)
- client.shutdown()
-
def test_statistics_enabled(self):
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.set_property(ClientProperties.STATISTICS_ENABLED.name, True)
- client = HazelcastClient(config)
+ client = HazelcastClient(cluster_name=self.cluster.id, statistics_enabled=True)
client_uuid = client._connection_manager.client_uuid
time.sleep(2 * self.DEFAULT_STATS_PERIOD)
@@ -64,29 +42,10 @@ def test_statistics_enabled(self):
client.shutdown()
- def test_statistics_enabled_with_environment_variable(self):
- environ = os.environ
- environ[ClientProperties.STATISTICS_ENABLED.name] = "true"
- environ[ClientProperties.STATISTICS_PERIOD_SECONDS.name] = str(self.STATS_PERIOD)
-
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- client = HazelcastClient(config)
- client_uuid = client._connection_manager.client_uuid
-
- time.sleep(2 * self.STATS_PERIOD)
- self._wait_for_statistics_collection(client_uuid)
-
- os.unsetenv(ClientProperties.STATISTICS_ENABLED.name)
- os.unsetenv(ClientProperties.STATISTICS_PERIOD_SECONDS.name)
- client.shutdown()
-
def test_statistics_period(self):
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.set_property(ClientProperties.STATISTICS_ENABLED.name, True)
- config.set_property(ClientProperties.STATISTICS_PERIOD_SECONDS.name, self.STATS_PERIOD)
- client = HazelcastClient(config)
+ client = HazelcastClient(cluster_name=self.cluster.id,
+ statistics_enabled=True,
+ statistics_period=self.STATS_PERIOD)
client_uuid = client._connection_manager.client_uuid
time.sleep(2 * self.STATS_PERIOD)
@@ -98,31 +57,14 @@ def test_statistics_period(self):
self.assertNotEqual(response1, response2)
client.shutdown()
- def test_statistics_enabled_with_negative_period(self):
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.set_property(ClientProperties.STATISTICS_ENABLED.name, True)
- config.set_property(ClientProperties.STATISTICS_PERIOD_SECONDS.name, -1 * self.STATS_PERIOD)
- client = HazelcastClient(config)
- client_uuid = client._connection_manager.client_uuid
-
- time.sleep(2 * self.DEFAULT_STATS_PERIOD)
- self._wait_for_statistics_collection(client_uuid)
-
- client.shutdown()
-
def test_statistics_content(self):
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.set_property(ClientProperties.STATISTICS_ENABLED.name, True)
- config.set_property(ClientProperties.STATISTICS_PERIOD_SECONDS.name, self.STATS_PERIOD)
-
map_name = random_string()
-
- near_cache_config = NearCacheConfig(map_name)
- config.near_caches[map_name] = near_cache_config
-
- client = HazelcastClient(config)
+ client = HazelcastClient(cluster_name=self.cluster.id,
+ statistics_enabled=True,
+ statistics_period=self.STATS_PERIOD,
+ near_caches={
+ map_name: {},
+ })
client_uuid = client._connection_manager.client_uuid
client.get_map(map_name).blocking()
@@ -164,17 +106,13 @@ def test_statistics_content(self):
client.shutdown()
def test_special_characters(self):
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.set_property(ClientProperties.STATISTICS_ENABLED.name, True)
- config.set_property(ClientProperties.STATISTICS_PERIOD_SECONDS.name, self.STATS_PERIOD)
-
map_name = random_string() + ",t=es\\t"
-
- near_cache_config = NearCacheConfig(map_name)
- config.near_caches[map_name] = near_cache_config
-
- client = HazelcastClient(config)
+ client = HazelcastClient(cluster_name=self.cluster.id,
+ statistics_enabled=True,
+ statistics_period=self.STATS_PERIOD,
+ near_caches={
+ map_name: {},
+ })
client_uuid = client._connection_manager.client_uuid
client.get_map(map_name).blocking()
@@ -189,17 +127,13 @@ def test_special_characters(self):
client.shutdown()
def test_near_cache_stats(self):
- config = ClientConfig()
- config.cluster_name = self.cluster.id
- config.set_property(ClientProperties.STATISTICS_ENABLED.name, True)
- config.set_property(ClientProperties.STATISTICS_PERIOD_SECONDS.name, self.STATS_PERIOD)
-
map_name = random_string()
-
- near_cache_config = NearCacheConfig(map_name)
- config.near_caches[map_name] = near_cache_config
-
- client = HazelcastClient(config)
+ client = HazelcastClient(cluster_name=self.cluster.id,
+ statistics_enabled=True,
+ statistics_period=self.STATS_PERIOD,
+ near_caches={
+ map_name: {},
+ })
client_uuid = client._connection_manager.client_uuid
test_map = client.get_map(map_name).blocking()
diff --git a/tests/threading_test.py b/tests/threading_test.py
index 2f86bfbd71..1432553608 100644
--- a/tests/threading_test.py
+++ b/tests/threading_test.py
@@ -14,7 +14,7 @@
class ThreadingTest(SingleMemberTestCase):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
+ config["cluster_name"] = cls.cluster.id
return config
def setUp(self):
diff --git a/tests/transaction_test.py b/tests/transaction_test.py
index 492abbde5a..2b003d282f 100644
--- a/tests/transaction_test.py
+++ b/tests/transaction_test.py
@@ -10,7 +10,7 @@
class TransactionTest(SingleMemberTestCase):
@classmethod
def configure_client(cls, config):
- config.cluster_name = cls.cluster.id
+ config["cluster_name"] = cls.cluster.id
return config
def test_begin_and_commit_transaction(self):
diff --git a/tests/util.py b/tests/util.py
index 5893b5a04d..b887984e75 100644
--- a/tests/util.py
+++ b/tests/util.py
@@ -3,7 +3,7 @@
import time
from uuid import uuid4
-from hazelcast.config import ClientConfig, PROTOCOL
+from hazelcast.config import SSLProtocol
def random_string():
@@ -39,19 +39,19 @@ def get_ssl_config(cluster_name, enable_ssl=False,
certfile=None,
keyfile=None,
password=None,
- protocol=PROTOCOL.TLS,
+ protocol=SSLProtocol.TLSv1_2,
ciphers=None):
- config = ClientConfig()
- config.cluster_name = cluster_name
- config.network.ssl.enabled = enable_ssl
- config.network.ssl.cafile = cafile
- config.network.ssl.certfile = certfile
- config.network.ssl.keyfile = keyfile
- config.network.ssl.password = password
- config.network.ssl.protocol = protocol
- config.network.ssl.ciphers = ciphers
-
- config.connection_strategy.connection_retry.cluster_connect_timeout = 2
+ config = {
+ "cluster_name": cluster_name,
+ "ssl_enabled": enable_ssl,
+ "ssl_cafile": cafile,
+ "ssl_certfile": certfile,
+ "ssl_keyfile": keyfile,
+ "ssl_password": password,
+ "ssl_protocol": protocol,
+ "ssl_ciphers": ciphers,
+ "cluster_connect_timeout": 2,
+ }
return config
diff --git a/tests/util_test.py b/tests/util_test.py
index 7c5445587d..8138308c8a 100644
--- a/tests/util_test.py
+++ b/tests/util_test.py
@@ -1,3 +1,5 @@
+from hazelcast.config import IndexConfig, IndexUtil, IndexType, QueryConstants, \
+ UniqueKeyTransformation
from hazelcast.util import TimeUnit, calculate_version
from unittest import TestCase
@@ -45,4 +47,71 @@ def test_version_string(self):
self.assertEqual(30702, calculate_version("3.7.2-SNAPSHOT"))
self.assertEqual(109902, calculate_version("10.99.2-SNAPSHOT"))
self.assertEqual(109930, calculate_version("10.99.30-SNAPSHOT"))
- self.assertEqual(109900, calculate_version("10.99-RC1"))
\ No newline at end of file
+ self.assertEqual(109900, calculate_version("10.99-RC1"))
+
+
+class IndexUtilTest(TestCase):
+ def test_with_no_attributes(self):
+ config = IndexConfig()
+
+ with self.assertRaises(ValueError):
+ IndexUtil.validate_and_normalize("", config)
+
+ def test_with_too_many_attributes(self):
+ attributes = ["attr_%s" % i for i in range(512)]
+ config = IndexConfig(attributes=attributes)
+
+ with self.assertRaises(ValueError):
+ IndexUtil.validate_and_normalize("", config)
+
+ def test_with_composite_bitmap_indexes(self):
+ config = IndexConfig(attributes=["attr1", "attr2"], type=IndexType.BITMAP)
+
+ with self.assertRaises(ValueError):
+ IndexUtil.validate_and_normalize("", config)
+
+ def test_canonicalize_attribute_name(self):
+ config = IndexConfig(attributes=["this.x.y.z", "a.b.c"])
+ normalized = IndexUtil.validate_and_normalize("", config)
+ self.assertEqual("x.y.z", normalized.attributes[0])
+ self.assertEqual("a.b.c", normalized.attributes[1])
+
+ def test_duplicate_attributes(self):
+ invalid_attributes = [
+ ["a", "b", "a"],
+ ["a", "b", " a"],
+ [" a", "b", "a"],
+ ["this.a", "b", "a"],
+ ["this.a ", "b", " a"],
+ ["this.a", "b", "this.a"],
+ ["this.a ", "b", " this.a"],
+ [" this.a", "b", "a"],
+ ]
+
+ for attributes in invalid_attributes:
+ with self.assertRaises(ValueError):
+ config = IndexConfig(attributes=attributes)
+ IndexUtil.validate_and_normalize("", config)
+
+ def test_normalized_name(self):
+ config = IndexConfig(None, IndexType.SORTED, ["attr"])
+ normalized = IndexUtil.validate_and_normalize("map", config)
+ self.assertEqual("map_sorted_attr", normalized.name)
+
+ config = IndexConfig("test", IndexType.BITMAP, ["attr"])
+ normalized = IndexUtil.validate_and_normalize("map", config)
+ self.assertEqual("test", normalized.name)
+
+ config = IndexConfig(None, IndexType.HASH, ["this.attr2.x"])
+ normalized = IndexUtil.validate_and_normalize("map2", config)
+ self.assertEqual("map2_hash_attr2.x", normalized.name)
+
+ def test_with_bitmap_indexes(self):
+ bio = {
+ "unique_key": QueryConstants.THIS_ATTRIBUTE_NAME,
+ "unique_key_transformation": UniqueKeyTransformation.RAW
+ }
+ config = IndexConfig(type=IndexType.BITMAP, attributes=["attr"], bitmap_index_options=bio)
+ normalized = IndexUtil.validate_and_normalize("map", config)
+ self.assertEqual(bio["unique_key"], normalized.bitmap_index_options.unique_key)
+ self.assertEqual(bio["unique_key_transformation"], normalized.bitmap_index_options.unique_key_transformation)