Heavy slow-down depending on number of keys #57

Open
ronny-rentner opened this issue Feb 25, 2022 · 5 comments
@ronny-rentner

ronny-rentner commented Feb 25, 2022

Tried on Debian, Python 3.9, main branch:

Python 3.9.2 (default, Feb 28 2021, 17:03:44) 
[GCC 10.2.1 20210110] on linux

>>> import shared_memory_dict, timeit
>>> orig = {}
>>> for i in range(100): orig[i] = i
... 
>>> len(orig)
100
>>> timeit.timeit('orig[10]', globals=globals())
0.03487025899812579
>>> for i in range(1000): orig[i] = i
... 
>>> timeit.timeit('orig[10]', globals=globals())
0.0321930960053578

>>> shared = shared_memory_dict.SharedMemoryDict('some_name', 10000)
>>> for i in range(100): shared[i] = i
... 
>>> len(shared)
100
>>> timeit.timeit('shared[10]', globals=globals())
3.8467855940107256
>>> for i in range(1000): shared[i] = i
... 
>>> len(shared)
1000
>>> timeit.timeit('shared[10]', globals=globals())
29.410958576016128

Is that the intended or expected behavior?

It looks to me like it deserializes the whole dict for every single get of a value, even when nothing has changed.

@spaceone
Contributor

Yes, the whole dictionary is deserialized on every access. Shared memory doesn't have a way to notify readers about changes.

One way to solve this that comes to mind is to store a version counter (or something similar) at the start of the block, read that first, and compare it with the last fetched one. If it is unchanged, the cached value can be reused and no deserialization has to be done.
But this would be an API change in the serializers.
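
A minimal sketch of that idea, assuming pickle as the serializer and an 8-byte counter at the front of the shared memory block (the class and method names here are made up, not the existing API):

import pickle
import struct
from multiprocessing import shared_memory

class VersionedView:
    """Hypothetical sketch: only deserialize when the version counter changed."""

    HEADER = struct.Struct("<Q")  # 8-byte little-endian version counter

    def __init__(self, shm: shared_memory.SharedMemory):
        self._shm = shm
        self._cached_version = None
        self._cached_dict = {}

    def write(self, data: dict) -> None:
        payload = pickle.dumps(data)
        version, = self.HEADER.unpack_from(self._shm.buf, 0)
        self._shm.buf[self.HEADER.size:self.HEADER.size + len(payload)] = payload
        # Bump the counter after the payload so readers don't see a new
        # version with old data.
        self.HEADER.pack_into(self._shm.buf, 0, version + 1)

    def read(self) -> dict:
        version, = self.HEADER.unpack_from(self._shm.buf, 0)
        if version == 0:
            return {}  # nothing has been written yet
        if version != self._cached_version:
            # Pay the deserialization cost only when the counter changed.
            self._cached_dict = pickle.loads(self._shm.buf[self.HEADER.size:])
            self._cached_version = version
        return self._cached_dict

A real implementation would still need locking (or a re-check of the counter) to avoid torn reads, and the serializers would have to understand the header, which is the API change mentioned above.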

@ronny-rentner
Author

Hmm, not sure regarding the serializers.

I've hacked together a different approach that uses a stream of updates instead of serializing the whole dict. It only serializes the whole dict if necessary, i.e. when the stream buffer is full. Feel free to check it out: https://github.com/ronny-rentner/UltraDict

It's not a real package yet, just a hack.
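
Roughly, the idea looks like this (a toy single-process illustration, not UltraDict's actual code; the shared stream buffer is faked with a plain list and None is used as the full-dump marker):

import pickle

class StreamView:
    """One process's view of a shared update stream (hypothetical sketch).

    All views share `stream`, which stands in for the shared-memory buffer;
    each view keeps a local dict cache and replays only the log entries it
    has not applied yet."""

    def __init__(self, stream, max_entries=1024):
        self._stream = stream            # {"generation": int, "entries": [bytes]}
        self._max_entries = max_entries
        self._cache = {}
        self._generation = stream["generation"]
        self._applied = 0

    def _sync(self):
        if self._stream["generation"] != self._generation:
            # The stream was compacted: start over from the full dump.
            self._generation = self._stream["generation"]
            self._applied = 0
        for raw in self._stream["entries"][self._applied:]:
            key, value = pickle.loads(raw)
            if key is None:              # None marks a full-dict dump
                self._cache = value
            else:
                self._cache[key] = value
        self._applied = len(self._stream["entries"])

    def __setitem__(self, key, value):
        self._sync()
        self._cache[key] = value
        entries = self._stream["entries"]
        if len(entries) >= self._max_entries:
            # Stream full: serialize the whole dict once and reset the stream.
            self._stream["generation"] += 1
            self._stream["entries"] = [pickle.dumps((None, dict(self._cache)))]
            self._generation = self._stream["generation"]
            self._applied = 1
        else:
            entries.append(pickle.dumps((key, value)))
            self._applied = len(entries)

    def __getitem__(self, key):
        self._sync()
        return self._cache[key]

# Usage: two views share one stream; a read replays only new updates instead
# of deserializing the whole dict.
stream = {"generation": 0, "entries": []}
writer, reader = StreamView(stream), StreamView(stream)
writer["a"] = 1
writer["b"] = 2
print(reader["a"], reader["b"])  # -> 1 2

A read only replays updates appended since the last access, so its cost no longer grows with the total number of keys; the full dict is serialized only when the stream is compacted.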

@wnark

wnark commented Mar 10, 2023

I have tried converting dictionaries to numpy arrays for speed and storing them in shared memory. Reading the numpy array back within the same process works fine, but reading it from another interpreter process crashes the reading process without producing a dump file. If the need comes up again, I will look into debugging it with gdb.

Dictionary and array interchange:

import numpy as np

def _dict_to_array(message_dict: dict):
    """Walk the dict and convert nested dicts into numpy arrays."""
    if not message_dict:
        return np.array([])
    items = []
    for key, value in message_dict.items():
        if isinstance(value, dict):
            # Recurse into non-empty nested dicts; map empty dicts to empty arrays
            value = _dict_to_array(value) if value else np.array([])
            # Append the (possibly recursed) result to the items list
            items.append((key, value))
        else:
            items.append((key, value))
    message_array = np.array(items, dtype=object)
    return message_array

message = {
    "stress_info":{
        "stress_path":"/mnt/",
        "process_num":2,
        "cycles_num":3,
        "monitor":{
            "d64f3e14-a084-11ed-acbb-fa163e959a50":{
                "pid_num":126664,
                "pid_code":"0x0000",
                "pid_state":"running",
                "pid_message":"Average reading speed: 100.25 MB/s\n"
            },
            "d65f25a4-a084-11ed-860f-fa163e959a50":{
                "pid_num":126665,
                "pid_code":"0x0000",
                "pid_state":"running",
                "pid_message":"Average reading speed: 124.76 MB/s\n"
            }
        },
        "completed_num":6,
        "total_num":6
    }
}

def _array_to_dict(
        message_array: np.ndarray
    ):
    """Walk the array and convert nested arrays back into a dict.

    Args:
        message_array (np.ndarray): NumPy array holding the dict structure
    Return:
        dict: the dict reconstructed from the NumPy array
    """
    # If the array is empty, return an empty dict
    if not message_array.size:
        return {}
    message_dict = {}
    for item in message_array:
        # Each item is one (key, value) pair taken from the numpy array
        key, value = item
        if isinstance(value, np.ndarray):
            # If the value is an ndarray, recurse to rebuild the nested dict
            value = _array_to_dict(value)
        # Put the (possibly recursed) value back into the result dict
        message_dict[key] = value
    # Return the reconstructed dict
    return message_dict

result = _dict_to_array(message)
# print(result)
import sys
print(f"numpy数组字节{sys.getsizeof(result)}")
print(result.nbytes)
print(result.itemsize)
dict_shape = result.shape  # get the array shape
new_message = _array_to_dict(result)
print(new_message)

The part that writes the array into shared memory:

            # Convert the dict to a numpy array
            send_message_array = self._dict_to_array(send_message)
            # Get the array's shape (a tuple)
            dict_shape = send_message_array.shape
            # Build a numpy array on top of the shared memory buffer
            # (note: with dtype=object the array holds pointers to Python
            # objects, not the data itself)
            sm_store_buf = sm_store.buf
            np_array = np.ndarray(dict_shape, dtype=object, buffer=sm_store_buf)
            # Copy the original data one-to-one into the shared-memory-backed array
            np_array[:] = send_message_array[:]
            print("server wrote the numpy array into shared memory successfully")
            # For debugging: reading it back within the same process works
            receive_message_array = np.ndarray(dict_shape, dtype=object, buffer=sm_store_buf)
            # Write the dict_shape tuple into shared memory as well
            # (this is not implemented accurately yet; to be fixed later)
            sm_store_shape_buf = sm_store_shape.buf
            for i, value in enumerate(dict_shape):
                sm_store_shape_buf[i] = value

            # For debugging: read the shape back from the buffer
            dict_shape_read = tuple(sm_store_shape_buf)

@wnark

wnark commented Mar 10, 2023

@ronny-rentner On my side, I was thinking of using locks so that every read through shared memory is forced to refresh, and leaving any caching for the communicating processes to implement themselves, e.g. over inter-process pipes.
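
A minimal sketch of that kind of lock-guarded access, assuming pickle and made-up names (the Lock object would have to be passed to each child process, e.g. as a Process argument):

import pickle
from multiprocessing import Lock, shared_memory

lock = Lock()
shm = shared_memory.SharedMemory(name="locked_dict_demo", create=True, size=10_000)

def write_dict(data: dict) -> None:
    payload = pickle.dumps(data)
    with lock:
        shm.buf[:len(payload)] = payload

def read_dict() -> dict:
    # Forced refresh: deserialize the shared buffer on every read
    # (assumes write_dict() has been called at least once).
    with lock:
        return pickle.loads(shm.buf)

Any caching on top of that would then be up to each process.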

@ronny-rentner
Author

@wnark I don't know what your specific use case is, but since I have created UltraDict, I no longer need shared-memory-dict.
