Remote multithreading stalls client ... and a pitch #491

Closed · notEvil opened this issue May 14, 2022 · 23 comments
Labels: Triage (Investigation by a maintainer has started)

notEvil commented May 14, 2022

Hi,

Following the issue template: I expect the example below to work ^^
The problem is how incoming requests are assigned to local threads.

To make this work, rpyc obviously needs to spawn client-side threads to "mirror" server-side threads when necessary. What about:

  • on request, threads enter a pool and send their request
    • requests contain two ids: (local thread id, remote thread id or None)
  • one thread in the pool is responsible for receiving
  • the receiving thread unpacks the request and directs it to the corresponding thread or a new thread
    • if the request is directed to itself, it will pass responsibility of receiving to another thread
  • the corresponding thread leaves the thread pool and processes the request
    • new threads may be reused

This scheme has a positive side effect: threads no longer interfere with each other (no race conditions, no stalling due to shared responsibilities). Downsides: a few more bytes per request, and some use cases may create many idle threads (though one thread making synchronous requests on many connections is fine). Async is covered by not entering the pool in the first place.
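To make the id bookkeeping concrete, here is a minimal sketch of the request envelope this scheme implies (illustrative field names, not rpyc's actual wire format):

import threading

def make_request(payload, bound_remote_thread_id=None):
    # Hypothetical envelope: every request carries the sending thread's id and,
    # if this thread is already bound to a remote thread, that remote thread's id.
    return {
        "local_thread_id": threading.get_ident(),
        "remote_thread_id": bound_remote_thread_id,  # None on the first request of a chain
        "payload": payload,
    }

On the receiving side, remote_thread_id (if set) names one of the receiver's own threads and decides whether the message is routed to that thread; if it is None, the message goes to an idle thread from the pool or to a freshly spawned one.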

Environment
  • rpyc 5.1.0
  • python 3.10
  • Arch Linux
Minimal example

Server:

import rpyc
import threading
import time

class Service(rpyc.Service):
    def exposed_function(self, event):
        threading.Thread(target=event.wait).start()
        time.sleep(1)
        threading.Thread(target=event.set).start()

rpyc.ThreadedServer(Service(), port=18812).start()

Client:

import rpyc
import threading

connection = rpyc.connect("localhost", 18812, config=dict(allow_public_attrs=True))
connection.root.function(threading.Event())
notEvil (Author) commented May 17, 2022

I just finished a prototype at notEvil@46b49e3
It looks complicated due to the massive locking, but I wanted this to not race under any circumstance (yet there is still one involving close). For an easier start, a few notes regarding the variables:

  • lock protects shared state
  • queues hold received data for non-receiving threads
  • receiving states if some thread is currently receiving
  • events are used to synchronize on state change (data received, pass responsibility of receiving, ..) and represent the pool of waiting threads
  • request_counts are used to ignore waiting threads for new requests (because they aren't supposed to work on others' tasks)
  • thread_ids are used to link local threads to remote threads (in case of ping pong)

The unittests that succeeded with master also succeed with this patch. Some unittests fail, or at least print a stacktrace, due to race conditions on close which I couldn't figure out. FYI, introducing time.sleep(0.5) right after the async request HANDLE_CLOSE "resolves" the race conditions.
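For orientation, the shared state could be summarized roughly like this (a hypothetical condensation of the variables listed above, not the actual code in the commit):

import collections
import queue
import threading

class ConnectionThreadState:
    # hypothetical per-connection bookkeeping mirroring the notes above
    def __init__(self):
        self.lock = threading.Lock()  # protects everything below
        self.queues = collections.defaultdict(queue.SimpleQueue)  # thread id -> received data
        self.receiving = False  # is some thread currently receiving from the socket?
        self.events = collections.defaultdict(threading.Event)  # per-thread state-change signals; waiting threads form the pool
        self.request_counts = collections.defaultdict(int)  # outstanding requests per local thread
        self.thread_ids = {}  # local thread id -> remote thread id (for ping pong)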

notEvil (Author) commented May 18, 2022

The prototype has no issues with the following script. Current rpyc, as expected, stalls some threads (yielding expired results) and is much slower due to single-threaded operation. Hope this increases the chance of consideration :) (not complaining!)

import rpyc
import logging
import random
import threading
import time


SLEEPLESS = False
PORT = 18812


class Service(rpyc.Service):
    def __init__(self):
        super().__init__()

        self._remote_function = None

    def on_connect(self, connection):
        self._remote_function = connection.root.function
        for _ in range(10):
            threading.Thread(target=self._worker, args=(connection,)).start()

    def _worker(self, connection):
        for _ in range(2000 if SLEEPLESS else 100):
            if not SLEEPLESS:
                logging.info(repr(_))
            self._call()

    def exposed_function(self, complexity):
        if random.random() < 0.292893219:  # (1 - _)**2 == 0.5
            self._call()
        if SLEEPLESS:
            if random.random() < 0.01:
                time.sleep(0.01)
        else:
            time.sleep(complexity)
        if random.random() < 0.292893219:
            self._call()
        return complexity

    def _call(self):
        complexity = 0 if random.random() < 0.5 else random.uniform(0, 1)
        while self._remote_function is None:
            pass
        assert self._remote_function(complexity) == complexity


server_thread = threading.Thread(
    target=rpyc.ThreadedServer(Service, port=PORT).start, daemon=True
)
server_thread.start()

connection = rpyc.connect("localhost", PORT, service=Service)

comrumino (Collaborator) commented

I'll see if I can review this soon. I just pushed out a new unit test named test_affinity.py. The unittest is derived from client/server scripts created to reproduce past race condition issues. Lmk how we should move forward with #492

notEvil (Author) commented May 19, 2022

Great. #492 is not really related to this issue (though the prototype doesn't need the fix by design), so I'll keep it short: I still think there is no way around extending the lock to wait and _dispatch to prevent races, which, to be fair, are less painful on busy connections.

comrumino (Collaborator) commented

#492 is not really related to this issue

Oops, I was connecting dots that are not there. Let us refocus... the title of this issue: "remote multithreading stalls client."

The claim in the title is inaccurate. The remote multithreading does not stall the client. In fact, your implementation sleeps on the server thread that serves the client, and the client invokes the function with a synchronous request. The client waiting for a response during a synchronous request is expected behavior.

I rewrote your client/server example to better illustrate what happens and here is the output

% python ./bin/client.py
Running async example...
Created async result after 0.0012280941009521484s
Value returned after 1.0123558044433594s: silly sleeps on server threads

Running synchronous example...
Value returned after 1.0149140357971191s: silly sleeps on server threads

client.py

import rpyc
import threading
import time


def async_example(connection):
    t0 = time.time()
    print(f"Running async example...")
    _async_function = rpyc.async_(connection.root.function)
    res = _async_function(threading.Event())
    print(f"Created async result after {time.time()-t0}s")
    value = res.value
    print(f"Value returned after {time.time()-t0}s: {value}")
    print()


def synchronous_example(connection):
    t0 = time.time()
    print(f"Running synchronous example...")
    value = connection.root.function(threading.Event())
    print(f"Value returned after {time.time()-t0}s: {value}")
    print()


if __name__ == "__main__":
    connection = rpyc.connect("localhost", 18812, config=dict(allow_public_attrs=True))
    async_example(connection)
    synchronous_example(connection)

server.py

import rpyc
import threading
import time


class Service(rpyc.Service):
    def exposed_function(self, event):
        threading.Thread(target=event.wait).start()
        time.sleep(1)
        threading.Thread(target=event.set).start()
        return 'silly sleeps on server threads'


if __name__ == "__main__":
    rpyc.ThreadedServer(Service(), hostname="localhost", port=18812).start()

I hope this helps clarify the behavior you are seeing.

comrumino self-assigned this May 19, 2022
comrumino added the Triage (Investigation by a maintainer has started) label May 19, 2022
notEvil (Author) commented May 19, 2022

I'm a little puzzled by the fact that current master doesn't stall, but 7ea2d24 (a few commits back) as well as 5.1.0 does on my setup. Can you confirm?

The example is rather simple and my goal was to let the server call Event.wait, yielding a call request which the client happily dispatches, and later call Event.set, again yield a call request which the client won't dispatch because it is busy waiting. What am I missing?

update: threading.Thread tries to access target.__name__, which isn't allowed with public attrs only, so the threads never run. When allowing all attrs, the example times out on master as expected.
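For reference, only the client-side config needs to change to reproduce the stall on master; a sketch assuming allow_all_attrs is the relevant config key (it is part of rpyc's default config), with the rest of the original example unchanged:

connection = rpyc.connect(
    "localhost", 18812, config=dict(allow_all_attrs=True)  # lets the server-side Thread read target.__name__
)
connection.root.function(threading.Event())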

comrumino (Collaborator) commented May 19, 2022

I'm a little puzzled by the fact that current master doesn't stall

I confirmed that master does not timeout but 7ea2d24 does timeout... #492 is related to the unexpected behavior you saw after all 🤷

The example is rather simple

Concurrency in action is rarely simple imo 😆

let the server call Event.wait, yielding a call request which the client happily dispatches, and later call Event.set, again yield a call request which the client won't dispatch because it is busy waiting. What am I missing?

Let's take a step back and ask a few questions about what we are trying to accomplish.

  1. Why are you sending an Event object to the server?
  2. Can this be solved using a producer/consumer pattern?
  3. Is the server stateful?

RPyC has some documentation on handling events

notEvil (Author) commented May 19, 2022

Generally, this is more a theoretical problem and the example is completely artificial. In practice, one can launch as many client and server threads as necessary to get the job done and hope that thread A receiving a result intended for thread B (which is occupied with a complex request for thread C) won't yield expired results. However, some use cases may require thread-local execution and would benefit from not having to worry about threading requirements and delays (thread B).

Regarding the puzzle, I found that current master somehow doesn't process the example as I'd expect. I've changed my code to print before wait and it's never reached.

Server

import rpyc
import threading
import time

class Service(rpyc.Service):
    def exposed_function(self, c):
        threading.Thread(target=c.wait).start()
        time.sleep(1)
        c.set()

rpyc.ThreadedServer(Service(), port=18812).start()

Client

import rpyc
import threading

class C:
    def __init__(self):
        self.event = threading.Event()

    def wait(self):
        print('waiting')
        self.event.wait()
        print('done waiting')

    def set(self):
        self.event.set()

connection = rpyc.connect("localhost", 18812, config=dict(allow_public_attrs=True))
connection.root.function(C())

comrumino (Collaborator) commented May 19, 2022

  • server.py: each client connection is served by one thread spawned by ThreadedServer
  • client.py: has only one thread when using synchronous requests

Now let's rewrite your client a little bit such that the last few lines look like

#connection.root.function(C())
_asyncfunc = rpyc.async_(connection.root.function)
c = C()
res = _asyncfunc(c)
print(f'c.event internal flag is {c.event.is_set()}')
print(f'We are and value is: {res.value}')
print(f'c.event internal flag is {c.event.is_set()}')

The output is now

c.event internal flag is False
We are and value is: None
c.event internal flag is True

Why did I change what I did?

  • the server side is calling wait on a netref/proxy to the client event—being in a thread does not change this
  • the client event blocks b/c the server told RPyC to call c.wait
  • the client's only thread is now blocked and cannot receive further instructions from remote
  • I changed the code so that the client's main thread is not blocked by moving the service function call into a separate thread, which allows RPyC to read the socket to get new instructions

The code example you provided, @notEvil, still shows expected behavior. Even so, there is likely room for improvement towards RPyC's "transparency" design—especially with respect to threading.

Edit: strikethrough added to inaccurate information

comrumino added a commit that referenced this issue May 19, 2022
notEvil (Author) commented May 19, 2022

  • the server side is calling wait on a netref/proxy to the client event—being in a thread does not change this

  • the client event blocks b/c the server told RPyC to call c.wait

  • the client's only thread is now blocked and cannot receive further instructions from remote

Thanks for confirming this.

  • I fixed the code so that the client thread is not blocked by moving the service function call into a separate thread which allows RPyC to read the socket to get new instructions

Looking at rpyc.async_ I don't see no thread spawn. I guess you mean the client thread is free to do stuff now, but it would still consume the messages in the same order the server sends them when the thread decides to serve (.value). Anyways, if this is intended then fair enough.

comrumino (Collaborator) commented

I don't see no thread spawn.

I revised my comment to strikethrough inaccurate information (like the thread spawn bit as you pointed out). The async_ wrapper simply loops to serve within the same thread as it was called. Sorry for the inaccuracy of my previous comment!

The way your example was written results in a race condition, but not due to the RPyC protocol. If we patch the server to join the Thread objects, the behavior becomes more predictable and the client will always wait forever—without calling join, the wait may or may not execute on the client. After the client is stalled and we send an interrupt signal, the stack trace shows that the client is still trying to acquire the waiter.

Traceback (most recent call last):
  File "/Users/James.stronz/repo/rpyc/rpyc/core/protocol.py", line 324, in _dispatch_request
    res = self._HANDLERS[handler](self, *args)
  File "/Users/James.stronz/repo/rpyc/rpyc/core/protocol.py", line 605, in _handle_call
    return obj(*args, **dict(kwargs))
  File "/Users/James.stronz/.pyenv/versions/3.10.3/lib/python3.10/threading.py", line 600, in wait
    signaled = self._cond.wait(timeout)
  File "/Users/James.stronz/.pyenv/versions/3.10.3/lib/python3.10/threading.py", line 320, in wait
    waiter.acquire()
KeyboardInterrupt

The latest version of demos/async_client better displays the buggy behavior people are seeing.

comrumino (Collaborator) commented

tl;dr RPyC netrefs treat threading as a second-class concept. What does that mean? It means that when a proxy object is constructed, it does not track the context that it is running in. So, when a proxy object interacts with another address space, there is no mechanism in place to control which thread that interaction happens under.

notEvil (Author) commented Jun 16, 2022

Thanks! I've been using notEvil@46b49e3 for weeks without any issues. It would change a lot, maybe reduce performance due to bookkeeping, so I think if it were to be added to rpyc, it should be an optional feature!? Maybe add a bool/enum to the configuration dict?

benjamin-kirkbride commented

Could this be related to #475 ?

notEvil (Author) commented Jul 23, 2022

@benjamin-kirkbride in short, no

comrumino (Collaborator) commented

for weeks without any issues. It would change a lot, maybe reduce performance due to bookkeeping, so I think if it were to be added to rpyc, it should be an optional feature!?

I would be okay with it as an optional feature. I would be interested in the benchmark differences. If you want to open a PR for it, we can work through a review of the changes.

comrumino (Collaborator) commented Jul 29, 2022

@notEvil do we have unit test coverage for notEvil@46b49e3

I greatly appreciate all of your recent contributions! Threading support has been on my radar but I've never been motivated to flesh it out—all of your time and contributions are a blessing. I'm going to do a release for RPyC (hopefully tomorrow). Then we can hopefully work on improving threading support. That way we don't have too many changes in a single release.

notEvil (Author) commented Jul 29, 2022

@notEvil do we have unit test coverage for notEvil@46b49e3

Almost 100%. When I remove test_gevent_server from the suite, all* tests pass in a single run. With test_gevent_server there's a fatal Python error later in the run. I guess some threads are still alive and access resources (sockets) they shouldn't.

*all except some for unrelated reasons, mostly ssh config

If you want to open a PR for it, we can work through a review of the changes.

Sure

Thanks for your support and continued efforts to maintain and improve rpyc. My contributions so far are little and few in comparison

notEvil (Author) commented Sep 8, 2022

Finally some progress; I rethought the core idea and wrote it up:

Multithreading in rpyc

  • for the sake of simplicity, the following doesn't cover thread synchronization (with a few exceptions)

Current implementation

  • core function serve
    • get message from socket
    • process message
  • get message from socket is protected by lock
  • usually serve is called in a loop
    • not waiting for a response
    • waiting for a response
      • AsyncResult.wait
      • synchronization in serve
        • with condition
          • try acquire lock
          • if acquire failed
            • wait for condition
            • return
        • get message from socket
        • process message
        • release lock
        • with condition
          • notify all about condition
      • race condition
        • sequence
          • thread A: AsyncResult.wait decides to serve because the response hasn't been processed yet
          • thread B: completes process message of the response for thread A and release lock
          • thread A: try acquire lock succeeds and get message from socket, potentially creating a deadlock
        • a timeout for get message from socket reduces the time in deadlock
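A condensed sketch of this pattern (illustrative names, not rpyc's actual code) makes the race easier to see:

def serve(self, timeout=None):
    # current scheme: one lock around the socket read, a condition to park the threads that lost the race
    with self._condition:
        if not self._recv_lock.acquire(blocking=False):
            self._condition.wait(timeout)  # another thread is receiving; wait and return
            return                         # the caller re-checks whether its response arrived
    try:
        message = self._get_message_from_socket(timeout)
        self._process_message(message)     # may be a response belonging to another thread
    finally:
        self._recv_lock.release()
        with self._condition:
            self._condition.notify_all()

# The race: thread A sees "response not processed yet" and calls serve, thread B finishes
# processing A's response and releases the lock, then A's acquire succeeds and it blocks
# on the socket even though its result is already there.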

Consequences

  • thread boundary violations
    • breaks thread-local state
  • delays
    • example
      • sequence
        • thread A: sends request, waits for the response and reaches get message from socket
        • thread B: sends request
        • remote: sends request for thread B
        • remote: sends response for thread A
      • thread A will process the request for thread B which may significantly prolong the wait
    • if the application is not multiprocess, timing is not important and there are no waits, this is irrelevant due to the GIL

Proposed implementation

  • core function serve
    • get message queue for this thread
    • get message from message queue
    • if message doesn't exist
      • get message from socket
      • get local thread id from message
      • get message queue for local thread
      • add message to message queue
    • if message exists
      • process message
  • get message from socket is still protected by lock
  • in serve, if local thread id is not defined, bind remote thread to some local thread
    • requires remote thread id in message
    • requires knowledge about thread occupation
    • if all threads are occupied, either queue message or spawn thread and bind to it
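A sketch of the proposed serve under these assumptions (per-thread queues, hypothetical helper names, message envelope as in the first sketch above):

import queue
import threading

def serve(self, timeout=None):
    my_queue = self._queues.setdefault(threading.get_ident(), queue.SimpleQueue())
    try:
        message = my_queue.get_nowait()  # something was already routed to this thread
    except queue.Empty:
        with self._recv_lock:            # the socket read is still protected by one lock
            message = self._get_message_from_socket(timeout)
        target = message["remote_thread_id"]  # the receiver-side thread this message is addressed to
        if target is None:
            # no binding yet: bind the sender's thread to an idle local thread or spawn one
            target = self._bind_to_local_thread(message["local_thread_id"])
        self._queues.setdefault(target, queue.SimpleQueue()).put(message)
        return  # the bound thread will pick the message up on its next serve
    self._process_message(message)

Binding (which local thread handles a given remote thread) and the thread-spawn policy are the parts the reference implementation below spells out in detail.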

Advantages

  • no thread boundary violations
  • no delays
  • solves race condition in AsyncResult.wait

Disadvantages

  • complexity of implementation
  • thread spawn on demand
    • required to prevent deadlocks because local threads are bound to remote threads
  • performance
    • should either pay off or be insignificant

Reference implementation

  • send request function
    • get remote thread id for this thread (binding)
    • add (this thread id, remote thread id) to message
    • send message
    • get request count for this thread
    • increment request count (set occupied)
  • core function serve
    • get message queue for this thread
    • if message queue exists
      • get message from message queue
      • if message is available (just process)
        • process message
        • return
    • if receiving
      • add this thread to thread pool (enter pool)
      • if message queue doesn't exist
        • create message queue
      • wait for message in message queue
      • remove this thread from thread pool (leave pool)
      • if message not available (timeout)
        • return
      • if message is available
        • if message not RECEIVE
          • process message
          • return
    • set receiving
    • loop
      • get message from socket
      • if message not available (timeout)
        • get any thread from thread pool
        • if thread is available (pass receiving)
          • get message queue for thread
          • add RECEIVE to message queue
        • if thread not available (stop receiving)
          • reset receiving
        • return
      • get local thread id from message
      • get remote thread id from message
      • if local thread not defined (root request)
        • get request count for this thread
        • if request count is zero (this)
          • get any thread from thread pool
          • if thread is available (pass receiving)
            • get message queue for thread
            • add RECEIVE to message queue
          • if thread not available (stop receiving)
            • reset receiving
          • set remote thread id for this thread (bind)
          • process message
          • return
        • if request count not zero (other)
          • for thread in thread pool
            • get request count for thread
            • if request count is zero
              • set remote thread id for thread (bind)
              • get message queue for thread
              • add message to message queue
              • break
          • if didn't break (new)
            • spawn thread
            • set remote thread id for thread (bind)
            • create message queue for thread
            • add message to message queue
      • if local thread is defined
        • get message queue for local thread
        • add message to message queue
  • send response function
    • get remote thread id for this thread (binding)
    • add (this thread id, remote thread id) to message
    • send message
    • get request count for this thread
    • if request count is zero
      • remove remote thread id for this thread (unbind)
  • process response function
    • get request count for this thread
    • decrement request count (reset occupation)

This should be much easier to reason about than the commit. The reference implementation always spawns new threads, but can be adjusted to queue messages up instead.
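The send-side bookkeeping condenses to a few lines (again hypothetical names, consistent with the sketches above; self._bindings is a plain dict and self._request_counts a defaultdict(int)):

import threading

def send_request(self, payload):
    me = threading.get_ident()
    self._send({"local_thread_id": me,
                "remote_thread_id": self._bindings.get(me),  # remote thread this thread is bound to, if any
                "payload": payload})
    self._request_counts[me] += 1  # mark this thread occupied

def send_response(self, payload):
    me = threading.get_ident()
    self._send({"local_thread_id": me,
                "remote_thread_id": self._bindings.get(me),
                "payload": payload})
    if self._request_counts[me] == 0:
        self._bindings.pop(me, None)  # no outstanding requests left: unbind

def process_response(self, message):
    self._request_counts[threading.get_ident()] -= 1  # the awaited response arrived; reset occupation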

notEvil (Author) commented Sep 11, 2022

The "new" design is essentially the same as found in notEvil@46b49e3, but I ended up with much more concise code:

notEvil@6f4edd5 (100% unit test coverage)

Again, it replaces the current implementation instead of being optional, because the implications are still up for debate imo. For one, thread binding can lead to deadlocks (no thread receiving), and the thread spawn behavior might be undesirable (e.g. if client and server initially send requests simultaneously, one additional thread is created on both sides). If you want to move the discussion to a PR, I'll open one.

notEvil (Author) commented Sep 15, 2022

I would be interested in the benchmark differences.

Well ... it's complicated. Consider the following:

import bokeh.layouts as b_layouts
import bokeh.plotting as b_plotting
import numpy
import pandas
import rpyc
import rpyc.utils.factory as ru_factory
import rpyc.utils.helpers as ru_helpers
import scipy.stats as s_stats
import argparse
import pathlib
import pickle
import time
import sys


SOCKET_PATH = "/tmp/socket"


class Service(rpyc.Service):
    def __init__(self):
        super().__init__()

        self._remote_function = None

    def on_connect(self, connection):
        super().on_connect(connection)

        self._remote_function = connection.root.function

        # ru_helpers.BgServingThread(connection)  # force second thread

    def exposed_function(self, argument):
        if argument == 0:
            return 0

        return self._remote_function(argument - 1)


if len(sys.argv) == 1:  # server
    path = pathlib.Path(SOCKET_PATH)
    assert not path.exists()

    try:
        rpyc.OneShotServer(Service, socket_path=SOCKET_PATH).start()

    finally:
        path.unlink()

    sys.exit(0)


# client
depth, candidate_path, reference_path, plot_path = sys.argv[1:]
depth = int(depth)


if 0 <= depth:
    client = ru_factory.unix_connect(SOCKET_PATH, service=Service)

    n = int(1e4)
    remote_function = client.root.function
    candidate_seconds = []

    for count in range(n):
        start_time = time.monotonic()
        _ = remote_function(depth)
        seconds = time.monotonic() - start_time
        candidate_seconds.append(seconds)

    client.close()

    with open(candidate_path, "wb") as file:
        pickle.dump(candidate_seconds, file)

else:
    with open(candidate_path, "rb") as file:
        candidate_seconds = pickle.load(file)

    with open(reference_path, "rb") as file:
        reference_seconds = pickle.load(file)

    candidate_seconds = pandas.Series(candidate_seconds)
    reference_seconds = pandas.Series(reference_seconds)

    print("reference")
    print(reference_seconds.describe())
    print("candidate")
    print(candidate_seconds.describe())
    print("t-test")
    print(s_stats.ttest_ind(candidate_seconds, reference_seconds, equal_var=False))

    b_plotting.output_file(plot_path)

    figures = []

    _ = max(reference_seconds.max(), candidate_seconds.max())
    figure = b_plotting.Figure(y_axis_label="seconds", y_range=(_ * -0.05, _ * 1.05))
    figure.scatter(
        reference_seconds.index,
        reference_seconds.values,
        color="black",
        legend_label="reference",
    )
    figure.scatter(
        candidate_seconds.index, candidate_seconds.values, legend_label="candidate"
    )
    figure.legend.location = "top_right"
    figure.legend.click_policy = "hide"
    figures.append(figure)

    figure = b_plotting.Figure(x_axis_label="seconds", y_axis_label="frequency")

    kde = s_stats.gaussian_kde(reference_seconds.values)
    x = numpy.linspace(reference_seconds.min(), reference_seconds.max(), 1001)
    figure.line(x, kde(x), color="black", legend_label="reference")

    kde = s_stats.gaussian_kde(candidate_seconds.values)
    x = numpy.linspace(candidate_seconds.min(), candidate_seconds.max(), 1001)
    figure.line(x, kde(x), legend_label="candidate")

    figure.legend.location = "top_right"
    figure.legend.click_policy = "hide"
    figures.append(figure)

    b_plotting.save(b_layouts.column(figures))

It's essentially a trivial function call inside a tight loop. Server and client run as separate processes communicating over a Unix socket.

Current rpyc produces consistent times for depth = 0 (no sub-request). The new commit, however, does reasonably well at times and terribly at others. With CPU frequency scaling set to Performance (schedutil by default), the times become consistent and settle somewhere around 1.5x (slower). I did some profiling with viztracer and saw rpyc polling or waiting for the event 90% of the time. I'd therefore argue that most time is lost to thread switching, which is inherent to the design.

If you know how to better pin down the cost of the design, please let me know!

edit: just pushed some changes to https://github.com/notEvil/rpyc/tree/thread_bind which replace the deterministic handover with a controlled race. This way, the currently active thread has a chance to stay active and it usually does so for many iterations. Still, thread switching occurs and hampers performance significantly

comrumino (Collaborator) commented Sep 16, 2022

Task switching impacting performance does not surprise me. @notEvil tysvm for the effort and time.

Here is where my head is at....

  1. A user that is dead set on threading can always create a connection per thread and it would be "thread-safe" (see the sketch below)
  2. A user that does not want threading and wants more performance for a single connection would be out of luck

Attempting to make a connection object "thread-safe" is not a flexible design for use case two above. Replacing the existing functionality for an experimental implementation will result in a number of grumbly users. So, I'm trying to find the best way forward to make everyone happy in a way that is technically sound and feasible.
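For completeness, use case one needs no protocol changes today; a minimal sketch of the connection-per-thread pattern (hypothetical worker and service, plain rpyc.connect):

import rpyc
import threading

def worker(index):
    # one connection per thread: nothing is shared, so each connection only
    # ever serves requests for the thread that owns it
    connection = rpyc.connect("localhost", 18812)  # assumes some service is listening here
    try:
        print(index, connection.root.function(0))  # hypothetical exposed_function
    finally:
        connection.close()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()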

Here was my readings of the day:

  1. http://masnun.rocks/2016/10/06/async-python-the-different-forms-of-concurrency/
  2. https://docs.python.org/3/library/asyncio.html
  3. https://docs.python.org/3/library/asyncio-task.html#
  4. https://docs.python.org/3/library/asyncio-llapi-index.html

Way forward:

  1. improving the underlying design to support asyncio and distributing of tasks
  2. explore threading implications and improvements from asyncio enhancements
  3. improve documentation to make it clear that connections are not thread-safe but support asyncio
  4. determine best way for users to thread when coroutines do not work for their use case (not sure what that would be tbh) and document

I would be open to @notEvil's threading changes if they were configurable or allowed users to opt in. Penalizing all projects for the sake of some projects is not very flexible. I'm open to feedback @notEvil @Jongy @tomerfiliba @coldfix et al.

notEvil (Author) commented Sep 17, 2022

1. A user that is dead set on threading can always create a connection per thread and it would be "thread-safe"

This assumes that you can prevent netrefs from crossing thread boundaries. If you can, sure, but then again, transparency is the first item on the feature list.

2. A user that does not want threading and wants more performance for a single connection would be out of luck
   Attempting to make a connection object "thread-safe" is not a flexible design for use case two above. Replacing the existing functionality for an experimental implementation will result in a number of grumbly users. So, I'm trying to find the best way forward to make everyone happy in a way that is technically sound and feasible.

Not necessarily: in my experience, the cost of locking is insignificant in a single-threaded environment. I just added two lines to my code which stop the spawned threads as soon as they have served their initial purpose, and got performance on par with current rpyc on my benchmark (that did surprise me, tbh).

Here was my readings of the day:

1. http://masnun.rocks/2016/10/06/async-python-the-different-forms-of-concurrency/

2. https://docs.python.org/3/library/asyncio.html

3. https://docs.python.org/3/library/asyncio-task.html#

4. https://docs.python.org/3/library/asyncio-llapi-index.html

Please consider support for Trio in addition to asyncio, I think it deserves it.
