polling seems not to be working with jinad #1815
Comments
Can't reproduce with the following steps:
import numpy as np
from jina import Flow

# assumes a JinaD instance is serving on localhost:8000
with Flow().add(host='localhost:8000', parallel=3) as f:
    f.index(np.random.random([100000, 10]))
👻 DAEMON@23375[I]:127.0.0.1:64806 is disconnected
pod0/3@23491[I]:recv ControlRequest from ctl▸pod0/3/ZEDRuntime▸⚐
pod0/3@23491[I]:#sent: 670 #recv: 335 sent_size: 4.0 MB recv_size: 3.9 MB
pod0/3@23491[I]:no update since 2021-01-29 19:51:50, will not save. If you really want to save it, call "touch()" before "save()" to force saving
pod0/3@23375[S]:terminated
👻 PeaStore@23375[S]:445d5011-a452-4d78-a9b6-889dee224370 is released from the store.
👻 DAEMON@23375[I]:127.0.0.1:64784 is disconnected
pod0/2@23489[I]:recv ControlRequest from ctl▸pod0/2/ZEDRuntime▸⚐
pod0/2@23489[I]:#sent: 670 #recv: 335 sent_size: 4.0 MB recv_size: 3.9 MB
pod0/2@23489[I]:no update since 2021-01-29 19:51:49, will not save. If you really want to save it, call "touch()" before "save()" to force saving
pod0/2@23375[S]:terminated
👻 PeaStore@23375[S]:f7982669-54ab-4518-81b7-1472ded8f191 is released from the store.
👻 DAEMON@23375[I]:127.0.0.1:64762 is disconnected
pod0/1@23487[I]:recv ControlRequest from ctl▸pod0/1/ZEDRuntime▸⚐
pod0/1@23487[I]:#sent: 666 #recv: 333 sent_size: 4.0 MB recv_size: 3.9 MB
pod0/1@23487[I]:no update since 2021-01-29 19:51:49, will not save. If you really want to save it, call "touch()" before "save()" to force saving
pod0/1@23375[S]:terminated
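For logs like the ones above, a quick way to check whether the load was spread across the Peas is to tally the #sent / #recv counters per Pea. The following is only a minimal sketch: the regex is derived from the log lines pasted in this thread, and the file name flow.log is a placeholder, not something Jina produces by default.

import re
from collections import defaultdict

# Matches lines such as:
#   pod0/3@23491[I]:#sent: 670 #recv: 335 sent_size: 4.0 MB recv_size: 3.9 MB
STATS_LINE = re.compile(r'(?P<pea>pod\d+/\d+)@\d+\[I\]:#sent: (?P<sent>\d+) #recv: (?P<recv>\d+)')

def per_pea_traffic(log_path):
    """Return {pea_name: (sent, recv)} parsed from a captured Flow log."""
    stats = defaultdict(lambda: (0, 0))
    with open(log_path) as fp:
        for line in fp:
            m = STATS_LINE.search(line)
            if m:
                stats[m.group('pea')] = (int(m.group('sent')), int(m.group('recv')))
    return dict(stats)

if __name__ == '__main__':
    for pea, (sent, recv) in sorted(per_pea_traffic('flow.log').items()):
        print(f'{pea}: sent={sent} recv={recv}')
    # If polling distributes the work, every Pea should show a similar #recv;
    # one Pea holding nearly all the traffic is the behaviour reported in this issue.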
Did you see them actually receiving IndexRequests? I do not remember what average results were being printed, but we did not see them receiving any data.
Yes, they do. You can replicate my steps on your laptop and check it out. To understand the bug, I need a reproducible example.
It does not seem to be a problem.
The problem has been seen again!
It seems that when the request arrived, no IDLE Pea existed. Would it make sense, or would it be possible, to randomize which Pea receives the request if none is idle? It seems to always be the first Pea.
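For illustration only, here is a minimal sketch of the kind of change being asked about: fall back to a random Pea instead of always the first one when none is idle. The Pea class and schedule function below are hypothetical stand-ins, not the actual Jina scheduler API.

import random

class Pea:
    """Hypothetical stand-in for a worker Pea; not the real jina.peapods API."""
    def __init__(self, name):
        self.name = name
        self.busy = False

    def is_idle(self):
        return not self.busy

def schedule(peas, randomize_when_busy=True):
    """Prefer an idle Pea; when none is idle, either always pick the first
    one (the behaviour described above) or pick one at random."""
    idle = [p for p in peas if p.is_idle()]
    if idle:
        return idle[0]
    return random.choice(peas) if randomize_when_busy else peas[0]

peas = [Pea(f'pod0/{i}') for i in range(1, 4)]
for p in peas:
    p.busy = True              # simulate the moment when no Pea is idle
print(schedule(peas).name)     # varies, instead of always 'pod0/1'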
I don't understand what the problem is. What code can reproduce it?
I think it is not a problem. It is not easy to reproduce, but we were seeing a case where only one Pea received all the load. It could be due to the fact that by the time the next request arrived, no Pea was IDLE. I am not sure if this is known behavior. We did not manage to reproduce it consistently, but we see it often in our tests on AWS.
Will try to debug further.
https://stackoverflow.com/questions/52278364/is-id-returns-the-actual-memory-address-in-cpython
The problem is that we are relying on …
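The linked StackOverflow question is about CPython's id() returning the object's memory address, which is only unique for the object's lifetime. A minimal demonstration of why that makes id() fragile as a stable identifier: once an object is freed, its address (and therefore its id) can be handed to a completely different object.

a = object()
id_a = id(a)
del a          # 'a' is garbage-collected; CPython may recycle its address

b = object()
# Not guaranteed, but very often True in CPython, because the freed slot
# is immediately reused for the next allocation of the same size:
print(id(b) == id_a)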
Describe the bug
In the same Pod, there seems to be only one Pea receiving all the load.