IO bridge Processes #686
Conversation
Great work on this PR @gkarray
At a high level, I think it's an important feature to enable, but I'm not convinced this method is the right way to go about it. This pipe-from-the-parent-process approach introduces temporal and behavioral coupling between Lava and non-Lava code, and may not behave as a user expects if their execution is not correctly synchronized.
At minimum, it would be good to see all of the following behaviors tested and clearly documented to users:
- Run the calling code and Lava models for many timesteps
- Run the calling code and Lava models for different numbers of timesteps (e.g. call send_data 100 times while RunSteps=50, and vice versa)
- Start calling send_data before calling proc.run
- Send and receive non-trivial data structures
- Test the edge cases for the pipe filling up and emptying out (i.e. if the calling code for InputBridge.send_data runs significantly faster than the Lava code, confirm whether it will eventually block or raise; vice versa for the calling code for OutputBridge.recv_data)
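To illustrate the last edge case: a minimal sketch, using a bounded stdlib `queue.Queue` as a stand-in for the channel between the calling code and the Lava runtime (the capacity and loop counts here are illustrative, not the real channel implementation), shows the two possible outcomes when the producer outpaces the consumer: a non-blocking send raises, while a blocking send would stall the caller.

```python
import queue

# Bounded queue standing in for the pipe between the calling code and
# the Lava runtime (capacity chosen arbitrarily for illustration).
channel = queue.Queue(maxsize=4)

sent, dropped = 0, 0
for step in range(10):  # caller produces faster than the model consumes
    try:
        channel.put_nowait(step)  # non-blocking send: raises when full
        sent += 1
    except queue.Full:
        # A plain blocking put() would hang the caller here instead.
        dropped += 1

print(sent, dropped)  # 4 6
```

Whichever behavior the real channel has (block vs. raise), it should be pinned down by a test like this and stated in the user-facing docs.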
Since I don't see any replies but I do see a bunch of new commits, I'm not sure what you think of my comments above, but here's a more concrete proposal for naming:

- input_bridge.py > input_synchronizer.py
- output_bridge.py > output_synchronizer.py

You really don't need async protocol models for the synchronizer Processes, because the way they're written makes them useful specifically for synchronizing your code to your synchronous Loihi model, hence the renames to describe their actual function. See also Lif models.py, Dense models.py, etc., where the models start with Py, not PyLoihi, and include Model, but not ProcessModel.

Continuing: note that the Process name should describe the behavior of all models. In this case, Async should not refer to the Async protocol vs. the Loihi protocol, but to the behavior in which the input is injected asynchronously with respect to the updates of the connected port. Basically, this Process should allow me to sporadically send data or recv data without caring whether I send or recv the correct number of times, and without ever blocking my calling code or the connected port.

- out_bridge.py > async_extractor.py
Very nice! Great cleanups, this is a clear, simple, and super useful little addition to the core Lava API.
My only top-level suggestion is to drop the "bridge" in module paths, and just locate these three modules in io.
Looks good overall - some minor changes and naming suggestions.
We did this in a live review, so my comments are a bit short; they are mostly reminders for @gkarray.
Great work Ghassen.
* First prototype of IO bridge Processes
* Progress on IO bridge Processes
* Progress on IO bridge Processes
* Progress on IO bridge Processes
* new version of processes
* have async_bridge in loihiprotocol
* tried to use PyPyChannel, serialization problem?
* PyPyChannel fix
* started renaming and cleaning up
* started renaming and cleaning up
* added tests and some input validation
* removed sync processes/models and adjusted inheritance
* add ring_queue, add fixed_point model
* few more PM tests
* started adding ring_queue in channels
* refactor in progress
* rmv ringqueue
* tests mostly finished, refactor in progress
* add extractor
* refactor in progress
* Injector Process + tests
* continue tests
* adding docstrings
* adding Extractor tests
* fix linting
* fix codacy
* refactor
* addressing change requests
* fix lint
* addressing change requests
* fix typo
* minor refactor
* minor update

Co-authored-by: SveaMeyer13 <svea.meyer@tum.de>
Co-authored-by: PhilippPlank <32519998+PhilippPlank@users.noreply.github.com>
Co-authored-by: Philipp Plank <philipp.plank@intel.com>
Issue Number: #687
Objective of pull request: Addition of IO bridge (Python only) Processes for getting input to/output from Lava: Injector and Extractor.
Pull request checklist
Your PR fulfills the following requirements:
- Lint (`flakeheaven lint src/lava tests/`) and (`bandit -r src/lava/.`) pass locally
- Build tests (`pytest`) pass locally

Pull request type
Please check your PR type:
What is the current behavior?

Getting input data into a Lava workload currently requires the `Dataloader` or `RingBuffer` Processes, or the `set()` method on a Process `Var`:
- The `Dataloader` Process keeps data on disk, and only loads portions of it step-by-step.
- The `RingBuffer` Process loads all data into memory from the start.
- The `set()` method requires running Lava workloads one time step at a time, and calling it between each `run()` call.

Getting output data out of a Lava workload currently requires the `RingBuffer` Process, or the `get()` method on a Process `Var`:
- The `RingBuffer` Process buffers data in memory, and one has to call `get()` on its `data` `Var` at the end of the run to get the data out.
- The `get()` method requires running Lava workloads one time step at a time, and calling it between each `run()` call.

What is the new behavior?
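For contrast with the Injector/Extractor approach, here is a toy sketch of the step-and-set pattern that the existing `set()`/`get()` options require. The class and method names are illustrative stand-ins for a Lava Process and its `Var` accessors, not the real API; the point is the forced one-step-at-a-time loop.

```python
class ToyProcess:
    """Stand-in for a Lava Process with a single Var."""

    def __init__(self):
        self.var = 0              # stands in for a Process Var

    def run_one_step(self):       # stands in for run(RunSteps(1))
        self.var += 1             # trivial "model dynamics"

    def set(self, value):         # stands in for Var.set()
        self.var = value

    def get(self):                # stands in for Var.get()
        return self.var


proc = ToyProcess()
outputs = []
for step_input in [10, 20, 30]:
    proc.set(step_input)          # inject input between run() calls
    proc.run_one_step()           # advance exactly one time step
    outputs.append(proc.get())    # read output between run() calls

print(outputs)  # [11, 21, 31]
```

Pausing and restarting the runtime on every step like this is exactly the overhead the Injector/Extractor Processes are meant to eliminate.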
With the `Injector` Process on the input side, users would be able to seamlessly integrate Lava workloads (running in non-blocking mode!) into broader applications. These Lava workloads would be able to get dynamically generated input data from Python applications, without having to pause and re-run every time step (unlike the `RingBuffer`, `Dataloader`, and `set()` options).

With the `Extractor` Process on the output side, users would be able to seamlessly integrate Lava workloads into broader applications. These broader applications would be able to get real-time output data from Lava workloads, without having to pause and re-run every time step (unlike the `RingBuffer` and `get()` options).

Does this introduce a breaking change?
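The streaming pattern this enables can be sketched with stdlib threads and queues standing in for the Injector and Extractor channels. The queue names, the doubling "model", and the run/join calls are all illustrative assumptions, not the Lava API; the structure shows the key property, namely that the workload runs continuously while the application streams data in and out.

```python
import queue
import threading

to_model = queue.Queue()     # plays the role of the Injector channel
from_model = queue.Queue()   # plays the role of the Extractor channel


def workload(num_steps):
    """Toy 'model' that consumes one input and emits one output per step."""
    for _ in range(num_steps):
        x = to_model.get()        # receive this step's input
        from_model.put(x * 2)     # emit this step's output


runner = threading.Thread(target=workload, args=(5,))
runner.start()                    # like starting a non-blocking run

results = []
for step in range(5):
    to_model.put(step)            # stream input in, no pause/re-run
    results.append(from_model.get())  # stream output out concurrently

runner.join()                     # like waiting for the run to finish
print(results)  # [0, 2, 4, 6, 8]
```

Unlike the step-and-set pattern, the "runtime" here is never stopped between steps; the application and the workload only synchronize through the channels.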
Supplemental information
TODO:
- `VEC_SPARSE`.
- `AsyncProcessModels`, implementing the `AsyncProtocol`.