Implementation of from_bytes_multiple() #39
Added in 88db4ab. You can install with
Wow, that was a really quick reply, thank you! Using the new function makes Python segfault after some time, though. This is the full backtrace from the core dump:

I pulled your changes and ran `setup.py build --debug && pip install .`; capnproto is from Jan 4th 2015. Several files were created before/during the compile step:

The serialized data crashing Python can be found at

I am not quite sure if this is an issue with pycapnp or capnproto itself. Let me know. `cat data.bin | capnp decode schema.capnp Events` works just fine.
I'm having trouble reproducing on my end. I downloaded schema.capnp and test.py from your first gist as well as data.bin.gz, and ran

Also, I'm a little surprised you've been using
The compile step was done out of habit from compiling things for the C++ part of capnproto; I did not think too much about it, so I am not using it at the moment. I copied the schema and test.py to another machine (a different distro with Python 2.7.9 (default, Dec 12 2014, 18:48:30) [GCC 4.8.3] on linux2) with the same results. Here is what I did: https://gist.github.com/thomaspenteker/2a8b4b0eb55152e3496e

It kept going, though. I also noticed that running the following (ran 2 times):

```
In [2]: del x
In [3]: [EOF]
zsh: segmentation fault  ipython --no-banner --no-confirm-exit
```

So calling the destructor segfaults. Can this be an issue in capnproto itself? How could I find the difference between your invocation and mine? I'm at a loss for ideas why this works for you and fails for me.
Gah, so this was due to a subtle issue in destructor ordering with Cython/C++. It should be fixed now in

If you're interested in the nitty-gritty details, it arose because InputMessageReader inherits from MessageReader in both the original C++ library and in my Cython wrapping of those classes, and in Cython I keep a reference to an input buffer (usually a Python string) in the CythonInputMessageReader subclass. This reference must survive for the lifetime of the underlying C++ InputMessageReader. Because of destructor ordering, CythonInputMessageReader is destructed before CythonMessageReader, and the buffer is deallocated immediately, because CPython garbage collects immediately in cases like this. Then in CythonMessageReader the destructor of the C++ MessageReader class is called, but it's virtual and calls InputMessageReader's destructor. This then segfaults, since the buffer it expects to exist has already been deallocated.

It's a subtle issue, and I'm going to have to audit my code to make sure I didn't make this mistake anywhere else. Thankfully, I don't think I used inheritance in this manner very much...
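The deallocation-order hazard can be illustrated in pure Python. This is only an analogy for the Cython/C++ bug described above (the class names below are illustrative stand-ins, not real pycapnp classes): with CPython's reference counting, the buffer is freed the moment its last reference is dropped, which can happen before the rest of the reader's teardown runs.

```python
# Illustrative stand-ins, not pycapnp classes: the point is that CPython's
# refcounting frees the buffer immediately, before teardown finishes.
events = []

class Buffer:
    def __del__(self):
        events.append("buffer freed")

class Reader:
    def __init__(self, buf):
        # the wrapper must keep the input buffer alive for its whole lifetime
        self._buf = buf

    def __del__(self):
        # the subclass teardown drops the buffer reference first...
        del self._buf  # refcount hits zero: Buffer.__del__ runs right here
        # ...and only then does the (virtual) base-class destructor run,
        # which in the real code still dereferences the buffer -> segfault
        events.append("base destructor ran")

r = Reader(Buffer())
del r  # no reference cycle, so CPython deallocates immediately
assert events == ["buffer freed", "base destructor ran"]
```

The fix is to arrange teardown so the buffer reference is released only after the underlying C++ destructor chain has completely finished.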
This fixed the issue, thank you! Also an interesting catch wrt the order of destructors :)
I'm fairly certain I'm hitting this deallocation issue from an exception being thrown at just the right time -- is master in a good state to cut a new release to pypi and/or can I bribe you to do so? :) |
Alright, I've uploaded a new version (v0.5.2) to PyPI. Please let me know if it solves the issue for you.
Thanks! Unfortunately it still seems to segfault in the same spot. I discovered I can recreate the crash very easily with something like this in my app:

`obj = item.from_packed_bytes()`

...although I can't seem to recreate it in a stripped-down context. I managed to run it against Valgrind and it looks like `~PackedMessageReader()` is in the call stack, so I still think it's related to this issue. I've pasted the Valgrind error log here: https://gist.github.com/JohnEmhoff/db3b44e0ecf43c3c2d22

In the meantime I'll see if I can reproduce the crash in a smaller program.
Okay, I've got it reproduced in a pretty small program that I've linked to below. Loading up certain messages with `from_bytes_packed` seems to cause a segfault when the item is GC'd. To reproduce:
Hello,
would it be possible to provide a function from_bytes_multiple() that mimics its read_multiple() counterpart? There are several situations where data may be received via the network. Currently there's the option to use os.pipe() to provide file-object-based access to the data, but this is suboptimal due to constraints on the pipe size. The other way to make this work is by writing the data to temporary files and reading them back with read_multiple(), which is bad for obvious reasons.
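For context, the work such a function would do is splitting a byte buffer along the standard Cap'n Proto stream framing: a 4-byte little-endian segment count (minus one), one 4-byte size per segment (in 8-byte words), padding up to the next word boundary, then the segment contents. A minimal pure-Python sketch under those assumptions; the function name is hypothetical and it only splits frames, it does not decode them:

```python
import struct

def iter_framed_messages(buf):
    """Yield each framed Cap'n Proto message from a concatenated buffer.

    Hypothetical sketch of what a from_bytes_multiple() would do under
    the hood, based on the documented stream framing.
    """
    off = 0
    while off < len(buf):
        (nseg_minus_one,) = struct.unpack_from('<I', buf, off)
        nseg = nseg_minus_one + 1
        sizes = struct.unpack_from('<%dI' % nseg, buf, off + 4)
        header = 4 + 4 * nseg
        header += (-header) % 8          # pad header to an 8-byte boundary
        total = header + 8 * sum(sizes)  # header plus all segment words
        yield buf[off:off + total]
        off += total

# Two minimal single-segment messages (one zeroed word each), back to back:
one = struct.pack('<II', 0, 1) + b'\x00' * 8
frames = list(iter_framed_messages(one + one))
assert len(frames) == 2 and all(len(f) == 16 for f in frames)
```

Each yielded chunk could then be handed to the existing single-message from_bytes() path, avoiding both pipes and temporary files.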
Thanks in advance.