unrecognized format: dag-cbor #967
Looks like those are emitted from here: https://github.com/ipfs/js-ipfs/blob/1082fce9b2b027b6e176b8807d84b21f1358f296/packages/ipfs-http-client/src/lib/core.js#L91. To me this doesn't look like an orbitdb issue. I'm guessing go-ipfs or something else may have dropped some built-in support for dag-cbor, but I also thought go-ipfs 0.11 was working well with orbit-db.
I just noticed the release notes for ipfs-http-client v55.0.0 about breaking changes to pubsub. I'm wondering if that has something to do with the issues and warning messages I'm seeing: https://github.com/ipfs/js-ipfs/releases/tag/ipfs-http-client%4055.0.0
After watching the CPU spikes, I can see the usage isn't continuous. There are periods of intense CPU usage that cause the app to hang. I haven't been able to isolate the cause. Things I tried that didn't work:
The only thing that consistently stops the error messages and CPU spikes is disabling OrbitDB; when I remove it from the app, both go away. When the app hangs, the hang is usually followed by the cbor error message and a timestamp. Example:
Of note, the CPU spike is in my app, not the go-ipfs node. I tried to load ipfs-http-client from source so that I could do some debugging, but my app uses …
I managed to get an example script running, using ipfs-http-client from source (as opposed to an npm library). I was able to inject a stack trace at the origination point that @tabcat pointed out. Here is the stack trace:
After chasing down the stack trace, I figured out that the error message originates from here. The data comes in with a format of 'dag-cbor', which triggers the error message seen in the debug logs. The catch block then changes the format to 'cbor' and recursively calls the function again. So an error message is reported, but the error is subsequently handled. The error message was a red herring: it is not directly related to the CPU spike, though I suspect it might be indirectly related, as they seem to happen in tandem. I'm going to close this issue, as I've resolved the source of the error message. I'll open a new issue if I can find the source of the CPU spikes.
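To make the log-then-recover pattern described above concrete, here is a hedged sketch (not the actual ipfs-http-client source; `knownFormats`, `aliases`, and `loadFormat` are illustrative names) of a format lookup that reports an error for 'dag-cbor' and then retries under the alias 'cbor', so the call still succeeds:

```javascript
// Illustrative sketch only, not the real ipfs-http-client code.
const knownFormats = { cbor: { name: 'cbor' }, raw: { name: 'raw' } }
const aliases = { 'dag-cbor': 'cbor' }

function loadFormat (format) {
  if (knownFormats[format]) return knownFormats[format]
  // The first lookup fails and the error is reported to the debug log...
  console.error(`unrecognized format: ${format}`)
  // ...but if an alias exists, the function retries under the new
  // name, so the error ends up being handled after all.
  if (aliases[format]) return loadFormat(aliases[format])
  throw new Error(`unrecognized format: ${format}`)
}

console.log(loadFormat('dag-cbor').name) // 'cbor'
```

This matches the behavior observed in the debug logs: an error line is printed even though the operation ultimately completes.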
Any updates on this? I'm troubleshooting some failed tests with go-ipfs@0.11 and ipfs-http-client@56.
Could you give me a code snippet to try to reproduce this?
@tabcat to print out the stack trace, I added this line of code above the line you pointed out in ipfs-http-client:
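The exact snippet didn't survive extraction. One common way to capture and print a stack trace at an arbitrary point in JavaScript, without throwing, is something like the following (a hypothetical reconstruction, not the commenter's original line):

```javascript
// Creating an Error captures the current call stack without
// interrupting execution; logging .stack prints it.
const trace = new Error('trace origin').stack
console.log(trace)
```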
In terms of reproducing it, I'm not sure how. I was running this example script for ipfs-coord. It sets up several OrbitDB instances between peers, and I was seeing the dag-cbor error messages when running that script with …
I initially filed this issue in the js-ipfs repo but was able to trace the root cause to orbit-db through a process of elimination. When I comment out the creation of the OrbitDB instance, the issue goes away.
I'm seeing very high CPU usage, and this error message keeps getting repeated when I run with DEBUG=*. I still haven't been able to isolate where in orbit-db the error message originates. I'm assuming the error messages and CPU usage are related.
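For context on DEBUG=*: that environment variable is read by the debug npm package, which enables only the loggers whose namespace matches the pattern (so narrowing the pattern can cut log noise while troubleshooting). A tiny illustrative reimplementation of that namespace matching (a sketch, not the real package; `makeLogger` is an invented name):

```javascript
// Minimal sketch of debug-style namespace filtering: a logger only
// emits when its namespace matches the DEBUG pattern, where '*' acts
// as a wildcard. Illustration only, not the real 'debug' package.
function makeLogger (namespace, pattern) {
  const re = new RegExp('^' + pattern.split('*').join('.*') + '$')
  const enabled = re.test(namespace)
  return (msg) => {
    if (enabled) console.log(`${namespace} ${msg}`)
    return enabled
  }
}

const matched = makeLogger('ipfs-http-client:error', 'ipfs-http-client*')
const other = makeLogger('orbit-db:store', 'ipfs-http-client*')
matched('unrecognized format: dag-cbor') // emitted
other('some store event')                // suppressed
```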
Does anyone know what is causing these errors? Can anyone post a link to the source code where the error message is generated?
Environment: