Transport endpoint is not connected #236
If you search gdfuse.log, don't you find anything starting with CURLE_ (e.g. CURLE_COULDNT_RESOLVE_HOST)?
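A minimal sketch of the search being suggested. The real log path is assumed to be ~/.gdfuse/default/gdfuse.log (it depends on your account label); a sample file stands in for it here so the commands are self-contained:

```shell
# Sketch: scan a gdfuse log for libcurl error codes.
log=./gdfuse.sample.log
printf '%s\n' \
  '[415.183022] TID=67355: Error during request: Code: 28, Description: CURLE_OPERATION_TIMEOUTED' \
  '[416.001000] TID=67355: Retrying request' > "$log"
# Count each distinct CURLE_ error code that appears in the log.
grep -o 'CURLE_[A-Z_]*' "$log" | sort | uniq -c
```

Against a real log, point `log` at the actual gdfuse.log instead of generating a sample.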
I did a search on an old saved log I had from a crash and didn't see anything like that. It failed on me overnight, but I wasn't running debug that time. I've got it running debug again to see if/when it crashes.
Anyway, version 0.6.8 should fix a similar issue. I don't know if it's the same one you are experiencing, but it's worth a try.
I was on a fresh install anyway, already using 0.6.8. Same issue again after a random amount of time. No "CURLE" entries in gdfuse.log.
Those are the final lines in gdfuse.log. I can double-check an old log I have saved, but that's the first time I noticed an insanely large negative number referring to, unless I'm mistaken, the section of the file it's streaming. Everything above it looks significantly more sane.
OK, it's a different bug. Could you please send your gdfuse.log to alessandro.strada@gmail.com? Thanks.
I double-checked an old gdfuse.log and didn't see the negative references, so I'm not sure whether this is something coming from 0.6.8, but I sent the new log your way.
[415.183022] TID=67355: Error during request: Code: 28, Description: CURLE_OPERATION_TIMEOUTED, ErrorBuffer: Resolving timed out after 5518 milliseconds

That's on the 0.6.9 pulled from the beta PPA. This was with 10x the default memory cache and buffer settings, though; I'm going to put those back to default, as I think I was hitting memory limits. I just knew those settings caused issues a lot in the past and wanted to see if that had changed.

An additional, unrelated observation (with a million variables, so no way to tell): overall speeds seem to have dropped. I saw speeds upwards of 150 MB/s during some initial testing over the past couple of weeks; now I've seen it top out at around 55 MB/s. Again, just an observation. That's still crazy fast compared to what I originally expected and what is needed; I just thought I'd share.
OK, even on default settings I still lost the mount. BUT I saw that it was in fact still a memory error; nothing showed in the log, but I saw it in the terminal before the process was killed. Before I start the mount I have about 1.5 GB of RAM free. As the program accesses files it seems to pretty steadily guzzle memory, then it balances at about 100 MB of RAM free for a while, until (I assume) it finally tries to use too much, fails, and quits. Maybe a leak somewhere?
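A rough way to watch for the kind of steady memory growth described above is to sample the mount process's resident set size over time. The process name google-drive-ocamlfuse is an assumption here; the sketch falls back to the current shell's PID so it stays runnable even without a mount:

```shell
# Sketch: periodically log resident memory (RSS) of the FUSE process.
pid=$(pgrep -n google-drive-ocamlfuse 2>/dev/null || echo $$)  # fall back to this shell for demo
for i in 1 2 3; do
  rss_kb=$(ps -o rss= -p "$pid" | tr -d ' ')
  echo "sample $i: pid=$pid rss=${rss_kb} kB"
  sleep 1
done
```

Redirect the output to a file and a steadily climbing RSS that never plateaus would support the leak hypothesis.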
Fixed in 0.6.10 |
I've been trying to troubleshoot this for a few days and can't find any clue about what's triggering it to lose the mount. Debug doesn't tell me anything; it seems to be in the middle of working, then just stops logging without any mention of issues.
I am using stream_large_files=true, and 100% of my usage is with "large files".
Originally I was also increasing the max_memory_cache and memory_buffer_size, but that seemed to increase the frequency, so I set those back to default.
I'm not using my own client ID and secret, as that also seemed to increase the error's occurrence.
Let me know if there are any specifics I can give to help. I'm loving this project; I just can't seem to keep it stable.
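For context, the options mentioned above live in the per-account config file (path assumed to be ~/.gdfuse/default/config). A sketch of the streaming-related section; the values shown are the defaults as I recall them and may differ in your version:

```
stream_large_files=true
max_memory_cache_size=10485760
memory_buffer_size=1048576
read_ahead_buffers=3
```

As I understand it, streaming memory scales roughly with memory_buffer_size times the number of read-ahead buffers per open file, so raising both multiplies peak usage.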