usbip: error: failed to record. out queue full #20

Closed
cantops opened this issue Mar 8, 2019 · 15 comments
Labels
bug Something isn't working

Comments

cantops commented Mar 8, 2019

This error scrolls continuously when a USB disk is attached, but it does not affect actual use. #16 does not solve the problem.
[screenshot of the repeated error message]

Oxalin commented Mar 8, 2019

I'm pretty sure this is the same bug as #16. Can you give us more info: which Windows version, and can you be more specific about your USB disk?

Oxalin added the bug label Mar 8, 2019

cantops commented Mar 8, 2019

@Oxalin
usbip server: usbip (usbip-utils 2.0) on Ubuntu, kernel version 4.18.0-15-generic
usbip drivers & client: the latest code pulled from git
Windows: Windows 10 x64 10.0.17134.1
USB disk: Kingston DataTraveler G4 (0951:1666)

ttuibk commented Apr 12, 2019

I have the same issue.
usbip server: usbip-server package 2.0-5 on OpenWrt-8devices Designated Driver v2.10 r49377
usbip drivers & client: the latest code pulled from git
Windows: Windows 10 x64 1809
USB device: an FTDI USB-to-serial converter used to drive an RS422 bus

perlland commented May 2, 2019

Please, does anybody have a solution for this issue?

cezuni commented May 5, 2019

@perlland @ttuibk: The default out queue length is 256. In the normal case that is sufficient, but a USB disk holding many files sometimes fills the queue. Could you try with an empty USB disk? If that succeeds, increasing the queue length may help, as shown in #14 (comment). Of course, the current implementation should be improved to support a dynamic queue length. 😢
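
To make the mechanism above concrete, here is a minimal user-space C sketch of such a fixed-length out queue. It is an illustration only, assuming a flat array of seqnum slots; OUTQ_LEN, outq_record, and outq_clear are invented names, not the actual usbip-win identifiers.

```c
/*
 * Minimal sketch (NOT the usbip-win source): a fixed-length "out queue"
 * that records outstanding sequence numbers. OUTQ_LEN plays the role of
 * the 256-entry limit mentioned above; all identifiers here are invented.
 */
#include <stdint.h>
#include <stdio.h>

#define OUTQ_LEN 256              /* the constant the #14 workaround enlarges */

static uint32_t outq[OUTQ_LEN];   /* 0 means the slot is free */

/* Mark a seqnum as outstanding; fails once every slot is occupied. */
static int outq_record(uint32_t seqnum)
{
    for (int i = 0; i < OUTQ_LEN; i++) {
        if (outq[i] == 0) {
            outq[i] = seqnum;
            return 1;
        }
    }
    fprintf(stderr, "usbip: error: failed to record. out queue full\n");
    return 0;
}

/* Clear a seqnum when its reply arrives; returns 1 if it was recorded. */
static int outq_clear(uint32_t seqnum)
{
    for (int i = 0; i < OUTQ_LEN; i++) {
        if (outq[i] == seqnum) {
            outq[i] = 0;
            return 1;
        }
    }
    return 0;
}

int main(void)
{
    for (uint32_t seq = 1; seq <= 1000; seq++) {
        if (!outq_record(seq))
            break;
        outq_clear(seq);   /* comment this out to see the queue fill */
    }
    return 0;
}
```

As long as every recorded seqnum is eventually cleared, the queue stays shallow; any seqnum that is recorded but never cleared keeps its slot, so enlarging OUTQ_LEN only postpones the point at which the queue fills.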

cantops commented May 8, 2019

@Oxalin @cezuni
record_outq_seqnum and is_outq_seqnum should be called 1:1, right? record_outq_seqnum writes the sequence number into the queue, and is_outq_seqnum is executed to clean the entry out of the queue after the acknowledgement is sent; the queue will overflow if it is not cleaned up in time.

cezuni commented May 8, 2019

record_outq_seqnum and is_outq_seqnum should be called 1:1, right? record_outq_seqnum writes the sequence number into the queue, and is_outq_seqnum is executed to clean the entry out of the queue after the acknowledgement is sent; the queue will overflow if it is not cleaned up in time.

Exactly, yes @cantops. But queue full can occur because there are too many outgoing requests, or because an outgoing request is not processed in time or is ignored on the server side. The latter case implies that the client may be requesting in an unexpected way. So increasing the queue length, as in my previous comment, is just a workaround.

cantops commented May 9, 2019

[screenshot of debug output]
I have tried an empty USB disk and a ukey; both still produce the overflow messages. Increasing the queue length only delays the overflow, so I think the cause is that the queue is not being cleaned up properly.
Based on this guess, I added simple prints around the record_outq_seqnum and is_outq_seqnum calls. According to the output, the same data is passed to record_outq_seqnum twice (in different loop iterations), while is_outq_seqnum is called only once. Is this the main cause of the overflow?
@Oxalin @cezuni

cezuni commented May 9, 2019

@cantops:

I have tried an empty USB disk and a ukey; both still produce the overflow messages. Increasing the queue length only delays the overflow, so I think the cause is that the queue is not being cleaned up properly.

I had suspected that your server was not responding while the client pushed usbip requests.

According to the output, the same data is passed to record_outq_seqnum twice

Well, it looks odd that record_outq_seqnum() resides in get_xfer_len(): if get_xfer_len() is called twice, record_outq_seqnum() is also called twice, so more inspection is needed. Could you add more debugging logs around get_xfer_len()?
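
One simple way to gather that kind of evidence is to count calls per seqnum at the two call sites and dump every seqnum whose counts are not 1:1. The sketch below is a user-space illustration of that idea; the trace_* helpers and MAX_TRACKED are made up for this example and do not exist in the repository.

```c
/*
 * Rough sketch of the requested instrumentation: count how often each
 * seqnum hits the record and clear call sites, then dump every seqnum
 * whose counts are not 1:1. All names here are invented.
 */
#include <stdint.h>
#include <stdio.h>

#define MAX_TRACKED 4096

static uint32_t seqnums[MAX_TRACKED];
static unsigned record_calls[MAX_TRACKED];
static unsigned clear_calls[MAX_TRACKED];
static unsigned n_tracked;

static unsigned slot_for(uint32_t seqnum)
{
    for (unsigned i = 0; i < n_tracked; i++)
        if (seqnums[i] == seqnum)
            return i;
    if (n_tracked == MAX_TRACKED)   /* table exhausted: reuse the last slot */
        return MAX_TRACKED - 1;
    seqnums[n_tracked] = seqnum;
    return n_tracked++;
}

/* Call these from the record_outq_seqnum() / is_outq_seqnum() call sites. */
void trace_record(uint32_t seqnum) { record_calls[slot_for(seqnum)]++; }
void trace_clear(uint32_t seqnum)  { clear_calls[slot_for(seqnum)]++; }

void trace_dump(void)
{
    for (unsigned i = 0; i < n_tracked; i++)
        if (record_calls[i] != clear_calls[i])
            printf("seqnum %u: recorded %u, cleared %u\n",
                   seqnums[i], record_calls[i], clear_calls[i]);
}

int main(void)
{
    /* Simulate the reported pattern: recorded twice, cleared once. */
    trace_record(17);
    trace_record(17);
    trace_clear(17);
    trace_dump();   /* prints: seqnum 17: recorded 2, cleared 1 */
    return 0;
}
```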

koudis commented Jun 20, 2019

I have been tracking the "usbip: error: failed to record. out queue full" error because I need USBIP on my server farm (so thanks for the nice solution :) ).

The problem is that for some seqnums record_outq_seqnum is called twice and is_outq_seqnum is not called at all.

A patch (with log entries) and log.txt are attached. After applying the patch, I ran ".\usbip.exe attach -r 192.168.0.129 -b 1-2 > log.txt" in PowerShell in admin mode.

usbip-win-log.zip

cezuni commented Jun 22, 2019

Thanks @koudis & @cantops. As you pointed out, record_outq_seqnum() can be called multiple times, which means the same seq numbers are marked in the out queue more than once and eventually cause queue full.

I have created patch_out_queue_full_fix.txt to resolve this issue. The patch is just a workaround: it skips marking a seq number in record_outq_seqnum() if it is already recorded. A more desirable solution would be a hash table for bookkeeping outbound messages.

I have checked that the patched usbip works well with my USB Bluetooth dongle. If anyone else confirms, I'll merge the patch into the master branch.
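
The idea behind the workaround can be sketched roughly as follows. This is a hypothetical illustration, not the contents of patch_out_queue_full_fix.txt; outq and outq_record_dedup are assumed names.

```c
/*
 * Hypothetical illustration of the workaround's idea: before taking a free
 * slot, check whether the seqnum is already recorded and return early if
 * so, so that a request recorded twice still consumes only one slot.
 */
#include <assert.h>
#include <stdint.h>

#define OUTQ_LEN 256

static uint32_t outq[OUTQ_LEN];   /* 0 == free slot */

static int outq_record_dedup(uint32_t seqnum)
{
    int free_slot = -1;

    for (int i = 0; i < OUTQ_LEN; i++) {
        if (outq[i] == seqnum)
            return 1;             /* already recorded: do not mark again */
        if (outq[i] == 0 && free_slot < 0)
            free_slot = i;
    }
    if (free_slot < 0)
        return 0;                 /* genuinely full */
    outq[free_slot] = seqnum;
    return 1;
}

int main(void)
{
    assert(outq_record_dedup(42));
    assert(outq_record_dedup(42));    /* duplicate call: no extra slot used */

    int used = 0;
    for (int i = 0; i < OUTQ_LEN; i++)
        used += (outq[i] != 0);
    assert(used == 1);
    return 0;
}
```

The duplicate check adds a linear scan over OUTQ_LEN entries on every record, which is the kind of cost a hash table would avoid.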

koudis commented Jun 23, 2019

@cezuni

Test

I tested it with my flash storage. It seems to work without problems (from Debian Stretch to Windows 10 Home).

Hash table

Is there any hash table implementation in the project? (I cannot find one suitable for this purpose.) Or must the hash table be implemented first?

cezuni commented Jun 23, 2019

@koudis:

I tested it with my flash storage. It seems to work without problems (from Debian Stretch to Windows 10 Home).

Good. I'll merge the patch.

Is there any hash table implementation in the project? (I cannot find one suitable for this purpose.) Or must the hash table be implemented first?

A hash table would have to be implemented first. Well, it would be better to import a simple existing one into this project.
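
For reference, a very small seqnum hash set might look like the sketch below. It uses open addressing and is deliberately simplified; a real in-driver version would need proper deletion handling and locking, and none of these names come from usbip-win.

```c
/*
 * Sketch of a tiny seqnum hash set with open addressing. Sizing, the hash
 * function, and deletion handling are all simplified for illustration.
 */
#include <stdint.h>
#include <stdio.h>

#define HASH_SIZE 512             /* power of two, > expected pending count */

static uint32_t slots[HASH_SIZE]; /* 0 == empty */

static unsigned hash_seqnum(uint32_t seqnum)
{
    return (seqnum * 2654435761u) & (HASH_SIZE - 1);  /* multiplicative hash */
}

/* Record a seqnum when the request is sent; O(1) expected. */
static int pending_insert(uint32_t seqnum)
{
    unsigned h = hash_seqnum(seqnum);

    for (unsigned i = 0; i < HASH_SIZE; i++) {
        unsigned idx = (h + i) & (HASH_SIZE - 1);
        if (slots[idx] == seqnum)
            return 1;             /* already pending: nothing to do */
        if (slots[idx] == 0) {
            slots[idx] = seqnum;
            return 1;
        }
    }
    return 0;                     /* table full */
}

/* Consume a seqnum when its reply arrives; O(1) expected. */
static int pending_remove(uint32_t seqnum)
{
    unsigned h = hash_seqnum(seqnum);

    for (unsigned i = 0; i < HASH_SIZE; i++) {
        unsigned idx = (h + i) & (HASH_SIZE - 1);
        if (slots[idx] == seqnum) {
            slots[idx] = 0;       /* naive: a real table needs tombstones */
            return 1;
        }
        if (slots[idx] == 0)
            return 0;
    }
    return 0;
}

int main(void)
{
    pending_insert(1234);
    printf("%d\n", pending_remove(1234));   /* 1: was pending */
    printf("%d\n", pending_remove(1234));   /* 0: already consumed */
    return 0;
}
```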

cezuni closed this as completed in 2e0fae5 Jun 24, 2019

koudis commented Jun 24, 2019

@cezuni
I think that a hash table is not necessary, because it non-trivially increases the complexity of the implementation (another library, more non-trivial code). Will the hash table be used anywhere other than record_outq_seqnum...?

If yes, the solution below is not good :D.

Is there any reason why the queue is 256 items long? Can we decrease the number of queue items to 128 (at least)?
If yes, we can create another queue of length N, where N <= 128. Items in the new queue would be the "unmarked" seqnums; we just move seqnums from one queue to the other. The complexity of the operation stays the same.

cezuni commented Jun 25, 2019

@koudis:

Is there any reason why the queue is 256 items long? Can we decrease the number of queue items to 128 (at least)?

I recall that the queue implementation and the 256-entry length were inherited from the old repository. The optimal queue length is hard to guess; we just hope that 256 is sufficiently large. A number smaller than 256 would probably be fine in the normal case.

If yes, we can create another queue of length N, where N <= 128. Items in the new queue would be the "unmarked" seqnums; we just move seqnums from one queue to the other. The complexity of the operation stays the same.

Multiple queues are an interesting idea!! But every queue would have to be checked when a response message arrives. Besides the previously mentioned implementations, a linked list may be a good candidate.
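
A linked-list variant of the same bookkeeping could look roughly like this; again only a sketch, using malloc/free where a driver would use its own allocator and locking, and with invented names throughout.

```c
/*
 * Sketch of linked-list bookkeeping for pending seqnums: no fixed capacity,
 * O(1) insert and O(n) search on reply, where n is the number of requests
 * still in flight (normally small).
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct pending {
    uint32_t seqnum;
    struct pending *next;
};

static struct pending *pending_head;

/* Record a seqnum when the request is sent. */
static int pending_add(uint32_t seqnum)
{
    struct pending *p = malloc(sizeof(*p));

    if (p == NULL)
        return 0;
    p->seqnum = seqnum;
    p->next = pending_head;
    pending_head = p;
    return 1;
}

/* Remove the entry for seqnum when its reply arrives; 1 if it was found. */
static int pending_complete(uint32_t seqnum)
{
    struct pending **pp = &pending_head;

    while (*pp != NULL) {
        if ((*pp)->seqnum == seqnum) {
            struct pending *victim = *pp;
            *pp = victim->next;
            free(victim);
            return 1;
        }
        pp = &(*pp)->next;
    }
    return 0;
}

int main(void)
{
    pending_add(7);
    pending_add(8);
    printf("%d\n", pending_complete(8));   /* 1 */
    printf("%d\n", pending_complete(7));   /* 1 */
    printf("%d\n", pending_complete(7));   /* 0: already completed */
    return 0;
}
```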
