Please support larger screens #521

Closed
rrthomas opened this issue May 1, 2022 · 9 comments
rrthomas commented May 1, 2022

Is your feature request related to a problem? Please describe.
I have a two-monitor setup with 2× 4k screens. This requires a line buffer of 30720 bytes, but in rfb.h UPDATE_BUF_SIZE is defined as 30000. When I try to serve this setup I get the error:

rfbSendRectEncodingRaw: send buffer too small for 30720 bytes

It might be reasonable to use a much bigger value, e.g. at least 61440, to allow setups with 2× 8k monitors.
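
For illustration, here is a minimal sketch (not verbatim LibVNCServer source) of the failing arithmetic: the raw encoder packs whole scanlines into a fixed per-client buffer, so it fails as soon as a single scanline no longer fits.

    /* Minimal sketch (not verbatim LibVNCServer source) of why a framebuffer
     * two 4k monitors wide trips the check. */
    #include <stdio.h>

    #define UPDATE_BUF_SIZE 30000              /* current value in rfb.h */

    int main(void) {
        int width = 2 * 3840;                  /* two 4k monitors side by side */
        int bytesPerPixel = 4;                 /* 32bpp */
        int bytesPerLine = width * bytesPerPixel;    /* = 30720 */
        if (UPDATE_BUF_SIZE / bytesPerLine == 0)     /* not even one line fits */
            printf("rfbSendRectEncodingRaw: send buffer too small for %d bytes\n",
                   bytesPerLine);
        return 0;
    }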

Describe the solution you'd like
Please increase the value of UPDATE_BUF_SIZE. This doesn't seem to be a big deal, since there's only one buffer of this size per struct _rfbClientRec, and even (say) doubling that again would only be about 120KB per client.
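
For reference, the buffer in question is a fixed array in the per-client struct in rfb.h; roughly (paraphrased, not a verbatim excerpt):

    /* Paraphrased from rfb.h, not a verbatim excerpt: */
    #define UPDATE_BUF_SIZE 30000

    typedef struct _rfbClientRec {
        /* ... many other members ... */
        char updateBuf[UPDATE_BUF_SIZE];   /* one fixed send buffer per client */
        int ublen;                         /* bytes currently used in updateBuf */
        /* ... */
    } rfbClientRec;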

Thanks for LibVNC!

bk138 added this to the Release 0.10.0 milestone May 2, 2022

bk138 (Member) commented May 2, 2022

Yeah, sounds reasonable!

sunweaver commented May 2, 2022

@bk138 Any chance to get an upstream patch any time soonish which I could then cherry-pick into libvncserver in Debian prior to an upstream release?

bk138 (Member) commented May 2, 2022

> @bk138 Any chance to get an upstream patch any time soonish which I could then cherry-pick into libvncserver in Debian prior to an upstream release?

Yeah, I can look into this this week or next, as it's probably a fairly small change.

bk138 self-assigned this May 2, 2022

bk138 (Member) commented May 8, 2022

@rrthomas Two questions:

  • How do you actually end up with 30720 bytes for a 2 x 4096 x 4 setup?
  • Are you in control of the server code, i.e. would a configurable buffer size work for you?

rrthomas (Author) commented May 8, 2022

> @rrthomas Two questions:
>
> * How do you actually end up with 30720 bytes for a 2 x 4096 x 4 setup?

Horizontal resolution for 4k is (at least in my case) 3840: 3840×4×2 = 30720.

> * Are you in control of the server code, i.e. would a configurable buffer size work for you?

It would work for me, but why make it complicated when raising the limit would use very little memory? At least raise it to 30,720 (an extra 720 bytes), which would work for up to 2×4k monitors or 1×8k monitor.

If raising a static allocation is seriously not an option, could that memory be dynamically allocated?
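
A hypothetical sketch of that alternative (identifiers invented for illustration, not the actual LibVNCServer API): size the send buffer to the framebuffer at client setup, keeping the old size as a floor.

    #include <stdlib.h>

    typedef struct {
        char *updateBuf;        /* was: char updateBuf[UPDATE_BUF_SIZE]; */
        size_t updateBufSize;
    } clientRec;                /* stand-in for rfbClientRec */

    static int allocUpdateBuf(clientRec *cl, int fbWidth, int bytesPerPixel) {
        size_t bytesPerLine = (size_t)fbWidth * bytesPerPixel;
        cl->updateBufSize = bytesPerLine > 30000 ? bytesPerLine : 30000;
        cl->updateBuf = malloc(cl->updateBufSize);
        return cl->updateBuf != NULL;   /* caller should fail the client on 0 */
    }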

sunweaver commented:

X11 max resolution is 32768x32768, so maybe simply use the X11 max values?

rrthomas (Author) commented May 8, 2022

@sunweaver, that would be 262,144 bytes. Also, good point about the max size; I was wondering how many screens horizontally would be "reasonable", but this is independent of that.

sunweaver commented:

It is actually not 32768x32768 but MAXSHORTxMAXSHORT, where MAXSHORT corresponds to SHRT_MAX, which is set to 32767 in C's limits.h.
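
A quick standalone check of what that bound implies for a scanline (not library code):

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        /* worst-case X11 scanline: SHRT_MAX columns at 32bpp */
        long bytesPerLine = (long)SHRT_MAX * 4;   /* 32767 * 4 = 131068 */
        printf("max X11 scanline at 32bpp: %ld bytes\n", bytesPerLine);
        return 0;
    }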

bk138 (Member) commented May 9, 2022

@rrthomas I'm mainly thinking about existing users with embedded use cases, where doubling the static allocation per client might already have serious effects.

@sunweaver Using the X11 max values is an understandable idea, but the library's use is not limited to X11, so that bound would be arbitrary as well.

I'll simply raise the default to 32768, which is not an awful lot of an increase and should not hurt the embedded guys.

If someone needs more, we can still go down the malloc() route, but I'll leave this to the users to request and will KISS for now.
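
That would amount to a one-line change in rfb.h, along these lines (a sketch; the actual commit may differ):

    #define UPDATE_BUF_SIZE 32768   /* was 30000 */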
