As far as I see, the problem is that UA_Byte and char are not sign-compatible.
Does anyone have an elegant solution for that? Or should we just replace char by UA_Byte everywhere?
UA_SByte == char
UA_Byte == unsigned char
Why don't you use SByte when you mean char?
Do we mean char? Before the encoding we do not have any type. IMHO we should just replace it with UA_Byte.
I am fine with it.
I would vote to have UA_Byte* instead of char* for the buffers in the _encode/_decode functions.
af72881