forked from locka99/opcua
Async server #4
Merged
Instead of using bitflags, this reimplements the status code struct by hand. Using bitflags has the unfortunate side effect of making the `Debug` implementation completely wrong. This way we get a much richer and more correct implementation of the status code. It includes a `validate` method, which might be handy, but it is not enforced anywhere. The implementation uses some relatively simple declarative macros, so the old JS script used for codegen is removed.
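A hand-rolled status code along these lines might look roughly like the following sketch. The macro shape, constant names, and bit values here are illustrative guesses, not the crate's actual code; only the general idea (constants plus a name table generated by a declarative macro, giving a correct `Debug` and a `validate`-style check) comes from the description above.

```rust
// A sketch of a hand-written status code type replacing a bitflags-based
// one. The 32-bit layout follows OPC-UA convention (top bits carry the
// severity); everything concrete here is illustrative only.

/// A status code wrapping the raw `u32` OPC-UA representation.
#[derive(Clone, Copy, PartialEq, Eq)]
pub struct StatusCode(u32);

// Declarative macro generating associated constants plus a name lookup,
// standing in for the removed JS codegen script.
macro_rules! status_codes {
    ($($name:ident = $value:literal;)*) => {
        #[allow(non_upper_case_globals)]
        impl StatusCode {
            $(pub const $name: StatusCode = StatusCode($value);)*

            /// Human-readable name, which makes `Debug` meaningful.
            pub fn name(&self) -> &'static str {
                match self.0 {
                    $($value => stringify!($name),)*
                    _ => "Unknown",
                }
            }

            /// A `validate`-style check: is this a known code?
            pub fn is_valid(&self) -> bool {
                self.name() != "Unknown"
            }
        }
    };
}

status_codes! {
    Good = 0x0000_0000;
    BadUnexpectedError = 0x8001_0000;
    BadInternalError = 0x8002_0000;
}

// With a name table available, `Debug` can print the symbolic name
// instead of the wrong output a bitflags-derived implementation gave.
impl std::fmt::Debug for StatusCode {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{} (0x{:08X})", self.name(), self.0)
    }
}

fn main() {
    assert_eq!(format!("{:?}", StatusCode::Good), "Good (0x00000000)");
    assert!(StatusCode::BadInternalError.is_valid());
    assert!(!StatusCode(0xDEAD_BEEF).is_valid());
    println!("ok");
}
```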
It is now possible to establish a connection, end to end, using the async client and async server. Next up is writing framework code for services, then implementing the existing in-memory node manager.
This one is more correct, but it got very complex. It's probably worth splitting into its own module and perhaps finding some way to improve code clarity at some point. In general, though, I think the change is good.
This service is a nightmare with this design. Hopefully things will get a lot simpler once we get away from browse.
Pretty much just empty impls for now, but the stuff in the node manager is at least really simple, which is good.
This should, maybe, fix the one remaining hole in that implementation.
Finally finishing up the core logic, just need to implement services and node manager functionality.
Next is handling monitored items.
This may need some more work; it also means that having multiple sessions in a single transport won't work properly.
Should go through server config, generally.
What and Why
This is a rewrite of the server from scratch, with the primary goal of taking the server implementation from a limited, mostly embedded server to a fully fledged, general-purpose server SDK. The old way of using the server still more or less exists; see the samples for the closest current approximation. The server framework has changed drastically internally, however, and the new design opens the door to far more complex and powerful OPC-UA servers.
Goals
Currently my PC uses about 1% CPU in release mode running the demo server, which updates 1000 variables once a second. This isn't bad, but I want this SDK to be able to handle servers with millions of nodes. In practice this means several things.
High level changes
First of all, there are some fundamental structural changes to better handle multiple clients and ensure compliance with the OPC-UA standard. Each TCP connection now runs in a `tokio` task, and most requests will actually spawn a task themselves. This is reasonably similar to how frameworks like `axum` handle web requests.

Subscriptions and sessions are now stored centrally, which allows us to implement `TransferSubscriptions` and properly handle subscriptions outliving their session, as they are supposed to in OPC-UA. I think technically you can run multiple sessions on a single connection now, though I have no way to test this at the moment.

The web server is gone. It could have remained, but I think it deserves a rethink. It would be better (IMO), and would deal with issues such as locka99#291, if we integrated with the `metrics` library and optionally exported some other metrics through a generic interface. In general I think OPC-UA is plenty complicated enough without extending it with tangentially related features, though again this relates to the shift I'm trying to create here from a specialized embedded server SDK to a generic OPC-UA SDK.

Events are greatly changed, and quite unfinished. I believe a solid event implementation requires not just more thought, but a proper derive macro to make implementing them tolerable. The old approach relied on storing events as nodes, which works and has some advantages, but it's not particularly efficient, and it required setting a number of superfluous values, e.g. the DisplayName of an event, a value that cannot be accessed as far as I understand. The new approach is just storing them as structs, `dyn Event`.
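The central subscription store that makes `TransferSubscriptions` possible could be sketched roughly like this. All type names and signatures are illustrative inventions, not the real API; the point is only that subscriptions reference a session instead of being owned by one, so they can survive it and be transferred.

```rust
// Hypothetical sketch: subscriptions live in a server-wide store keyed
// by subscription id and merely reference a session id, so they can
// outlive the session and later be handed to a new one.

use std::collections::HashMap;

#[derive(Debug, Clone, Copy, PartialEq)]
struct SessionId(u32);

#[derive(Debug)]
struct Subscription {
    owner: Option<SessionId>, // None while orphaned, awaiting transfer
    publishing_interval_ms: u64,
}

#[derive(Default)]
struct SubscriptionStore {
    subscriptions: HashMap<u32, Subscription>,
}

impl SubscriptionStore {
    fn create(&mut self, id: u32, owner: SessionId, interval: u64) {
        self.subscriptions.insert(
            id,
            Subscription { owner: Some(owner), publishing_interval_ms: interval },
        );
    }

    /// Session died: detach its subscriptions instead of deleting them.
    fn orphan_session(&mut self, session: SessionId) {
        for sub in self.subscriptions.values_mut() {
            if sub.owner == Some(session) {
                sub.owner = None;
            }
        }
    }

    /// TransferSubscriptions: give a subscription to a new session.
    fn transfer(&mut self, id: u32, to: SessionId) -> bool {
        match self.subscriptions.get_mut(&id) {
            Some(sub) => {
                sub.owner = Some(to);
                true
            }
            None => false,
        }
    }
}

fn main() {
    let mut store = SubscriptionStore::default();
    store.create(1, SessionId(10), 1000);
    store.orphan_session(SessionId(10)); // session closes, subscription survives
    assert!(store.transfer(1, SessionId(11)));
    assert_eq!(store.subscriptions[&1].owner, Some(SessionId(11)));
    println!("transferred");
}
```

A real implementation also has to enforce the spec's lifetime counters and ownership checks; this sketch only shows why central storage (rather than per-session storage) is what enables the service.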
Node managers
The largest change is in how most services work. The server now contains a list of `NodeManager`s, an idea stolen from the .NET reference SDK, though I've gone further than they do there. Each node manager implements services for a collection of nodes, typically the nodes from one or more namespaces. When a request arrives we give each node manager the request items that belong to it, so when we call `Read`, for example, a node manager will get the `ReadValueId`s where the `NodeManager` method `owns_node` returns `true`.

There are some exceptions; notably the `view` services can often involve requests that cross node-manager boundaries. Even with this, the idea is that this complexity is hidden from the user.

Implementing a node manager from scratch is challenging; see `node_manager/memory/diagnostics.rs` for an example of a node manager with very limited scope (but one where the visible nodes are dynamic!). To make it easier for users to develop their own servers, we provide them with a few partially implemented node managers that can be extended:

- `InMemoryNodeManager` deals with all non-value attributes, as well as `Browse`, and provides some convenient methods for setting values in the address space. Node managers based on this use the old `AddressSpace`. Each such node manager contains something implementing `InMemoryNodeManagerImpl`, which is a much more reasonable task to implement. See `tests/utils/node_manager.rs` for a very complete example, or `node_manager/memory/core.rs` for a more realistic example (this node manager implements the core namespace, which may also be interesting).
- `SimpleNodeManager` is an approximation of the old way to use the SDK. Nodes are added externally, and you can provide getters, setters, and method callbacks. These are no longer part of the address space.

More node managers can absolutely be added if we find good abstractions, but these are solid enough to let us implement what we need for the time being.
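As a rough sketch of the dispatch idea: `owns_node` comes from the description above, but every type, signature, and name below is invented for illustration and heavily simplified compared to any real SDK.

```rust
// Hypothetical sketch of node-manager dispatch: the server partitions a
// Read request's items by which manager "owns" each node.

#[derive(Debug, Clone, PartialEq)]
struct NodeId {
    namespace: u16,
    identifier: String,
}

/// Simplified stand-in for OPC-UA's ReadValueId: which node to read.
#[derive(Debug, Clone)]
struct ReadValueId {
    node_id: NodeId,
}

trait NodeManager {
    /// Does this manager serve the given node (typically: its namespaces)?
    fn owns_node(&self, id: &NodeId) -> bool;
    /// Handle only the items routed to this manager.
    fn read(&self, items: &[ReadValueId]) -> Vec<String>;
}

/// A manager owning a fixed set of namespaces.
struct NamespaceManager {
    namespaces: Vec<u16>,
    name: &'static str,
}

impl NodeManager for NamespaceManager {
    fn owns_node(&self, id: &NodeId) -> bool {
        self.namespaces.contains(&id.namespace)
    }
    fn read(&self, items: &[ReadValueId]) -> Vec<String> {
        items
            .iter()
            .map(|i| format!("{}:{}", self.name, i.node_id.identifier))
            .collect()
    }
}

/// Server-side dispatch: give each manager only the items it owns.
/// (A real server must also restore the original request order in the
/// response; that bookkeeping is omitted here.)
fn dispatch_read(managers: &[Box<dyn NodeManager>], items: &[ReadValueId]) -> Vec<String> {
    let mut results = Vec::new();
    for m in managers {
        let mine: Vec<ReadValueId> = items
            .iter()
            .filter(|i| m.owns_node(&i.node_id))
            .cloned()
            .collect();
        results.extend(m.read(&mine));
    }
    results
}

fn main() {
    let managers: Vec<Box<dyn NodeManager>> = vec![
        Box::new(NamespaceManager { namespaces: vec![0], name: "core" }),
        Box::new(NamespaceManager { namespaces: vec![2], name: "demo" }),
    ];
    let items = vec![
        ReadValueId { node_id: NodeId { namespace: 0, identifier: "ServerStatus".into() } },
        ReadValueId { node_id: NodeId { namespace: 2, identifier: "Temperature".into() } },
    ];
    let out = dispatch_read(&managers, &items);
    assert_eq!(out, vec!["core:ServerStatus", "demo:Temperature"]);
    println!("{out:?}");
}
```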
Lost features
Some features are lost, some forever, others until we get around to reimplementing them. I could have held off on this PR until they were all ready, but it's already large enough.
`-1` no longer works. I wanted to make everything work, but in typical OPC-UA fashion some features are just incredibly hard to implement properly in a performant way. I'm open to suggestions for implementing this in a good way, but it's such a niche feature that I felt it was fine to leave it out for now.
General improvements
Integration tests are moved into the library as cargo integration tests, and they are quite nice. I can run `cargo test` in about 30 seconds, most of which is spent on some expensive crypto methods. There is a test harness that allows you to spin up a server using port `0`, meaning that you get a dynamically assigned port, which means we can run tests in parallel arbitrarily.

This almost certainly fixes locka99#359, locka99#358, locka99#324, locka99#291, and locka99#281, and probably more.
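The port-0 trick the harness relies on can be shown in miniature with the standard library; std threads stand in for the real async server here, and everything concrete is illustrative rather than the actual harness code.

```rust
// Bind to port 0 and the OS assigns a free port, so parallel test
// servers never collide. A tiny echo "server" demonstrates the idea.

use std::io::{Read, Write};
use std::net::{SocketAddr, TcpListener, TcpStream};
use std::thread;

fn spawn_echo_server() -> SocketAddr {
    // Port 0: ask the OS for any free port, then report which one we got.
    let listener = TcpListener::bind("127.0.0.1:0").expect("bind");
    let addr = listener.local_addr().unwrap();
    thread::spawn(move || {
        // One connection is enough for the demo; echo a message back.
        if let Ok((mut stream, _)) = listener.accept() {
            let mut buf = [0u8; 16];
            if let Ok(n) = stream.read(&mut buf) {
                let _ = stream.write_all(&buf[..n]);
            }
        }
    });
    addr
}

fn main() {
    let a = spawn_echo_server();
    let b = spawn_echo_server();
    // Distinct dynamically assigned ports: tests can run in parallel.
    assert_ne!(a.port(), b.port());

    // A "client" connects to the reported address and gets its echo.
    let mut client = TcpStream::connect(a).unwrap();
    client.write_all(b"HEL").unwrap();
    let mut reply = [0u8; 3];
    client.read_exact(&mut reply).unwrap();
    assert_eq!(&reply, b"HEL");
    println!("ok");
}
```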
Future work
See `todo.md`: the loose ends mentioned in this PR description need to be tied up, and there is a whole lot of other stuff in that file that would be nice to do.