Use case: The ASHRAE 223 standard (soon to be open for public review) #343
@steveraysteveray, thanks for the use case.
What are the assumptions on execution? SHACL-AF has the text:
Is it assumed that each rule is run once only? Or do some rules potentially depend on others running? Does the output of running the rules get added back to the data graph? Storage elsewhere? Discarded after each run?
We definitely iterate multiple times over the rules. Using TopBraid Composer, that is easily configured. For non-TopBraid users, in the GitLab build process we wrote a script that repeatedly invokes the SHACL inference API call distributed by TopQuadrant until no new triples are asserted; typically on the order of 7 iterations are needed. Following that, the resulting graphs are validated. The output is added back to the data graph on each iteration and is usually discarded after validation, but that decision is left to the application developer. However, on the https://models.open223.info/intro.html site, which contains example models, you will see links to the "original" and "compiled" versions of the models; the compiled version is the result of the inferences.

I agree with your three bullet points. Only SHACL inferencing is performed. Negation is indeed used, which is one reason for the SPARQL validation rules rather than native SHACL. We adopt a closed-world assumption and have written a SPARQLConstraint to enforce it (for entities within 223 or QUDT, which is used by 223).

One other point - while writing this issue, I noticed that the open223 website is a bit behind our GitLab repository. Did you count the rules from that site? If you would like access to our GitLab, it is available upon request to jjb5@cornell.edu, where you can view all the files. I'd be happy to answer any questions you might have about the organization of the repo.
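The iterate-until-no-new-triples strategy described above can be sketched in a few lines of Python. This is an illustrative toy, not the TopQuadrant API: triples are plain tuples, and a single invented transitivity rule over a cnx-like relation stands in for the real SHACL-AF rules.

```python
def transitive_rule(triples):
    """Toy rule standing in for a SHACL-AF rule: infer (x, cnx, z)
    from (x, cnx, y) and (y, cnx, z)."""
    inferred = set()
    for (x, p1, y) in triples:
        for (y2, p2, z) in triples:
            if p1 == p2 == "cnx" and y == y2:
                inferred.add((x, "cnx", z))
    return inferred


def run_to_fixpoint(data, rules):
    """Re-run every rule, adding its output back to the graph, until an
    iteration asserts no new triples (the termination test used by the
    build script described above)."""
    graph = set(data)
    iterations = 0
    while True:
        before = len(graph)
        for rule in rules:
            graph |= rule(graph)
        iterations += 1
        if len(graph) == before:  # fixpoint: nothing new was asserted
            break
    return graph, iterations


graph, n = run_to_fixpoint(
    {("a", "cnx", "b"), ("b", "cnx", "c"), ("c", "cnx", "d")},
    [transitive_rule],
)
print(len(graph), n)
```

In the real build the rule step is the SHACL inference call and the data is an RDF graph, but the termination logic, comparing the triple count before and after each pass, is the same.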
I followed the given link to https://data.ashrae.org/BACnet/223p/223p.ttl
No thank you. There would be licensing, copyright, and IP issues that I don't want to get involved with.
OK. We will try to bring open223 up to the current state - not sure when. Regarding access to the GitLab, we could add you without merge request capability (basically read-only) if you like, but it's your call.
The two versions are quite different in size. The older one is larger - is that because it includes the rules output?
Just to check - so some rules depend on other rules? How is the relationship between rules designed?
No, a different reason entirely. The older one includes most of the QUDT ontology and vocabularies, which are much larger than the 223 standard. Since that time we agreed they do not belong as part of the standard, only by reference (import). Also, none of the rules' output affects the standard itself, just data files that use the standard.
Some of the rules fire as a result of triples appearing because of other rules, but there is no explicit use of sh:order or any other dependency mechanism. Each rule stands on its own. We just iterate until no new triples appear. One good example is illustrated by the following figure. Triples involving all the relations shown in the figure can be inferred from triples using the "base" relation cnx.
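The implicit dependency described above can be illustrated with two toy rules (invented predicate names, not from the 223 standard): the second rule only fires on triples produced by the first, yet no sh:order is needed, because iterating to fixpoint makes the result independent of the order in which the rules run.

```python
def rule_a(triples):
    # Derive a broader relation from the "base" relation cnx.
    return {(x, "connectsTo", y) for (x, p, y) in triples if p == "cnx"}


def rule_b(triples):
    # Derive the inverse of connectsTo; depends entirely on rule_a's output.
    return {(y, "connectedFrom", x) for (x, p, y) in triples if p == "connectsTo"}


def saturate(data, rules):
    """Apply all rules repeatedly until no new triples appear."""
    graph = set(data)
    while True:
        before = len(graph)
        for rule in rules:
            graph |= rule(graph)
        if len(graph) == before:
            break
    return graph

# Even with the rules listed "backwards", the fixpoint is identical.
g1 = saturate({("a", "cnx", "b")}, [rule_a, rule_b])
g2 = saturate({("a", "cnx", "b")}, [rule_b, rule_a])
assert g1 == g2
print(sorted(g1))
```

Listing rule_b first just costs one extra iteration; the saturated graph is the same either way, which is why each rule can "stand on its own."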
For reference, here are the rules: Extracted rules from https://open223.info/223p.ttl
I posted some analysis that I did in Dec 2023: https://github.com/VladimirAlexiev/ashrae-rules; see README.html for easier reading. @afs: thanks for the export above!
AFAIR, it is mostly for setting defaults (e.g. see the first two rules about "medium").
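A hedged sketch of what such a default-setting rule looks like; the SPARQL and all names below are invented for illustration, not copied from the 223 rules. The FILTER NOT EXISTS is the kind of negation mentioned earlier in the thread: the default is asserted only when no medium is already present.

```python
# Illustrative SPARQL for a default-setting rule (predicate and class
# names are assumptions, not the standard's actual terms).
DEFAULT_MEDIUM_RULE = """
PREFIX ex: <http://example.org/>
CONSTRUCT { ?cp ex:hasMedium ex:Water }
WHERE {
  ?cp a ex:ConnectionPoint .
  FILTER NOT EXISTS { ?cp ex:hasMedium ?m }
}
"""


def apply_default(triples, default="Water"):
    """Pure-Python equivalent of the rule above: give every
    ConnectionPoint without a hasMedium triple the default medium."""
    cps = {s for (s, p, o) in triples if p == "a" and o == "ConnectionPoint"}
    has_medium = {s for (s, p, o) in triples if p == "hasMedium"}
    return triples | {(cp, "hasMedium", default) for cp in cps - has_medium}


data = {
    ("cp1", "a", "ConnectionPoint"),
    ("cp2", "a", "ConnectionPoint"),
    ("cp2", "hasMedium", "ChilledWater"),
}
out = apply_default(data)
print(sorted(out))
```

Here cp1 receives the default while cp2's explicitly stated medium is left untouched, which is the behavior a "default value" convenience feature would need to capture.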
Steve, can you comment? Andy, does it make sense for me to write a UCR?
@VladimirAlexiev I think the default value requirement is quite well-covered already. It is too early to say whether there should be exactly that feature or whether it is done by a feature with wider applicability (c.f. Node expressions might benefit as convenience syntax based on
@afs A special construct
@VladimirAlexiev, all connected things don't need to have exactly the same Medium, but they do need to be "compatible". For example, one vendor might have specified chilled water as the medium, and another as just water. That's OK. We check 7 different cases for compatibility between ConnectionPoints, Connections, etc. There are 7 validation cases because of mixtures (see https://docs.open223.info/explanation/medium_mixtures.html). Even these tests are a little too forgiving, but we felt it is better to allow an invalid case than it is to deny a valid case.

Regarding your parent-children question, I'm assuming you are talking about inferring the medium in a mapsTo situation between ConnectionPoints. First, a mapsTo can only involve two ConnectionPoints, so there is no "multiple children" situation. If there are multiple contained ConnectionPoints, they get combined inside the containing Equipment either via a Connection or a Junction, and only one ConnectionPoint mapsTo the containing ConnectionPoint. Incompatibility between the contained ConnectionPoints is handled via the above validation tests.
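The "compatible, not identical" idea can be sketched as a small check: media pass if one is the same as, or a specialization of, the other (chilled water vs. water passes; water vs. air fails). The hierarchy and names here are invented for the example; the standard's actual 7-case logic, including mixtures, is considerably richer.

```python
# Toy specialization hierarchy: child medium -> broader medium.
# These names are illustrative, not the standard's vocabulary.
BROADER = {"ChilledWater": "Water", "HotWater": "Water"}


def ancestors(medium):
    """The medium itself plus every broader medium above it."""
    out = {medium}
    while medium in BROADER:
        medium = BROADER[medium]
        out.add(medium)
    return out


def compatible(m1, m2):
    # Compatible if either medium is the same as, or a specialization
    # of, the other (one vendor says ChilledWater, another says Water).
    return m1 in ancestors(m2) or m2 in ancestors(m1)


print(compatible("ChilledWater", "Water"))
print(compatible("Water", "Air"))
```

This mirrors the stated design preference: a lenient check that may let a borderline-invalid case through rather than rejecting a valid one.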
Thanks for the explanation @steveraysteveray! From examining the rules, the only negation that doesn't match

There are also several rules that check for uniqueness (count=1).
The normative specification of ASHRAE Standard 223 (under development) is a set of SHACL files making extensive use of both validation shapes and SHACL-AF rules, on the order of 41 sh:TripleRule and 23 sh:SPARQLRule instances. These can be browsed unofficially at https://open223.info/.
Rather than repeat the rationale and use of 223, please review the above link.
The main use of the inference rules is to flesh out a "minimal" model when possible, as well as to implement two OWL implications (symmetric relations and inverse relations). The standard avoids the use of OWL predicates altogether except for graph management (i.e. owl:imports and owl:Ontology declarations).
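The two OWL-style implications mentioned above can be written as simple forward rules, the way a sh:TripleRule would express them. This sketch uses invented predicate names over tuple-triples purely for illustration:

```python
def symmetric_rule(triples, pred="cnx"):
    # owl:SymmetricProperty behaviour, as a forward rule: (x p y) => (y p x).
    return {(y, p, x) for (x, p, y) in triples if p == pred}


def inverse_rule(triples, pred="contains", inverse="isContainedIn"):
    # owl:inverseOf behaviour, as a forward rule: (x p y) => (y inv x).
    return {(y, inverse, x) for (x, p, y) in triples if p == pred}


data = {("a", "cnx", "b"), ("eq1", "contains", "cp1")}
data |= symmetric_rule(data) | inverse_rule(data)
print(sorted(data))
```

Encoding these as rules keeps OWL reasoning out of the toolchain entirely, consistent with the standard's restriction of OWL vocabulary to owl:imports and owl:Ontology.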
Overall, we found SHACL rules were up to the task of handling what we needed, although we have been at this for 5 years and I might have forgotten some things we tried and gave up on.
Conformance to the standard is largely handled by SHACL shapes, although satisfying all such shapes does not guarantee conformance.