Provide a MODEL concept for attaching different interpretations of RACK data #299
Conversation
I like the approach. Can you expand on what exactly "REQUIREMENT_STATEMENT linked by states" would look like? This is the critical link to REQUIREMENT. In your TEMPORAL_LOGIC_STATEMENT example, should there be a link to the REQUIREMENT? Another discussion point was different types of requirements and requirement decomposition.
In general I like the approach. I can't help but wonder: could we make it a little bit more non-specific? What I mean is, should we add a concept of a
then you would be able to assign the confidence in the model. This would allow the same basic constructs to be used for other types of modeling that people may be doing. Part of what I hang up on a bit with this is the idea of having a confidence value on anything with
As you can see from above, I would also suggest reversing the direction of the connection between the REQUIREMENT and
Yeah, reversing the link makes a lot of sense. The requirement stands on its own and the models are modeling a particular requirement. I'll fix that up now. Generalizing out to MODEL allows us to model all sorts of RACK entities and saves a lot of potential duplication!
@russell-d-e Are you thinking MODEL would relate to a thing because we might not only model an entity, but also an ACTIVITY? As general as MODEL is going to be, should it live in our PROV-S, or in its own top-level ontology file? It seems to me like it actually fits in the core PROV-S, even though it's not derived from PROV itself, but just because it becomes so core to the way we capture information. Opinions?
@AbhaMoitra I like your MODEL, but I think the notion of confidence should live on the subclasses of MODEL rather than on the top-level class itself. I can't imagine that confidence numbers would be comparable between different subclasses. In the case of GrammaTech's requirement model, they have many confidences. Other models might not have a notion of confidence, and those that do might use different scales.
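A minimal sketch of that design point, with hypothetical class and field names (not RACK's actual schema): the base MODEL class carries no confidence field at all, and each subtype defines its own, on its own scale, so values are never compared across subtypes.

```python
# Sketch: "confidence lives on the subclass, not on MODEL".
# All class and field names here are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class Model:
    states: str  # identifier of the thing being modeled; no confidence here


@dataclass
class RequirementStatement(Model):
    if_confidence: float    # per-field confidences, 0.0-1.0
    then_confidence: float


@dataclass
class TemporalLogicStatement(Model):
    formula: str            # this subtype has no notion of confidence

stmt = RequirementStatement("req-001", 0.9, 0.7)
print(stmt.then_confidence)  # 0.7
```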
I did use My thinking was that it would be in its own top-level ontology file. Just like we have
Having thought about it, I think having it as a top-level SADL file makes the most sense.
Also see email (hbr17 = Howard Reubenstein):

I think this is largely an issue of the physical organization of RACK, as given this representation style we can define queries or utilities like the following. One thing I see missing from the proposal is a way to interrogate what models exist for a requirement: I see the link from each model to the requirements, but do we need a link from the requirement to its set of models? Before implementing this change I would strongly suggest providing some notional pseudo-code showing how this feature would be used by the TA3s. (To evaluate API proposals, and this is sort of an API proposal, I always like to see sample code that shows how the API might be used.) An example notional task I would like to see pseudo-code for is: given a reference to a requirement by UID, how would I access its preconditions (assumes) clause?
The new proposal adds no new complexity in this case. Whether we are using subclasses of REQUIREMENT or subclasses of MODEL, we'll have to have a nodegroup query to list the generic thing and its type to help us decide what we want to do. We can make a query that lists all the models, and the type of the model, associated with any particular piece of data. This would be similar to how we'd need to have a nodegroup that lists the subtype of a requirement to know how to access the fields of its subtypes. TA3s will need advance knowledge of the different models available in the dataset in order to do anything interesting with them. There should be no surprise MODEL subtypes to discover. We'll want anyone generating a new model type to carefully document the meaning of the model so that a consumer of the model can productively use it while building an assurance case.
This would be done through any nodegroup with a
Here's a nodegroup that has a runtime constraint on the model identifier (not indicated in the picture) that returns all the models associated with a particular requirement, along with the type of that model, so that you can request more specific information. An alternative would be what was suggested above: writing a union query to select any of the kinds of models that you have code to handle.
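As a notional sketch of the query logic behind such a nodegroup (the triple layout and the `states` property name are assumptions taken from this thread, not RACK's actual schema), the "list all models and their types for a requirement" lookup amounts to:

```python
# Notional sketch: list every model that `states` a requirement,
# together with its type. Data layout and names are assumptions.

TRIPLES = [
    # (subject, predicate, object)
    ("req-001", "type", "REQUIREMENT"),
    ("stmt-A", "type", "REQUIREMENT_STATEMENT"),
    ("stmt-A", "states", "req-001"),
    ("stmt-B", "type", "TEMPORAL_LOGIC_STATEMENT"),
    ("stmt-B", "states", "req-001"),
]


def models_for(requirement_id):
    """Return (model_id, model_type) pairs for every model that
    `states` the given requirement."""
    models = [s for s, p, o in TRIPLES
              if p == "states" and o == requirement_id]
    return [(m, t) for m in models
            for s, p, t in TRIPLES
            if s == m and p == "type"]

print(models_for("req-001"))
# [('stmt-A', 'REQUIREMENT_STATEMENT'), ('stmt-B', 'TEMPORAL_LOGIC_STATEMENT')]
```

The caller then dispatches on the returned type to request model-specific fields, as described above.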
Relations live independently of the class they happen to be described by in the SADL file. No query power is changed by moving the relation from one file to the other. The directionality of a query like the nodegroup above comes from specifying either the REQUIREMENT or the MODEL identifier as the runtime constraint.
Here's a nodegroup that queries the Acert-extracted
I don't think it's fair to assume, in general, that all models have a distinguished assumes field, so you'd probably have to know what kind of model you were looking at to know this. I would expect that for a complex requirement there might not be a single assumption that covers all behaviors of the requirement. A more interesting requirement in a structured language might allow for a much richer logical implication or set of assumptions. I'm willing to be convinced that no requirements are this complex, however.
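Howard's example task ("given a requirement UID, access its assumes clause") could be sketched like this, with the caveats above baked in: the field names (`givenText` as the assumes-like field) and the type dispatch are assumptions for illustration, since only some model types carry such a field.

```python
# Notional sketch: look up the assumes-like clause of a requirement
# by first finding a model that states it, then reading a field that
# only some model types define. All names are hypothetical.

MODELS = {
    "stmt-A": {"type": "REQUIREMENT_STATEMENT",
               "states": "req-001",
               "givenText": "the aircraft is on the ground"},
}


def assumes_clause(requirement_id):
    """Return the assumes-like clause from the first model of a known
    type that states this requirement, or None if no such model exists."""
    for model in MODELS.values():
        if model["states"] != requirement_id:
            continue
        if model["type"] == "REQUIREMENT_STATEMENT":
            return model["givenText"]  # this subtype's assumes-like field
        # other subtypes would need their own, documented, access path
    return None

print(assumes_clause("req-001"))  # the aircraft is on the ground
```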
Dave wrote:
We can write a query that pulls in optional matches from multiple subtypes in one shot. Here's the simplest example.
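The shape of such a one-shot query can be sketched as follows (subtype names and fields are assumptions from this thread): like a query with optional clauses, every model comes back as one row, with the subtype-specific fields it lacks reported as empty rather than filtering the row out.

```python
# Notional sketch of "optional matches from multiple subtypes in one
# shot": one pass over all models, each row carrying whichever
# subtype-specific fields that model happens to define.

MODELS = [
    {"id": "stmt-A", "type": "REQUIREMENT_STATEMENT",
     "ifText": "pressed", "thenText": "light on"},
    {"id": "stmt-B", "type": "TEMPORAL_LOGIC_STATEMENT",
     "formula": "G(pressed -> F light_on)"},
]

OPTIONAL_FIELDS = ["ifText", "givenText", "thenText", "formula"]


def one_shot(models):
    """Return one row per model; fields a subtype lacks come back as
    None instead of excluding the row (like OPTIONAL in a query)."""
    return [
        {"id": m["id"], "type": m["type"],
         **{f: m.get(f) for f in OPTIONAL_FIELDS}}
        for m in models
    ]

for row in one_shot(MODELS):
    print(row)
```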
Made a pass over this to check that I understand how it implements the functionality Eric described. All looks good to my eye. |
I reviewed most of the files and they look fine. The files I did not review are Thing.json and store_data.csv. Can you add a newline at the end of the files STR.sadl and Thing.json?
@AbhaMoitra @russell-d-e I'd like your take on this approach!
I propose to separate out the concept of a project REQUIREMENT from the interpretation of the statements one makes about it. I think we can almost definitely say what a requirement is from the point of view of a particular project. The various teams are interested in going further and providing interpretations of the semantics of these requirements. This approach would allow us to capture these different interpretations in an extensible way while not trying to subclass the concept of the requirement itself.
The raw natural language text of a requirement can be stored in the `description` field we already have. Any understanding of that requirement can be tracked in a `REQUIREMENT_STATEMENT` linked by `states`. These names are up for discussion, of course.

I think this division of requirement from interpretation helps our current `ifText`, `givenText`, and `thenText` properties make much more sense, as we can link them much closer to the confidence parameters GrammaTech wanted.

In the future we can add other statement subtypes for other interesting formal languages. I'm imagining something like:
The idea being that these statements could have a lot more structure to them if that was appropriate. By not merely subclassing REQUIREMENT for each of these we allow different tools and teams to interpret a single REQUIREMENT with multiple statements in different languages without duplicating the requirement.
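A minimal sketch of that shape, using hypothetical statement subtypes and field names: one REQUIREMENT, never duplicated, with each tool attaching its own statement in its own language via `states`.

```python
# Sketch: one requirement carrying multiple interpretations without
# subclassing REQUIREMENT. All names here are hypothetical.

requirement = {"uid": "req-001",
               "description": "The light shall turn on when pressed."}

statements = [
    {"type": "REQUIREMENT_STATEMENT", "states": "req-001",
     "ifText": "pressed", "thenText": "light on"},
    {"type": "TEMPORAL_LOGIC_STATEMENT", "states": "req-001",
     "formula": "G(pressed -> F light_on)"},
]

# Each tool adds its own statement and links back with `states`;
# the requirement itself is untouched.
interpretations = [s["type"] for s in statements
                   if s["states"] == requirement["uid"]]
print(interpretations)
# ['REQUIREMENT_STATEMENT', 'TEMPORAL_LOGIC_STATEMENT']
```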
Thoughts?