
Reasoning against pre-reasoned sources #23

Open
jeswr opened this issue Mar 9, 2022 · 3 comments

Comments


jeswr commented Mar 9, 2022

Expanding upon SolidLabResearch/Challenges#14 (comment) with some implementation specific ideas.

As a first pass, let's make the following assumptions (which can be verified inside the appropriate actor's test method) to simplify the problem:

  1. We have one pre-reasoned source that has already been reasoned over using the ruleset we wish to apply
  2. We have one non-reasoned source
  3. The rules we are applying are in a premise-conclusion style format (i.e. no quantification, nesting etc.)
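
For illustration, a rule in such a premise-conclusion style could look like the following (hypothetical vocabulary, N3-style syntax):

```n3
@prefix : <http://example.org/> .

# Premise (left of =>) and conclusion (right); no nesting or quantification.
{ ?x :hasParent ?y . ?y :hasParent ?z . } => { ?x :hasGrandparent ?z . } .
```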

As a 'naive' first implementation we can then do the following:

  1. On the first iteration of reasoning with our (already optimized) rule set, we first check the premise of each rule against the 'unreasoned' dataset. If no premise of a rule matches, that rule is excluded from this round of reasoning. If there is a match, we evaluate the remainder of the rule against both datasets, using the matches from the unreasoned dataset as initialBindings.
  2. For each following round of reasoning the logic is similar to step 1, except that we now create the initialBindings from the results produced in the previous iteration of reasoning.

^^ Most of this logic would also be reusable to optimize the current rule-evaluation strategies.
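
The first-round step above can be sketched as follows. This is a minimal illustration using plain string tuples in place of RDF/JS quads; `initialBindings` mirrors the term used in the comment, while `Triple`, `Rule`, `unify`, and `rulesForRound` are hypothetical helpers, not existing Comunica APIs.

```typescript
type Term = string;                 // variables are prefixed with '?'
type Triple = [Term, Term, Term];
type Bindings = Map<string, Term>;

interface Rule {
  premise: Triple[];
  conclusion: Triple[];
}

const isVar = (t: Term) => t.startsWith('?');

// Try to unify one premise pattern with one data triple under existing bindings.
function unify(pattern: Triple, triple: Triple, bindings: Bindings): Bindings | null {
  const out = new Map(bindings);
  for (let i = 0; i < 3; i++) {
    const p = pattern[i], d = triple[i];
    if (isVar(p)) {
      const bound = out.get(p);
      if (bound === undefined) out.set(p, d);
      else if (bound !== d) return null;
    } else if (p !== d) {
      return null;
    }
  }
  return out;
}

// Bindings for the first premise pattern, matched against the *unreasoned*
// dataset only. An empty result means the rule can be skipped this round.
function initialBindings(rule: Rule, unreasoned: Triple[]): Bindings[] {
  const out: Bindings[] = [];
  for (const t of unreasoned) {
    const b = unify(rule.premise[0], t, new Map());
    if (b !== null) out.push(b);
  }
  return out;
}

// Round one: keep only rules whose premise matches the unreasoned data; the
// remaining premise patterns would then be evaluated against both datasets,
// seeded with these bindings.
function rulesForRound(rules: Rule[], unreasoned: Triple[]): [Rule, Bindings[]][] {
  return rules
    .map(r => [r, initialBindings(r, unreasoned)] as [Rule, Bindings[]])
    .filter(([, bs]) => bs.length > 0);
}
```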


jeswr commented Mar 11, 2022

As suggested by @arminhaller - in lieu of metadata about the reasoning status of sources as discussed in SolidLabResearch/Challenges#14 (comment), we can in some cases heuristically determine this based on the presence of data. For example, some superset of RDFS inferencing has likely been applied if a triple of the form `?s a rdfs:Resource` is present.

I suggest we create a bus that takes a source as input and returns the types of reasoning that have been applied to that source. One actor can then implement this heuristic, a second can use metadata exposed by the source, and a third can use the ActionContext in cases where users have explicitly stated the pre-reasoning applied to sources.
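
The core check of the heuristic actor could look like the sketch below. This assumes plain string-tuple triples rather than RDF/JS quads, and the `ReasoningLevel` type and `detectReasoning` helper are hypothetical names, not part of any existing bus.

```typescript
type Triple = [string, string, string];

const RDF_TYPE = 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type';
const RDFS_RESOURCE = 'http://www.w3.org/2000/01/rdf-schema#Resource';

type ReasoningLevel = 'rdfs' | 'none';

// RDFS inference tends to materialise triples such as `?s a rdfs:Resource`,
// so their presence suggests (but does not prove) that some superset of
// RDFS reasoning was applied to the source.
function detectReasoning(source: Triple[]): ReasoningLevel {
  const hasRdfsMarker = source.some(
    ([, p, o]) => p === RDF_TYPE && o === RDFS_RESOURCE,
  );
  return hasRdfsMarker ? 'rdfs' : 'none';
}
```

Note that this heuristic can produce false negatives (a reasoner may not materialise that particular triple), which is why the metadata- and ActionContext-based actors are still needed.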


jeswr commented Mar 18, 2022

One can possibly view this reduced problem as one of 'incremental reasoning' - where the pod data can abstractly be treated as an 'insertion' into the main KB.
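
Under that view, the fixpoint loop resembles semi-naive evaluation: each round only considers derivations that use at least one triple from the current delta. A minimal sketch, assuming serialised triples for simple set membership and a hypothetical one-step `applyRules` function:

```typescript
type Triple = string; // serialised triple, so Set membership works directly

// Saturate an already-reasoned KB after an insertion, re-deriving only from
// the delta on each round rather than re-running rules over the whole KB.
function saturateIncremental(
  saturatedKb: Set<Triple>,
  insertion: Triple[],
  applyRules: (kb: Set<Triple>, delta: Set<Triple>) => Set<Triple>,
): Set<Triple> {
  const kb = new Set(saturatedKb);
  let delta = new Set(insertion.filter(t => !kb.has(t)));
  while (delta.size > 0) {
    for (const t of delta) kb.add(t);
    // applyRules must only produce conclusions that use >= 1 delta triple.
    const derived = applyRules(kb, delta);
    delta = new Set([...derived].filter(t => !kb.has(t)));
  }
  return kb;
}
```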


jeswr commented Apr 1, 2022

For extra points - handle dialogical reasoning SolidLabResearch/Challenges#22
