
Port to OWL API 4.0.2 #3

Open · wants to merge 8 commits into master
Conversation

ignazio1977

Made the necessary changes to compile and build with OWLAPI 4.0.2.
I've made a local release of 4.0.2 so that I can point the POMs at it, so this won't build correctly until 4.0.2 is actually released.

@ignazio1977 ignazio1977 changed the title Port to OWL API 4.0.0 - for feedback Port to OWL API 4.0.2 Mar 25, 2015
@ansell commented Mar 25, 2015

This is failing because Travis CI cannot find owlapi 4.0.2: Maven considers 4.0.2-SNAPSHOT to precede 4.0.2, I think, and hence it falls outside the specified range:

[ERROR] Failed to execute goal on project protege-owlapi-extensions: Could not resolve dependencies for project edu.stanford.protege:protege-owlapi-extensions:bundle:3.2.8-SNAPSHOT: Failed to collect dependencies at net.sourceforge.owlapi:owlapi-osgidistribution:jar:[4.0.2,4.1): No versions available for net.sourceforge.owlapi:owlapi-osgidistribution:jar:[4.0.2,4.1) within specified range -> [Help 1]

@ignazio1977 (Author)

Yes, I expected the failure. I'm trying to get all my local forks to build against a local mock release, to avoid having to update all the pull requests again when 4.0.2 is actually released (I don't want to release it just yet in case there are bugs that need fixing to get Protege to build - I've spotted three or four while updating Pellet and HermiT, so I'm a bit wary).
The downside is that I can't rely on Travis builds in the meantime...

@ansell commented Mar 25, 2015

Once you think you have all of the bugs ironed out, it may be useful to deploy a 4.0.2-beta1 release, which would be inside the range that Travis can see.

@ignazio1977 (Author)

Good shout, cheers.

@sesuncedu (Contributor)

Maven version comparison test tool:
https://jira.codehaus.org/browse/MNG-5730

java -jar maven-artifact-*.jar 3.2.4-alpha-1 3.2.4-SNAPSHOT 3.2.4.0
Display parameters as parsed by Maven (in canonical form) and comparison result:
1. 3.2.4-alpha-1 == 3.2.4.alpha.1
   3.2.4-alpha-1 < 3.2.4-SNAPSHOT
2. 3.2.4-SNAPSHOT == 3.2.4.snapshot
   3.2.4-SNAPSHOT < 3.2.4.0
3. 3.2.4.0 == 3.2.4

So
4.0.2-alpha-1 < 4.0.2-SNAPSHOT < 4.0.2

thus
4.0.2-alpha-1 ∈ [4.0.0,4.0.2)
but
4.0.2 ∉ [4.0.0,4.0.2)

(More annoyingly, 4.0.0-alpha ∈ [3.0,4.0).)
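These orderings can be checked directly against maven-artifact's ComparableVersion, the comparator class behind the tool above; a minimal sketch (the demo class name is made up):

    import org.apache.maven.artifact.versioning.ComparableVersion;

    public class VersionOrderDemo {
        public static void main(String[] args) {
            ComparableVersion alpha    = new ComparableVersion("4.0.2-alpha-1");
            ComparableVersion snapshot = new ComparableVersion("4.0.2-SNAPSHOT");
            ComparableVersion release  = new ComparableVersion("4.0.2");

            // Maven ranks qualifiers alpha < beta < milestone < rc < SNAPSHOT < release
            System.out.println(alpha.compareTo(snapshot) < 0);   // true
            System.out.println(snapshot.compareTo(release) < 0); // true
        }
    }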

@sesuncedu (Contributor)

It would be so much simpler to define versions such that, outside of upper-bound specifications, 4.0.2 is short for 4.0.2.⊤; in an upper bound, it is short for 4.0.2.⊥.

3.5.0.⊥ < 3.5.0-alpha < 3.5.0-SNAPSHOT < 3.5.0 = 3.5.0.⊤

If the ⊥ reading is confined to upper bounds, [3.5.1,4) contains 3.5.1-alpha, 3.5.1-SNAPSHOT, etc., but would not contain 4.0.0-alpha.

@ignazio1977 (Author)

Given the choice, I would go for never selecting a SNAPSHOT, ALPHA or BETA unless explicitly asked for.

@sesuncedu (Contributor)

On Thu, Mar 26, 2015 at 5:31 PM, Ignazio Palmisano <notifications@github.com> wrote:

> Given the choice, I would go for never selecting a SNAPSHOT, ALPHA or BETA unless explicitly asked for

Try changing the sesame version in owlapi to [2.7,2.8)....


In src/main/java/org/protege/owlapi/concurrent/WriteSafeOWLOntologyImpl.java:

    @Override
    public void saveOntology() throws OWLOntologyStorageException {
        writeLock.lock();
Contributor:

Shouldn't these be read locks (on all save methods)? I'm going to reimplement all of this. Will this stuff be necessary in 5.0.0?

Author:

I understood the idea to be: write lock, I don't want changes going in while I'm doing the save. Might be wrong though.

Regarding 5, I'm not sure - we can see whether something can be worked in to remove the need for this.

Contributor:

If a read lock is held then no changes can be applied. The ReadWriteLock allows for multiple concurrent readers but only a single writer. The write lock is only for things that change the actual ontology. Well, that's how I think it should work!
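In sketch form (a hypothetical class, not the actual protege-owlapi code), that discipline looks like this:

    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Sketch: reads (including saves) share the read lock; only operations
    // that change the ontology take the exclusive write lock.
    class LockingSketch {
        private final ReadWriteLock lock = new ReentrantReadWriteLock();

        void saveOntology() {
            lock.readLock().lock();   // concurrent with other readers/savers,
            try {                     // but blocks any change being applied
                // ... serialize the ontology ...
            } finally {
                lock.readLock().unlock();
            }
        }

        void applyChange() {
            lock.writeLock().lock();  // exclusive: waits until all readers finish
            try {
                // ... mutate the ontology ...
            } finally {
                lock.writeLock().unlock();
            }
        }
    }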

I think all of this needs starting afresh. I don't feel confident that ProtegeOWLOntologyManager will work after this.

Oh, and there are way too many different things going on in this library :( Might separate out the concurrent stuff.

btw are the streams in v5 streams on copies of collections?

Author:

The majority are streams on collections that cannot be modified; however the ones coming from Internals will need some work done to avoid issues.
Thinking of which, once Internals is changed enough to be threadsafe (in the sense that applying changes does not kill ongoing streams) I believe most of these locks would be redundant.
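One way to get that behaviour (a sketch, assuming a snapshot-on-read approach is acceptable; this is not the actual Internals code) is to hand out streams over a copy taken under the store's lock:

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.Set;
    import java.util.stream.Stream;

    // Sketch: a stream over a snapshot cannot be broken by a later change.
    class AxiomStore<A> {
        private final Set<A> axioms = new HashSet<>();

        synchronized void add(A axiom) {
            axioms.add(axiom);
        }

        synchronized Stream<A> axioms() {
            // Copy under the lock; the returned stream never sees later mutations.
            return new ArrayList<>(axioms).stream();
        }
    }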

Contributor:

> The majority are streams on collections that cannot be modified; however the ones coming from Internals will need some work done to avoid issues.

Cool.

> Thinking of which, once Internals is changed enough to be threadsafe (in the sense that applying changes does not kill ongoing streams) I believe most of these locks would be redundant.

Do you know how you're going to do this?

Contributor:

> I was thinking of tests that would attempt to access an ontology from multiple....

Ah right, more than a unit test then. Sorry, I don't have these.

> Regarding listeners and other ancillary collections in the managers, I'd be more tempted to synchronize on the collection itself,

Well, whatever is equivalent.

> or use synchronized collections

I'm not sure this is a good idea.

> Also, they are completely separate from ontology data, I believe - unless we want to guarantee that a listener receives all changes sent by a thread up until it gets removed from the manager by another thread?

Yes, something like this.

> My head is spinning a bit now, I think we're describing two different sets of changes.

:)


Author:

There's an OWLApiThreadSafetyIssue class in the history of protege-owlapi that might just be the ticket.
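The shape of such a test might be something like this (a sketch; the real OWLApiThreadSafetyIssue class may well differ):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Sketch: hammer the same ontology from many threads; the test fails if
    // any interleaving of readers and writers throws or fails to terminate.
    class ConcurrentAccessSketch {
        static void stress(Runnable read, Runnable write) throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(8);
            for (int i = 0; i < 1000; i++) {
                pool.execute(read);
                pool.execute(write);
            }
            pool.shutdown();
            if (!pool.awaitTermination(1, TimeUnit.MINUTES)) {
                throw new AssertionError("suspected deadlock: tasks did not finish");
            }
        }
    }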

Member:

> Also, the locks need to go in (be shared between) various places, including the implementation of OWLOntologyManager. It's my (current) understanding that the lock for a manager should be the same for its ontologies as well (well, makes sense - @stdotjohn https://github.com/stdotjohn is this correct?).

Yes, that is correct. At one time I had lots of different reader-writer locks, but that was far too complicated, so I changed it to the less fine-grained idea of using one reader-writer lock associated with the manager.
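In sketch form (hypothetical class names), the coarse-grained design is just one ReadWriteLock owned by the manager and injected into every ontology it creates:

    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Sketch: manager-level and ontology-level operations all serialize
    // against the same lock instance.
    class SketchManager {
        private final ReadWriteLock lock = new ReentrantReadWriteLock();

        SketchOntology createOntology() {
            return new SketchOntology(lock); // shared, not one lock per ontology
        }
    }

    class SketchOntology {
        private final ReadWriteLock lock;

        SketchOntology(ReadWriteLock lock) {
            this.lock = lock;
        }
    }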

> Then I think extra locks are needed for lists of listeners (adding/removing a listener requires a read lock to be acquired, I think).

Something like this sounds right. I don't see that code in the current
version and I am not sure why. It has been a very long time and also I
am not up on what has changed.

-Timothy

On 04/08/2015 02:16 PM, Matthew Horridge wrote:

In src/main/java/org/protege/owlapi/concurrent/WriteSafeOWLOntologyImpl.java (#3 (comment)):

    }
    }

    @Override
    public void setOWLOntologyManager(OWLOntologyManager saveOntology) {
        writeLock.lock();
        try {
            delegate.setOWLOntologyManager(saveOntology);
        } finally {
            writeLock.unlock();
        }
    }

    @Override
    public void saveOntology() throws OWLOntologyStorageException {
        writeLock.lock();
    
> Speaking of which, I think we can reach the same effect putting read/write locks around Internals. Smaller API, so easier to maintain.

Perhaps. Mucking about with the internals in the future might not be so easy though. The one thing with the current code (ignoring the design of how the factories etc. fit together, but this is an OWL API thing anyway) is that it's pretty simple to see what is going on with locking in each ontology, i.e. it's immediately possible to see that getting X requires a read lock to be acquired. I think this is a plus.


Contributor:

@stdotjohn excellent. Thanks a lot!
