
ISPN-8865 Move AdvancedCacheLoader over to using Publisher #5794

Merged
merged 15 commits, May 11, 2018

Conversation

wburns
Member

@wburns wburns commented Feb 23, 2018

https://issues.jboss.org/browse/ISPN-8865

Added the AdvancedCacheLoader.publishEntries method and deprecated/removed all the references to the old process method that I could.

This PR is mostly to let CI run on it and surface any test failures so far.
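The direction of the SPI change can be sketched roughly as below. This is an illustrative stand-in, not the PR's actual code: the real SPI uses org.reactivestreams.Publisher and MarshalledEntry, while java.util.concurrent.Flow is used here only to keep the sketch dependency-free.

```java
import java.util.List;
import java.util.concurrent.Flow;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Predicate;

public class LoaderSketch {

   // New style: callers subscribe and apply backpressure themselves,
   // instead of handing the loader a callback plus executor as the old
   // process(filter, task, executor, fetchValue, fetchMetadata) did.
   interface AdvancedCacheLoaderSketch<K> {
      Flow.Publisher<K> publishKeys(Predicate<? super K> filter);
   }

   // Minimal synchronous Publisher over an in-memory list of keys.
   static <K> Flow.Publisher<K> fromList(List<K> keys, Predicate<? super K> filter) {
      return subscriber -> subscriber.onSubscribe(new Flow.Subscription() {
         int pos;
         boolean done;
         @Override public void request(long n) {
            while (n-- > 0 && pos < keys.size()) {
               K key = keys.get(pos++);
               if (filter == null || filter.test(key)) {
                  subscriber.onNext(key);
               } else {
                  n++; // a filtered-out element does not consume demand
               }
            }
            if (pos == keys.size() && !done) {
               done = true;
               subscriber.onComplete();
            }
         }
         @Override public void cancel() { pos = keys.size(); done = true; }
      });
   }

   // Publisher-based equivalent of a callback-based count() helper.
   static int count(Flow.Publisher<?> publisher) {
      AtomicInteger count = new AtomicInteger();
      publisher.subscribe(new Flow.Subscriber<Object>() {
         @Override public void onSubscribe(Flow.Subscription s) { s.request(Long.MAX_VALUE); }
         @Override public void onNext(Object item) { count.incrementAndGet(); }
         @Override public void onError(Throwable t) { throw new AssertionError(t); }
         @Override public void onComplete() { }
      });
      return count.get();
   }
}
```

The key difference from process() is that the consumer, not the store, drives how many elements are produced and on which thread they are consumed.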

@wburns wburns added this to the 9.3.0.Alpha1 milestone Feb 23, 2018
@wburns
Member Author

wburns commented Feb 23, 2018

All current loaders have been converted except the SoftIndexFileStore, which I will tackle next week.

*/
@ThreadSafe
@Deprecated
Member

Why don't you let this already extend Predicate? That way you could safely change the method parameters to Predicate because all KeyFilters passed in would be Predicates...

Member Author

Sure, I am not sure why I didn't do this before 👍
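The suggestion can be sketched as follows (names illustrative, not the actual Infinispan source): if the deprecated KeyFilter extends Predicate, every existing KeyFilter implementation automatically becomes a valid argument for new methods that take Predicate<? super K>.

```java
import java.util.function.Predicate;

public class KeyFilterSketch {

   @Deprecated
   interface KeyFilter<K> extends Predicate<K> {
      boolean accept(K key);

      // Bridge so a KeyFilter can be used wherever a Predicate is expected.
      @Override
      default boolean test(K key) {
         return accept(key);
      }
   }

   // A new-style method that only knows about Predicate.
   static <K> long countMatching(Iterable<K> keys, Predicate<? super K> filter) {
      long matched = 0;
      for (K key : keys) {
         if (filter.test(key)) {
            matched++;
         }
      }
      return matched;
   }
}
```

Since accept is the only abstract method, existing KeyFilter lambdas and classes keep working unchanged.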

final AtomicInteger result = new AtomicInteger(0);
acl.process(filter, (marshalledEntry, taskContext) -> result.incrementAndGet(), new WithinThreadExecutor(), false, false);
return result.get();
public static <K, V> int count(AdvancedCacheLoader<K, V> acl, Predicate<? super K> filter) {
Member

This class is public, you need to make the change backwards-compatible.

Member Author

above will fix that

final Set<K> set = new HashSet<K>();
acl.process(filter, (marshalledEntry, taskContext) -> set.add(marshalledEntry.getKey()), new WithinThreadExecutor(), false, false);
return set;
return Flowable.fromPublisher(acl.publishEntries(filter, false, false))
Member

would a separate publishKeys method make sense?

Member Author

@wburns wburns Feb 26, 2018

I was actually thinking about that over the weekend. I was debating about having a new method or possibly adding a "builder" API (but this might be overkill).

Something like

AdvancedCacheLoader {
   ...
   StorePublisher publish(Predicate<? super K> filter);
   ...
}

interface StorePublisher {
   Publisher<K> keys();

  Publisher<MarshalledEntry<K, V>> entries(boolean fetchValues, boolean fetchKeys);
}

What do you think?

Member

I wouldn't scope it under such interface, just

Publisher<K> publishKeys(Predicate<? super K> filter);
Publisher<MarshalledEntry<K, V>> publishEntries(Predicate<? super K> filter, boolean fetchValue, boolean fetchMetadata);

I have considered dropping the fetchValue completely but there might be use case for key + metadata (e.g. expiration).

Member Author

@wburns wburns Feb 26, 2018

Yeah I am fine with the methods as well - and obviously I can make it a nice default method :)

I am thinking that publishKeys will have to be documented saying it is up to the cache store implementation whether or not expired keys are returned.

Regarding fetchValue, I can look over the usages again and see if we can possibly drop it. Although, to be honest, in most implementations, if you supply either fetchValue or fetchMetadata it is just as easy to fetch the other. In that case I wonder whether it makes sense to keep these arguments at all.
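The default-method idea discussed here can be sketched as below. All names are illustrative (the real SPI uses org.reactivestreams.Publisher and MarshalledEntry): publishKeys delegates to publishEntries and maps each entry to its key.

```java
import java.util.concurrent.Flow;
import java.util.function.Function;
import java.util.function.Predicate;

public class DefaultPublishKeys {

   record Entry<K, V>(K key, V value) { }

   interface Loader<K, V> {
      Flow.Publisher<Entry<K, V>> publishEntries(Predicate<? super K> filter,
                                                 boolean fetchValue, boolean fetchMetadata);

      // Default implementation: skip values, fetch metadata (so the
      // underlying loader can exclude expired entries), map to keys.
      default Flow.Publisher<K> publishKeys(Predicate<? super K> filter) {
         return map(publishEntries(filter, false, true), Entry::key);
      }
   }

   // Minimal map operator over Flow.Publisher.
   static <T, R> Flow.Publisher<R> map(Flow.Publisher<T> source,
                                       Function<? super T, ? extends R> fn) {
      return downstream -> source.subscribe(new Flow.Subscriber<T>() {
         @Override public void onSubscribe(Flow.Subscription s) { downstream.onSubscribe(s); }
         @Override public void onNext(T item) { downstream.onNext(fn.apply(item)); }
         @Override public void onError(Throwable t) { downstream.onError(t); }
         @Override public void onComplete() { downstream.onComplete(); }
      });
   }
}
```

A store that can read keys more cheaply than full entries would simply override the default.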

Member Author

I guess scattered cache requires just the metadata :) So I guess I will keep both arguments for now.

Member Author

Actually looking at this more publishKeys has to guarantee that it returns non expired keys. Or else the usefulness of the method is quite questionable (for example keySet iterator can't use it otherwise). So in that case I will try to limit perf impact as much as possible.


/**
* @author wburns
* @since 9.0
Member

Bah, another Closeables class... btw. the since tag does not match.

Member Author

Yeah unfortunately reactive streams isn't in commons yet :(

My eclipse isn't updated - thanks 👍

Member

Would it hurt to rather add the dependency there? We want to eventually keep up parity with remote clients anyway...

Member Author

Afaik it shouldn't hurt. I agree, I was trying to delay the introduction of it until we absolutely needed it in the client.

@wburns
Member Author

wburns commented Feb 26, 2018

I have added in SoftIndexFileStore support. This also fixes https://issues.jboss.org/browse/ISPN-8880

I have also updated all the various stores to have the new publishKeys method. Those that don't implement this gain no real benefit so they just fall back to default.

Also I made sure to go through and make sure that all stores don't return expired entries/keys. There were a couple bugs around this as reactive streams spec doesn't allow for null values and thus a couple map methods had to be redone.

@wburns
Member Author

wburns commented Feb 26, 2018

Also I am fine with reactive streams in commons, but I think we need to discuss further before moving rxjava2 there. In this case I can't remove the Closeables class yet.

  1. I still need to refactor the RestStore so it does async, which will allow for better performance. I am thinking of capping the number of concurrent async gets. This may be configurable or just a sensible default like the number of CPUs. https://github.com/infinispan/infinispan/pull/5794/files#diff-bb46c82f9776070f5d1036eb22a12387R396

  2. The only other change I can think of at this time is possibly adding a parallel observation of JpaStore elements when fetchValues or fetchMetadata is true, since this will cause a lot of blocking for the requesting thread. https://github.com/infinispan/infinispan/pull/5794/files#diff-8ec36f45d11a29c923951eb51f28ab24R614
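The capping idea in point 1 could look roughly like this (all names hypothetical; a semaphore limits the number of concurrent async gets to the CPU count):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;
import java.util.function.Function;

public class CappedAsyncSketch {
   static <K, V> List<V> getAll(List<K> keys, Function<K, V> remoteGet) {
      int cap = Runtime.getRuntime().availableProcessors();
      Semaphore permits = new Semaphore(cap);
      ExecutorService pool = Executors.newCachedThreadPool();
      try {
         List<Future<V>> futures = keys.stream().map(k -> pool.submit(() -> {
            permits.acquire();               // at most `cap` gets in flight at once
            try {
               return remoteGet.apply(k);
            } finally {
               permits.release();
            }
         })).toList();
         List<V> results = new ArrayList<>(keys.size());
         for (Future<V> f : futures) {
            try {
               results.add(f.get());
            } catch (InterruptedException | ExecutionException e) {
               throw new IllegalStateException(e);
            }
         }
         return results;
      } finally {
         pool.shutdown();
      }
   }
}
```

In the real store the bound would presumably apply to non-blocking HTTP requests rather than pooled threads, but the back-pressure principle is the same.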

@wburns
Member Author

wburns commented Mar 1, 2018

Updated and cleaned up. Should be ready for 9.3 after we branch now.

@wburns
Member Author

wburns commented Mar 1, 2018

I am still not keen on using the persistence executor for the JPAStore or LuceneCacheLoader as this only has 4 threads by default. I could possibly use the Schedulers.io but this would be a different thread pool.

@wburns wburns changed the title Ispn 8865 publisher loader Ispn 8865 Move AdvancedCacheLoader over to using Publisher Mar 1, 2018
@wburns wburns force-pushed the ISPN-8865_publisher_loader branch from fe72508 to f6a2466 Compare March 6, 2018 16:12
@wburns
Member Author

wburns commented Mar 7, 2018

There are test failures: https://ci.infinispan.org/job/Infinispan/job/PR-5794/6/#showFailuresLink
Also I need to add reactive streams to the parent dependencies for cache stores it seems, have to look closer.

@wburns
Member Author

wburns commented Mar 9, 2018

I have rebased on top of #5824 now as well. I still have to clean up test failures.

@wburns wburns force-pushed the ISPN-8865_publisher_loader branch from aee7235 to 464b2c2 Compare March 9, 2018 22:12
@wburns
Member Author

wburns commented Mar 9, 2018

Added modules, however I think there is at least 1 or more failure I still need to fix for Monday.

@wburns
Member Author

wburns commented Mar 13, 2018

Closing this until #5824 is integrated.

@wburns wburns closed this Mar 13, 2018
@danberindei
Member

Please reopen Will, I didn't realize most changes were in #5824, so I have lots of pending comments here :)

@wburns
Member Author

wburns commented Mar 14, 2018

Reopening; this is missing a few updates, but I can do that after it is reopened.

@wburns wburns reopened this Mar 14, 2018
Member

@danberindei danberindei left a comment

Hope this works

Collection<K> buildKeyCollection(Collection<?> c) {
Collection<K> toRemove = new ArrayList<>(c.size());
for (Object obj : c) {
K entry = keyToStorage(obj);
Member

I would prefer wrapping the collection of keys, in case it's a "virtual" collection like RangeSet. And shouldn't we implement removeIf as well?

Member Author

So I was debating about both these points a bit.

I think I need to go back and just implement all the methods and let the underlying sets do their jobs. I should make a JIRA to do this though since it isn't directly related to the future changes here.

Member Author

@Override
public CloseableIterator<CacheEntry<K, V>> iterator() {
CloseableIterator<CacheEntry<K, V>> iterator = Closeables.iterator(entrySet.stream());
// This can be a HashSet since it is only written to from the local iterator which is only invoked
// from user thread
Set<K> seenKeys = new HashSet<>(cache.getAdvancedCache().getDataContainer().sizeIncludingExpired());
Member

I didn't notice this before, but it's quite a big set... I wonder if we could use the live dataContainer.keySet() + listeners to keep track of added/removed keys instead.

Also, an interceptor should inject the needed components directly instead of going through getAdvancedCache().

Member Author

  1. I don't know if the listener will work very well. I will have to listen to all modifications to the cache as well as all activations and passivations. My thought for fixing this was to have it store all keys if it is OBJECT storage and only a subset if it is OFF-HEAP or BINARY.

  2. True about the injection, I can do that.

Member Author

Also a side note, but after I add in the segmented store and data container, this code will only be used for non-dist stores. And it is possible with other cache modes to do iteration over each segment individually so that you only have so many resurrected objects.

But I will think more on doing a live view, it gets a bit messy with the listeners.

https://github.com/wburns/infinispan/blob/ISPN-8905_non_shared_store_segments/core/src/main/java/org/infinispan/stream/impl/local/PersistencKeyStreamSupplier.java#L78

* @param <K>
* @param <V>
* @return
* @deprecated Please use {@link #toKeySet(AdvancedCacheLoader, Predicate)} instead
Member

I believe the reference should be to toEntrySet(ACL, Predicate, IFE). And I'm wondering if these should really be public API, I can't imagine using them outside our tests.

Member Author

agreed

public Publisher<MarshalledEntry<K, V>> publishEntries(Predicate<? super K> filter, boolean fetchValue, boolean fetchMetadata) {
State state = this.state.get();
ByRef<Boolean> hadClear = new ByRef<>(Boolean.FALSE);
Map<Object, Modification> modificationMap = state.flattenModifications(hadClear);
Member

Couldn't we use flatMap below instead and avoid copying the map?

Member Author

Unfortunately we need to make a copy, otherwise the state can be updated while we iterate through it, causing us to exclude an entry from the store that we shouldn't have.
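The reasoning here can be shown in miniature (names illustrative): the publisher needs a stable view of the pending modifications, so a snapshot is taken before iteration, and later mutations of the live map do not affect it.

```java
import java.util.HashMap;
import java.util.Map;

public class SnapshotSketch {
   // Copy taken before the publisher starts iterating; the live map can
   // keep changing underneath without affecting what gets published.
   static <K, V> Map<K, V> snapshot(Map<K, V> liveModifications) {
      return new HashMap<>(liveModifications);
   }
}
```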

Member

Ok, the modification queue size shouldn't be very big anyway.

Member Author

Yep my thoughts as well.

synchronized (entries) {
for (Map.Entry<K, FileEntry> e : entries.entrySet()) {
K key = e.getKey();
if ((filter == null || filter.test(key)) && !e.getValue().isExpired(now)) {
Member

I'd put the isExpired(now) check first, even if it doesn't filter out a lot of entries it's very cheap.

Member Author

Sure
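The suggested ordering relies on && short-circuiting: the cheap isExpired check runs first, so the potentially expensive user predicate never executes for an expired entry. A minimal sketch (names illustrative):

```java
import java.util.function.Predicate;

public class FilterOrderSketch {

   record FileEntry(long expiryTime) {
      boolean isExpired(long now) {
         return expiryTime > 0 && expiryTime <= now;
      }
   }

   static <K> boolean shouldPublish(K key, FileEntry fe, Predicate<? super K> filter, long now) {
      // && short-circuits: filter.test() is skipped when the entry is expired
      return !fe.isExpired(now) && (filter == null || filter.test(key));
   }
}
```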

int batchsize = 10;
// Request over the max amount so we can ensure the subscriber will get notified of completion
for (int i = 0; i < NUM_ENTRIES / batchsize - 1; i++) {
// Now request all the entries on different threads - just to see if the publisher can handle it
Member

Does this really request all the entries? Does it still work if the number of keys is not a multiple of the batch size?

Member Author

See the other comment; it is a bit outdated and purposely isn't requesting all entries.

executor.execute(() -> subscriber.request(batchsize));
}

subscriber.awaitCount(NUM_ENTRIES - 10, () -> {
Member

Is 10 the batch size here, or something else?

Member Author

yes, it is the batch size. I can update to have the variable.

}, TimeUnit.SECONDS.toMillis(10));

// Now request on the main thread - which should guarantee requests from different threads
subscriber.request(11);
Member

Why would a request from the main thread guarantee requests from different threads? Is it because the request size > batch size?

Member Author

So I think I just need to cleanup the comments here.

First the size of 11 is here just as a further test (there should only be 10 entries, but I am making sure I only get back 10 despite requesting 11).

And actually the key part is that it is guaranteeing that objects are returned via onNext on different threads. The requests are already guaranteed to come from different threads from above, but for performance reasons some publishers might only call onNext on one of those threads (for example, ones that add entries to a queue, where it is faster to have one thread drain all elements from the queue before finishing).

For example, our ClusterStreamManagerImpl publisher only allows 1 outstanding remote request per node and accumulates additional requests (so a future request can cover all of them) until the pending request completes.

return Flowable.fromIterable(allInternalEntries);
})
.filter(me -> filter == null || filter.test(me.getKey()))
.sequential();
Member

Couldn't we allow the callers to decide for themselves whether to get a parallel flowable or a sequential one?

Member Author

The user can still do whatever they want to the resulting publisher here. The problem with reactive streams is that you can't call onNext from different threads concurrently. This code here is ensuring that onNext observes this contract - note it will most likely be called from different threads, just not at the same time.

The user can still observe these entries on any number of threads that they want. Note a publisher is not "used" until someone subscribes to it.

Member

My main problem here was that I was assuming the result of parallel() implements Publisher, and I couldn't imagine why one kind of Publisher would be a better return value than the other.

* <p/>
* <b>Preload</b>.In order to support preload functionality the store needs to read the string keys from the database and transform them
* into the corresponding key objects. {@link org.infinispan.persistence.keymappers.Key2StringMapper} only supports
* <b>Preload & Iteration</b>.In order to support preload or iteration functionality the store needs to read the string
Member

Github highlights the &, should probably be &amp;

Member Author

Sure

@wburns wburns removed the Preview label May 8, 2018
@wburns
Member Author

wburns commented May 8, 2018

There appear to be some related test failures https://ci.infinispan.org/job/Infinispan/job/PR-5794/18/

@wburns wburns force-pushed the ISPN-8865_publisher_loader branch 2 times, most recently from 8fceaf2 to 626926c Compare May 8, 2018 14:17
@wburns
Member Author

wburns commented May 8, 2018

Tests should be fixed now, waiting on latest run.

@wburns wburns added this to the 9.3.0.Beta1 milestone May 10, 2018
process

* Introduce publishEntries and publishKeys methods
* Limit scope of changes to minimum
* Add persistence executor to Context for stores/loaders
process

* Convert AdvancedAsyncCacheLoader
* Optimize code to not retrieve keys first
@wburns wburns force-pushed the ISPN-8865_publisher_loader branch from 626926c to 0702419 Compare May 10, 2018 13:47
Member

@danberindei danberindei left a comment

More nitpicking on javadoc comments ;)

@@ -10,9 +12,11 @@
*
* @author Manik Surtani
* @since 6.0
* @deprecated This will be replaced by {@link java.util.function.Predicate} in the future
Member

I like having since x.y in the deprecated comment so it's more obvious when it's safe to remove it.

Member Author

Sure, forgot to put one here.

}
return empty;
}
// TODO: this can't be added in until stores size method is guaranteed to not count expired entries
Member

IMO counting entries while excluding expired ones is a potentially expensive operation compared to starting publishKeys() and stopping after the first one.

Member Author

Sure sounds good. I had thought about doing that with iterator, but didn't think of publishKeys. I think it has just been too long since i looked at this.
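The reviewer's point can be sketched as follows (names illustrative; this assumes a publisher that emits synchronously, whereas a real implementation would block or return a stage): instead of counting non-expired entries, request one element from the key publisher and cancel as soon as anything arrives.

```java
import java.util.concurrent.Flow;
import java.util.concurrent.atomic.AtomicBoolean;

public class IsEmptySketch {
   static boolean isEmpty(Flow.Publisher<?> keys) {
      AtomicBoolean sawElement = new AtomicBoolean();
      keys.subscribe(new Flow.Subscriber<Object>() {
         Flow.Subscription subscription;
         @Override public void onSubscribe(Flow.Subscription s) {
            subscription = s;
            s.request(1); // one element is enough to answer the question
         }
         @Override public void onNext(Object item) {
            sawElement.set(true);
            subscription.cancel(); // stop the store from reading any further
         }
         @Override public void onError(Throwable t) { throw new AssertionError(t); }
         @Override public void onComplete() { }
      });
      return !sawElement.get();
   }
}
```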


/**
* Returns a publisher that will publish all entries stored by the underlying cache store. Only the first
* cache store that implements {@link AdvancedCacheLoader} will be used. Predicate is applied by the underlying
* loader in a best attempt to improve performance.
Member

I always assumed that each loader has some extra entries compared to the previous one, otherwise it wouldn't make any sense to have both loaders. E.g. maybe the first loader reads from a local SingleFileStore, and the next loader invokes a couple of REST APIs to re-compute the value. Of course, the 2nd loader probably wouldn't implement AdvancedCacheLoader in this scenario, but if it did I wouldn't consider it a bug to publish all the entries from both loaders.

* loader in a best attempt to improve performance.
* <p>
* Caller can tell the store to also fetch the value or metadata. In some cases this can improve performance. If
* metadata is not fetched it is not guaranteed if the publisher will return expired entries or not.
Member

I think you could simplify this to "If metadata is not fetched the publisher may include expired entries".

Member Author

Okay

}
}
Publisher<MarshalledEntry<Object, Object>> publisher = persistenceManager.publishEntries(false, true);
Flowable.fromPublisher(publisher).map(MarshalledEntry::getMetadata).blockingForEach(metadata -> {
Member

Is there any benefit in extracting the getMetadata() call into a separate step?

Member Author

@wburns wburns May 10, 2018

You mean just doing it inside the blockingForEach lambda? No it should be the same.

Member

One less lambda though ;)

Member Author

Yeah I already changed it.

return new CloseableIterator<E>() {
@Override
public void close() {
((Disposable) iterator).dispose();
Member

So we don't need to dispose the iterable or the publisher, only the iterator?

Member Author

Only the iterator. The publisher and iterable don't really apply, as they don't do anything until the iterator is created; the iterator holds all the resources.

Member

Ok
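The pattern discussed here can be sketched as below (names illustrative): the iterator owns the underlying resources, so close() only needs to release the iterator. A Runnable stands in for rxjava2's Disposable.dispose().

```java
import java.util.Iterator;

public class CloseableIteratorSketch {

   interface CloseableIterator<E> extends Iterator<E>, AutoCloseable {
      @Override
      void close(); // narrowed: no checked exception, so try-with-resources stays clean
   }

   static <E> CloseableIterator<E> wrap(Iterator<E> iterator, Runnable dispose) {
      return new CloseableIterator<E>() {
         @Override public void close() { dispose.run(); }
         @Override public boolean hasNext() { return iterator.hasNext(); }
         @Override public E next() { return iterator.next(); }
      };
   }
}
```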

}, executor, true, true);
Flowable.fromPublisher(loader.publishEntries(null, true, true))
.observeOn(Schedulers.from(executor))
.takeUntil((MarshalledEntry<Object, Object> me) -> stopped.get())
Member

Maybe the entries.size() checks can be stricter now

Member Author

Hrmm tbh I am not exactly sure. I think it has to invoke doOnNext before it can call takeUntil again. Let me change it and we can find out :)

Member Author

Yeah so it will always only modify on a single thread at a time. I can change it to assert it equals the size exactly then. This should be better, because I could make it so we are doing this across multiple threads with Flowable.parallel or Flowable.flatMap, but then we are testing rxjava2, which isn't the point of this test.

Although tbh you wouldn't want to do cancellation like this in the first place, you should just invoke Flowable.take(100) instead.

The more I thought about it, this method should just be removed 🗡


/**
*
* NOTE: This store can return expired entries on any given operation if {@link JpaStoreConfiguration#storeMetadata()}
* was not set to true.
Member

Does that include publishKeys? It doesn't really return entries, but I don't see any additional checks.

Member Author

Yes, want me to say keys or entries instead? The problem is there is no place to store expiration metadata if there is no table for it.

Member

Actually I'd like you to change the javadoc of publishKeys in AdvancedCacheLoader and PersistenceManager, because they say "This publisher will never return a key which belongs to an expired entry"

Member Author

Also I changed it to 'was set to false' - easier to read imo.

Member Author

Hrmm well that is the contract, which I would like to keep on AdvancedCacheLoader. JpaStore is special in that it violates this contract, which is why I would rather put it here. I don't want an implementation to return expired keys.

For this reason DummyStore doesn't implement publishKeys, because the metadata is stored with entry so we have to read that as well.

Member

I don't think a javadoc here is enough, because there's nothing anyone can do about JpaStore breaking the contract:

  • We won't add a special case for JpaStore in core, we're going to call publishKeys based on the AdvancedCacheLoader javadoc.
  • Users call methods on the cache, they don't necessarily know which cache method calls which store method internally.

IMO we should throw an exception when the user tries to store mortal entries in a JpaStore if we can't read the metadata back. It's a bit complicated by the async writer catching all the exceptions, but at least they'll see some errors in the log. And ideally I'd like to make expiration support optional, both at the store level and at the cache level, so we can throw that exception right from CacheImpl.

Member Author

Hrmm I think it might be easier to log another JIRA for this.

It would cover

  1. I would like to throw an exception at startup if the JPAStore is configured without the ability to store metadata. Unfortunately we can't do this, since they could be using the store as read-only.
  2. We probably can only throw exceptions at runtime if an entry is written to the JPAStore with expiration.

Either way neither of these are great options, thus why I would recommend a separate JIRA to talk about this.

Member Author

I created https://issues.jboss.org/browse/ISPN-9143 to continue this.

import net.jcip.annotations.ThreadSafe;

/**
* RestStore.
*
* @author Tristan Tarrant
* @since 6.0
* @deprecated This cache store will be changed to only implement {@link org.infinispan.persistence.spi.CacheLoader}
Member

Aren't we using it for rolling upgrades?

Member

Nope, Hot Rod only

@wburns wburns force-pushed the ISPN-8865_publisher_loader branch from e89d897 to c1510ea Compare May 10, 2018 17:00
@tristantarrant tristantarrant changed the title Ispn 8865 Move AdvancedCacheLoader over to using Publisher ISPN-8865 Move AdvancedCacheLoader over to using Publisher May 11, 2018
process

* Removed process from PersistenceManager
* Remove core usages of process
process
ISPN-8892 RestStore should only implement CacheLoader

* Convert RestStore
* Deprecate AdvancedCacheLoader portion of RestStore
* Deprecate RestTargetMigrator
process

* Convert remaining server test loaders
process

* Added module information for various stores
process

* Added some documentation noting new SPI
@wburns wburns force-pushed the ISPN-8865_publisher_loader branch from c1510ea to 3301101 Compare May 11, 2018 13:23
@danberindei danberindei merged commit df9ffb5 into infinispan:master May 11, 2018
@danberindei
Member

Integrated, thanks Will!

4 participants