ISPN-8865 Move AdvancedCacheLoader over to using Publisher #5794
Conversation
All current loaders have been converted except the SoftIndexFileStore, which I will tackle next week.
Force-pushed from 5e18ec5 to 55b433c.
*/
@ThreadSafe
@Deprecated
Why don't you let this already extend Predicate? That way you could safely change the method parameters to Predicate because all KeyFilters passed in would be Predicates...
Sure, I am not sure why I didn't do this before 👍
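The suggested change can be sketched as below. This is a simplified stand-in for the real Infinispan interfaces, not the actual API: the legacy filter interface extends Predicate and bridges its old accept contract to test via a default method, so every existing KeyFilter can be passed wherever a Predicate is now expected.

```java
import java.util.List;
import java.util.function.Predicate;

// Sketch: let the deprecated KeyFilter extend Predicate so existing
// KeyFilter implementations are valid Predicates. Names are stand-ins.
@Deprecated
interface KeyFilter<K> extends Predicate<K> {
   boolean accept(K key);

   // Bridge the legacy contract to the new one.
   @Override
   default boolean test(K key) {
      return accept(key);
   }
}

class KeyFilterDemo {
   // Hypothetical caller that only needs to know about Predicate now.
   static long countAccepted(List<String> keys, Predicate<? super String> filter) {
      return keys.stream().filter(filter).count();
   }
}
```

Because accept is the single abstract method, old-style lambdas still work, and the method parameters can safely be changed to Predicate without breaking callers.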
final AtomicInteger result = new AtomicInteger(0);
acl.process(filter, (marshalledEntry, taskContext) -> result.incrementAndGet(), new WithinThreadExecutor(), false, false);
return result.get();
public static <K, V> int count(AdvancedCacheLoader<K, V> acl, Predicate<? super K> filter) {
This class is public, you need to make the change backwards-compatible.
The KeyFilter change above will fix that.
final Set<K> set = new HashSet<K>();
acl.process(filter, (marshalledEntry, taskContext) -> set.add(marshalledEntry.getKey()), new WithinThreadExecutor(), false, false);
return set;
return Flowable.fromPublisher(acl.publishEntries(filter, false, false))
Would a separate publishKeys method make sense?
I was actually thinking about that over the weekend. I was debating about having a new method or possibly adding a "builder" API (but this might be overkill).
Something like
AdvancedCacheLoader {
...
StorePublisher publish(Predicate<? super K> filter);
...
}
interface StorePublisher {
Publisher<K> keys();
Publisher<MarshalledEntry<K, V>> entries(boolean fetchValues, boolean fetchKeys);
}
What do you think?
I wouldn't scope it under such an interface, just:
Publisher<K> publishKeys(Predicate<? super K> filter);
Publisher<MarshalledEntry<K, V>> publishEntries(Predicate<? super K> filter, boolean fetchValue, boolean fetchMetadata);
I have considered dropping fetchValue completely, but there might be a use case for key + metadata (e.g. expiration).
Yeah, I am fine with the methods as well - and obviously I can make it a nice default method :)
I am thinking that publishKeys will have to be documented saying it is up to the cache store implementation whether or not expired keys are returned.
In regards to fetchValue, I can look over the usages again and see if we can possibly drop it. Although to be honest with you, in most implementations, if you supply either fetchValue or fetchMetadata it is just as easy to get the other. In that case I wonder if it makes any sense to have these arguments anymore.
I guess scattered cache requires just the metadata :) So I guess I will keep both arguments for now.
Actually, looking at this more, publishKeys has to guarantee that it returns non-expired keys. Otherwise the usefulness of the method is quite questionable (for example, the keySet iterator can't use it otherwise). So in that case I will try to limit the perf impact as much as possible.
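The shape agreed on above, including the "nice default method", could look roughly like the sketch below. This is not the real SPI: java.util.stream.Stream stands in for org.reactivestreams.Publisher and plain Map.Entry for MarshalledEntry so the sketch stays self-contained, and SketchLoader is a hypothetical name.

```java
import java.util.Map;
import java.util.function.Predicate;
import java.util.stream.Stream;

// Sketch only: Stream stands in for Publisher, Map.Entry for MarshalledEntry.
interface SketchLoader<K, V> {
   Stream<Map.Entry<K, V>> publishEntries(Predicate<? super K> filter,
                                          boolean fetchValue, boolean fetchMetadata);

   // The default method: keys are derived from entries. Values are not
   // fetched; metadata is, so a store can drop expired entries and keep the
   // guarantee that no expired keys are published.
   default Stream<K> publishKeys(Predicate<? super K> filter) {
      return publishEntries(filter, false, true).map(Map.Entry::getKey);
   }
}
```

A store can still override publishKeys when it can enumerate keys more cheaply than full entries.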
/**
* @author wburns
* @since 9.0
Bah, another Closeables class... btw. the since tag does not match.
Yeah unfortunately reactive streams isn't in commons yet :(
My eclipse isn't updated - thanks 👍
Would it hurt to rather add the dependency there? We want to eventually keep up parity with remote clients anyway...
Afaik it shouldn't hurt. I agree, I was trying to delay the introduction of it until we absolutely needed it in the client.
Force-pushed from 4caafc9 to dba2c13.
I have added in SoftIndexFileStore support. This also fixes https://issues.jboss.org/browse/ISPN-8880. I have also updated all the various stores to have the new publish methods. Also I made sure to go through and verify that all stores don't return expired entries/keys. There were a couple of bugs around this, as the Reactive Streams spec doesn't allow null values, and thus a couple of map methods had to be redone.
Also I am fine with reactive streams in commons, but I think we need to discuss further before moving to rxjava2. In this case I can't remove the Closeables class yet.
Force-pushed from 81e4386 to fe72508.
Updated and cleaned up. Should be ready for 9.3 after we branch now.
I am still not keen on using the persistence executor for the JPAStore or LuceneCacheLoader, as this only has 4 threads by default. I could possibly use the …
Force-pushed from fe72508 to f6a2466.
There are test failures: https://ci.infinispan.org/job/Infinispan/job/PR-5794/6/#showFailuresLink
Force-pushed from f6a2466 to aee7235.
I have rebased on top of #5824 now as well. I still have to clean up test failures.
Force-pushed from aee7235 to 464b2c2.
Added modules, however I think there is at least one more failure I still need to fix for Monday.
Closing this until #5824 is integrated.
Please reopen Will, I didn't realize most changes were in #5824, so I have lots of pending comments here :)
Reopening; this is missing a few updates, but I can do that after it is reopened.
Hope this works
Collection<K> buildKeyCollection(Collection<?> c) {
   Collection<K> toRemove = new ArrayList<>(c.size());
   for (Object obj : c) {
      K entry = keyToStorage(obj);
I would prefer wrapping the collection of keys, in case it's a "virtual" collection like RangeSet. And shouldn't we implement removeIf as well?
So I was debating about both these points a bit.
I think I need to go back and just implement all the methods and let the underlying sets do their jobs. I should make a JIRA to do this though since it isn't directly related to the future changes here.
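A hypothetical sketch of the wrapping idea discussed above; keyToStorage and the class name are stand-ins for illustration, not Infinispan's real API. The point is to translate each key on the fly and delegate to the backing set rather than copying the caller's collection up front, which also makes removeIf trivial to support.

```java
import java.util.Collection;
import java.util.Set;
import java.util.function.Function;
import java.util.function.Predicate;

// Sketch: translate keys lazily with keyToStorage and delegate to the backing
// set, instead of materializing a translated copy of the caller's collection.
class KeyTranslatingSetOps<K> {
   private final Function<Object, K> keyToStorage; // stand-in for the real conversion

   KeyTranslatingSetOps(Function<Object, K> keyToStorage) {
      this.keyToStorage = keyToStorage;
   }

   boolean removeAll(Set<K> backing, Collection<?> keys) {
      boolean changed = false;
      for (Object key : keys) {            // iterates lazily, no copy of `keys`
         changed |= backing.remove(keyToStorage.apply(key));
      }
      return changed;
   }

   boolean removeIf(Set<K> backing, Predicate<? super K> predicate) {
      return backing.removeIf(predicate);  // delegate straight to the underlying set
   }
}
```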
@Override
public CloseableIterator<CacheEntry<K, V>> iterator() {
   CloseableIterator<CacheEntry<K, V>> iterator = Closeables.iterator(entrySet.stream());
   // This can be a HashSet since it is only written to from the local iterator which is only invoked
   // from user thread
   Set<K> seenKeys = new HashSet<>(cache.getAdvancedCache().getDataContainer().sizeIncludingExpired());
I didn't notice this before, but it's quite a big set... I wonder if we could use the live dataContainer.keySet() + listeners to keep track of added/removed keys instead.
Also, an interceptor should inject the needed components directly instead of going through getAdvancedCache().
- I don't know if the listener will work very well. I will have to listen to all modifications to the cache as well as all activations and passivations. My thought for fixing this was to have it store all keys if it is OBJECT storage and only a subset if it is OFF-HEAP or BINARY.
- True about the injection, I can do that.
Also, a side note, but after I add in segmented stores and data containers, this code will only be used for non-dist stores. And it is possible with other cache modes to do iteration over each segment individually so that you only have so many resurrected objects at a time.
But I will think more on doing a live view; it gets a bit messy with the listeners.
* @param <K>
* @param <V>
* @return
* @deprecated Please use {@link #toKeySet(AdvancedCacheLoader, Predicate)} instead
I believe the reference should be to toEntrySet(ACL, Predicate, IFE). And I'm wondering if these should really be public API; I can't imagine using them outside our tests.
Agreed.
public Publisher<MarshalledEntry<K, V>> publishEntries(Predicate<? super K> filter, boolean fetchValue, boolean fetchMetadata) {
   State state = this.state.get();
   ByRef<Boolean> hadClear = new ByRef<>(Boolean.FALSE);
   Map<Object, Modification> modificationMap = state.flattenModifications(hadClear);
Couldn't we use flatMap below instead and avoid copying the map?
Unfortunately we need to make a copy; otherwise the state could be updated while we iterate through it, causing us to exclude an entry from the store that we shouldn't have.
Ok, the modification queue size shouldn't be very big anyway.
Yep my thoughts as well.
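The snapshot reasoning above can be shown with a minimal example; flattenModifications is stood in for by a plain map copy, and the names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal illustration of why the copy is needed: the snapshot is taken before
// the publisher is consumed, so a later write to the live modification state
// cannot change which entries the publisher will exclude from the store.
class ModificationSnapshot {
   static Map<Object, String> snapshot(Map<Object, String> liveModifications) {
      return new HashMap<>(liveModifications); // defensive copy; the queue is small
   }
}
```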
synchronized (entries) {
   for (Map.Entry<K, FileEntry> e : entries.entrySet()) {
      K key = e.getKey();
      if ((filter == null || filter.test(key)) && !e.getValue().isExpired(now)) {
I'd put the isExpired(now) check first; even if it doesn't filter out a lot of entries, it's very cheap.
Sure
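The suggested reordering can be sketched as below; FileEntry here is a minimal stand-in for the store's real entry type, and shouldPublish is a hypothetical helper. The cheap expiration check runs first and short-circuits the potentially expensive, user-supplied filter.

```java
import java.util.function.Predicate;

// Sketch: evaluate the cheap expiration check before the (possibly expensive)
// user-supplied filter. Names are stand-ins, not the real store types.
class ExpiryFilterDemo {
   static class FileEntry {
      final long expiryTime; // <= 0 means the entry never expires

      FileEntry(long expiryTime) {
         this.expiryTime = expiryTime;
      }

      boolean isExpired(long now) {
         return expiryTime > 0 && expiryTime <= now;
      }
   }

   static <K> boolean shouldPublish(K key, FileEntry entry, Predicate<? super K> filter, long now) {
      // cheap check first; expired entries never reach the filter
      return !entry.isExpired(now) && (filter == null || filter.test(key));
   }
}
```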
int batchsize = 10;
// Request over the max amount so we can ensure the subscriber will get notified of completion
for (int i = 0; i < NUM_ENTRIES / batchsize - 1; i++) {
   // Now request all the entries on different threads - just to see if the publisher can handle it
Does this really request all the entries? Does it still work if the number of keys is not a multiple of the batch size?
See the other comment; this is a bit outdated and purposely isn't requesting all entries.
   executor.execute(() -> subscriber.request(batchsize));
}

subscriber.awaitCount(NUM_ENTRIES - 10, () -> {
Is 10 the batch size here, or something else?
Yes, it is the batch size. I can update it to use the variable.
}, TimeUnit.SECONDS.toMillis(10));

// Now request on the main thread - which should guarantee requests from different threads
subscriber.request(11);
Why would a request from the main thread guarantee requests from different threads? Is it because request size > batch size?
So I think I just need to clean up the comments here.
First, the size of 11 is here just as a further test (there should only be 10 entries, but I am making sure I only get back 10 despite requesting 11).
And actually the key part is that it is guaranteeing that objects are returned via onNext on different threads. The requests are already guaranteed to come from different threads above, but for performance reasons some publishers might only call onNext on one of those threads (ones that add entries to a queue, for example, since it is faster to have one thread drain all elements from the queue before finishing).
For example, our ClusterStreamManagerImpl publisher only allows for 1 remote request per node and adds up additional requests (so future requests can request all of those) until any pending request is complete.
   return Flowable.fromIterable(allInternalEntries);
})
.filter(me -> filter == null || filter.test(me.getKey()))
.sequential();
Couldn't we allow the callers to decide for themselves whether to get a parallel flowable or a sequential one?
The user can still do whatever they want to the resulting publisher here. The problem with reactive streams is that you can't call onNext from different threads concurrently. This code is ensuring that onNext observes this contract - note it will most likely be called from different threads, just not at the same time.
The user can still observe these entries on any number of threads that they want. Note a publisher is not "used" until someone subscribes to it.
My main problem here was that I was assuming the result of parallel() implements Publisher, and I couldn't imagine why one kind of Publisher would be a better return value than the other.
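The contract being discussed can be illustrated without RxJava. The Reactive Streams spec requires that onNext is never invoked concurrently, which is what .sequential() restores after parallel work; the wrapper below is a minimal analogue (not RxJava's implementation) where onNext may be called from many threads, just never two at once.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;

// Minimal analogue of serialized delivery: many threads may call onNext,
// but the synchronized block guarantees only one is inside at a time.
// The AtomicInteger is just a tripwire that would catch a spec violation.
class SerializedDelivery<T> {
   private final Consumer<T> downstream;
   private final AtomicInteger inOnNext = new AtomicInteger();

   SerializedDelivery(Consumer<T> downstream) {
      this.downstream = downstream;
   }

   synchronized void onNext(T item) {
      if (inOnNext.incrementAndGet() != 1) {
         throw new IllegalStateException("concurrent onNext - spec violation");
      }
      try {
         downstream.accept(item);
      } finally {
         inOnNext.decrementAndGet();
      }
   }
}
```

Subscribers can still hand items off to other threads after delivery; the serialization requirement only applies to the onNext calls themselves.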
* <p/>
* <b>Preload</b>. In order to support preload functionality the store needs to read the string keys from the database and transform them
* into the corresponding key objects. {@link org.infinispan.persistence.keymappers.Key2StringMapper} only supports
* <b>Preload & Iteration</b>. In order to support preload or iteration functionality the store needs to read the string
Github highlights the &, should probably be &amp;
Sure
There appear to be some related test failures: https://ci.infinispan.org/job/Infinispan/job/PR-5794/18/
Force-pushed from 8fceaf2 to 626926c.
Tests should be fixed now, waiting on latest run.
* Introduce publishEntries and publishKeys methods
* Limit scope of changes to minimum
* Add persistence executor to Context for stores/loaders
* Convert SingleFileStore
* Convert DummyInMemoryStore
* Convert AdvancedAsyncCacheLoader
* Optimize code to not retrieve keys first
* Convert JdbcStringBasedStore
* Convert JpaStore
Force-pushed from 626926c to 0702419.
More nitpicking on javadoc comments ;)
@@ -10,9 +12,11 @@
*
* @author Manik Surtani
* @since 6.0
* @deprecated This will be replaced by {@link java.util.function.Predicate} in the future
I like having since x.y in the deprecated comment so it's more obvious when it's safe to remove it.
Sure, forgot to put one here.
   }
   return empty;
}
// TODO: this can't be added in until stores size method is guaranteed to not count expired entries
IMO counting entries while excluding expired ones is a potentially expensive operation compared to starting publishKeys() and stopping after the first one.
Sure, sounds good. I had thought about doing that with the iterator, but didn't think of publishKeys. I think it has just been too long since I looked at this.
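The short-circuit idea can be sketched as follows; Stream again stands in for Publisher and the names are hypothetical. In RxJava 2 the equivalent would be roughly Flowable.fromPublisher(loader.publishKeys(null)).isEmpty().blockingGet(), which cancels after the first emitted key.

```java
import java.util.function.Predicate;
import java.util.stream.Stream;

// Sketch: decide emptiness from the first published non-expired key and stop,
// instead of counting everything. Stream stands in for Publisher.
class StoreEmptiness {
   interface KeyPublisher<K> {
      Stream<K> publishKeys(Predicate<? super K> filter);
   }

   static <K> boolean isEmpty(KeyPublisher<K> loader) {
      // findFirst() short-circuits: only one key is ever pulled from the store
      return !loader.publishKeys(null).findFirst().isPresent();
   }
}
```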
/**
* Returns a publisher that will publish all entries stored by the underlying cache store. Only the first
* cache store that implements {@link AdvancedCacheLoader} will be used. Predicate is applied by the underlying
* loader in a best attempt to improve performance.
I always assumed that each loader has some extra entries compared to the previous one, otherwise it wouldn't make any sense to have both loaders. E.g. maybe the first loader reads from a local SingleFileStore, and the next loader invokes a couple of REST APIs to re-compute the value. Of course, the 2nd loader probably wouldn't implement AdvancedCacheLoader in this scenario, but if it did I wouldn't consider it a bug to publish all the entries from both loaders.
* loader in a best attempt to improve performance.
* <p>
* Caller can tell the store to also fetch the value or metadata. In some cases this can improve performance. If
* metadata is not fetched it is not guaranteed if the publisher will return expired entries or not.
I think you could simplify this to "If metadata is not fetched the publisher may include expired entries".
Okay
   }
}
Publisher<MarshalledEntry<Object, Object>> publisher = persistenceManager.publishEntries(false, true);
Flowable.fromPublisher(publisher).map(MarshalledEntry::getMetadata).blockingForEach(metadata -> {
Is there any benefit in extracting the getMetadata() call into a separate step?
You mean just doing it inside the blockingForEach lambda? No, it should be the same.
One less lambda though ;)
Yeah I already changed it.
return new CloseableIterator<E>() {
   @Override
   public void close() {
      ((Disposable) iterator).dispose();
So we don't need to dispose the iterable or the publisher, only the iterator?
Only the iterator. The publisher and iterable don't really apply, as they don't do anything until the iterator is invoked. The iterator holds all the resources.
Ok
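The adapter shape discussed above can be sketched like this. Disposable and CloseableIterator here are minimal stand-ins mirroring the RxJava and Infinispan interfaces; the point is that the Publisher and Iterable are lazy descriptions holding no resources, so only the Iterator obtained from them needs disposing.

```java
import java.util.Iterator;

// Sketch: adapt an Iterator whose implementation also holds resources
// (via a Disposable) into a CloseableIterator. Interfaces are stand-ins.
class CloseableIterators {
   interface Disposable {
      void dispose();
   }

   interface CloseableIterator<E> extends Iterator<E>, AutoCloseable {
      @Override
      void close(); // narrowed: no checked exception
   }

   // The passed iterator is assumed to also implement Disposable.
   static <E> CloseableIterator<E> from(Iterator<E> iterator) {
      return new CloseableIterator<E>() {
         @Override
         public void close() {
            ((Disposable) iterator).dispose(); // releases the subscription's resources
         }

         @Override
         public boolean hasNext() {
            return iterator.hasNext();
         }

         @Override
         public E next() {
            return iterator.next();
         }
      };
   }
}
```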
}, executor, true, true);
Flowable.fromPublisher(loader.publishEntries(null, true, true))
   .observeOn(Schedulers.from(executor))
   .takeUntil((MarshalledEntry<Object, Object> me) -> stopped.get())
Maybe the entries.size() checks can be stricter now.
Hrmm, tbh I am not exactly sure. I think it has to invoke doOnNext before it can call takeUntil again. Let me change it and we can find out :)
Yeah, so it will always only modify on a single thread at a time. I can change it to assert it equals the size exactly then. This should be better, because I could make it so we are doing this across multiple threads with Flowable.parallel or Flowable.flatMap, but then we are testing rxjava2, which isn't the point of this test.
Although tbh you wouldn't want to do cancellation like this in the first place, you should just invoke Flowable.take(100) instead.
The more I thought about it, this method should just be removed 🗡
/**
*
* NOTE: This store can return expired entries on any given operation if {@link JpaStoreConfiguration#storeMetadata()}
* was not set to true.
Does that include publishKeys? It doesn't really return entries, but I don't see any additional checks.
Yes, want me to say keys or entries instead? The problem is there is no place to store expiration metadata if there is no table for it.
Actually I'd like you to change the javadoc of publishKeys in AdvancedCacheLoader and PersistenceManager, because they say "This publisher will never return a key which belongs to an expired entry".
Also I changed it to 'was set to false' - easier to read imo.
Hrmm, well that is the contract, which I would like to keep on AdvancedCacheLoader. JpaStore is special in that it violates this contract, which is why I would rather put it here. I don't want an implementation to return expired keys.
For this reason DummyStore doesn't implement publishKeys, because the metadata is stored with the entry, so we have to read that as well.
I don't think a javadoc here is enough, because there's nothing anyone can do about JpaStore breaking the contract:
- We won't add a special case for JpaStore in core; we're going to call publishKeys based on the AdvancedCacheLoader javadoc.
- Users call methods on the cache, they don't necessarily know which cache method calls which store method internally.
IMO we should throw an exception when the user tries to store mortal entries in a JpaStore if we can't read the metadata back. It's a bit complicated by the async writer catching all the exceptions, but at least they'll see some errors in the log. And ideally I'd like to make expiration support optional, both at the store level and at the cache level, so we can throw that exception right from CacheImpl.
Hrmm, I think it might be easier to log another JIRA for this.
It would cover:
- I would like to throw an exception at start up if the JPAStore is configured without being able to store metadata. Unfortunately we can't do this, since they could be using the store as read only.
- We probably can only throw exceptions at runtime if an entry is written to the JPAStore with expiration.
Either way, neither of these are great options, thus why I would recommend a separate JIRA to talk about this.
I created https://issues.jboss.org/browse/ISPN-9143 to continue this.
import net.jcip.annotations.ThreadSafe;

/**
* RestStore.
*
* @author Tristan Tarrant
* @since 6.0
* @deprecated This cache store will be changed to only implement {@link org.infinispan.persistence.spi.CacheLoader}
Aren't we using it for rolling upgrades?
Nope, Hot Rod only
Force-pushed from e89d897 to c1510ea.
* Removed process from PersistenceManager
* Remove core usages of process
* Convert RemoteStore
* Convert RocksDBStore
* ISPN-8892 RestStore should only implement CacheLoader
* Convert RestStore
* Deprecate AdvancedCacheLoader portion of RestStore
* Deprecate RestTargetMigrator
* Convert SoftIndexFileStore
* Convert LuceneCacheLoader
* Convert remaining server test loaders
* Added module information for various stores
* Added some documentation noting new SPI
Force-pushed from c1510ea to 3301101.
Integrated, thanks Will!
https://issues.jboss.org/browse/ISPN-8865
Added the AdvancedCacheLoader.publishEntries method and deprecated/removed all references to calling the old process method that I could.
This JIRA is mostly to let CI run on it and crunch out some stuff to see if there are test failures so far.