
L2 cache does not support distributed server #5

Closed
awarrior opened this issue Sep 11, 2017 · 8 comments

@awarrior

The problem is that the L2 cache, when using the Ehcache plugin with distributed support, does not work well (it throws a "nothing found" exception).

Follow-up to datanucleus/datanucleus-core#260 (comment).

In the first app, the pc found is null, so it goes on to initialize a new class:

java.lang.Thread#getStackTrace#1589
org.datanucleus.enhancer.EnhancementHelper#registerClass#349
org.apache.hadoop.hive.metastore.model.MRoleMap##-1
java.lang.Class#forName0#-2
java.lang.Class#forName#274
org.datanucleus.ClassLoaderResolverImpl#ClassOrNullWithInitialize#533
org.datanucleus.ClassLoaderResolverImpl#classForNameWithInitialize#287
org.datanucleus.ClassLoaderResolverImpl#classForName#360
org.datanucleus.state.ObjectProviderFactoryImpl#getInitialisedClassForClass#306
org.datanucleus.state.ObjectProviderFactoryImpl#newForHollow#109
org.datanucleus.ExecutionContextImpl#findObject#3055
org.datanucleus.store.rdbms.query.PersistentClassROF#getObjectForDatastoreId#460
org.datanucleus.store.rdbms.query.PersistentClassROF#getObject#385
org.datanucleus.store.rdbms.query.ForwardQueryResult#nextResultSetElement#181
org.datanucleus.store.rdbms.query.ForwardQueryResult$QueryResultIterator#next#400
org.datanucleus.store.rdbms.query.ForwardQueryResult#processNumberOfResults#143
org.datanucleus.store.rdbms.query.ForwardQueryResult#advanceToEndOfResultSet#164
org.datanucleus.store.rdbms.query.ForwardQueryResult#getSizeUsingMethod#511
org.datanucleus.store.query.AbstractQueryResult#size#357
org.datanucleus.store.query.Query#executeQuery#1863
org.datanucleus.store.query.Query#executeWithArray#1733
org.datanucleus.api.jdo.JDOQuery#executeInternal#365
org.datanucleus.api.jdo.JDOQuery#executeWithArray#264
org.apache.hadoop.hive.metastore.ObjectStore#getMSecurityUserRoleMap#3421
org.apache.hadoop.hive.metastore.ObjectStore#grantRole#3341
sun.reflect.NativeMethodAccessorImpl#invoke0#-2
sun.reflect.NativeMethodAccessorImpl#invoke#57
sun.reflect.DelegatingMethodAccessorImpl#invoke#43
java.lang.reflect.Method#invoke#606
org.apache.hadoop.hive.metastore.RawStoreProxy#invoke#101
com.sun.proxy.$Proxy28#grantRole#-1

The second app finds the pc in the cache and goes on to get its metadata:

throws 'Cannot lookup meta info for MRoleMap - nothing found'
org.datanucleus.enhancer.EnhancementHelper#getMeta#495
org.datanucleus.enhancer.EnhancementHelper#newInstance#147
org.datanucleus.state.StateManagerImpl#initialiseForHollow#248
org.datanucleus.state.StateManagerImpl#initialiseForCachedPC#600
org.datanucleus.state.ObjectProviderFactoryImpl#newForCachedPC#280
org.datanucleus.ExecutionContextImpl#getObjectFromLevel2Cache#5169
org.datanucleus.ExecutionContextImpl#getObjectFromCache#5060
org.datanucleus.ExecutionContextImpl#findObject#3004
org.datanucleus.store.rdbms.query.PersistentClassROF#getObjectForDatastoreId#460
org.datanucleus.store.rdbms.query.PersistentClassROF#getObject#385
org.datanucleus.store.rdbms.query.ForwardQueryResult#nextResultSetElement#181
org.datanucleus.store.rdbms.query.ForwardQueryResult$QueryResultIterator#next#400
org.datanucleus.store.rdbms.query.ForwardQueryResult#processNumberOfResults#143
org.datanucleus.store.rdbms.query.ForwardQueryResult#advanceToEndOfResultSet#164
org.datanucleus.store.rdbms.query.ForwardQueryResult#getSizeUsingMethod#511
org.datanucleus.store.query.AbstractQueryResult#size#357
org.datanucleus.store.query.Query#executeQuery#1863
org.datanucleus.store.query.Query#executeWithArray#1733
org.datanucleus.api.jdo.JDOQuery#executeInternal#365
org.datanucleus.api.jdo.JDOQuery#executeWithArray#264
org.apache.hadoop.hive.metastore.ObjectStore#getMSecurityUserRoleMap#3421
org.apache.hadoop.hive.metastore.ObjectStore#grantRole#3341
sun.reflect.NativeMethodAccessorImpl#invoke0#-2
sun.reflect.NativeMethodAccessorImpl#invoke#57
sun.reflect.DelegatingMethodAccessorImpl#invoke#43
java.lang.reflect.Method#invoke#606
org.apache.hadoop.hive.metastore.RawStoreProxy#invoke#101
com.sun.proxy.$Proxy28#grantRole#-1

I agree that the L2 cache outlives the PMF, because the distributed server keeps running in the background. I'm not sure whether this unsupported behaviour is specific to Ehcache or not.
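The difference between the two stack traces comes down to class *loading* versus class *initialization*: a DataNucleus-enhanced class registers itself with EnhancementHelper from its own static initializer, which only runs when the class is initialized. Below is a minimal JDK-only demo of that distinction (the class and registry names here are hypothetical stand-ins, not DataNucleus code):

```java
import java.util.HashSet;
import java.util.Set;

public class InitDemo {
    // Stand-in for EnhancementHelper's internal registry.
    static final Set<String> REGISTRY = new HashSet<>();

    public static class EnhancedEntity {
        static {
            // Runs only when the class is *initialized*, not merely loaded --
            // mimicking the registerClass call an enhanced class makes.
            REGISTRY.add(EnhancedEntity.class.getName());
        }
    }

    public static void main(String[] args) throws Exception {
        ClassLoader cl = InitDemo.class.getClassLoader();
        String name = InitDemo.class.getName() + "$EnhancedEntity";

        // Load without initialization: the static block does NOT run,
        // so the registry stays empty -- the "nothing found" situation.
        Class.forName(name, false, cl);
        System.out.println("after load only: " + REGISTRY.contains(name));

        // Load with initialization: the static block runs and registers the class.
        Class.forName(name, true, cl);
        System.out.println("after initialize: " + REGISTRY.contains(name));
    }
}
```

This prints `false` then `true`: if the second app only ever loads MRoleMap without initializing it, the registration that `getMeta` relies on never happens.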

@andyjefferson
Member

andyjefferson commented Sep 11, 2017

In which case, look at the classes org.datanucleus.store.query.Query and/or org.datanucleus.ExecutionContextImpl#getObjectFromLevel2Cache, and in one of those classes make sure that the "Class" is initialised, USING THE ClassLoaderResolver (part of the ExecutionContext).

The only other thing to note from the second stack trace is that, when performing a query, depending on the query executed I would expect the candidate class of the query to be "initialised" at that point, in which case the class will be registered with the EnhancementHelper. You provide no definition of the query or the class, so no comment is possible on that situation.

Also note that any proposed changes need to be against the current codebase (master), since that is what is developed.

@awarrior
Author

I think the reason is that it only uses the object id to look up the L2 cache, and does not check whether the class has been registered or not. Here is what I found in the source:

// ExecutionContextImpl.java

Level2Cache l2Cache = nucCtx.getLevel2Cache();
CachedPC cachedPC = l2Cache.get(id);
if (cachedPC != null)
{
    // Create active version of cached object with ObjectProvider connected and same id
    ObjectProvider op = nucCtx.getObjectProviderFactory().newForCachedPC(this, id, cachedPC);

// ObjectProviderFactoryImpl.java

public ObjectProvider newForCachedPC(ExecutionContext ec, Object id, CachedPC cachedPC)
{
    AbstractClassMetaData cmd = ec.getMetaDataManager().getMetaDataForClass(cachedPC.getObjectClass(), ec.getClassLoaderResolver());
    ObjectProvider op = getObjectProvider(ec, cmd);
    op.initialiseForCachedPC(cachedPC, id);
    return op;
}

// StateManagerImpl.java

public void initialiseForCachedPC(CachedPC cachedPC, Object id)
{
    // Create a new copy of the input object type, performing the majority of the initialisation
    initialiseForHollow(id, null, cachedPC.getObjectClass());
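The kind of guard this code path seems to be missing could be sketched as follows. This is NOT DataNucleus code: the registry and method names below are hypothetical stand-ins for EnhancementHelper's registry and ClassLoaderResolverImpl#classForNameWithInitialize, just to show the shape of the check before a cached PC is rehydrated:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CachedPcGuard {
    // Stand-in for EnhancementHelper's registry of registered classes.
    static final Map<String, Class<?>> REGISTERED = new ConcurrentHashMap<>();

    // Stand-in for EnhancementHelper.registerClass, normally invoked from
    // the enhanced class's own static initializer.
    public static void registerClass(Class<?> cls) {
        REGISTERED.put(cls.getName(), cls);
    }

    // Guard: ensure the class behind a cached object is initialized (and
    // therefore registered) before handing it to newForCachedPC.
    public static Class<?> ensureInitialised(String className, ClassLoader loader)
            throws ClassNotFoundException {
        Class<?> cls = REGISTERED.get(className);
        if (cls == null) {
            // Forcing initialization runs any static registration block,
            // mirroring what classForNameWithInitialize does.
            cls = Class.forName(className, true, loader);
        }
        return cls;
    }
}
```

The design point is that the registry lookup is cheap for the common case, and only an unregistered class pays the cost of the Class.forName call.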

@andyjefferson
Member

Which doesn't explain what query is being invoked, and why the candidate class is not "initialised".

@awarrior
Author

Here is the definition of the query from the client. Any help in judging this question?

query = pm.newQuery(MRoleMap.class, "principalName == t1 && principalType == t2 && role.roleName == t3");
query.declareParameters("java.lang.String t1, java.lang.String t2, java.lang.String t3");
query.setUnique(true);
mRoleMember = (MRoleMap) query.executeWithArray(userName, principalType.toString(), roleName);
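One hypothetical client-side workaround (not verified against Hive, and only useful if uninitialized candidate classes really are the cause) would be to force-initialize the candidate class before building the query, so that its enhanced static block has run by the time the L2 cache is consulted:

```java
// Hypothetical helper, not part of any existing API: force initialization
// of a candidate class (running its static initializers if they have not
// run yet) and return it. Any class can stand in for MRoleMap here.
public class EnsureCandidate {
    public static Class<?> init(Class<?> candidate) throws ClassNotFoundException {
        // initialize=true triggers static initializers on first use
        return Class.forName(candidate.getName(), true, candidate.getClassLoader());
    }
}
```

Usage would be something like `EnsureCandidate.init(MRoleMap.class);` before the `pm.newQuery(MRoleMap.class, ...)` call above.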

@andyjefferson
Member

So why isn't the MRoleMap class initialised when it goes through the query compilation etc and before it gets to processing the query result?

@awarrior
Author

Do you mean the candidateClass in org.datanucleus.store.query.Query should be initialized after pm.newQuery? Maybe more traces have to be printed in setCandidateClass or around compilation.getCandidateClass().

@andyjefferson
Member

I would expect the (Class)MetaData (annotations, XML) for the candidate class to be loaded, populated and initialised. The call to populate the MetaData should call initialize on the class object, which should "register" it. You will see usage of "cmd" in the query objects; that is the MetaData for the candidate class. The log also tells you when metadata is loaded.

@andyjefferson
Member

Closing since no testcase was provided, and there is no apparent willingness to develop a fix for whatever is being seen. Post back here with a testcase and/or pull request as required and this can be reopened.
