GEODE-8626: Omitting field-mapping tag of cache.xml when using Simple JDBC Connector #5637
Conversation
@masaki-yamakawa I apologize for the delay in reviewing this change. @jchen21 and @dschneider-pivotal would you be in a position to review this PR?
Reviewing the code.
...acceptanceTest/java/org/apache/geode/connectors/jdbc/CacheXmlJdbcMappingIntegrationTest.java
(InternalCache) new CacheFactory().set("locators", "").set("mcast-port", "0").create();
Set<DistributedMember> targetMembers = findMembers(cache, null, null);

CliFunctionResult createRegionFuncResult = executeCreateRegionFunction(targetMembers);
To create the region, data store, mapping, etc., I would recommend using GfshRule and executing gfsh commands, instead of calling the internal functions that implement the gfsh commands.
I understand that using GfshRule is recommended.
With this change, I would like the gfsh command and cache.xml to behave the same way, so I have implemented the JdbcMappingIntegrationTest class to standardize the tests.
I used the internal functions because some error cases cannot be verified with GfshRule.
This is an integration test, which tests the behavior of gfsh commands from a user's point of view. I don't think this test should use internal functions from a specific gfsh command's implementation. Users should not have to worry about the gfsh implementation.
And I don't recommend using a lot of System.out.println in the tests. Is that for debugging purposes?
If you have to use internal functions, I would recommend testing them in a unit test, or in some other integration test or DUnit test that specifically tests a particular gfsh command. You said you use the internal functions because some parts of GfshRule cannot be verified in error cases. Can you give a specific example?
OK, I will try to implement the test using gfsh commands.
I tried to implement the test using GfshRule, but I couldn't, because I use cache.getService(JdbcConnectorService.class) to get the implementation of JdbcConnectorService to verify the FieldMapping, and GfshRule does not allow that.
I would also like to make the condition-setup and verification parts of CacheXmlJdbcMappingIntegrationTest and GfshXmlJdbcMappingIntegrationTest common.
So I plan to do one of the following:
- Change GfshJdbcMappingIntegrationTest to FunctionJdbcMappingIntegrationTest.
- Remove GfshJdbcMappingIntegrationTest. (I could use the describe jdbc-mapping command to verify it, but I don't think I should, because it would reduce maintainability and make it difficult to commonize the tests.)
If you want to use gfsh and verify the field mapping, CreateMappingCommandDUnitTest has some good examples which use GfshCommandRule (not GfshRule though). You can see if you would like to add some more tests there. Typically, a DUnit test is used for testing that involves gfsh commands. In the DUnit test, you create a cluster with a locator and server(s) and execute the gfsh commands.
Using the functions that implement a specific gfsh command in integration tests is discouraged. That is not what an integration test should do.
I removed GfshJdbcMappingIntegrationTest because I no longer need to implement it.
@@ -1142,7 +1142,7 @@ public void createMappingWithExistingQueueFails() {
         + " must not already exist.");
   }

-  private static class Employee implements PdxSerializable {
+  public static class Employee implements PdxSerializable {
Why does this class have to be public?
In the test, CreateMappingPreconditionCheckFunction is called and executed on several JVMs (distributed members).
In this function, PDX classes such as Employee are instantiated, so the class needs to be accessible even when it runs on another JVM.
If it is private, the following exception is thrown:
[error 2020/11/07 12:08:19.411 JST <Function Execution Processor2> tid=69] Could not generate a PdxType for the class org.apache.geode.connectors.jdbc.internal.cli.CreateMappingCommandDUnitTest$IdAndName because it did not have a public zero arg constructor. Details: java.lang.IllegalAccessException: Class org.apache.geode.connectors.jdbc.internal.JdbcConnectorServiceImpl can not access a member of class org.apache.geode.connectors.jdbc.internal.cli.CreateMappingCommandDUnitTest$IdAndName with modifiers "public"
org.apache.geode.connectors.jdbc.JdbcConnectorException: Could not generate a PdxType for the class org.apache.geode.connectors.jdbc.internal.cli.CreateMappingCommandDUnitTest$IdAndName because it did not have a public zero arg constructor. Details: java.lang.IllegalAccessException: Class org.apache.geode.connectors.jdbc.internal.JdbcConnectorServiceImpl can not access a member of class org.apache.geode.connectors.jdbc.internal.cli.CreateMappingCommandDUnitTest$IdAndName with modifiers "public"
at org.apache.geode.connectors.jdbc.internal.JdbcConnectorServiceImpl.createInstance(JdbcConnectorServiceImpl.java:361)
at org.apache.geode.connectors.jdbc.internal.JdbcConnectorServiceImpl.generatePdxTypeForClass(JdbcConnectorServiceImpl.java:334)
at org.apache.geode.connectors.jdbc.internal.JdbcConnectorServiceImpl.getPdxTypeForClass(JdbcConnectorServiceImpl.java:320)
at org.apache.geode.connectors.jdbc.internal.cli.CreateMappingPreconditionCheckFunction.executeFunction(CreateMappingPreconditionCheckFunction.java:74)
at org.apache.geode.management.cli.CliFunction.execute(CliFunction.java:37)
...
Other similar tests, such as CreateMappingCommandForProxyRegionDUnitTest, also use public scope.
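The access failure described above can be sketched outside Geode. This is a minimal, hypothetical example (none of these classes are Geode code): one class reflectively instantiates a nested class of another, the same way a serializer framework would, and the private nested class fails the runtime access check while the public one succeeds.

```java
import java.lang.reflect.Constructor;

// Stand-in for the framework code that instantiates the user's PDX class.
class Instantiator {
    static Object create(Class<?> clazz) throws Exception {
        Constructor<?> ctor = clazz.getDeclaredConstructor();
        // Throws IllegalAccessException when the caller may not access the
        // constructor (a private nested class has a private implicit ctor).
        return ctor.newInstance();
    }
}

public class ReflectionScopeDemo {
    // Mirrors the failing case: private nested class.
    private static class PrivateEmployee {
    }

    // Mirrors the fix: public nested class with a public zero-arg constructor.
    public static class PublicEmployee {
        public PublicEmployee() {}
    }

    public static void main(String[] args) throws Exception {
        try {
            Instantiator.create(PrivateEmployee.class);
            System.out.println("private: instantiated");
        } catch (IllegalAccessException e) {
            System.out.println("private: IllegalAccessException");
        }
        System.out.println("public: "
                + Instantiator.create(PublicEmployee.class).getClass().getSimpleName());
    }
}
```

Because Instantiator is not a nestmate of ReflectionScopeDemo, reflection denies access to the private class's constructor, which matches the IllegalAccessException in the log above.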
On the surface, it seems a trivial change from private to public, in order to fix the exception. However, under the hood, it is doing unnecessary serialization and deserialization of the Employee class, because getPdxTypeForClass(), generatePdxTypeForClass() and createInstance() are moved from CreateMappingPreconditionCheckFunction to JdbcConnectorServiceImpl, probably because you want to use these methods in RegionMappingConfiguration.
I would recommend keeping getPdxTypeForClass(), generatePdxTypeForClass() and createInstance() in CreateMappingPreconditionCheckFunction, and implementing similar methods in RegionMappingConfiguration. Though it looks like duplicating the code, it avoids unnecessary serialization and deserialization of user-defined classes like Employee, which is more important.
And I don't think adding more methods to JdbcConnectorService is necessary. For example, createDefaultFieldMapping() is something you want to achieve automatically for the users, so it does not need to be exposed as an API. Just making it a private method in RegionMappingConfiguration should be good. The same reason applies to getTableMetaDataView() and getPdxTypeForClass(). Basically, keep JdbcConnectorService unchanged.
Thank you for your detailed explanation. I have reverted CreateMappingPreconditionCheckFunction and JdbcConnectorService to the original source code, and I have implemented similar methods in RegionMappingConfiguration.
@@ -1185,7 +1185,7 @@ public void fromData(PdxReader reader) {
   }
 }

-  private static class EmployeeNumeric implements PdxSerializerObject {
+  public static class EmployeeNumeric implements PdxSerializerObject {
Why does this class have to be public?
The reason is the same as for the previous one.
@@ -1234,7 +1234,7 @@ void setRefid(long refid) {
   }
 }

-  private static class IdAndName implements PdxSerializable {
+  public static class IdAndName implements PdxSerializable {
Why does this class have to be public?
The reason is the same as for the previous one.
DataSource dataSource = getDataSource(regionMapping.getDataSourceName());
if (dataSource == null) {
  throw new JdbcConnectorException("No datasource \"" + regionMapping.getDataSourceName()
      + "\" found when getting table meta data \"" + regionMapping.getRegionName() + "\"");
The data source has nothing to do with table metadata or the region name. I recommend removing this part of the error message.
The data source is used to get the table metadata.
I would like to confirm your suggestion.
Your suggestion is the following, right?
if (dataSource == null) {
// throw new JdbcConnectorException("No datasource \"" + regionMapping.getDataSourceName()
// + "\" found when getting table meta data \"" + regionMapping.getRegionName() + "\"");
throw new JdbcConnectorException();
}
I was trying to say:
if (dataSource == null) {
throw new JdbcConnectorException("No datasource \"" + regionMapping.getDataSourceName());
}
If you would like to provide more information,
if (dataSource == null) {
throw new JdbcConnectorException("No datasource \"" + regionMapping.getDataSourceName()
+ "\" found when getting table meta data \"" + regionMapping.getTableName() + "\"");
}
Note that it is regionMapping.getTableName(), not regionMapping.getRegionName().
Thank you.
I would like to fix it as you suggested.
I have fixed it.
DataSource dataSource = getDataSource(regionMapping.getDataSourceName());
if (dataSource == null) {
  throw new JdbcConnectorException("No datasource \"" + regionMapping.getDataSourceName()
      + "\" found when creating mapping \"" + regionMapping.getRegionName() + "\"");
The data source has nothing to do with table metadata or the region name. I recommend removing this part of the error message.
I have fixed it.
.../src/main/java/org/apache/geode/connectors/jdbc/internal/xml/RegionMappingConfiguration.java
...rg/apache/geode/connectors/jdbc/internal/cli/CreateMappingPreconditionCheckFunctionTest.java
@@ -306,45 +292,10 @@ public void executeFunctionThrowsGivenPdxSerializableWithNoZeroArgConstructor() {
         "Could not generate a PdxType for the class org.apache.geode.connectors.jdbc.internal.cli.CreateMappingPreconditionCheckFunctionTest$PdxClassDummyNoZeroArg because it did not have a public zero arg constructor. Details: java.lang.NoSuchMethodException: org.apache.geode.connectors.jdbc.internal.cli.CreateMappingPreconditionCheckFunctionTest$PdxClassDummyNoZeroArg.<init>()");
   }

-  @Test
Some functions of CreateMappingPreconditionCheckFunction are moved to JdbcConnectorServiceImpl. So if you remove this test here, there should be similar test coverage added in JdbcConnectorServiceTest. In JdbcConnectorServiceTest there are some tests that test ReflectionBasedAutoSerializer. However, those tests don't verify FieldMapping like this test does.
As you mentioned, I have moved some methods. In addition, I have reduced the granularity of the tests.
The test corresponding to executeFunctionGivenNonPdxUsesReflectionBasedAutoSerializer is JdbcConnectorServiceTest.getPdxTypeForClassSucceedsWithGivenNonPdxUsesReflectionBasedAutoSerializer, which verifies the PdxType. The PdxType is then passed to createDefaultFieldMapping to create the FieldMapping.
createDefaultFieldMapping is tested by JdbcConnectorServiceTest.createDefaultFieldMappingXXX, which verifies the FieldMapping at this level.
I understand that. However, none of the JdbcConnectorServiceTest.createDefaultFieldMapping* tests uses ReflectionBasedAutoSerializer, although those tests do verify the field mapping. I expect there to be some test that uses ReflectionBasedAutoSerializer and verifies the field mapping as well, like the test deleted below.
I will add a test that uses ReflectionBasedAutoSerializer and verifies the field mapping.
I have added the mappingSuccessWhenFieldMappingsAreOmittedWithNonSerializedClass test method, which uses ReflectionBasedAutoSerializer, to JdbcMappingIntegrationTest.
@jchen21 Thank you for your review.
...acceptanceTest/java/org/apache/geode/connectors/jdbc/CacheXmlJdbcMappingIntegrationTest.java
@Test
public void getTableMetaDataViewThrowsExceptionWhenGetConnectionHasSqlException()
    throws SQLException {
  when(dataSource.getConnection()).thenThrow(SQLException.class);
Minor nitpick. I would recommend giving the SQLException an error message, e.g.
String reason = "connection failed";
when(dataSource.getConnection()).thenThrow(new SQLException(reason));
Then the error message in the next few lines would be Exception thrown while connecting to datasource \"dataSource\": connection failed, instead of Exception thrown while connecting to datasource \"dataSource\": null
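Why the mock's message matters can be sketched without Mockito or Geode. DataSourceException below is a hypothetical stand-in for the real wrapper (Geode's JdbcConnectorException): a wrapper that embeds the cause's message renders the literal text "null" when the cause was constructed without a reason string.

```java
import java.sql.SQLException;

// Hypothetical wrapper, modeled on the error message quoted in the review.
class DataSourceException extends RuntimeException {
    DataSourceException(String dataSourceName, SQLException cause) {
        // Embeds cause.getMessage(); a no-arg SQLException yields "null" here.
        super("Exception thrown while connecting to datasource \""
                + dataSourceName + "\": " + cause.getMessage(), cause);
    }
}

public class ExceptionMessageDemo {
    public static void main(String[] args) {
        // No reason string: the wrapped message ends in ": null".
        System.out.println(
                new DataSourceException("dataSource", new SQLException()).getMessage());
        // With a reason string: the wrapped message is informative.
        System.out.println(new DataSourceException("dataSource",
                new SQLException("connection failed")).getMessage());
    }
}
```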
Thank you for your recommendation. I have fixed it.
@jchen21
@masaki-yamakawa I have reviewed your code again. Please see the comments in the code.
@jchen21 Thank you for your review. I have fixed the code following your suggestions.
createEmployeeTable();

Throwable throwable =
    catchThrowable(() -> createCacheAndCreateJdbcMappingWithWrongPdxName("NoPdxName"));
This is a bug. I believe the string in the quotes should be WrongPdxName instead of NoPdxName. There is no such file as CacheXmlJdbcMappingIntegrationTest.NoPdxName.cache.xml, so the actual error is something like a FileNotFoundException returned by getXmlFileForTest(), and the assertions at lines 241 and 242 don't really catch the bug.
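The failure mode described here can be sketched in plain Java (no AssertJ; catchThrowable below is a hand-rolled stand-in for AssertJ's helper, and the thrown exception merely simulates the missing cache.xml): an assertion that only checks that something was thrown passes even when the throwable is an accidental FileNotFoundException rather than the exception the test meant to provoke.

```java
import java.io.FileNotFoundException;

public class WeakAssertionDemo {
    // Plain-Java stand-in for AssertJ's catchThrowable().
    static Throwable catchThrowable(Runnable r) {
        try {
            r.run();
            return null;
        } catch (Throwable t) {
            return t;
        }
    }

    public static void main(String[] args) {
        // Simulates getXmlFileForTest() failing because the cache.xml name is wrong.
        Throwable t = catchThrowable(() -> {
            throw new RuntimeException(new FileNotFoundException(
                    "CacheXmlJdbcMappingIntegrationTest.NoPdxName.cache.xml"));
        });

        // A weak assertion: passes regardless of which exception was thrown,
        // so it cannot distinguish the intended failure from a test bug.
        if (t == null) {
            throw new AssertionError("expected a throwable");
        }
        System.out.println("weak assertion passed; cause was: " + t.getCause());
    }
}
```

A stricter assertion would also check the throwable's type and message, which is what exposes the NoPdxName/WrongPdxName mix-up.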
  return cache;
}

private InternalCache createCacheAndCreateJdbcMappingWithWrongPdxName(String cacheXmlTestName)
This method's implementation is exactly the same as createCacheAndCreateJdbcMapping(). The only difference is the method name. So how about just using createCacheAndCreateJdbcMapping()?
}

private InternalCache createCacheAndCreateJdbcMapping(String cacheXmlTestName)
    throws Exception {
This throws Exception can be removed. The same applies to lines 93, 109 and 119.
private InternalCache createCacheAndCreateJdbcMappingWithNonSerializedClass(
    String cacheXmlTestName) throws Exception {
  return createCacheAndCreateJdbcMapping(cacheXmlTestName);
I would recommend inlining this one-line implementation for code readability.
  }
}

private List<FieldMapping> createDefaultFieldMapping(RegionMapping regionMapping,
The regionMapping argument is not used and can be removed.
@masaki-yamakawa Once you are ready, please click the re-request review button in the reviewers section.
Awesome. Code looks good to me. Thank you @masaki-yamakawa!
When you merge, please squash your commits into a single one if possible.
Force-pushed 08362f8 to 5f4aa56: … JDBC Connector Use the default mapping when using the Simple JDBC Connector in cache.xml and without the field-mapping tag
@agingade This PR is ready to be reviewed. Please review it when you're available.
@masaki-yamakawa We had a team discussion around this PR, as it deviates from the recommended way Geode configuration is done. In the past, Geode/GemFire was built with cache.xml as the way to configure the system. Based on its drawbacks, and to keep overall cluster configuration easier, it was decided to adopt/build the Cluster Configuration Service, with gfsh as the tool to accomplish it (currently there is work being done to support the same through new management APIs). As the Cluster Configuration Service matured, the use of xml has gradually been deprecated. Also, you can achieve what you are looking for by exporting the cluster configuration info (once defined using gfsh, which produces xml output) and then importing it into any test/dev/prod system. Can this satisfy your requirement?

On another note, based on the PR review, it expects the user class to be on the server-side classpath; currently this is not a requirement when using PDX types/objects. One of the main features of PDX is to support data stores without needing a class definition on the server. The product takes care of this internally when one configures the jdbc mapping automatically. Please let us know your thoughts.
Trying to understand more about this requirement, as it differs from the expected way of configuring the system (using Cluster Configuration). I have left my comments on the main page of the PR.
@agingade I am using "Peer-to-Peer Configuration" to build the application. Specifically, it is an application that integrates Tomcat and Geode. I understand that gfsh and cluster configuration import/export can be used in this configuration. However, I would like to avoid that method because of its operational complexity and the risk of mistakes.
This PR has been open for a while with no recent activity. Please keep Geode tidy by closing PRs you're not actively using.
@agingade
@agingade
@masaki-yamakawa One change that could help code maintainability is moving your changes (which duplicate the code/logic from CreateMappingPreconditionCheckFunction) to RegionMapping as helper methods and then accessing them from both places. E.g.:
@agingade I agree with your suggestion. And I had implemented it that way in a previous change.
@masaki-yamakawa I am unable to see the unnecessary serialization caused by moving the code/logic from one place to another....
@agingade
…JdbcConnectorServiceImpl class
@agingade
@masaki-yamakawa
Thanks @jinmeiliao @agingade @jchen21 for the review.
Use the default mapping when using the Simple JDBC Connector in cache.xml and without the field-mapping tag
Thank you for submitting a contribution to Apache Geode.
In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:
For all changes:
- Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- Has your PR been rebased against the latest commit within the target branch (typically develop)?
- Is your initial contribution a single, squashed commit?
- Does gradlew build run cleanly?
- Have you written or updated unit tests to verify your changes?
- [-] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
Note:
Please ensure that once the PR is submitted, you check Concourse for build issues and
submit an update to your PR as soon as possible. If you need help, please send an
email to dev@geode.apache.org.