
GEODE-8626: Omitting field-mapping tag of cache.xml when using Simple JDBC Connector #5637

Merged
merged 3 commits into apache:develop on Sep 11, 2021

Conversation

masaki-yamakawa
Contributor

Use the default mapping when using the Simple JDBC Connector in cache.xml and without the field-mapping tag
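For illustration, a minimal sketch of the kind of cache.xml declaration this change targets. Element and attribute names follow the Geode jdbc XML namespace, but the exact names, region name, data source name, and class name here are illustrative assumptions, not taken from this PR:

```xml
<!-- Hypothetical cache.xml fragment: a jdbc mapping declared without any
     field-mapping child elements. With this change, default field mappings
     are derived from the table metadata and the PDX type, matching what
     "create jdbc-mapping" does in gfsh. -->
<region name="employees" refid="PARTITION">
  <jdbc:mapping data-source="employeeDataSource"
                table="EMPLOYEE"
                pdx-name="com.example.Employee"/>
  <!-- No <jdbc:field-mapping> entries: the default mapping is used. -->
</region>
```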

Thank you for submitting a contribution to Apache Geode.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

For all changes:

  • Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?

  • Has your PR been rebased against the latest commit within the target branch (typically develop)?

  • Is your initial contribution a single, squashed commit?

  • Does gradlew build run cleanly?

  • Have you written or updated unit tests to verify your changes?

  • [-] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?

Note:

Once the PR is submitted, please check Concourse for build issues and
submit an update to your PR as soon as possible. If you need help, please send an
email to dev@geode.apache.org.

@kohlmu-pivotal kohlmu-pivotal added this to Awaiting Verification in Quality Review Oct 27, 2020
@kohlmu-pivotal kohlmu-pivotal moved this from Awaiting Triage to Verified - Approved in Quality Review Oct 27, 2020
@kohlmu-pivotal
Contributor

@masaki-yamakawa I apologize for the delay in reviewing this change.

@jchen21 and @dschneider-pivotal would you be in a position to review this PR?

@jchen21
Contributor

jchen21 commented Nov 4, 2020

Reviewing the code.

(InternalCache) new CacheFactory().set("locators", "").set("mcast-port", "0").create();
Set<DistributedMember> targetMembers = findMembers(cache, null, null);

CliFunctionResult createRegionFuncResult = executeCreateRegionFunction(targetMembers);
Contributor

To create region, data store and mapping etc, I would recommend using GfshRule and execute gfsh commands, instead of calling the internal functions that implement the gfsh commands.

Contributor Author

I understand that it is recommended to use GfshRule.
With this change, I would like to achieve the same behavior in the gfsh command and in cache.xml.
So I have implemented the JdbcMappingIntegrationTest class to standardize the tests.
I have used the internal functions because GfshRule cannot verify some error cases.

Contributor

This is an integration test, which tests the behavior of gfsh commands from a user's point of view. I don't think this test should use internal functions of a specific gfsh command's implementation. The users should not have to worry about the gfsh implementation.

And I don't recommend using a lot of System.out.println in the tests. Is that for debugging purposes?

If you have to use internal functions, I would recommend testing them in a unit test, or some other integration test or DUnit that specifically tests a specific gfsh command. You said you use the internal functions because GfshRule cannot verify some error cases. Can you give a specific example?

Contributor Author

OK, I will try to implement the test using gfsh commands.

Contributor Author

I tried to implement the test using GfshRule, but I couldn't do it because I use cache.getService(JdbcConnectorService.class) to get the implementation of JdbcConnectorService to verify FieldMapping, and GfshRule does not allow it.
And I would like to make the condition setting part and the verification part of CacheXmlJdbcMappingIntegrationTest and GfshXmlJdbcMappingIntegrationTest common.

So I plan one of the following.

  1. Change the GfshJdbcMappingIntegrationTest to FunctionJdbcMappingIntegrationTest.
  2. Remove GfshJdbcMappingIntegrationTest. (I can use the describe jdbc-mapping command to verify it, but I don't think I should do that because it would reduce maintainability and make it difficult to commonize tests.)

Contributor

If you want to use gfsh and verify the field mapping, CreateMappingCommandDUnitTest has some good examples which use GfshCommandRule (not GfshRule though). You can see if you would like to add some more tests there. Typically, a DUnit is used for testing that involves gfsh commands. In the DUnit, you create a cluster with a locator and server(s) and execute the gfsh commands.

Using the functions that implement a specific gfsh command in integration tests is discouraged. That is not what an integration test should do.

Contributor Author

I removed GfshJdbcMappingIntegrationTest because I no longer need to implement it.

@@ -1142,7 +1142,7 @@ public void createMappingWithExistingQueueFails() {
+ " must not already exist.");
}

private static class Employee implements PdxSerializable {
public static class Employee implements PdxSerializable {
Contributor

Why this class has to be public?

Contributor Author

In the test, CreateMappingPreconditionCheckFunction is called and executed on several JVMs (distributed members).
In this function, a PDX class such as Employee is instantiated, so I need to be able to access this class even if it runs on another JVM.
If it is in private scope, the following exception will be thrown:

[error 2020/11/07 12:08:19.411 JST <Function Execution Processor2> tid=69] Could not generate a PdxType for the class org.apache.geode.connectors.jdbc.internal.cli.CreateMappingCommandDUnitTest$IdAndName because it did not have a public zero arg constructor. Details: java.lang.IllegalAccessException: Class org.apache.geode.connectors.jdbc.internal.JdbcConnectorServiceImpl can not access a member of class org.apache.geode.connectors.jdbc.internal.cli.CreateMappingCommandDUnitTest$IdAndName with modifiers "public"
org.apache.geode.connectors.jdbc.JdbcConnectorException: Could not generate a PdxType for the class org.apache.geode.connectors.jdbc.internal.cli.CreateMappingCommandDUnitTest$IdAndName because it did not have a public zero arg constructor. Details: java.lang.IllegalAccessException: Class org.apache.geode.connectors.jdbc.internal.JdbcConnectorServiceImpl can not access a member of class org.apache.geode.connectors.jdbc.internal.cli.CreateMappingCommandDUnitTest$IdAndName with modifiers "public"
    at org.apache.geode.connectors.jdbc.internal.JdbcConnectorServiceImpl.createInstance(JdbcConnectorServiceImpl.java:361)
    at org.apache.geode.connectors.jdbc.internal.JdbcConnectorServiceImpl.generatePdxTypeForClass(JdbcConnectorServiceImpl.java:334)
    at org.apache.geode.connectors.jdbc.internal.JdbcConnectorServiceImpl.getPdxTypeForClass(JdbcConnectorServiceImpl.java:320)
    at org.apache.geode.connectors.jdbc.internal.cli.CreateMappingPreconditionCheckFunction.executeFunction(CreateMappingPreconditionCheckFunction.java:74)
    at org.apache.geode.management.cli.CliFunction.execute(CliFunction.java:37)
    ...

Other similar tests like CreateMappingCommandForProxyRegionDUnitTest also use public scope.
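The access failure quoted in the log above can be reproduced with plain Java reflection, independent of Geode. A minimal sketch (class and method names here are hypothetical stand-ins, not Geode code): getConstructor() still returns the public zero-arg constructor of a private nested class, but newInstance() called from another class fails with IllegalAccessException, because the class itself is inaccessible even though its constructor is declared public.

```java
import java.lang.reflect.Constructor;

// Stand-in for a serializer in another class (the role JdbcConnectorServiceImpl
// plays in the log above) that reflectively instantiates a user-supplied class.
class ReflectiveInstantiator {
    static String describe(Class<?> clazz) {
        try {
            // getConstructor() returns the public zero-arg constructor even
            // when the declaring class itself is private.
            Constructor<?> ctor = clazz.getConstructor();
            ctor.newInstance();
            return "ok";
        } catch (ReflectiveOperationException e) {
            return e.getClass().getSimpleName();
        }
    }
}

public class NestedClassAccessDemo {
    // Private nested class: access check on newInstance() fails from outside
    // the enclosing class, despite the public constructor.
    private static class PrivateEmployee { public PrivateEmployee() {} }
    // Public nested class: instantiation from another class succeeds.
    public static class PublicEmployee { public PublicEmployee() {} }

    static String tryPrivate() { return ReflectiveInstantiator.describe(PrivateEmployee.class); }
    static String tryPublic()  { return ReflectiveInstantiator.describe(PublicEmployee.class); }

    public static void main(String[] args) {
        System.out.println("public:  " + tryPublic());   // ok
        System.out.println("private: " + tryPrivate());  // IllegalAccessException
    }
}
```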

Contributor

On the surface, it seems a trivial change from private to public, in order to fix the exception. However, under the hood, it is doing unnecessary serialization and deserialization of the Employee class, because getPdxTypeForClass(), generatePdxTypeForClass() and createInstance() are moved from CreateMappingPreconditionCheckFunction to JdbcConnectorServiceImpl, probably because you want to use these methods in RegionMappingConfiguration.

I would recommend keeping getPdxTypeForClass(), generatePdxTypeForClass() and createInstance() in CreateMappingPreconditionCheckFunction. And in RegionMappingConfiguration, implement similar methods. Though it looks like duplicating the code, it does avoid unnecessary serialization and deserialization of user defined classes like Employee, which is more important.

And I don't think adding more methods to JdbcConnectorService is necessary. E.g. createDefaultFieldMapping() is something you want to achieve automatically for the users, so it does not need to be exposed as an API. Just making it a private method in RegionMappingConfiguration should be good. The same reason applies to getTableMetaDataView() and getPdxTypeForClass(). Basically, keep JdbcConnectorService unchanged.

Contributor Author

Thank you for your detailed explanation. I have reverted the CreateMappingPreconditionCheckFunction and JdbcConnectorService to the original source code. And I have implemented similar methods in RegionMappingConfiguration.

@@ -1185,7 +1185,7 @@ public void fromData(PdxReader reader) {
}
}

private static class EmployeeNumeric implements PdxSerializerObject {
public static class EmployeeNumeric implements PdxSerializerObject {
Contributor

Why this class has to be public?

Contributor Author

The reason is the same as previous one.

@@ -1234,7 +1234,7 @@ void setRefid(long refid) {
}
}

private static class IdAndName implements PdxSerializable {
public static class IdAndName implements PdxSerializable {
Contributor

Why this class has to be public?

Contributor Author

The reason is the same as previous one.

DataSource dataSource = getDataSource(regionMapping.getDataSourceName());
if (dataSource == null) {
throw new JdbcConnectorException("No datasource \"" + regionMapping.getDataSourceName()
+ "\" found when getting table meta data \"" + regionMapping.getRegionName() + "\"");
Contributor

The data source has nothing to do with the table metadata or the region name. I recommend removing this part of the error message.

Contributor Author

The data source is used to get the table metadata.
I would like to confirm your suggestion.
Your suggestion is the following, right?

if (dataSource == null) {
//  throw new JdbcConnectorException("No datasource \"" + regionMapping.getDataSourceName()
//      + "\" found when getting table meta data \"" + regionMapping.getRegionName() + "\"");
  throw new JdbcConnectorException();
}

Contributor

I was trying to say:

if (dataSource == null) {
      throw new JdbcConnectorException("No datasource \"" + regionMapping.getDataSourceName() + "\"");
}

If you would like to provide more information,

if (dataSource == null) {
      throw new JdbcConnectorException("No datasource \"" + regionMapping.getDataSourceName()
          + "\" found when getting table meta data \"" + regionMapping.getTableName() + "\"");
}

Note that it is regionMapping.getTableName(), not regionMapping.getRegionName().

Contributor Author

Thank you.
I would like to fix it as you suggested.

Contributor Author

I have fixed it.

DataSource dataSource = getDataSource(regionMapping.getDataSourceName());
if (dataSource == null) {
throw new JdbcConnectorException("No datasource \"" + regionMapping.getDataSourceName()
+ "\" found when creating mapping \"" + regionMapping.getRegionName() + "\"");
Contributor

The data source has nothing to do with the table metadata or the region name. I recommend removing this part of the error message.

Contributor Author

I have fixed it.

@@ -306,45 +292,10 @@ public void executeFunctionThrowsGivenPdxSerializableWithNoZeroArgConstructor()
"Could not generate a PdxType for the class org.apache.geode.connectors.jdbc.internal.cli.CreateMappingPreconditionCheckFunctionTest$PdxClassDummyNoZeroArg because it did not have a public zero arg constructor. Details: java.lang.NoSuchMethodException: org.apache.geode.connectors.jdbc.internal.cli.CreateMappingPreconditionCheckFunctionTest$PdxClassDummyNoZeroArg.<init>()");
}

@Test
Contributor

Some functions of CreateMappingPreconditionCheckFunction are moved to JdbcConnectorServiceImpl. So if you remove this test here, there should be similar test coverage added in JdbcConnectorServiceTest. In JdbcConnectorServiceTest there are some tests that test ReflectionBasedAutoSerializer. However, those tests don't verify FieldMapping like this test.

Contributor Author

As you mentioned, I have moved some methods.
In addition, I have reduced the granularity of the tests.
The corresponding test for executeFunctionGivenNonPdxUsesReflectionBasedAutoSerializer is JdbcConnectorServiceTest.getPdxTypeForClassSucceedsWithGivenNonPdxUsesReflectionBasedAutoSerializer.
This test verifies the PdxType. The PdxType is then passed to createDefaultFieldMapping to create the FieldMapping.
createDefaultFieldMapping is tested with JdbcConnectorServiceTest.createDefaultFieldMappingXXX, which verifies the FieldMapping at this level.

Contributor

I understand that. However, none of the JdbcConnectorServiceTest.createDefaultFieldMapping* tests uses ReflectionBasedAutoSerializer, although those tests do verify the field mapping. I expect there to be some test that uses ReflectionBasedAutoSerializer and verifies the field mapping as well, like the test deleted below.

Contributor Author

I will add the test using the ReflectionBasedAutoSerializer and verify the field mapping.

Contributor Author

I have added the mappingSuccessWhenFieldMappingsAreOmittedWithNonSerializedClass test method, which uses ReflectionBasedAutoSerializer, to JdbcMappingIntegrationTest.

@masaki-yamakawa
Contributor Author

@jchen21 Thank you for your review.
I have fixed some of the source code and added comments.


@Test
public void getTableMetaDataViewThrowsExceptionWhenGetConnectionHasSqlException()
throws SQLException {
when(dataSource.getConnection()).thenThrow(SQLException.class);
Contributor

Minor nitpick. I would recommend an error message for the SQLException, e.g.

String reason = "connection failed";
when(dataSource.getConnection()).thenThrow(new SQLException(reason));

Then the error message in the next few lines would be Exception thrown while connecting to datasource \"dataSource\": connection failed, instead of Exception thrown while connecting to datasource \"dataSource\": null

Contributor Author

Thank you for your recommendation. I have fixed it.



@masaki-yamakawa
Contributor Author

@jchen21
I have fixed all suggestions and added comments.
Please re-review when you're available.

@jchen21
Contributor

jchen21 commented Jan 9, 2021

@masaki-yamakawa I have reviewed your code again. Please see the comments in the code.

@jchen21 jchen21 requested review from agingade and BenjaminPerryRoss and removed request for BenjaminPerryRoss January 9, 2021 01:55
@masaki-yamakawa
Contributor Author

@jchen21 Thank you for your review. I have fixed the code following your suggestion.

createEmployeeTable();

Throwable throwable =
catchThrowable(() -> createCacheAndCreateJdbcMappingWithWrongPdxName("NoPdxName"));
Contributor

This is a bug. I believe the string in quotes should be WrongPdxName instead of NoPdxName. There is no such file as CacheXmlJdbcMappingIntegrationTest.NoPdxName.cache.xml, so the actual error is something like a FileNotFoundException returned by getXmlFileForTest(), and the assertions at lines 241 and 242 don't really catch the bug.

return cache;
}

private InternalCache createCacheAndCreateJdbcMappingWithWrongPdxName(String cacheXmlTestName)
Contributor

This method implementation is exactly the same as createCacheAndCreateJdbcMapping(). The only difference is the method name. So how about just using createCacheAndCreateJdbcMapping()?

}

private InternalCache createCacheAndCreateJdbcMapping(String cacheXmlTestName)
throws Exception {
Contributor

This throws Exception clause can be removed. The same applies to lines 93, 109 and 119.


private InternalCache createCacheAndCreateJdbcMappingWithNonSerializedClass(
String cacheXmlTestName) throws Exception {
return createCacheAndCreateJdbcMapping(cacheXmlTestName);
Contributor

I would recommend inlining this one-line implementation for readability.

}
}

private List<FieldMapping> createDefaultFieldMapping(RegionMapping regionMapping,
Contributor

The regionMapping argument is not used, and can be removed.

@jchen21
Contributor

jchen21 commented Jan 12, 2021

@masaki-yamakawa Once you are ready, please click the re-request review button on the reviewers section.

Contributor

@jchen21 jchen21 left a comment

Awesome. Code looks good to me. Thank you @masaki-yamakawa!

When you merge, please squash your commits into a single one if possible.

… JDBC Connector

Use the default mapping when using the Simple JDBC Connector in cache.xml and without the field-mapping tag
@masaki-yamakawa
Contributor Author

@agingade This PR is ready to be reviewed. Please review it when you're available.

@agingade


@masaki-yamakawa
Participants in team discussion: @jchen21 @dschneider-pivotal
Thanks for your PR. Sorry for the delay in responding to this.

We had a team discussion around this PR, as it deviates from the recommended way Geode configuration is done. In the past Geode/GemFire was built with cache.xml as the way to configure the system; based on its drawbacks, and to keep overall cluster configuration easier, it was decided to adopt/build the Cluster Configuration Service with gfsh as the tool to accomplish it (currently there is work being done to support the same through new management APIs). As the Cluster Configuration Service has matured, the use of xml is gradually being deprecated.
The change proposed with this ticket/PR feels like going back to the cache.xml way of configuring the system. Can you please let us know the reason behind not using the Cluster Configuration Service and going with cache.xml? We would like to understand more about the use case/requirement and see if there are any better options.

Also, you can achieve what you are looking for by exporting the cluster configuration info (once defined using gfsh, which is an xml output) and then importing it into any test/dev/prod system. Would this suffice for your requirement?

On another note, based on the PR review, it expects the user class to be in the server-side classpath; currently this is not a requirement when using a PDX type/object. One of the main functionalities of PDX is to support data stores without needing a class definition on the server. The product takes care of this internally when one configures the jdbc mapping automatically.

Please let us know your thoughts.

@agingade agingade left a comment

Trying to understand more about this requirement, as it differs from the expected way of configuring the system (using Cluster Configuration). I have left my comments on the main page of the PR.

@masaki-yamakawa
Contributor Author

@agingade
Thanks for your comment.

I am using "Peer-to-Peer Configuration" to build the application. Specifically, it is an application that integrates Tomcat and Geode.
It achieves high performance by managing data in the app heap (Geode). The data in Geode is then persisted to the RDB asynchronously.

I understand that gfsh and cluster configuration import/export can be used in this configuration. However, I would like to avoid this method because of the operational complexity and the risk of mistakes.
Thus I want a configuration file from which the region definitions are read when the app is launched.
Nowadays, yml, json and annotations are commonly used.
Geode does not support these, so I chose to use XML.

  • I know Spring Data for Apache Geode, but I don't use it because I want to use Geode's native API.

@onichols-pivotal
Contributor

This PR has been open for some while with no recent activity. Please keep Geode tidy by closing PRs you're not actively using.

@masaki-yamakawa
Contributor Author

@agingade
This PR is to make the current behavior of using cache.xml closer to the behavior of using gfsh.
I would like to consider it separately from the discussion of whether to use cache.xml in the future.
What do you think?

@onichols-pivotal onichols-pivotal marked this pull request as draft May 4, 2021 02:24
@masaki-yamakawa masaki-yamakawa marked this pull request as ready for review May 30, 2021 09:29
@masaki-yamakawa
Contributor Author

@agingade
I would like to take this PR to the next step.
Is there anything I can do?

@agingade

agingade commented Jun 3, 2021

@masaki-yamakawa
My worry is abandoning the xml support in the near future. If you insist on going with this approach, it's fine with me.

One change that could help code maintainability is moving your changes (which duplicate the code/logic from CreateMappingPreconditionCheckFunction) to RegionMapping as helper methods and then accessing them from both places.

E.g.:
if (mapping.getFieldMappings().isEmpty() && mapping.getPdxName() != null) {
  mapping.createMappingUsingPdx();
}

@masaki-yamakawa
Contributor Author

@agingade
Thank you for your concern. I have no problem.

I agree with your suggestion. And I had implemented it that way in a previous change.
But the review resulted in the current implementation.
#5637 (comment)

@agingade

@masaki-yamakawa
I am not sure what this comment means:

Though it looks like duplicating the code, it does avoid unnecessary serialization and deserialization of user defined classes like Employee, which is more important.

I am unable to see how moving the code/logic from one place to another causes unnecessary serialization.
From a maintenance point of view, it's better to keep it in one place (as helper methods called from multiple places).

@masaki-yamakawa
Contributor Author

@agingade
Thank you for your comment.
I'll try again.

@masaki-yamakawa
Contributor Author

@agingade
I have moved the common methods of the CreateMappingPreconditionCheckFunction and RegionMappingConfiguration classes to the JdbcConnectorServiceImpl class.
If I move the createInstance method, the existing test (CreateMappingCommandDUnitTest) does not pass, so I have left the related methods in the CreateMappingPreconditionCheckFunction and RegionMappingConfiguration classes.

@agingade

@masaki-yamakawa
Thanks for your contribution to Geode. Appreciate your patience in getting this done. It was a tricky one as it was adding a new way to build field mapping.

@kirklund kirklund removed their request for review June 24, 2021 18:03
@masaki-yamakawa
Contributor Author

Thanks @jinmeiliao @agingade @jchen21 for the review.
Is this PR OK to merge?
If there's anything I should do, please let me know.

@onichols-pivotal onichols-pivotal merged commit 723429f into apache:develop Sep 11, 2021
Projects: Quality Review (Verified - Approved)
6 participants