Fix the data sync issue #93
Conversation
Walkthrough

A new batch retrieval method was added to the data synchronization repository. The data sync service was enhanced to process beneficiary details in batches, marking them as processed after updates. The controller was updated to extract a JWT token from request cookies and pass it along during sync. The utility for creating REST requests was refactored to improve JWT token extraction from request headers and cookies.
Actionable comments posted: 4
Review details

- Configuration used: CodeRabbit UI
- Review profile: CHILL
- Plan: Pro
Files selected for processing (2):

- src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java (1 hunk)
- src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (2 hunks)
Context from checks skipped due to timeout of 90000ms (4):
- GitHub Check: Package-test
- GitHub Check: style-check / checkstyle
- GitHub Check: Build
- GitHub Check: Analyze (java)
Additional comments (2)

src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (2)

Lines 63-71: Code logic looks correct for the new table handling. The conditional logic properly routes the "i_beneficiarydetails" table to the new batch processing method, which should help with the data sync issue for large volumes.

Lines 354-364: Query construction is correct and secure. The UPDATE query is properly constructed using StringBuilder with parameterized values, which prevents SQL injection vulnerabilities.
Actionable comments posted: 1
Duplicate comments (3)

src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (3)

Line 76: Make the batch size configurable instead of hardcoded. The hardcoded batch size persists from previous reviews. This should be configurable through application properties for better maintainability and performance tuning.

```diff
+@Value("${sync.batch.size:500}")
+private int batchSize;
+
-int batchSize = 500;
+// Use the injected batchSize field instead
```
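Mirroring the `@Value("${sync.batch.size:500}")` suggestion above, the default-plus-override behavior can be exercised in isolation. This is a plain-Java sketch: in the actual service Spring would inject the value, while this hypothetical helper reads the same `sync.batch.size` key from a `java.util.Properties` object.

```java
import java.util.Properties;

// Hypothetical stand-in for Spring's @Value("${sync.batch.size:500}") lookup:
// read sync.batch.size from configuration, falling back to a default of 500.
public class SyncConfig {
    static final int DEFAULT_BATCH_SIZE = 500;

    public static int batchSize(Properties props) {
        String raw = props.getProperty("sync.batch.size");
        if (raw == null || raw.trim().isEmpty()) {
            return DEFAULT_BATCH_SIZE; // mirrors the ":500" default in the placeholder
        }
        return Integer.parseInt(raw.trim());
    }
}
```

In the real service the field would be injected once by the Spring container; this sketch only demonstrates how the default applies when the property is absent.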
Lines 80-88: Add safeguards to prevent infinite loops. The `while(true)` loop lacks proper termination safeguards and could run indefinitely if there are issues with data processing or the batch method returns inconsistent results.

```diff
+private static final int MAX_ITERATIONS = 10000; // Safety limit
+
 public void syncTableFromVan(String schema, String tableName, String columnNames, String whereClause, String insertQuery) {
     int batchSize = 500;
     int offset = 0;
-    boolean hasMoreData = true;
+    int iterations = 0;

-    while (hasMoreData) {
+    while (iterations < MAX_ITERATIONS) {
         List<Map<String, Object>> batch = dataSyncRepositoryCentral.getBatchForTable(
             schema, tableName, columnNames, whereClause, batchSize, offset);

         if (batch == null || batch.isEmpty()) {
             logger.info("No more records to sync for table: {}", tableName);
             System.out.println("Test:No more records to sync for table: " + tableName);
             break;
         }

         // ... rest of processing logic

         offset += batchSize;
+        iterations++;
     }
+
+    if (iterations >= MAX_ITERATIONS) {
+        throw new RuntimeException("Batch processing exceeded maximum iterations limit");
+    }
 }
```
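The bounded-loop pattern behind this suggestion can be tested in isolation. The sketch below is an assumption-laden stand-in: `fetchBatch` slices an in-memory list where the real code would call `dataSyncRepositoryCentral.getBatchForTable(...)`, and the class and method names are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of a paginated sync loop with both a normal exit
// (empty batch) and a hard safety limit on iterations.
public class BoundedBatchLoop {
    static final int MAX_ITERATIONS = 10_000; // safety limit against runaway loops

    public static int syncAll(List<String> records, int batchSize) {
        int offset = 0, processed = 0, iterations = 0;
        while (iterations < MAX_ITERATIONS) {
            List<String> batch = fetchBatch(records, batchSize, offset);
            if (batch.isEmpty()) {
                break; // normal termination: no more rows
            }
            processed += batch.size(); // real processing would happen here
            offset += batchSize;
            iterations++;
        }
        if (iterations >= MAX_ITERATIONS) {
            throw new RuntimeException("Batch processing exceeded maximum iterations limit");
        }
        return processed;
    }

    // Stand-in for the repository's paged fetch (hypothetical)
    static List<String> fetchBatch(List<String> all, int limit, int offset) {
        if (offset >= all.size()) return new ArrayList<>();
        return all.subList(offset, Math.min(offset + limit, all.size()));
    }
}
```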
Lines 75-103: Add transaction management for batch operations. Large batch operations should be wrapped in transactions to ensure data consistency and provide rollback capability in case of failures.

```diff
+import org.springframework.transaction.annotation.Transactional;

+@Transactional
 public void syncTableFromVan(String schema, String tableName, String columnNames, String whereClause, String insertQuery) {
     // existing implementation
 }
```

Also consider adding proper error handling and logging for batch processing operations to aid in debugging sync issues.
Nitpick comments (3)

src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (1)

Line 57: Standardize the logging approach. The code uses both the SLF4J logger and System.out.println for similar messages. This creates inconsistency and makes log management difficult. Choose one approach for consistency:

- Use the SLF4J logger for production logging
- Use System.out.println only for temporary debugging

```diff
 logger.info("Starting sync for table: {}", tableName);
-System.out.println("Test:Starting sync for table: " + tableName);
```

Also applies to: 62-62, 67-67, 86-86, 100-100
src/main/java/com/iemr/mmu/service/dataSyncActivity/UploadDataToServerImpl.java (2)

Lines 139-140: Reconsider the debugging approach for production readiness. Multiple System.out.println statements have been added throughout the file, often alongside commented-out logger statements. While this is useful for debugging, consider the following for production:

- Temporary vs. permanent: if these are temporary debugging statements, ensure they are removed before production deployment
- Log level management: use the logger at DEBUG level instead of System.out.println for better log management
- Performance impact: System.out.println can hurt performance in high-throughput scenarios

For production deployment, consider:

```diff
-System.out.println("Piramal: SyncData="+new Gson().toJson(syncData));
+logger.debug("SyncData: {}", new Gson().toJson(syncData));
```

Also applies to: 144-145, 153-154, 162-166, 174-175, 283-283, 308-310, 317-318, 334-334, 352-354, 379-380, 384-385, 387-387, 425-425, 428-428
Line 155: Performance consideration for JSON serialization in debug statements. The debug statements perform JSON serialization with `new Gson().toJson()`, which can be expensive for large datasets, especially since they execute in production code paths. Consider conditional serialization or the logger's lazy evaluation:

```diff
-System.out.println("Piramal: SyncData="+new Gson().toJson(syncData));
+if (logger.isDebugEnabled()) {
+    logger.debug("SyncData: {}", new Gson().toJson(syncData));
+}
```

Also applies to: 165-166, 309-310, 334-334, 384-385, 425-425
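The lazy-evaluation idea can also be expressed with a `Supplier`, so the expensive serialization never runs unless debug output is enabled. This is an illustrative sketch: the counter stands in for a `Gson.toJson(...)` call, and the class and method names are hypothetical.

```java
import java.util.function.Supplier;

// Sketch of lazy debug logging: the message (and its expensive serialization)
// is only computed when debug output is actually enabled.
public class LazyDebug {
    static boolean debugEnabled = false;
    static int serializations = 0; // counts how often the "expensive" work ran

    static void debug(Supplier<String> message) {
        if (debugEnabled) {
            System.out.println(message.get()); // serialization happens only here
        }
    }

    // Stand-in for new Gson().toJson(data)
    static String expensiveToJson(Object data) {
        serializations++;
        return "{\"data\":\"" + data + "\"}";
    }
}
```

Because the `Supplier` is only invoked inside the guard, disabled debug calls cost a single boolean check rather than a full JSON serialization.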
Files selected for processing (3):

- src/main/java/com/iemr/mmu/service/dataSyncActivity/DataSyncRepository.java (1 hunk)
- src/main/java/com/iemr/mmu/service/dataSyncActivity/UploadDataToServerImpl.java (7 hunks)
- src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (1 hunk)
Additional context used

GitHub Actions: Package

- src/main/java/com/iemr/mmu/service/dataSyncActivity/UploadDataToServerImpl.java: [warning] Checkstyle warnings: some methods exceed the maximum allowed length; multiple lines longer than 120 characters; 'if' and 'else' constructs must use braces; naming convention violations
- src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java: [error] line 81: compilation error: cannot find symbol method getBatchForTable(String, String, String, String, int, int) in DataSyncRepositoryCentral

GitHub Actions: Build On Pull Request

- src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java: [error] line 81: compilation error: cannot find symbol method getBatchForTable(String, String, String, String, int, int) in DataSyncRepositoryCentral

GitHub Actions: Call Checkstyle

- src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java: [error] line 81: compilation error: cannot find symbol method getBatchForTable(String, String, String, String, int, int) in variable dataSyncRepositoryCentral of type DataSyncRepositoryCentral
Context from checks skipped due to timeout of 90000ms; you can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (java)
Additional comments (1)

src/main/java/com/iemr/mmu/service/dataSyncActivity/DataSyncRepository.java (1)

Line 88: Debug output added for result set visibility. The debug print statement helps trace the data being retrieved during synchronization operations, which aligns with the overall debugging approach in this PR.
Force-pushed from 7ab2343 to 3ea78f0.
Actionable comments posted: 3
Nitpick comments (3)

src/main/java/com/iemr/mmu/utils/RestTemplateUtil.java (1)

Lines 26-37: Remove commented code to improve code cleanliness. The commented code is no longer needed and clutters the codebase.

```diff
-// if (jwtToken == null || jwtToken.isEmpty()) {
-//     ServletRequestAttributes attrs =
-//             (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
-//     if (attrs != null) {
-//         HttpServletRequest request = attrs.getRequest();
-//         try {
-//             jwtToken = CookieUtil.getJwtTokenFromCookie(request);
-//         } catch (Exception e) {
-//             logger.error("Error while getting JWT token from cookie: {}", e.getMessage());
-//         }
-//     }
-// }
```

src/main/java/com/iemr/mmu/controller/dataSyncLayerCentral/MMUDataSyncVanToServer.java (2)
Line 66: Remove the unnecessary blank line. It adds no value and should be removed for cleaner code.

Line 66: Remove the test prefix from production logging. The log statement contains a "test" prefix which should be removed from production code.

```diff
-logger.info("test: vanto server auth="+Authorization);
+logger.debug("Authorization header received for data sync");
```

Also consider changing the log level to DEBUG, as authorization details are sensitive.
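For context on the cookie-based JWT extraction discussed in this review, here is a minimal sketch of the parsing step, written against a raw `Cookie` header string rather than `HttpServletRequest` so it stays dependency-free. The cookie name and class name are assumptions for illustration, not the project's actual `CookieUtil` implementation.

```java
// Hypothetical sketch: extract a named cookie value from a raw Cookie header.
// A servlet-based implementation would instead iterate request.getCookies().
public class JwtCookieParser {
    public static String jwtFromCookieHeader(String cookieHeader, String cookieName) {
        if (cookieHeader == null) return null;
        for (String pair : cookieHeader.split(";")) {
            String[] kv = pair.trim().split("=", 2); // split only on the first '='
            if (kv.length == 2 && kv[0].equals(cookieName)) {
                return kv[1];
            }
        }
        return null; // cookie not present
    }
}
```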
Files selected for processing (3):

- src/main/java/com/iemr/mmu/controller/dataSyncLayerCentral/MMUDataSyncVanToServer.java (2 hunks)
- src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (3 hunks)
- src/main/java/com/iemr/mmu/utils/RestTemplateUtil.java (2 hunks)
Files skipped from review as they are similar to previous changes (1):
- src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java
Additional context used

Code Graph Analysis (1)

src/main/java/com/iemr/mmu/controller/dataSyncLayerCentral/MMUDataSyncVanToServer.java (1)

- src/main/java/com/iemr/mmu/utils/response/OutputResponse.java: OutputResponse (37-206)
Context from checks skipped due to timeout of 90000ms (2):
- GitHub Check: Analyze (java)
- GitHub Check: Package-test
Additional comments (4)

src/main/java/com/iemr/mmu/controller/dataSyncLayerCentral/MMUDataSyncVanToServer.java (4)

Line 39: Import addition looks good. The import for CookieUtil is properly added and used in the method implementation.

Line 43: Import addition looks good. The import for HttpServletRequest is properly added and used in the method signature.

Line 63: Method signature change is appropriate. The addition of the HttpServletRequest request parameter enables JWT token extraction from cookies, which aligns with the PR objective of fixing data sync issues.

Line 71: Service method call update is correct. The addition of the jwtToken parameter to the service method call properly passes the extracted token for further processing.
Actionable comments posted: 2
Files selected for processing (2):

- src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (3 hunks)
- src/main/resources/application.properties (2 hunks)
Files skipped from review as they are similar to previous changes (1):
- src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java
Context from checks skipped due to timeout of 90000ms (1):
- GitHub Check: Analyze (java)
```java
logger.debug("Checking record existence query: {} with params: {}", query, Arrays.toString(queryParams));

try {
    List<Map<String, Object>> resultSet = jdbcTemplate.queryForList(query, queryParams);
```

Check failure (Code scanning / CodeQL): Query built from user-controlled sources
```java
try {
    if (params.isEmpty()) {
        resultSetList = jdbcTemplate.queryForList(finalQuery);
```

Check failure (Code scanning / CodeQL): Query built from user-controlled sources

Copilot Autofix (AI, 3 months ago)
To address the SQL injection vulnerability, the fix involves validating and sanitizing all user-controlled inputs before incorporating them into the query. Specifically:

- Validation: ensure that schema and table names match predefined acceptable values or conform to strict patterns (e.g., alphanumeric characters only).
- Parameterization: use placeholders (`?`) for all dynamic parts of the query, and bind values using `PreparedStatement` or equivalent methods. While `JdbcTemplate` does not support parameterization for table or schema names, validation can mitigate the risk.

Changes will be made in DataSyncRepositoryCentral.java to validate schema and table names against predefined lists or patterns before constructing the query. If validation fails, an exception will be thrown.
```diff
@@ -119,6 +119,10 @@
 public List<Map<String, Object>> getMasterDataFromTable(String schema, String table, String columnNames,
         String masterType, Timestamp lastDownloadDate, Integer vanID, Integer psmID) throws Exception {
     jdbcTemplate = getJdbcTemplate();
+    // Validate schema and table names
+    if (!isValidSchemaName(schema) || !isValidTableName(table)) {
+        throw new RuntimeException("Invalid schema or table name provided.");
+    }
     List<Map<String, Object>> resultSetList = new ArrayList<>();
     StringBuilder baseQueryBuilder = new StringBuilder(" SELECT ").append(columnNames).append(" FROM ").append(schema).append(".").append(table);
     List<Object> params = new ArrayList<>();
@@ -186,4 +190,13 @@
 }

 // End of Data Download Repository
+private boolean isValidSchemaName(String schema) {
+    // Example validation: check against a predefined list or pattern
+    return schema != null && schema.matches("[a-zA-Z0-9_]+");
+}
+
+private boolean isValidTableName(String table) {
+    // Example validation: check against a predefined list or pattern
+    return table != null && table.matches("[a-zA-Z0-9_]+");
+}
 }
```
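The validation pattern from the autofix can be factored into a small standalone helper. The class name and the decision to throw `IllegalArgumentException` are illustrative choices, but the regex matches the one suggested above.

```java
import java.util.regex.Pattern;

// Sketch of identifier validation for schema/table names before query assembly.
public class SqlIdentifiers {
    private static final Pattern IDENTIFIER = Pattern.compile("[a-zA-Z0-9_]+");

    public static boolean isValidIdentifier(String name) {
        return name != null && IDENTIFIER.matcher(name).matches();
    }

    // Builds "schema.table" only when both parts pass validation.
    public static String qualifiedName(String schema, String table) {
        if (!isValidIdentifier(schema) || !isValidIdentifier(table)) {
            throw new IllegalArgumentException("Invalid schema or table name provided.");
        }
        return schema + "." + table;
    }
}
```

Because the pattern rejects spaces, quotes, and semicolons, any injection payload smuggled into the schema or table argument fails fast instead of reaching the query string.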
```java
String query = "SELECT " + columnNames + " FROM " + schema + "." + table + whereClause + " LIMIT ? OFFSET ?";
logger.debug("Fetching batch for beneficiary details. Query: {}, Limit: {}, Offset: {}", query, limit, offset);
try {
    return jdbcTemplate.queryForList(query, limit, offset);
```

Check failure (Code scanning / CodeQL): Query built from user-controlled sources
Actionable comments posted: 1
Duplicate comments (1)

src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (1)

Lines 238-240: Make the batch size configurable and add transaction management. The hardcoded batch size should be configurable, and the entire batch processing should be wrapped in a transaction for data consistency.

```diff
+@Value("${sync.batch.size:1000}")
+private int batchSize;

+@Transactional
 public String update_I_BeneficiaryDetails_for_processed_in_batches(SyncUploadDataDigester syncUploadDataDigester) {
     // ... existing code ...
-    int limit = 1000;
+    int limit = batchSize;
     // ... rest of implementation
 }
```
Files selected for processing (2):

- src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java (1 hunk)
- src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (1 hunk)
Additional context used

Code Graph Analysis (1)

src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (5)

- src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java: Service (17-175)
- src/main/java/com/iemr/mmu/service/dataSyncActivity/UploadDataToServerImpl.java: Service (68-437)
- src/main/java/com/iemr/mmu/service/dataSyncActivity/DownloadDataFromServerTransactionalImpl.java: Service (60-285)
- src/main/java/com/iemr/mmu/service/dataSyncActivity/DataSyncRepository.java: Service (43-123)
- src/main/java/com/iemr/mmu/data/syncActivity_syncLayer/SyncUploadDataDigester.java: SyncUploadDataDigester (27-121)
Additional comments (7)

src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java (3)

Lines 19-30: Good practice: lazy initialization of JdbcTemplate. The lazy initialization pattern ensures the DataSource is available when needed. This is a solid improvement over direct injection.

Lines 35-88: Enhanced error handling and logging implementation. The refactored method shows significant improvements:

- Proper parameterized query building
- Comprehensive exception handling with specific logging
- Clear debug messages for troubleshooting

Lines 161-172: Verify the SQL injection safeguards in getBatchForBenDetails. Although `limit` and `offset` are passed as parameters, both `columnNames` and `whereClause` are still concatenated directly into your SQL string. Any untrusted or malformed input there can open the door to SQL injection. Please confirm that all dynamic parts are strictly controlled or sanitized before they reach this method. In particular:

- DataSyncRepositoryCentral.java (lines 161-172): getBatchForBenDetails concatenates `columnNames` and `whereClause` into `"SELECT " + columnNames + " FROM " + schema + "." + table + whereClause + " LIMIT ? OFFSET ?"`.
- GetDataFromVanAndSyncToDBImpl.java: review how `problematicWhereClause` (the `whereClause` passed in) is constructed. Ensure it never contains unchecked user input or string fragments that could be manipulated.
- SyncUploadDataDigester.java (or wherever `getServerColumns()` is implemented): verify that the list of columns returned is built from a safe, whitelist-driven source, not from arbitrary input.

If any of these values can vary based on external or user-supplied data, switch to parameterized queries or implement a strict whitelist/sanitization layer for column names and clauses before concatenation.
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (4)

Lines 29-43: Well-structured table grouping approach. The static TABLE_GROUPS map provides good organization for batch synchronization. The grouping by data characteristics (master, transactional, high-volume) is logical.

Lines 45-139: Improved architecture with token support and group-based processing. The refactored method adds proper token handling and implements group-based table synchronization. The error handling and logging are comprehensive.

Lines 287-449: Comprehensive generic sync implementation. The generic table sync method provides thorough handling of insert/update logic with proper error handling and logging. The facility ID processing for different table types is well implemented.

Lines 512-528: Useful helper method for debugging failed records. The getFailedRecords method provides valuable debugging information for batch operation failures, helping identify the specific records that failed processing.
```java
System.out.println("Executing batch operation for table: " + tableName + ". Query type: " + (query.startsWith("INSERT") ? "INSERT" : "UPDATE") + ". Number of records: " + syncDataList.size());
try {
    // Start batch insert/update
    int[] i = jdbcTemplate.batchUpdate(query, syncDataList);
```

Check failure (Code scanning / CodeQL): Query built from user-controlled sources

Copilot Autofix (AI, 3 months ago)
The best way to fix the problem is to use parameterized SQL queries to eliminate the risk of SQL injection. The schema name and table name should not be concatenated directly into the query string. Instead, these values should be validated or restricted to a predefined list of acceptable values. Alternatively, dynamic queries can be avoided entirely by using predefined queries or stored procedures.

Steps for fixing the problem:

- Validate or restrict `schemaName` and `tableName` to a predefined list of acceptable values.
- Replace the dynamically constructed query with a parameterized query that uses placeholders (`?`) for variable values.
- Pass the actual values as parameters during query execution using `PreparedStatement` or the equivalent mechanisms supported by `JdbcTemplate`.
```diff
@@ -228,19 +228,25 @@
 }

 private String getqueryFor_M_BeneficiaryRegIdMapping(String schemaName, String tableName) {
-    StringBuilder queryBuilder = new StringBuilder(" UPDATE ");
-    queryBuilder.append(schemaName).append(".").append(tableName);
-    queryBuilder.append(" SET ");
-    queryBuilder.append("Provisioned = true, SyncedDate = now(), syncedBy = ?");
-    queryBuilder.append(" WHERE ");
-    queryBuilder.append(" BenRegId = ? ");
-    queryBuilder.append(" AND ");
-    queryBuilder.append(" BeneficiaryID = ? ");
-    queryBuilder.append(" AND ");
-    queryBuilder.append(" VanID = ? ");
-    return queryBuilder.toString();
+    // Validate schemaName and tableName against a predefined whitelist
+    if (!isValidSchemaName(schemaName) || !isValidTableName(tableName)) {
+        throw new IllegalArgumentException("Invalid schema or table name.");
+    }
+    // Use parameterized query placeholders
+    return "UPDATE " + schemaName + "." + tableName +
+            " SET Provisioned = true, SyncedDate = now(), syncedBy = ? WHERE BenRegId = ? AND BeneficiaryID = ? AND VanID = ?";
 }

+private boolean isValidSchemaName(String schemaName) {
+    // Implement validation logic for schema names (e.g., match against a whitelist)
+    return Arrays.asList("validSchema1", "validSchema2").contains(schemaName);
+}
+
+private boolean isValidTableName(String tableName) {
+    // Implement validation logic for table names (e.g., match against a whitelist)
+    return Arrays.asList("m_beneficiaryregidmapping", "t_indent", "t_patientissue").contains(tableName);
+}
+
 public String update_I_BeneficiaryDetails_for_processed_in_batches(SyncUploadDataDigester syncUploadDataDigester) {
     logger.info("Processing update_I_BeneficiaryDetails_for_processed_in_batches for table: {}", syncUploadDataDigester.getTableName());
     List<Object[]> syncData = new ArrayList<>();
```
```java
if (params.isEmpty()) {
    resultSetList = jdbcTemplate.queryForList(finalQuery);
} else {
    resultSetList = jdbcTemplate.queryForList(finalQuery, params.toArray());
```

Check failure (Code scanning / CodeQL): Query built from user-controlled sources

Copilot Autofix (AI, 3 months ago)

To fix the issue, the `table` parameter should not be directly appended to the SQL query string. Instead, the application should validate the `table` parameter against a predefined list of allowed table names. This ensures that only legitimate table names can be used in the query. Additionally, the query should use parameterized placeholders for all other dynamic values to prevent SQL injection.

Steps to fix:

- Introduce a validation mechanism to ensure the `table` parameter matches a predefined list of allowed table names.
- Replace the dynamic query construction with a safer approach that uses parameterized queries for all dynamic values.
- Update the `getMasterDataFromTable` method to include the validation logic and modify the query construction accordingly.
```diff
@@ -122,2 +122,8 @@
 List<Map<String, Object>> resultSetList = new ArrayList<>();
+// Validate the table name against a predefined list of allowed table names
+List<String> allowedTables = Arrays.asList("Table1", "Table2", "Table3"); // Replace with actual table names
+if (!allowedTables.contains(table)) {
+    throw new IllegalArgumentException("Invalid table name: " + table);
+}

 StringBuilder baseQueryBuilder = new StringBuilder(" SELECT ").append(columnNames).append(" FROM ").append(schema).append(".").append(table);
```
Actionable comments posted: 0
β»οΈ Duplicate comments (4)
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java (2)
173-186
: Critical: SQL injection vulnerability in getBatchForBenDetails method.The method constructs SQL queries by directly concatenating user-controlled parameters (
schema
,table
,columnNames
,whereClause
) without validation, creating SQL injection vulnerabilities.The existing fix suggestions from past reviews should be implemented:
- Validate
schema
andtable
parameters against a whitelist- Avoid direct concatenation of
whereClause
- use parameterized queries instead- Add input validation for
columnNames
public List<Map<String, Object>> getBatchForBenDetails(String schema, String table, String columnNames, String whereClause, int limit, int offset) { jdbcTemplate = getJdbcTemplate(); + // Validate schema and table names + if (!isValidIdentifier(schema) || !isValidIdentifier(table)) { + throw new IllegalArgumentException("Invalid schema or table name"); + } String query = "SELECT " + columnNames + " FROM " + schema + "." + table + whereClause + " LIMIT ? OFFSET ?"; // ... rest of method } +private boolean isValidIdentifier(String identifier) { + return identifier != null && identifier.matches("^[a-zA-Z_][a-zA-Z0-9_]*$"); +}
150-152
: Sanitize user-controlled data in log statements.Logging user-controlled data directly can lead to log injection attacks.
Sanitize the logged values or use structured logging:
-logger.info("Select query central: {}", finalQuery); -logger.info("Last Downloaded Date: {}", lastDownloadDate); -logger.info("Query Params: {}", params); +logger.info("Select query central executed for table: {}", table); +logger.info("Last Downloaded Date: {}", lastDownloadDate); +logger.info("Query parameter count: {}", params.size());src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (2)
244-283
: Critical: Fix infinite loop and incomplete batch processing logic.The current implementation has several critical issues that will cause system hangs:
- Infinite loop:
while(true)
with no proper exit condition- Incomplete processing: Fetches batches but never processes them
- Incorrect exit logic:
totalProcessed
is checked but never incremented- Missing batch handling: The fetched batch data is never actually processed
```diff
 public String update_I_BeneficiaryDetails_for_processed_in_batches(SyncUploadDataDigester syncUploadDataDigester) {
     logger.info("Processing update_I_BeneficiaryDetails_for_processed_in_batches for table: {}", syncUploadDataDigester.getTableName());
     List<Object[]> syncData = new ArrayList<>();
     String query = getQueryFor_I_BeneficiaryDetails(syncUploadDataDigester.getSchemaName(), syncUploadDataDigester.getTableName());
     int limit = 1000;
     int offset = 0;
     int totalProcessed = 0;
     String whereClause = " WHERE Processed <> 'P' AND VanID IS NOT NULL ";
     while (true) {
         List<Map<String, Object>> batch;
         try {
             batch = dataSyncRepositoryCentral.getBatchForBenDetails(
                     syncUploadDataDigester.getSchemaName(),
                     syncUploadDataDigester.getTableName(),
                     syncUploadDataDigester.getServerColumns(),
                     whereClause,
                     limit,
                     offset);
         } catch (Exception e) {
             logger.error("Error fetching batch for i_beneficiarydetails: {}", e.getMessage(), e);
             return "Error fetching data for i_beneficiarydetails: " + e.getMessage();
         }
+        // Exit condition: no more data to process
+        if (batch == null || batch.isEmpty()) {
+            break;
+        }
+
+        // Process the batch data
+        for (Map<String, Object> record : batch) {
+            if (record.get("BeneficiaryDetailsId") != null && record.get("VanID") != null) {
+                Object[] objArr = new Object[3];
+                objArr[0] = syncUploadDataDigester.getSyncedBy();
+                objArr[1] = record.get("BeneficiaryDetailsId");
+                objArr[2] = record.get("VanID");
+                syncData.add(objArr);
+            }
+        }
+
+        // Execute batch update
+        if (!syncData.isEmpty()) {
+            try {
+                int[] results = dataSyncRepositoryCentral.syncDataToCentralDB(
+                        syncUploadDataDigester.getSchemaName(),
+                        syncUploadDataDigester.getTableName(),
+                        null,
+                        query,
+                        syncData);
+                totalProcessed += results.length;
+                syncData.clear(); // Clear for next batch
+            } catch (Exception e) {
+                logger.error("Error processing batch: {}", e.getMessage(), e);
+                return "Error processing batch: " + e.getMessage();
+            }
+        }
+
+        offset += limit;
     }
-    if (totalProcessed > 0 || syncData.isEmpty()) {
-        logger.info("Finished processing i_beneficiarydetails. Total records processed: {}", totalProcessed);
-        return "data sync passed";
-    } else {
-        logger.error("No records were processed for i_beneficiarydetails or an unknown error occurred.");
-        return "No data processed or sync failed for i_beneficiarydetails.";
-    }
+    logger.info("Finished processing i_beneficiarydetails. Total records processed: {}", totalProcessed);
+    return "data sync passed";
 }
```
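The loop-and-exit pattern in the suggested fix can be sketched in isolation. This is a plain-Java illustration with no Spring or JDBC; `fetchBatch` stands in for the repository's LIMIT/OFFSET query and the class name is hypothetical:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class BatchPager {
    // Stand-in for a LIMIT/OFFSET query: returns at most `limit` rows starting at `offset`
    static List<Integer> fetchBatch(List<Integer> table, int limit, int offset) {
        if (offset >= table.size()) {
            return Collections.emptyList();
        }
        return table.subList(offset, Math.min(offset + limit, table.size()));
    }

    // Mirrors the suggested loop: advance the offset each pass, stop on an empty batch
    static int processAll(List<Integer> table, int limit) {
        int offset = 0;
        int totalProcessed = 0;
        while (true) {
            List<Integer> batch = fetchBatch(table, limit, offset);
            if (batch.isEmpty()) {
                break; // exit condition: no more data
            }
            totalProcessed += batch.size();
            offset += limit; // without this increment the loop never terminates
        }
        return totalProcessed;
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 2500; i++) {
            rows.add(i);
        }
        System.out.println(processAll(rows, 1000)); // prints 2500 (batches of 1000, 1000, 500)
    }
}
```

The two bugs the review calls out are visible here by deletion: removing the `offset += limit` line or the empty-batch `break` each turns the loop infinite.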
244-283: Consider adding transaction management for batch operations.

Large batch operations should be wrapped in transactions to ensure data consistency and provide rollback capability in case of failures.

```diff
+@Transactional
 public String update_I_BeneficiaryDetails_for_processed_in_batches(
         SyncUploadDataDigester syncUploadDataDigester) {
     // existing implementation
 }
```
Also consider making the batch size configurable through application properties instead of hardcoding it to 1000.
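One way to avoid the hardcoded 1000 is to resolve the batch size from configuration with a safe fallback. The sketch below uses a plain helper and a system property (`sync.batch.size` and the class name are hypothetical, not part of the PR); in the actual service this would more naturally be a Spring `@Value` or `@ConfigurationProperties` binding:

```java
public class BatchConfig {
    static final int DEFAULT_BATCH_SIZE = 1000;

    // Falls back to the default when the configured value is absent,
    // non-numeric, or not a positive integer
    static int resolveBatchSize(String raw) {
        if (raw == null) {
            return DEFAULT_BATCH_SIZE;
        }
        try {
            int size = Integer.parseInt(raw.trim());
            return size > 0 ? size : DEFAULT_BATCH_SIZE;
        } catch (NumberFormatException e) {
            return DEFAULT_BATCH_SIZE;
        }
    }

    public static void main(String[] args) {
        // e.g. java -Dsync.batch.size=500 BatchConfig
        System.out.println(resolveBatchSize(System.getProperty("sync.batch.size")));
    }
}
```

Validating the value (positive, numeric) matters here because a zero or negative batch size would stall the paging loop.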
🧹 Nitpick comments (1)
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (1)
29-43: Consider making table groups configurable.

The hardcoded table groups might need updates when new tables are added or when table categorization changes. Consider moving table group configuration to application properties or a database configuration table for better maintainability:

```properties
# application.properties
sync.table.groups.1=m_beneficiaryregidmapping,m_another_master_table
sync.table.groups.2=t_indent,t_indentorder,t_indentissue,t_stocktransfer,t_itemstockentry
sync.table.groups.3=i_beneficiarydetails,t_patientissue,t_physicalstockentry,t_stockadjustment,t_saitemmapping,t_patientreturn,t_facilityconsumption,t_itemstockexit
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- pom.xml (1 hunks)
- src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java (1 hunks)
- src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (1 hunks)
🧰 Additional context used
🪛 GitHub Check: SonarCloud
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java
[notice] 150-150: Logging should not be vulnerable to injection attacks
Change this code to not log user-controlled data. See more on SonarQube Cloud.
[notice] 151-151: Logging should not be vulnerable to injection attacks
Change this code to not log user-controlled data. See more on SonarQube Cloud.
[notice] 152-152: Logging should not be vulnerable to injection attacks
Change this code to not log user-controlled data. See more on SonarQube Cloud.
[failure] 159-159: Database queries should not be vulnerable to injection attacks
Change this code to not construct SQL queries directly from user-controlled data. See more on SonarQube Cloud.
[failure] 161-161: Database queries should not be vulnerable to injection attacks
Change this code to not construct SQL queries directly from user-controlled data. See more on SonarQube Cloud.
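Both kinds of SonarCloud finding can be addressed with small helpers. The sketch below is illustrative only (method names and the table whitelist are hypothetical, not the PR's actual fix): strip line breaks from user-controlled values before logging, and, since schema/table identifiers cannot be bound as JDBC parameters, validate them against a whitelist before interpolating them into SQL:

```java
import java.util.Set;
import java.util.regex.Pattern;

public class SyncSqlSafety {
    private static final Pattern IDENTIFIER = Pattern.compile("[A-Za-z0-9_]+");
    // Hypothetical whitelist; in practice this would list every syncable table
    private static final Set<String> KNOWN_TABLES =
            Set.of("i_beneficiarydetails", "m_beneficiaryregidmapping", "t_indent");

    // Neutralizes CR/LF so user-controlled values cannot forge extra log lines
    static String sanitizeForLog(String value) {
        return value == null ? "null" : value.replaceAll("[\\r\\n]+", "_");
    }

    // Identifiers cannot be bound as '?' parameters, so validate before interpolating
    static String safeTableName(String name) {
        if (name == null
                || !IDENTIFIER.matcher(name).matches()
                || !KNOWN_TABLES.contains(name.toLowerCase())) {
            throw new IllegalArgumentException("Unknown table: " + sanitizeForLog(name));
        }
        return name;
    }

    public static void main(String[] args) {
        System.out.println(sanitizeForLog("i_beneficiarydetails\nFAKE LOG LINE"));
        System.out.println(safeTableName("i_beneficiarydetails"));
    }
}
```

Row values (beneficiary IDs, van IDs, dates) should still travel through `?` placeholders; only the identifier positions need this whitelist treatment.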
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (java)
🔇 Additional comments (1)
pom.xml (1)
50-66: Remove commented exclusions and verify logging dependency compatibility.

Consider deleting the commented-out `<exclusions>` block to keep the POM clean. With that exclusion removed, Spring Boot's default logging starter will be active alongside your explicit SLF4J API and SLF4J Simple dependencies; please manually verify this setup doesn't introduce conflicting logging implementations.

Affected section:

```diff
 <dependency>
     <groupId>org.springframework.boot</groupId>
     <artifactId>spring-boot-starter</artifactId>
-    <!-- <exclusions>
-        <exclusion>
-            <groupId>org.springframework.boot</groupId>
-            <artifactId>spring-boot-starter-logging</artifactId>
-        </exclusion>
-    </exclusions> -->
 </dependency>
```

Please run locally to confirm a single logging framework is resolved at build time:
```shell
mvn dependency:tree | grep -i "log\|slf4j"
```
* Fix the issue in retrieving the Casesheet Print Data for Cancer Screening (#96)
* fix: change the return type to object to get the details
* fix: remove commented lines
* fix: add the column for NumberperWeek to store and fetch the data (#94)
* Update version in pom.xml to 3.4.0
* chore: add Lombok @DaTa to BenClinicalObservations (#97)
* fix: add file path in cancer gynecological examination (#98)
* Fix the data sync issue (#93)
* fix: Data Sync batch processing for large data
* fix: use parameterized query
* fix: revert the updated query
* fix: add token if it is missing while calling restTemplate
* fix: update the properties
* fix: sync group wise
* fix: enable logger in pom.xml
* fix: coderabbit comments
* fix: remove logger and replace the license
* fix: remove the logs
* fix: resolve code scanning alert
* fix: resolve code scanning alert
* fix: resolve code scanning alert
* fix: resolve code scanning alert
* fix: add comment for code violation
* fix: use syncuploaddataDigester class to load the deatils
* fix: add syncuploaddigestor in implementation file too
* fix: sonarcube comments
* fix: add functionality to save the file ID's uploaded from doctor screen (#99)
* story: amm-1668 task - 1754
* story: amm-1754 updated response including father name and phone no of the beneficiary (#102)
* fix: amm-1754 changing the query to get the expected response similar to hwc
* fix: amm-1754 compilation error fix
* fix: amm-1754 argument issue fix
* fix: amm-1754 argument issue fix
* fix: amm-1754 argument issue fix
* Save the files uploaded from Doctor Screen (#100)
* fix: add file path in cancer gynecological examination
* fix: save the files uploaded from the doctor portal
* fix: get file names in the response of gynecological examination
* fix: cherry-pick the commits from develop
* fix: cherry-pick commits from develop
* Fix the Download Masters issue (#103)
* fix: resolve the conflicts
* fix: fix the issue in download masters table
* fix: remove the validation (#105)
* fix: replace the old working code (#106)
* Fix the datasync upload issue (#107)
* fix: add the schemas
* fix: remove logger
* fix: revert the old code for repository
* Fixing the datasync from local to central (#110)
* fix: datasync from local to central
* fix: fix the token
* fix: remove the token for server authorization (#111)
* Fix the datasync Demographics Issue (#112)
* fix: remove condition for i_beneficiarydetails
* fix: add logs
* fix: add logs
* fix: remove db_iemr
* fix: add log for show column names too
* fix: add date-format condition
* fix: change valid column name
* fix: change valid column name
* fix: change valid column name
* fix: change valid column name
* fix: update insert query
* fix: update cleaned column list
* fix: date conversion
* fix: conversion date-time
* fix: add date conversion
* fix: logs added
* fix: new logger
* fix: revert the date condition
* fix: revert insert code
* fix: revert insert code
* fix: date format issue
* fix: logs add
* fix: log for group and group lsit
* fix: clean the code

Co-authored-by: vishwab1 <vishwanath@navadhiti.com>

* Fix the token issue for Ben-gen id generation (#114)
* fix: update server authorization for bengen
* fix: update server authorization for bengen
* fix: replace authorization for local api call (#116)
* fix: add logs (#117)
* Fix the BenGen ID Issue (#118)
* fix: add logs to check the identity-api
* fix: add logs
* fix: add logs
* fix: add logs
* fix: add logs for checking
* fix: update the prepare statement
* fix: add log
* fix: add log
* fix: add logs
* fix: add Sync Result
* fix: add logs
* fix: add syncResults
* fix: add syncResults
* fix: update processed flag
* fix: update the response
* fix: update the exception block
* fix: upload fix
* fix: address duplication issue
* fix: datasync issue
* fix: remove reason and add dataaccess exception
* fix: revert the server exception
* fix: remove unwanted code
* fix: exception message
* fix: fix the error message
* fix: update the version

Co-authored-by: Amoghavarsh <93114621+5Amogh@users.noreply.github.com>
Co-authored-by: 5Amogh <amoghavarsh@navadhiti.com>
Co-authored-by: vishwab1 <vishwanath@navadhiti.com>
📝 Description
JIRA ID:
AMM-1352
The data sync fails when handling large volumes of data. Implement batch queries to resolve this issue.
✅ Type of Change
Summary by CodeRabbit
New Features
Refactor
Chores