Conversation


@vanitha1822 vanitha1822 commented Jul 2, 2025

πŸ“‹ Description

JIRA ID:

AMM-1352

The data sync fails when handling large volumes of data. This change implements batch queries to resolve the issue.

βœ… Type of Change

  • 🐞 Bug fix (non-breaking change which resolves an issue)

Summary by CodeRabbit

  • New Features

    • Introduced batch processing for beneficiary details with improved synchronization efficiency.
    • Added support for paginated data retrieval to handle large datasets effectively.
    • Enhanced data synchronization endpoint to accept and utilize JWT tokens from requests.
  • Refactor

    • Improved JWT token extraction logic for API requests to ensure consistent authentication handling.
    • Added detailed logging for authorization and token management during data synchronization.
    • Refined synchronization logic with better error handling, modularization, and detailed logging.
    • Enhanced robustness and maintainability of data synchronization methods with improved logging and exception handling.
  • Chores

    • Enhanced database connection pool settings for better performance and reliability.
    • Increased logging verbosity for database connection operations.
    • Updated logging dependencies to re-enable default Spring Boot logging with explicit SLF4J support.


coderabbitai bot commented Jul 2, 2025

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

A new batch retrieval method was added to the data synchronization repository. The data sync service was enhanced to process beneficiary details in batches, marking them as processed after updates. The controller was updated to extract a JWT token from request cookies and pass it along during sync. The utility for creating REST requests was refactored to improve JWT token extraction from request headers and cookies.
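The LIMIT/OFFSET paging loop that the walkthrough describes can be sketched as follows. This is an illustrative sketch, not the PR's actual code: the real implementation queries MySQL through `JdbcTemplate` in `DataSyncRepositoryCentral`, while the `fetchPage` function here is a hypothetical stand-in so the paging logic can be shown in isolation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

public class BatchPager {
    // Drains a paginated source using LIMIT/OFFSET-style paging.
    // fetchPage takes (limit, offset) and returns one page of rows;
    // an empty or short page signals the end of the data set.
    static <T> List<T> fetchAll(int batchSize, BiFunction<Integer, Integer, List<T>> fetchPage) {
        List<T> all = new ArrayList<>();
        int offset = 0;
        while (true) {
            List<T> page = fetchPage.apply(batchSize, offset);
            if (page == null || page.isEmpty()) {
                break; // no more rows
            }
            all.addAll(page);
            if (page.size() < batchSize) {
                break; // short page: this was the last batch
            }
            offset += batchSize;
        }
        return all;
    }
}
```

The short-page check doubles as a termination safeguard, which is the concern the later review comments raise about `while(true)` loops.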

Changes

File(s) Change Summary
.../service/dataSyncLayerCentral/DataSyncRepositoryCentral.java Refactored methods for clarity and error handling; added getBatchForBenDetails for paginated batch data retrieval.
.../service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java Added token parameter to syncDataToServer; added batch processing for beneficiary details; added table grouping and modular sync methods; enhanced logging and error handling.
.../controller/dataSyncLayerCentral/MMUDataSyncVanToServer.java Modified dataSyncToServer to accept HttpServletRequest, extract JWT token from cookies, and pass it to sync method.
.../utils/RestTemplateUtil.java Refactored JWT token extraction logic to always get request attributes; added check for "JwtToken" header; added headers logging.
application.properties Added Tomcat JDBC pool properties: max wait, test on borrow, validation query and interval; set pool logging to DEBUG.
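Assuming the project uses Spring Boot's `spring.datasource.tomcat.*` binding for the Tomcat JDBC pool, the properties described in the row above might look like this. The key names are the standard Spring Boot bindings; the values are illustrative only, since the PR's actual settings are not shown in this summary.

```properties
# Illustrative values; the PR adds equivalent keys with its own settings.
spring.datasource.tomcat.max-wait=30000
spring.datasource.tomcat.test-on-borrow=true
spring.datasource.tomcat.validation-query=SELECT 1
spring.datasource.tomcat.validation-interval=30000
logging.level.org.apache.tomcat.jdbc.pool=DEBUG
```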

Suggested reviewers

  • devikasuresh20

Possibly related PRs

Poem

πŸ₯• Hopping through code with a joyful cheer,

New batches of data now appear!
Tokens extracted from cookies with care,
Syncing the details, no time to spare.
Updates in batches, smooth as can be,
In the garden of code, we're happy as a bunny! πŸ‡


@vanitha1822 vanitha1822 requested review from drtechie and vishwab1 July 2, 2025 09:40
@vanitha1822 vanitha1822 self-assigned this Jul 2, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

πŸ“œ Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

πŸ“₯ Commits

Reviewing files that changed from the base of the PR and between 4d9fc8d and 54c7ad0.

πŸ“’ Files selected for processing (2)
  • src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java (1 hunks)
  • src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: Package-test
  • GitHub Check: style-check / checkstyle
  • GitHub Check: Build
  • GitHub Check: Analyze (java)
πŸ”‡ Additional comments (2)
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (2)

63-71: Code logic looks correct for the new table handling.

The conditional logic properly routes "i_beneficiarydetails" table to the new batch processing method, which should help with the data sync issue for large volumes.


354-364: Query construction is correct and secure.

The UPDATE query is properly constructed using StringBuilder with parameterized values, which prevents SQL injection vulnerabilities.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (3)
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (3)

76-76: Make batch size configurable instead of hardcoded.

The hardcoded batch size persists from previous reviews. This should be configurable through application properties for better maintainability and performance tuning.

+@Value("${sync.batch.size:500}")
+private int batchSize;
+
-int batchSize = 500;
+// Use the injected batchSize field instead

80-88: Add safeguards to prevent infinite loops.

The while(true) loop lacks proper termination safeguards and could run indefinitely if there are issues with data processing or the batch method returns inconsistent results.

+private static final int MAX_ITERATIONS = 10000; // Safety limit
+
public void syncTableFromVan(String schema, String tableName, String columnNames, String whereClause, String insertQuery) {
    int batchSize = 500;
    int offset = 0;
-   boolean hasMoreData = true;
+   int iterations = 0;

-   while (hasMoreData) {
+   while (iterations < MAX_ITERATIONS) {
        List<Map<String, Object>> batch = dataSyncRepositoryCentral.getBatchForTable(
                schema, tableName, columnNames, whereClause, batchSize, offset);

        if (batch == null || batch.isEmpty()) {
            logger.info("No more records to sync for table: {}", tableName);
            System.out.println("Test:No more records to sync for table: " + tableName);
            break;
        }
        
        // ... rest of processing logic
        
        offset += batchSize;
+       iterations++;
    }
+   
+   if (iterations >= MAX_ITERATIONS) {
+       throw new RuntimeException("Batch processing exceeded maximum iterations limit");
+   }
}

75-103: Add transaction management for batch operations.

Large batch operations should be wrapped in transactions to ensure data consistency and provide rollback capability in case of failures.

+import org.springframework.transaction.annotation.Transactional;

+@Transactional
public void syncTableFromVan(String schema, String tableName, String columnNames, String whereClause, String insertQuery) {
    // existing implementation
}

Also consider adding proper error handling and logging for batch processing operations to aid in debugging sync issues.

🧹 Nitpick comments (3)
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (1)

57-57: Standardize logging approach.

The code uses both SLF4J logger and System.out.println for similar messages. This creates inconsistency and makes log management difficult.

Choose one approach for consistency:

  • Use SLF4J logger for production logging
  • Use System.out.println only for temporary debugging
logger.info("Starting sync for table: {}", tableName);
-System.out.println("Test:Starting sync for table: " + tableName);

Also applies to: 62-62, 67-67, 86-86, 100-100

src/main/java/com/iemr/mmu/service/dataSyncActivity/UploadDataToServerImpl.java (2)

139-140: Consider the debugging approach for production readiness.

Multiple System.out.println statements have been added throughout the file, often alongside commented-out logger statements. While this is useful for debugging, consider the following for production:

  1. Temporary vs Permanent: If these are temporary debugging statements, ensure they're removed before production deployment
  2. Log Level Management: Consider using logger with DEBUG level instead of System.out.println for better log management
  3. Performance Impact: System.out.println can impact performance in high-throughput scenarios

For production deployment, consider:

-System.out.println("Piramal: SyncData="+new Gson().toJson(syncData));
+logger.debug("SyncData: {}", new Gson().toJson(syncData));

Also applies to: 144-145, 153-154, 162-166, 174-175, 283-283, 308-310, 317-318, 334-334, 352-354, 379-380, 384-385, 387-387, 425-425, 428-428


155-155: Performance consideration for JSON serialization in debug statements.

The debug statements are performing JSON serialization with new Gson().toJson() which can be expensive for large datasets, especially since they're executed in production code paths.

Consider conditional serialization or using logger's lazy evaluation:

-System.out.println("Piramal: SyncData="+new Gson().toJson(syncData));
+if (logger.isDebugEnabled()) {
+    logger.debug("SyncData: {}", new Gson().toJson(syncData));
+}

Also applies to: 165-166, 309-310, 334-334, 384-385, 425-425

πŸ“œ Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

πŸ“₯ Commits

Reviewing files that changed from the base of the PR and between ee6d459 and 7ab2343.

πŸ“’ Files selected for processing (3)
  • src/main/java/com/iemr/mmu/service/dataSyncActivity/DataSyncRepository.java (1 hunks)
  • src/main/java/com/iemr/mmu/service/dataSyncActivity/UploadDataToServerImpl.java (7 hunks)
  • src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (1 hunks)
🧰 Additional context used
πŸͺ› GitHub Actions: Package
src/main/java/com/iemr/mmu/service/dataSyncActivity/UploadDataToServerImpl.java

[warning] Checkstyle warnings: Some methods exceed max allowed length; multiple lines longer than 120 characters; 'if' and 'else' constructs must use braces; naming conventions violations

src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java

[error] 81-81: Compilation error: cannot find symbol method getBatchForTable(String, String, String, String, int, int) in DataSyncRepositoryCentral

πŸͺ› GitHub Actions: Build On Pull Request
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java

[error] 81-81: Compilation error: cannot find symbol method getBatchForTable(String, String, String, String, int, int) in DataSyncRepositoryCentral

πŸͺ› GitHub Actions: Call Checkstyle
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java

[error] 81-81: Compilation error: cannot find symbol method getBatchForTable(String, String, String, String, int, int) in variable dataSyncRepositoryCentral of type DataSyncRepositoryCentral

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Analyze (java)
πŸ”‡ Additional comments (1)
src/main/java/com/iemr/mmu/service/dataSyncActivity/DataSyncRepository.java (1)

88-88: Debug output added for result set visibility.

The debug print statement helps trace the data being retrieved during synchronization operations, which aligns with the overall debugging approach in this PR.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (3)
src/main/java/com/iemr/mmu/utils/RestTemplateUtil.java (1)

26-37: Remove commented code to improve code cleanliness.

The commented code should be removed as it's no longer needed and clutters the codebase.

-	//  if (jwtToken == null || jwtToken.isEmpty()) {
-    //         ServletRequestAttributes attrs =
-    //                 (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
-    //         if (attrs != null) {
-    //             HttpServletRequest request = attrs.getRequest();
-    //             try {
-    //                 jwtToken = CookieUtil.getJwtTokenFromCookie(request);
-    //             } catch (Exception e) {
-    //                 logger.error("Error while getting JWT token from cookie: {}", e.getMessage());
-    //             }
-    //         }
-    //     }
src/main/java/com/iemr/mmu/controller/dataSyncLayerCentral/MMUDataSyncVanToServer.java (2)

66-66: Remove unnecessary blank line.

The blank line adds no value and should be removed for cleaner code.

-		

66-66: Remove test prefix from production logging.

The log statement contains a "test" prefix which should be removed from production code.

-		logger.info("test: vanto server auth="+Authorization);
+		logger.debug("Authorization header received for data sync");

Also consider changing the log level to DEBUG as authorization details are sensitive.

πŸ“œ Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

πŸ“₯ Commits

Reviewing files that changed from the base of the PR and between 3ea78f0 and 4357ca1.

πŸ“’ Files selected for processing (3)
  • src/main/java/com/iemr/mmu/controller/dataSyncLayerCentral/MMUDataSyncVanToServer.java (2 hunks)
  • src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (3 hunks)
  • src/main/java/com/iemr/mmu/utils/RestTemplateUtil.java (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java
🧰 Additional context used
🧬 Code Graph Analysis (1)
src/main/java/com/iemr/mmu/controller/dataSyncLayerCentral/MMUDataSyncVanToServer.java (1)
src/main/java/com/iemr/mmu/utils/response/OutputResponse.java (1)
  • OutputResponse (37-206)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Analyze (java)
  • GitHub Check: Package-test
πŸ”‡ Additional comments (4)
src/main/java/com/iemr/mmu/controller/dataSyncLayerCentral/MMUDataSyncVanToServer.java (4)

39-39: Import addition looks good.

The import for CookieUtil is properly added and used in the method implementation.


43-43: Import addition looks good.

The import for HttpServletRequest is properly added and used in the method signature.


63-63: Method signature change is appropriate.

The addition of HttpServletRequest request parameter enables JWT token extraction from cookies, which aligns with the PR objective of fixing data sync issues.


71-71: Service method call update is correct.

The addition of the jwtToken parameter to the service method call properly passes the extracted token for further processing.
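As a rough illustration of the cookie-based extraction these comments discuss: the real code calls `CookieUtil.getJwtTokenFromCookie(HttpServletRequest)`, whose implementation is not shown in this PR. The hypothetical helper below instead parses a raw `Cookie` header string, which keeps the example self-contained (no servlet API dependency) while showing the same lookup-by-name idea.

```java
public class CookieTokenExtractor {
    // Parses a raw "Cookie" request header (e.g. "a=1; JwtToken=abc.def")
    // and returns the value of the named cookie, or null if absent.
    // Hypothetical stand-in for CookieUtil.getJwtTokenFromCookie, which
    // reads HttpServletRequest#getCookies() rather than the raw header.
    static String getCookieValue(String cookieHeader, String name) {
        if (cookieHeader == null || name == null) {
            return null;
        }
        for (String part : cookieHeader.split(";")) {
            String[] kv = part.trim().split("=", 2);
            if (kv.length == 2 && kv[0].equals(name)) {
                return kv[1];
            }
        }
        return null;
    }
}
```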


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

πŸ“œ Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

πŸ“₯ Commits

Reviewing files that changed from the base of the PR and between 4357ca1 and 5bdf92b.

πŸ“’ Files selected for processing (2)
  • src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (3 hunks)
  • src/main/resources/application.properties (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Analyze (java)

logger.debug("Checking record existence query: {} with params: {}", query, Arrays.toString(queryParams));

try {
List<Map<String, Object>> resultSet = jdbcTemplate.queryForList(query, queryParams);

Check failure — Code scanning / CodeQL: Query built from user-controlled sources

This query depends on a [user-provided value](1).

try {
if (params.isEmpty()) {
resultSetList = jdbcTemplate.queryForList(finalQuery);

Check failure — Code scanning / CodeQL: Query built from user-controlled sources

This query depends on a [user-provided value](1).

Copilot Autofix (AI, 3 months ago)

To address the SQL injection vulnerability, the fix involves validating and sanitizing all user-controlled inputs before incorporating them into the query. Specifically:

  1. Validation: Ensure that schema and table names match predefined acceptable values or conform to strict patterns (e.g., alphanumeric characters only).
  2. Parameterization: Use placeholders (?) for all dynamic parts of the query, and bind values using PreparedStatement or equivalent methods. While JdbcTemplate does not support parameterization for table or schema names, validation can mitigate the risk.

Changes will be made in DataSyncRepositoryCentral.java to validate schema and table names against predefined lists or patterns before constructing the query. If validation fails, an exception will be thrown.


Suggested changeset 1
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java b/src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java
--- a/src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java
+++ b/src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java
@@ -119,6 +119,10 @@
     public List<Map<String, Object>> getMasterDataFromTable(String schema, String table, String columnNames,
                                                             String masterType, Timestamp lastDownloadDate, Integer vanID, Integer psmID) throws Exception {
         jdbcTemplate = getJdbcTemplate();
+        // Validate schema and table names
+        if (!isValidSchemaName(schema) || !isValidTableName(table)) {
+            throw new RuntimeException("Invalid schema or table name provided.");
+        }
         List<Map<String, Object>> resultSetList = new ArrayList<>();
         StringBuilder baseQueryBuilder = new StringBuilder(" SELECT ").append(columnNames).append(" FROM ").append(schema).append(".").append(table);
         List<Object> params = new ArrayList<>();
@@ -186,4 +190,13 @@
     }
 
     // End of Data Download Repository
+    private boolean isValidSchemaName(String schema) {
+        // Example validation: check against a predefined list or pattern
+        return schema != null && schema.matches("[a-zA-Z0-9_]+");
+    }
+
+    private boolean isValidTableName(String table) {
+        // Example validation: check against a predefined list or pattern
+        return table != null && table.matches("[a-zA-Z0-9_]+");
+    }
 }
\ No newline at end of file
EOF
Unable to commit as this autofix suggestion is now outdated
String query = "SELECT " + columnNames + " FROM " + schema + "." + table + whereClause + " LIMIT ? OFFSET ?";
logger.debug("Fetching batch for beneficiary details. Query: {}, Limit: {}, Offset: {}", query, limit, offset);
try {
return jdbcTemplate.queryForList(query, limit, offset);

Check failure — Code scanning / CodeQL: Query built from user-controlled sources

This query depends on a [user-provided value](1).

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (1)

238-240: Make batch size configurable and add transaction management

The hardcoded batch size should be configurable, and the entire batch processing should be wrapped in a transaction for data consistency.

+@Value("${sync.batch.size:1000}")
+private int batchSize;

+@Transactional
public String update_I_BeneficiaryDetails_for_processed_in_batches(SyncUploadDataDigester syncUploadDataDigester) {
    // ... existing code ...
-    int limit = 1000;
+    int limit = batchSize;
    // ... rest of implementation
}
πŸ“œ Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

πŸ“₯ Commits

Reviewing files that changed from the base of the PR and between 5bdf92b and acb09a0.

πŸ“’ Files selected for processing (2)
  • src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java (1 hunks)
  • src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (5)
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java (1)
  • Service (17-175)
src/main/java/com/iemr/mmu/service/dataSyncActivity/UploadDataToServerImpl.java (1)
  • Service (68-437)
src/main/java/com/iemr/mmu/service/dataSyncActivity/DownloadDataFromServerTransactionalImpl.java (1)
  • Service (60-285)
src/main/java/com/iemr/mmu/service/dataSyncActivity/DataSyncRepository.java (1)
  • Service (43-123)
src/main/java/com/iemr/mmu/data/syncActivity_syncLayer/SyncUploadDataDigester.java (1)
  • SyncUploadDataDigester (27-121)
πŸ”‡ Additional comments (7)
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java (3)

19-30: Good practice: Lazy initialization of JdbcTemplate

The lazy initialization pattern ensures DataSource is available when needed. This is a solid improvement over direct injection.


35-88: Enhanced error handling and logging implementation

The refactored method shows significant improvements:

  • Proper parameterized query building
  • Comprehensive exception handling with specific logging
  • Clear debug messages for troubleshooting

161-172: Verify SQL injection safeguards in getBatchForBenDetails

Although limit and offset are passed as parameters, both columnNames and whereClause are still concatenated directly into your SQL string. Any untrusted or malformed input there can open the door to SQL injection.

Please confirm that all dynamic parts are strictly controlled or sanitized before they reach this method. In particular:

  • DataSyncRepositoryCentral.java (lines 161–172)
    getBatchForBenDetails concatenates columnNames and whereClause into "SELECT " + columnNames + " FROM " + schema + "." + table + whereClause + " LIMIT ? OFFSET ?".
  • GetDataFromVanAndSyncToDBImpl.java
    Review how problematicWhereClause (the whereClause passed in) is constructed. Ensure it never contains unchecked user input or string fragments that could be manipulated.
  • SyncUploadDataDigester.java (or wherever getServerColumns() is implemented)
    Verify that the list of columns returned is built from a safe, whitelist-driven source (not from arbitrary input).

If any of these values can vary based on external or user-supplied data, switch to parameterized queries or implement a strict whitelist/sanitization layer for column names and clauses before concatenation.
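A minimal sketch of the whitelist/sanitization layer suggested above. The helper names are hypothetical, and in a real deployment the allowed sets would be built from the application's known schemas, tables, and columns rather than hardcoded.

```java
import java.util.List;
import java.util.Set;

public class SqlIdentifierGuard {
    // Pattern check for identifiers that must be concatenated into SQL
    // (schema, table, column names). Anything outside [A-Za-z0-9_] is
    // rejected before it can reach a query string.
    static String requireSafeIdentifier(String identifier) {
        if (identifier == null || !identifier.matches("[A-Za-z0-9_]+")) {
            throw new IllegalArgumentException("Unsafe SQL identifier: " + identifier);
        }
        return identifier;
    }

    // Builds a comma-separated column list only from columns present in
    // the allowed set, so a caller-supplied list can never inject SQL.
    static String safeColumnList(List<String> requested, Set<String> allowed) {
        for (String col : requested) {
            if (!allowed.contains(col)) {
                throw new IllegalArgumentException("Column not in whitelist: " + col);
            }
        }
        return String.join(", ", requested);
    }
}
```

Values like `LIMIT`/`OFFSET` bounds should still go through `?` placeholders; this guard only covers the parts of a statement (identifiers) that JDBC cannot parameterize.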

src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (4)

29-43: Well-structured table grouping approach

The static TABLE_GROUPS map provides good organization for batch synchronization. The grouping by data characteristics (master, transactional, high-volume) is logical.


45-139: Improved architecture with token support and group-based processing

The refactored method adds proper token handling and implements group-based table synchronization. The error handling and logging are comprehensive.


287-449: Comprehensive generic sync implementation

The generic table sync method provides thorough handling of insert/update logic with proper error handling and logging. The facility ID processing for different table types is well-implemented.


512-528: Useful helper method for debugging failed records

The getFailedRecords method provides valuable debugging information for batch operation failures, helping identify specific records that failed processing.
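For context, a helper in the spirit of `getFailedRecords` could inspect the `int[]` that `JdbcTemplate#batchUpdate` returns. This is a hypothetical sketch, not the PR's actual implementation: each array entry is the affected-row count for one statement, or a negative JDBC constant such as `Statement.EXECUTE_FAILED`.

```java
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class BatchResultInspector {
    // Returns the indices of batch statements that failed or updated no
    // rows. SUCCESS_NO_INFO (-2) is treated as success, since some JDBC
    // drivers report it for statements that ran but gave no row count.
    static List<Integer> failedIndices(int[] updateCounts) {
        List<Integer> failed = new ArrayList<>();
        for (int i = 0; i < updateCounts.length; i++) {
            if (updateCounts[i] <= 0 && updateCounts[i] != Statement.SUCCESS_NO_INFO) {
                failed.add(i);
            }
        }
        return failed;
    }
}
```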

System.out.println("Executing batch operation for table: " + tableName + ". Query type: " + (query.startsWith("INSERT") ? "INSERT" : "UPDATE") + ". Number of records: " + syncDataList.size());
try {
// Start batch insert/update
int[] i = jdbcTemplate.batchUpdate(query, syncDataList);

Check failure — Code scanning / CodeQL: Query built from user-controlled sources

This query depends on a [user-provided value](1).

Copilot Autofix (AI, 3 months ago)

The best way to fix the problem is to use parameterized SQL queries to eliminate the risk of SQL injection. The schema name and table name should not be concatenated directly into the query string. Instead, these values should be validated or restricted to a predefined list of acceptable values. Alternatively, dynamic queries can be avoided entirely by using predefined queries or stored procedures.

Steps for fixing the problem:

  1. Validate or restrict schemaName and tableName to a predefined list of acceptable values.
  2. Replace the dynamically constructed query with a parameterized query that uses placeholders (?) for variable values.
  3. Pass the actual values as parameters during query execution using the PreparedStatement or equivalent mechanisms supported by JdbcTemplate.

Suggested changeset 1
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java
Outside changed files

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java b/src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java
--- a/src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java
+++ b/src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java
@@ -228,19 +228,25 @@
     }
 
     private String getqueryFor_M_BeneficiaryRegIdMapping(String schemaName, String tableName) {
-        StringBuilder queryBuilder = new StringBuilder(" UPDATE ");
-        queryBuilder.append(schemaName).append(".").append(tableName);
-        queryBuilder.append(" SET ");
-        queryBuilder.append("Provisioned = true, SyncedDate = now(), syncedBy = ?");
-        queryBuilder.append(" WHERE ");
-        queryBuilder.append(" BenRegId = ? ");
-        queryBuilder.append(" AND ");
-        queryBuilder.append(" BeneficiaryID = ? ");
-        queryBuilder.append(" AND ");
-        queryBuilder.append(" VanID = ? ");
-        return queryBuilder.toString();
+        // Validate schemaName and tableName against a predefined whitelist
+        if (!isValidSchemaName(schemaName) || !isValidTableName(tableName)) {
+            throw new IllegalArgumentException("Invalid schema or table name.");
+        }
+        // Use parameterized query placeholders
+        return "UPDATE " + schemaName + "." + tableName +
+               " SET Provisioned = true, SyncedDate = now(), syncedBy = ? WHERE BenRegId = ? AND BeneficiaryID = ? AND VanID = ?";
     }
 
+    private boolean isValidSchemaName(String schemaName) {
+        // Implement validation logic for schema names (e.g., match against a whitelist)
+        return Arrays.asList("validSchema1", "validSchema2").contains(schemaName);
+    }
+
+    private boolean isValidTableName(String tableName) {
+        // Implement validation logic for table names (e.g., match against a whitelist)
+        return Arrays.asList("m_beneficiaryregidmapping", "t_indent", "t_patientissue").contains(tableName);
+    }
+
     public String update_I_BeneficiaryDetails_for_processed_in_batches(SyncUploadDataDigester syncUploadDataDigester) {
         logger.info("Processing update_I_BeneficiaryDetails_for_processed_in_batches for table: {}", syncUploadDataDigester.getTableName());
     List<Object[]> syncData = new ArrayList<>();
EOF
if (params.isEmpty()) {
resultSetList = jdbcTemplate.queryForList(finalQuery);
} else {
resultSetList = jdbcTemplate.queryForList(finalQuery, params.toArray());

Check failure (Code scanning / CodeQL): Query built from user-controlled sources

This query depends on a user-provided value.

Copilot Autofix (AI, 3 months ago)

To fix the issue, the table parameter should not be directly appended to the SQL query string. Instead, the application should validate the table parameter against a predefined list of allowed table names. This ensures that only legitimate table names can be used in the query. Additionally, the query should use parameterized placeholders for all other dynamic values to prevent SQL injection.

Steps to fix:

  1. Introduce a validation mechanism to ensure the table parameter matches a predefined list of allowed table names.
  2. Replace the dynamic query construction with a safer approach that uses parameterized queries for all dynamic values.
  3. Update the getMasterDataFromTable method to include the validation logic and modify the query construction accordingly.

Suggested changeset 1: src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java

Autofix patch. Run the following command in your local git repository to apply this patch:
cat << 'EOF' | git apply
diff --git a/src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java b/src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java
--- a/src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java
+++ b/src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java
@@ -122,2 +122,8 @@
         List<Map<String, Object>> resultSetList = new ArrayList<>();
+        // Validate the table name against a predefined list of allowed table names
+        List<String> allowedTables = Arrays.asList("Table1", "Table2", "Table3"); // Replace with actual table names
+        if (!allowedTables.contains(table)) {
+            throw new IllegalArgumentException("Invalid table name: " + table);
+        }
+
         StringBuilder baseQueryBuilder = new StringBuilder(" SELECT ").append(columnNames).append(" FROM ").append(schema).append(".").append(table);
EOF
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (4)
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java (2)

173-186: Critical: SQL injection vulnerability in getBatchForBenDetails method.

The method constructs SQL queries by directly concatenating user-controlled parameters (schema, table, columnNames, whereClause) without validation, creating SQL injection vulnerabilities.

The existing fix suggestions from past reviews should be implemented:

  1. Validate schema and table parameters against a whitelist
  2. Avoid direct concatenation of whereClause - use parameterized queries instead
  3. Add input validation for columnNames
public List<Map<String, Object>> getBatchForBenDetails(String schema, String table, String columnNames,
        String whereClause, int limit, int offset) {
    jdbcTemplate = getJdbcTemplate();
+   // Validate schema and table names
+   if (!isValidIdentifier(schema) || !isValidIdentifier(table)) {
+       throw new IllegalArgumentException("Invalid schema or table name");
+   }
    String query = "SELECT " + columnNames + " FROM " + schema + "." + table + whereClause + " LIMIT ? OFFSET ?";
    // ... rest of method
}

+private boolean isValidIdentifier(String identifier) {
+    return identifier != null && identifier.matches("^[a-zA-Z_][a-zA-Z0-9_]*$");
+}
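As a sanity check, the identifier-validation regex suggested above can be exercised on its own. This is a minimal standalone sketch; the class name `SqlIdentifiers` is illustrative and not part of the codebase:

```java
// Standalone check of the identifier-validation regex suggested above.
// It accepts a leading letter or underscore followed by letters, digits,
// or underscores, which rules out the quotes, dots, semicolons, and
// whitespace that SQL injection payloads rely on.
public class SqlIdentifiers {

    public static boolean isValidIdentifier(String identifier) {
        return identifier != null && identifier.matches("^[a-zA-Z_][a-zA-Z0-9_]*$");
    }

    public static void main(String[] args) {
        System.out.println(isValidIdentifier("t_indent"));                     // a legitimate table name
        System.out.println(isValidIdentifier("db_iemr"));                      // a legitimate schema name
        System.out.println(isValidIdentifier("t_indent; DROP TABLE x"));       // injection attempt is rejected
        System.out.println(isValidIdentifier("db_iemr.i_beneficiarydetails")); // dotted names are rejected too
    }
}
```

Note that because the regex rejects dots, schema and table must be validated separately before being joined into `schema.table`.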

150-152: Sanitize user-controlled data in log statements.

Logging user-controlled data directly can lead to log injection attacks.

Sanitize the logged values or use structured logging:

-logger.info("Select query central: {}", finalQuery);
-logger.info("Last Downloaded Date: {}", lastDownloadDate);
-logger.info("Query Params: {}", params);
+logger.info("Select query central executed for table: {}", table);
+logger.info("Last Downloaded Date: {}", lastDownloadDate);
+logger.info("Query parameter count: {}", params.size());
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (2)

244-283: Critical: Fix infinite loop and incomplete batch processing logic.

The current implementation has several critical issues that will cause system hangs:

  1. Infinite loop: while(true) with no proper exit condition
  2. Incomplete processing: Fetches batches but never processes them
  3. Incorrect exit logic: totalProcessed is checked but never incremented
  4. Missing batch handling: The fetched batch data is never actually processed
public String update_I_BeneficiaryDetails_for_processed_in_batches(SyncUploadDataDigester syncUploadDataDigester) {
    logger.info("Processing update_I_BeneficiaryDetails_for_processed_in_batches for table: {}", syncUploadDataDigester.getTableName());
    List<Object[]> syncData = new ArrayList<>();
    
    String query = getQueryFor_I_BeneficiaryDetails(syncUploadDataDigester.getSchemaName(),
            syncUploadDataDigester.getTableName());
    
    int limit = 1000;
    int offset = 0;
    int totalProcessed = 0;
    
    String whereClause = " WHERE Processed <> 'P' AND VanID IS NOT NULL ";
    
    while (true) {
        List<Map<String, Object>> batch;
        try {
            batch = dataSyncRepositoryCentral.getBatchForBenDetails(
                    syncUploadDataDigester.getSchemaName(),
                    syncUploadDataDigester.getTableName(),
                    syncUploadDataDigester.getServerColumns(),
                    whereClause,
                    limit,
                    offset);
        } catch (Exception e) {
            logger.error("Error fetching batch for i_beneficiarydetails: {}", e.getMessage(), e);
            return "Error fetching data for i_beneficiarydetails: " + e.getMessage();
        }
        
+       // Exit condition: no more data to process
+       if (batch == null || batch.isEmpty()) {
+           break;
+       }
+       
+       // Process the batch data
+       for (Map<String, Object> record : batch) {
+           if (record.get("BeneficiaryDetailsId") != null && record.get("VanID") != null) {
+               Object[] objArr = new Object[3];
+               objArr[0] = syncUploadDataDigester.getSyncedBy();
+               objArr[1] = record.get("BeneficiaryDetailsId");
+               objArr[2] = record.get("VanID");
+               syncData.add(objArr);
+           }
+       }
+       
+       // Execute batch update
+       if (!syncData.isEmpty()) {
+           try {
+               int[] results = dataSyncRepositoryCentral.syncDataToCentralDB(
+                       syncUploadDataDigester.getSchemaName(),
+                       syncUploadDataDigester.getTableName(),
+                       null,
+                       query,
+                       syncData);
+               totalProcessed += results.length;
+               syncData.clear(); // Clear for next batch
+           } catch (Exception e) {
+               logger.error("Error processing batch: {}", e.getMessage(), e);
+               return "Error processing batch: " + e.getMessage();
+           }
+       }
+       
+       offset += limit;
    }
    
-   if (totalProcessed > 0 || syncData.isEmpty()) {
-       logger.info("Finished processing i_beneficiarydetails. Total records processed: {}", totalProcessed);
-       return "data sync passed";
-   } else {
-       logger.error("No records were processed for i_beneficiarydetails or an unknown error occurred.");
-       return "No data processed or sync failed for i_beneficiarydetails.";
-   }
+   logger.info("Finished processing i_beneficiarydetails. Total records processed: {}", totalProcessed);
+   return "data sync passed";
}

244-283: Consider adding transaction management for batch operations.

Large batch operations should be wrapped in transactions to ensure data consistency and provide rollback capability in case of failures.

+@Transactional
public String update_I_BeneficiaryDetails_for_processed_in_batches(
        SyncUploadDataDigester syncUploadDataDigester) {
    // existing implementation
}

Also consider making the batch size configurable through application properties instead of hardcoding it to 1000.
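One way to avoid the hardcoded 1000 is to resolve the batch size from configuration with a bounded fallback. The sketch below uses a plain system property for self-containment; the key `sync.batch.size` and the clamp bounds are hypothetical, and in the actual service this would more naturally be a Spring `@Value`/`application.properties` binding:

```java
// Minimal sketch: read the batch size from configuration with a safe default
// and clamp it to a sane range so a bad value cannot disable batching.
public class BatchSizeConfig {

    static final int DEFAULT_BATCH_SIZE = 1000;
    static final int MIN_BATCH_SIZE = 1;
    static final int MAX_BATCH_SIZE = 10_000;

    public static int resolveBatchSize() {
        // Integer.getInteger returns the default when the property is absent
        // or does not parse as an integer.
        int configured = Integer.getInteger("sync.batch.size", DEFAULT_BATCH_SIZE);
        return Math.max(MIN_BATCH_SIZE, Math.min(MAX_BATCH_SIZE, configured));
    }

    public static void main(String[] args) {
        System.setProperty("sync.batch.size", "250");
        System.out.println(resolveBatchSize()); // 250 once the property is set
    }
}
```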

🧹 Nitpick comments (1)
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (1)

29-43: Consider making table groups configurable.

The hardcoded table groups might need updates when new tables are added or when table categorization changes.

Consider moving table group configuration to application properties or a database configuration table for better maintainability:

# application.properties
sync.table.groups.1=m_beneficiaryregidmapping,m_another_master_table
sync.table.groups.2=t_indent,t_indentorder,t_indentissue,t_stocktransfer,t_itemstockentry
sync.table.groups.3=i_beneficiarydetails,t_patientissue,t_physicalstockentry,t_stockadjustment,t_saitemmapping,t_patientreturn,t_facilityconsumption,t_itemstockexit
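If the groups were externalized as above, loading them back is straightforward. A minimal sketch of the parsing, assuming the hypothetical `sync.table.groups.N` keys shown above (the class name `TableGroupConfig` is illustrative):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;

// Minimal sketch: turn comma-separated "sync.table.groups.N" properties
// into ordered lists of table names, preserving group order.
public class TableGroupConfig {

    public static Map<Integer, List<String>> loadGroups(Properties props) {
        Map<Integer, List<String>> groups = new LinkedHashMap<>();
        // Consecutive keys starting at 1; stops at the first missing group.
        for (int i = 1; props.containsKey("sync.table.groups." + i); i++) {
            String raw = props.getProperty("sync.table.groups." + i);
            List<String> tables = Arrays.stream(raw.split(","))
                    .map(String::trim)
                    .filter(s -> !s.isEmpty())
                    .collect(Collectors.toList());
            groups.put(i, tables);
        }
        return groups;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("sync.table.groups.1", "m_beneficiaryregidmapping");
        props.setProperty("sync.table.groups.2", "t_indent, t_indentorder, t_indentissue");
        Map<Integer, List<String>> groups = loadGroups(props);
        System.out.println(groups.get(2)); // [t_indent, t_indentorder, t_indentissue]
    }
}
```

Loading the same values through Spring's `@ConfigurationProperties` would remove even this parsing code, at the cost of a small configuration class.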
πŸ“œ Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

πŸ“₯ Commits

Reviewing files that changed from the base of the PR and between acb09a0 and f2134dd.

πŸ“’ Files selected for processing (3)
  • pom.xml (1 hunks)
  • src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java (1 hunks)
  • src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (1 hunks)
🧰 Additional context used
πŸͺ› GitHub Check: SonarCloud
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java

[notice] 150-150: Logging should not be vulnerable to injection attacks

Change this code to not log user-controlled data.

See more on SonarQube Cloud


[notice] 151-151: Logging should not be vulnerable to injection attacks

Change this code to not log user-controlled data.

See more on SonarQube Cloud


[notice] 152-152: Logging should not be vulnerable to injection attacks

Change this code to not log user-controlled data.

See more on SonarQube Cloud


[failure] 159-159: Database queries should not be vulnerable to injection attacks

Change this code to not construct SQL queries directly from user-controlled data.

See more on SonarQube Cloud


[failure] 161-161: Database queries should not be vulnerable to injection attacks

Change this code to not construct SQL queries directly from user-controlled data.

See more on SonarQube Cloud

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Analyze (java)
πŸ”‡ Additional comments (1)
pom.xml (1)

50-66: Remove commented exclusions and verify logging dependency compatibility.

Consider deleting the commented-out <exclusions> block to keep the POM clean. With that exclusion removed, Spring Boot’s default logging starter will be active alongside your explicit SLF4J API and SLF4J Simple dependenciesβ€”please manually verify this setup doesn’t introduce conflicting logging implementations.

Affected section:

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
-       <!-- <exclusions>
-           <exclusion>
-               <groupId>org.springframework.boot</groupId>
-               <artifactId>spring-boot-starter-logging</artifactId>
-           </exclusion>
-       </exclusions> -->
    </dependency>

Please run locally to confirm a single logging framework is resolved at build time:

mvn dependency:tree | grep -i "log\|slf4j"

@vanitha1822 vanitha1822 changed the base branch from develop to release-3.4.0 July 23, 2025 05:53

Quality Gate failed

Failed conditions
1 Security Hotspot

See analysis details on SonarQube Cloud

@vanitha1822 vanitha1822 merged commit d878d4b into release-3.4.0 Jul 30, 2025
1 of 3 checks passed
@coderabbitai coderabbitai bot mentioned this pull request Aug 25, 2025
snehar-nd pushed a commit that referenced this pull request Aug 25, 2025
* Fix the issue in retrieving the Casesheet Print Data for Cancer Screening (#96)

* fix: change the return type to object to get the details

* fix: remove commented lines

* fix: add the column for NumberperWeek to store and fetch the data (#94)

* Update version in pom.xml to 3.4.0

* chore: add Lombok @DaTa to BenClinicalObservations (#97)

* fix: add file path in cancer gynecological examination (#98)

* Fix the data sync issue  (#93)

* fix: Data Sync batch processing for large data

* fix: use parameterized query

* fix: revert the updated query

* fix: add token if it is missing while calling restTemplate

* fix: update the properties

* fix: sync group wise

* fix: enable logger in pom.xml

* fix: coderabbit comments

* fix: remove logger and replace the license

* fix: remove the logs

* fix: resolve code scanning alert

* fix: resolve code scanning alert

* fix: resolve code scanning alert

* fix: resolve code scanning alert

* fix: add comment for code violation

* fix: use syncuploaddataDigester class to load the deatils

* fix: add syncuploaddigestor in implementation file too

* fix: sonarcube comments

* fix: add functionality to save the file ID's uploaded from doctor screen (#99)

* story: amm-1668 task - 1754

* story: amm-1754 updated response including father name and phone no of the beneficiary (#102)

* fix: amm-1754 changing the query to get the expected response similar to hwc

* fix: amm-1754 compilation error fix

* fix: amm-1754 argument issue fix

* fix: amm-1754 argument issue fix

* fix: amm-1754 argument issue fix

* Save the files uploaded from Doctor Screen (#100)

* fix: add file path in cancer gynecological examination

* fix: save the files uploaded from the doctor portal

* fix: get file names in the response of gynecological examination

* fix: cherry-pick the commits from develop

* fix: cherry-pick commits from develop

* Fix the Download Masters issue (#103)

* fix: resolve the conflicts

* fix: fix the issue in download masters table

* fix: remove the validation (#105)

* fix: replace the old working code (#106)

* Fix the datasync upload issue (#107)

* fix: add the schemas

* fix: remove logger

* fix: revert the old code for repository

* Fixing the datasync from local to central (#110)

* fix: datasync from local to central

* fix: fix the token

* fix: remove the token for server authorization (#111)

* Fix the datasync Demographics Issue (#112)

* fix: remove condition for i_beneficiarydetails

* fix: add logs

* fix: add logs

* fix: remove db_iemr

* fix: add log for show column names too

* fix: add date-format condition

* fix: change valid column name

* fix: change valid column name

* fix: change valid column name

* fix: change valid column name

* fix: update insert query

* fix: update cleaned column list

* fix: date conversion

* fix: conversion date-time

* fix: add date conversion

* fix: logs added

* fix: new logger

* fix: revert the date condition

* fix: revert insert code

* fix: revert insert code

* fix: date format issue

* fix: logs add

* fix: log for group and group lsit

* fix: clean the code

---------

Co-authored-by: vishwab1 <vishwanath@navadhiti.com>

* Fix the token issue for Ben-gen id generation (#114)

* fix: update server authorization for bengen

* fix: update server authorization for bengen

* fix: replace authorization for local api call (#116)

* fix: add logs (#117)

* Fix the BenGen ID Issue (#118)

* fix: add logs to check the identity-api

* fix: add logs

* fix: add logs

---------

Co-authored-by: Vanitha S <116701245+vanitha1822@users.noreply.github.com>
Co-authored-by: Amoghavarsh <93114621+5Amogh@users.noreply.github.com>
Co-authored-by: 5Amogh <amoghavarsh@navadhiti.com>
Co-authored-by: Vanitha <vanitha@navadhiti.com>
vanitha1822 added a commit that referenced this pull request Sep 24, 2025
* Fix the issue in retrieving the Casesheet Print Data for Cancer Screening (#96)

* fix: change the return type to object to get the details

* fix: remove commented lines

* fix: add the column for NumberperWeek to store and fetch the data (#94)

* Update version in pom.xml to 3.4.0

* chore: add Lombok @DaTa to BenClinicalObservations (#97)

* fix: add file path in cancer gynecological examination (#98)

* Fix the data sync issue  (#93)

* fix: Data Sync batch processing for large data

* fix: use parameterized query

* fix: revert the updated query

* fix: add token if it is missing while calling restTemplate

* fix: update the properties

* fix: sync group wise

* fix: enable logger in pom.xml

* fix: coderabbit comments

* fix: remove logger and replace the license

* fix: remove the logs

* fix: resolve code scanning alert

* fix: resolve code scanning alert

* fix: resolve code scanning alert

* fix: resolve code scanning alert

* fix: add comment for code violation

* fix: use syncuploaddataDigester class to load the deatils

* fix: add syncuploaddigestor in implementation file too

* fix: sonarcube comments

* fix: add functionality to save the file ID's uploaded from doctor screen (#99)

* story: amm-1668 task - 1754

* story: amm-1754 updated response including father name and phone no of the beneficiary (#102)

* fix: amm-1754 changing the query to get the expected response similar to hwc

* fix: amm-1754 compilation error fix

* fix: amm-1754 argument issue fix

* fix: amm-1754 argument issue fix

* fix: amm-1754 argument issue fix

* Save the files uploaded from Doctor Screen (#100)

* fix: add file path in cancer gynecological examination

* fix: save the files uploaded from the doctor portal

* fix: get file names in the response of gynecological examination

* fix: cherry-pick the commits from develop

* fix: cherry-pick commits from develop

* Fix the Download Masters issue (#103)

* fix: resolve the conflicts

* fix: fix the issue in download masters table

* fix: remove the validation (#105)

* fix: replace the old working code (#106)

* Fix the datasync upload issue (#107)

* fix: add the schemas

* fix: remove logger

* fix: revert the old code for repository

* Fixing the datasync from local to central (#110)

* fix: datasync from local to central

* fix: fix the token

* fix: remove the token for server authorization (#111)

* Fix the datasync Demographics Issue (#112)

* fix: remove condition for i_beneficiarydetails

* fix: add logs

* fix: add logs

* fix: remove db_iemr

* fix: add log for show column names too

* fix: add date-format condition

* fix: change valid column name

* fix: change valid column name

* fix: change valid column name

* fix: change valid column name

* fix: update insert query

* fix: update cleaned column list

* fix: date conversion

* fix: conversion date-time

* fix: add date conversion

* fix: logs added

* fix: new logger

* fix: revert the date condition

* fix: revert insert code

* fix: revert insert code

* fix: date format issue

* fix: logs add

* fix: log for group and group lsit

* fix: clean the code

---------

Co-authored-by: vishwab1 <vishwanath@navadhiti.com>

* Fix the token issue for Ben-gen id generation (#114)

* fix: update server authorization for bengen

* fix: update server authorization for bengen

* fix: replace authorization for local api call (#116)

* fix: add logs (#117)

* Fix the BenGen ID Issue (#118)

* fix: add logs to check the identity-api

* fix: add logs

* fix: add logs

* fix: add logs

* fix: add logs for checking

* fix: update the prepare statement

* fix: add log

* fix: add log

* fix: add logs

* fix: add Sync Result

* fix: add logs

* fix: add syncResults

* fix: add syncResults

* fix: update processed flag

* fix: update the response

* fix: update the exception block

* fix: upload fix

* fix: address duplication issue

* fix: datasync issue

* fix: remove reason and add dataaccess exception

* fix: revert the server exception

* fix: remove unwanted code

* fix: exception message

* fix: fix the error message

* fix: update the version

---------

Co-authored-by: Amoghavarsh <93114621+5Amogh@users.noreply.github.com>
Co-authored-by: 5Amogh <amoghavarsh@navadhiti.com>
Co-authored-by: vishwab1 <vishwanath@navadhiti.com>