Move code from Release 3.4.0 to Main #119
Conversation
…ning (#96) * fix: change the return type to object to get the details * fix: remove commented lines
* fix: Data Sync batch processing for large data * fix: use parameterized query * fix: revert the updated query * fix: add token if it is missing while calling restTemplate * fix: update the properties * fix: sync group wise * fix: enable logger in pom.xml * fix: coderabbit comments * fix: remove logger and replace the license * fix: remove the logs * fix: resolve code scanning alert * fix: resolve code scanning alert * fix: resolve code scanning alert * fix: resolve code scanning alert * fix: add comment for code violation * fix: use syncuploaddataDigester class to load the deatils * fix: add syncuploaddigestor in implementation file too * fix: sonarcube comments
story: amm-1668 task - 1754
* fix: add file path in cancer gynecological examination * fix: save the files uploaded from the doctor portal * fix: get file names in the response of gynecological examination
* fix: resolve the conflicts * fix: fix the issue in download masters table
* fix: add the schemas * fix: remove logger * fix: revert the old code for repository
* fix: datasync from local to central * fix: fix the token
* fix: remove condition for i_beneficiarydetails * fix: add logs * fix: add logs * fix: remove db_iemr * fix: add log for show column names too * fix: add date-format condition * fix: change valid column name * fix: change valid column name * fix: change valid column name * fix: change valid column name * fix: update insert query * fix: update cleaned column list * fix: date conversion * fix: conversion date-time * fix: add date conversion * fix: logs added * fix: new logger * fix: revert the date condition * fix: revert insert code * fix: revert insert code * fix: date format issue * fix: logs add * fix: log for group and group lsit * fix: clean the code --------- Co-authored-by: vishwab1 <vishwanath@navadhiti.com>
* fix: update server authorization for bengen * fix: update server authorization for bengen
* fix: add logs to check the identity-api * fix: add logs * fix: add logs
Walkthrough
Dependency and logging updates in pom.xml; new SLF4J usage. Token/header handling adjusted to support a literal "datasync" flow. Data sync subsystem significantly refactored with new repository, validation, batching, and logging. Domain updates add numberperWeek to tobacco habits and new fields in beneficiary flow, plus file metadata handling in cancer screening and quick consultation entities.
Changes
Sequence Diagram(s)
sequenceDiagram
autonumber
participant Client
participant API as MMU API
participant Util as RestTemplateUtil
participant Central as Central Server
participant VanDB as Van DB
Note over API,Central: Upload flow with "datasync" token
Client->>API: POST /datasync/upload (Authorization, jwtToken)
API->>Util: createRequestEntity(body, Authorization, "datasync")
Util-->>API: HttpEntity (Authorization as raw, no cookie)
API->>Central: POST upload payload
Central-->>API: 200/response
API->>VanDB: Update processed flags (schema/table specific)
API-->>Client: Status
Note over API,Central: Download and local import
Client->>API: POST /datasync/download (Authorization)
API->>Central: GET data (headers: ServerAuthorization, token="datasync")
Central-->>API: data batch
API->>API: Build local import request
API->>VanDB: Import data locally
VanDB-->>API: Import result
API-->>Client: Download/import status
sequenceDiagram
autonumber
participant Van as Van Sync Orchestrator
participant Repo as DataSyncRepositoryCentral
participant RepoDL as DataSyncRepositoryCentralDownload
participant DB as Central DB
Note over Van,Repo: Batched, validated central sync
Van->>Repo: getBatchForBenDetails(digester, where, limit, offset)
Repo->>DB: SELECT (validated schema/table/columns)
DB-->>Repo: Rows
Repo-->>Van: Batch
Van->>Repo: checkRecordIsAlreadyPresentOrNot(...)
Repo->>DB: SELECT EXISTS
DB-->>Repo: 0/1
alt Not present
Van->>Repo: syncDataToCentralDB(INSERT,..., batch)
else Present
Van->>Repo: syncDataToCentralDB(UPDATE,..., batch)
end
Repo->>DB: batchUpdate
DB-->>Repo: counts
Repo-->>Van: Result
Van->>Van: Aggregate failures, mark processed
Estimated code review effort: 5 (Critical) | ~120 minutes
Possibly related PRs
Suggested reviewers
Poem
Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.
ast-grep (0.38.6): src/main/java/com/iemr/mmu/service/common/transaction/CommonNurseServiceImpl.java
Actionable comments posted: 21
Caution
Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
src/main/java/com/iemr/mmu/utils/JwtUserIdValidationFilter.java (1)
109-118
: Bypass risk widened: treating "java/" user agents as mobile skips JWT validation. In this branch, if UA matches and
Authorization
exists, the filter proceeds without validating any token. Adding"java/"
further broadens the bypass: any client spoofing UA can skip JWT checks.Apply this diff to require validating the JWT from
Authorization
for mobile/Java clients:- } else { - String userAgent = request.getHeader("User-Agent"); - if (userAgent != null && isMobileClient(userAgent) && authHeader != null) { - try { - UserAgentContext.setUserAgent(userAgent); - filterChain.doFilter(servletRequest, servletResponse); - } finally { - UserAgentContext.clear(); - } - return; - } - } + } else { + String userAgent = request.getHeader("User-Agent"); + if (userAgent != null && isMobileClient(userAgent) && authHeader != null) { + // Expect 'Authorization: Bearer <token>' and validate it + String bearer = authHeader.startsWith("Bearer ") ? authHeader.substring(7) : authHeader; + if (jwtAuthenticationUtil.validateUserIdAndJwtToken(bearer)) { + try { + UserAgentContext.setUserAgent(userAgent); + filterChain.doFilter(servletRequest, servletResponse); + } finally { + UserAgentContext.clear(); + } + return; + } + } + }And restrict UA matching to okhttp only (drop
"java/"
), or remove UA heuristic entirely:- return userAgent.contains("okhttp") || userAgent.contains("java/"); // iOS (custom clients) + return userAgent.contains("okhttp");Also applies to: 149-155
src/main/java/com/iemr/mmu/data/benFlowStatus/BeneficiaryFlowStatus.java (2)
324-350
: Add missing assignments for visitDate and benVisitDate in the extended constructorThe newly added constructor in BeneficiaryFlowStatus.java accepts both
visitDate
andbenVisitDate
parameters but never assigns them to their corresponding fieldsβonlyserviceDate
is set. As a result, calls togetVisitDate()
orgetBenVisitDate()
will returnnull
, leading to confusing or broken behavior downstream.Please update the constructor (around line 324) as follows:
public BeneficiaryFlowStatus(Long benFlowID, Long benRegID, - Timestamp visitDate, + Timestamp visitDate, String benName, String age, Integer ageVal, Short genderID, String genderName, String villageName, String districtName, Long beneficiaryID, String servicePoint, String VisitReason, String VisitCategory, Long benVisitID, - Timestamp regDate, - Timestamp benVisitDate, + Timestamp regDate, + Timestamp benVisitDate, Long visitCode, Timestamp consultationDate, String fatherName, String preferredPhoneNum) { this.benFlowID = benFlowID; this.beneficiaryRegID = benRegID; - this.serviceDate = benVisitDate; + this.visitDate = visitDate; + this.benVisitDate = benVisitDate; + this.serviceDate = benVisitDate; this.beneficiaryName = benName; this.age = age; this.ageVal = ageVal; // β¦rest unchangedβ¦ }β’ Assign
this.visitDate = visitDate;
sogetVisitDate()
returns the provided value.
β’ Assignthis.benVisitDate = benVisitDate;
sogetBenVisitDate()
returns the provided value.
β’ Keepthis.serviceDate = benVisitDate;
as before.Ensure any code paths or tests that rely on the getters for these fields now observe the intended timestamps.
363-373
: Mismatch Between SELECT and Constructor Parameters Requires Immediate FixThe repositoryβs JPA query for getBenDetailsForLeftSidePanel returns 20 columns, but the
BeneficiaryFlowStatus
constructor invocation in getBeneficiaryFlowStatusForLeftPanel expects 21 values. This will lead to anArrayIndexOutOfBoundsException
at runtime.Action items:
- Align the query and the mapping code:
β’ In src/main/java/com/iemr/mmu/repo/benFlowStatus/BeneficiaryFlowStatusRepo.java (around lines 76β79), verify the SELECT list and add or remove the missing field so that the total column count matches the constructor parameters.
β’ In src/main/java/com/iemr/mmu/data/benFlowStatus/BeneficiaryFlowStatus.java (around lines 363β373), update the constructor invocation to use the correct number of indices (0 through N-1).- Add a fast-fail check before mapping:
Replacefor (Object[] objArr : objList) { if (objArr.length != EXPECTED_COLUMN_COUNT) { throw new IllegalStateException( "Expected " + EXPECTED_COLUMN_COUNT + " columns for left-panel mapping, got " + objArr.length); } // β¦construct using objArr[0] .. objArr[EXPECTED_COLUMN_COUNT-1] }EXPECTED_COLUMN_COUNT
with the final, agreed-upon column count.- As a long-term improvement, consider using a JPQL constructor expression or a dedicated DTO projection to eliminate positional-mapping brittleness and mismatches.
src/main/java/com/iemr/mmu/service/dataSyncActivity/DownloadDataFromServerImpl.java (1)
333-371
: Sensitive tokens and full requests/responses are logged β remove to prevent credential leakageMultiple
logger.info
lines print Authorization/ServerAuthorization and entireHttpEntity
/ResponseEntity
objects. These contain secrets (bearer/basic tokens, cookies, and potentially PII). This is a high-severity leak in logs.
- Do not log tokens.
- Do not log full request/response objects.
- If absolutely needed, log minimal, redacted metadata (e.g., HTTP status, known safe IDs).
Also,
i1
is unused.Apply this diff to redact logs and remove the unused variable:
@@ - int i = 0, i1 = 0; + int i = 0; @@ - logger.info("Authorization before calling local api="+Authorization); - logger.info("Import url="+benImportUrlLocal); + logger.info("Calling central ben ID generation; import URL configured."); @@ - logger.info("Authorization: " + Authorization); - logger.info("ServerAuthorization: " + ServerAuthorization); + // Avoid logging Authorization headers (secrets) @@ - i = 1; - logger.info("Request to benImporturllocal: " + request1); + i = 1; + logger.info("Dispatching import request to local identity service."); @@ - logger.info("Response from benImportUrlLocal: " + response1); + logger.info("Received response from local identity service with status: {}", + (response1 != null ? response1.getStatusCode() : "null")); @@ - } catch (Exception e) { - logger.error("Error while generating catch UNIQUE_ID at central server: " + e.getMessage()); - throw new Exception("Error while generating catch UNIQUE_ID at central server: " + e.getMessage()); + } catch (Exception e) { + logger.error("Error while generating UNIQUE_ID at central server", e); + throw new Exception("Error while generating UNIQUE_ID at central server: " + e.getMessage()); }Also applies to: 341-356
π§Ή Nitpick comments (37)
src/main/java/com/iemr/mmu/repo/location/DistrictBlockMasterRepo.java (1)
39-40
: Refine query readability and deployment compatibility
- The
getUserservicerolemapping
method correctly usesInteger userId
, matching both the service layer signature and the underlying viewβs column type. Converting toLong
would require cascading changes through the service interface and controller, with no immediate benefit.- To avoid schema-resolution issues in multi-schema environments, qualify the view name or confirm that
v_userservicerolemapping
is present in the default schema. For example:FROM your_schema.v_userservicerolemapping- For clarity and broader SQL compatibility, replace the
IS FALSE
predicate on aTINYINT(1)
column with a standard comparison:(OrUserServciceRoleDeleted = FALSE
= 0
if you prefer numeric literals.)Optional diff to enhance readability:
- @Query(value = " SELECT distinct StateID, StateName,WorkingDistrictID,WorkingDistrictName,blockid,blockname,villageid,villagename FROM v_userservicerolemapping WHERE UserID = :userId and UserServciceRoleDeleted is false",nativeQuery = true) - public List<Object[]> getUserservicerolemapping(@Param("userId") Integer userId); + @Query( + value = "SELECT DISTINCT StateID, StateName, WorkingDistrictID, WorkingDistrictName, " + + "blockid, blockname, villageid, villagename " + + "FROM v_userservicerolemapping " + + "WHERE UserID = :userId " + + " AND UserServciceRoleDeleted = FALSE", + nativeQuery = true + ) + public List<Object[]> getUserservicerolemapping(@Param("userId") Integer userId);src/main/resources/application.properties (1)
52-56
: Reduce borrow-time validation overhead; validate idle connections instead.
testOnBorrow=true
validates on every checkout and can add latency under load. Prefer validating while idle with periodic eviction runs.Apply:
spring.datasource.tomcat.testOnBorrow=true spring.datasource.tomcat.validationQuery=SELECT 1 spring.datasource.tomcat.validationInterval=30000 # 30 sec logging.level.org.apache.tomcat.jdbc.pool=DEBUG +spring.datasource.tomcat.testWhileIdle=true +spring.datasource.tomcat.timeBetweenEvictionRunsMillis=30000Also consider moving
logging.level.org.apache.tomcat.jdbc.pool=DEBUG
to a dev/test profile to avoid verbose pool logs in production.src/main/java/com/iemr/mmu/utils/RestTemplateUtil.java (2)
65-67
: Guard cookie header for datasync and null-safety consistently.Leverage the same
isDataSync
guard and null/empty checks to avoid adding a cookie in datasync scenarios and to keep the logic centralized.Apply this diff:
- if (jwtToken != null && !jwtToken.isEmpty() && !jwtToken.equalsIgnoreCase("datasync")) { - headers.add(HttpHeaders.COOKIE, "Jwttoken=" + jwtToken); - } + if (jwtToken != null && !jwtToken.isEmpty() && !"datasync".equalsIgnoreCase(jwtToken)) { + headers.add(HttpHeaders.COOKIE, "Jwttoken=" + jwtToken); + }
38-69
: Optional: Use HttpHeaders for clarity and stronger typing.
HttpHeaders
is purpose-built for HTTP metadata; it reads a bit cleaner and enables additional conveniences. This is optional and wonβt change behavior.Apply this diff:
- MultiValueMap<String, String> headers = new LinkedMultiValueMap<>(); + HttpHeaders headers = new HttpHeaders();src/main/java/com/iemr/mmu/data/doctor/CancerGynecologicalExamination.java (4)
40-46
: Avoid Lombok @DaTa on JPA entities; duplicate import present.
- @DaTa generates equals/hashCode/toString across all fields, which is discouraged on JPA entities (can trigger lazy-loading, proxy issues, and noisy logs).
import lombok.Data;
appears twice.Prefer explicit getters/setters (already present) or use @Getter/@Setter only. Remove the duplicate import.
Apply this diff:
-import lombok.Data; - - -import lombok.Data; +// Prefer fine-grained Lombok on JPA entities, or rely on explicit methods below +// import lombok.Getter; +// import lombok.Setter;And at class declaration:
-@Data +// @Getter +// @SetterIf you keep @Getter/@Setter, add their imports and consider a custom equals/hashCode based on the primary key (ID).
102-106
: fileIDs field: JSON annotations look fine; confirm dual-serializer behavior.This field carries both Gson (@expose) and Jackson (@JsonProperty) annotations. If both serializers are in play in different layers, thatβs okay; otherwise, itβs extra noise. Also consider
@JsonInclude(Include.NON_EMPTY)
to suppress empty arrays in responses.Would you like me to add
@JsonInclude(Include.NON_EMPTY)
here and onfiles
?
111-114
: Transient files payload: OK, but watch toString exposure if @DaTa remains.If @DaTa stays,
toString()
will includefiles
, which may dump file paths to logs. If thatβs a concern, either remove @DaTa (recommended) or add@ToString.Exclude
.Diff (only if @DaTa stays):
-@Expose -@Transient -private ArrayList<Map<String, String>> files; +@Expose +@Transient +@lombok.ToString.Exclude +private ArrayList<Map<String, String>> files;
442-448
: Redundant getters/setters if @DaTa remains.You have explicit
getFiles
/setFiles
plus @DaTa, which also generates them. Keeping both is harmless but noisy. Either remove @DaTa (preferred for JPA) or drop the manual accessors.Apply this diff if keeping @DaTa:
-public ArrayList<Map<String, String>> getFiles() { - return files; -} - -public void setFiles(ArrayList<Map<String, String>> files) { - this.files = files; -} +// Accessors generated by Lombok @Datasrc/main/java/com/iemr/mmu/service/cancerScreening/CSServiceImpl.java (3)
598-602
: Join only non-empty fileIDs; avoid persisting empty/invalid entries.Current code sets
filePath
even iffileIDs
is empty or contains blanks, resulting in""
or leading/trailing commas. Filter and join defensively.Apply this diff:
- if (cancerGynecologicalExamination.getFileIDs() != null) { - cancerGynecologicalExamination.setFilePath( - String.join(",", cancerGynecologicalExamination.getFileIDs())); - } + if (cancerGynecologicalExamination.getFileIDs() != null) { + List<String> ids = cancerGynecologicalExamination.getFileIDs() + .stream() + .filter(s -> s != null && !s.isBlank()) + .map(String::trim) + .toList(); + if (!ids.isEmpty()) { + cancerGynecologicalExamination.setFilePath(String.join(",", ids)); + } + }
986-990
: Mirror the same validation in the save path.Keep update/save behavior consistent to avoid divergent data shapes across operations.
Apply the same filtering as suggested above for the save path:
- if (cancerGynecologicalExamination.getFileIDs() != null) { - cancerGynecologicalExamination.setFilePath( - String.join(",", cancerGynecologicalExamination.getFileIDs())); - } + if (cancerGynecologicalExamination.getFileIDs() != null) { + List<String> ids = cancerGynecologicalExamination.getFileIDs() + .stream() + .filter(s -> s != null && !s.isBlank()) + .map(String::trim) + .toList(); + if (!ids.isEmpty()) { + cancerGynecologicalExamination.setFilePath(String.join(",", ids)); + } + }
978-992
: Optional: Validate that IDs do not contain commas.Since youβre using a comma delimiter, an ID with a comma would corrupt the encoding. Either validate and reject such values or switch to a safer representation (JSON array in DB or a delimiter unlikely to occur).
Would you like me to add a small validator to strip/reject IDs containing commas?
Also applies to: 592-606
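A minimal sketch of the optional validator mentioned above, assuming the IDs arrive as a plain list of strings; the class and method names are illustrative, not existing project API:

```java
import java.util.List;

// Sketch only: reject file IDs that contain the comma used as the join delimiter,
// so the persisted filePath string cannot be corrupted. Names are hypothetical.
final class FileIdValidator {

    private FileIdValidator() {
    }

    static void assertNoDelimiter(List<String> fileIds) {
        if (fileIds == null) {
            return;
        }
        for (String id : fileIds) {
            if (id != null && id.contains(",")) {
                throw new IllegalArgumentException(
                        "File ID must not contain ',' (used as the filePath delimiter): " + id);
            }
        }
    }
}
```

Calling something like this (or a stripping variant) just before the String.join(",", ...) in both the save and update paths would keep the two code paths consistent.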
src/main/java/com/iemr/mmu/service/dataSyncActivity/DataSyncRepository.java (2)
92-96
: Reduce select-query logs to debug to avoid noisy logs in production.These fire on every fetch and include the full query. Use debug level unless youβre troubleshooting.
Apply this diff:
- logger.info("Select Query started:"); - logger.info("Table Name: {}", table); - logger.info("Select Query: {}", baseQuery); + logger.debug("Select query started"); + logger.debug("Table: {}", table); + logger.debug("Query: {}", baseQuery);
108-108
: Use equalsIgnoreCase for clarity.
tableName.toLowerCase().equals("i_ben_flow_outreach")
can be simplified and avoids a tmp allocation.Apply this diff:
- if (tableName != null && tableName.toLowerCase().equals("i_ben_flow_outreach")) { + if (tableName != null && tableName.equalsIgnoreCase("i_ben_flow_outreach")) {src/main/java/com/iemr/mmu/utils/JwtUserIdValidationFilter.java (2)
74-75
: Remove unused variable and unify header casing.
jwtTokenFromHeader = request.getHeader("Jwttoken");
is never used, and header name casing is inconsistent with the validated header ("JwtToken"). Clean up and standardize on one header name.Apply this diff:
- // Log headers for debugging - String jwtTokenFromHeader = request.getHeader("Jwttoken"); + // (header retrieval handled below)Optionally, standardize the validated header to
Jwttoken
(cookie name parity) orAuthorization: Bearer ...
.
45-45
: CORS allow-headers list is overly duplicated by case.HTTP header names are case-insensitive; listing multiple case permutations is unnecessary and noisy. Keep a canonical set.
Apply this diff:
- response.setHeader("Access-Control-Allow-Headers", "Authorization, Content-Type, Accept, Jwttoken,serverAuthorization, ServerAuthorization, serverauthorization, Serverauthorization"); + response.setHeader("Access-Control-Allow-Headers", "Authorization, Content-Type, Accept, Jwttoken, ServerAuthorization");src/main/java/com/iemr/mmu/service/dataSyncActivity/UploadDataToServerImpl.java (3)
229-232
: No-op JSON serialization β either log or removeThis line computes a pretty-printed JSON and discards it.
Replace with an actual log (debug) or delete:
- objectMapper.writerWithDefaultPrettyPrinter().writeValueAsString(response); + logger.debug("Sync failed summary: {}", objectMapper.writerWithDefaultPrettyPrinter().writeValueAsString(response));
276-291
: Be careful with logging sampled records; could contain PIILogging βSample recordβ even at debug can leak beneficiary details. Consider either:
- log only keys/column names and counts, or
- guard behind trace-level, or
- sanitize values (mask phone numbers, names).
Example tweak:
- if (!resultSetList.isEmpty()) { - logger.debug("Sample record: {}", resultSetList.get(0)); - } + if (logger.isTraceEnabled() && !resultSetList.isEmpty()) { + logger.trace("Sample record keys: {}", resultSetList.get(0).keySet()); + }
349-357
: Guard processed-flag update with stricter response validation and add minimal error loggingCurrently only a body field βstatusCodeβ drives updates. Also check HTTP status and log failures to help operators.
- if (response != null && response.hasBody()) { + if (response != null && response.getStatusCode().is2xxSuccessful() && response.hasBody()) { JSONObject obj = new JSONObject(response.getBody()); if (obj != null && obj.has("statusCode") && obj.getInt("statusCode") == 200) { StringBuilder vanSerialNos = getVanSerialNoListForSyncedData(vanAutoIncColumnName, dataToBesync); i = dataSyncRepository.updateProcessedFlagInVan(schemaName, tableName, vanSerialNos, vanAutoIncColumnName, user); + } else { + logger.error("Unexpected upload response body for {}.{}: {}", schemaName, tableName, response.getBody()); } }src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentralDownload.java (3)
64-77
: Existence check can be simpler and faster; add LIMIT/EXISTS and parameterize values.
- Current query selects the auto-inc column, loads all matches into a list, then checks
size()
. PreferSELECT 1 ... LIMIT 1
(orEXISTS
) andqueryForObject(Boolean/Integer.class)
for O(1) semantics.- Also consider validating
vanAutoIncColumnName
via whitelist; itβs concatenated into SELECT.Apply:
- StringBuilder queryBuilder = new StringBuilder("SELECT "); - queryBuilder.append(vanAutoIncColumnName); - queryBuilder.append(" FROM "); - queryBuilder.append(schemaName+"."+tableName); + StringBuilder queryBuilder = new StringBuilder("SELECT 1 FROM "); + queryBuilder.append(schemaName + "." + tableName); ... - List<Map<String, Object>> resultSet = jdbcTemplate.queryForList(query, queryParams); - if (resultSet != null && resultSet.size() > 0) - return 1; - else - return 0; + Integer one = jdbcTemplate.query( + query + " LIMIT 1", + ps -> { + for (int idx = 0; idx < queryParams.length; idx++) ps.setObject(idx + 1, queryParams[idx]); + }, + rs -> rs.next() ? 1 : 0 + ); + return one != null ? one : 0;Also applies to: 86-99, 100-108
78-85
: Replace long chain of equalsIgnoreCase with a Set membership check.Readability and maintenance suffer with a long
||
chain. ASet<String>
check is clearer and faster.- if ((tableName.equalsIgnoreCase("t_patientissue") || tableName.equalsIgnoreCase("t_physicalstockentry") - || tableName.equalsIgnoreCase("t_stockadjustment") || tableName.equalsIgnoreCase("t_saitemmapping") - || tableName.equalsIgnoreCase("t_stocktransfer") || tableName.equalsIgnoreCase("t_patientreturn") - || tableName.equalsIgnoreCase("t_facilityconsumption") || tableName.equalsIgnoreCase("t_indent") - || tableName.equalsIgnoreCase("t_indentorder") || tableName.equalsIgnoreCase("t_indentissue") - || tableName.equalsIgnoreCase("t_itemstockentry") || tableName.equalsIgnoreCase("t_itemstockexit")) - && syncFacilityID > 0) { + if (ALLOWED_TABLES.contains(tableName.toLowerCase()) && syncFacilityID > 0) {
50-53
: Avoid creating a new JdbcTemplate each call.
getJdbcTemplate()
returns a new instance every time, but the class also stores it in a field. Lazily initialize once and reuse.private JdbcTemplate getJdbcTemplate() { - return new JdbcTemplate(dataSource); + if (jdbcTemplate == null) { + jdbcTemplate = new JdbcTemplate(dataSource); + } + return jdbcTemplate; }src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetMasterDataFromCentralForVanImpl.java (2)
67-67
: Typo in log message."ger master data=" β "get master data=" for clarity.
- logger.info("ger master data="+ tableDetails.getSchemaName()); + logger.info("get master data = {}", tableDetails.getSchemaName());
43-47
: Remove Unused DataSyncRepositoryCentral InjectionConfirmed that
dataSyncRepositoryCentral
is injected but never referenced elsewhere in GetMasterDataFromCentralForVanImpl.java (only its declaration at lines 40β42) β it can be safely removed to reduce confusion and unnecessary wiring.Recommended changes:
- Delete the unused field and its
@Autowired
annotation.- Remove the corresponding import (e.g.
com.iemr.mmu.repository.dataSyncLayerCentral.DataSyncRepositoryCentral
).Suggested diff:
--- a/src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetMasterDataFromCentralForVanImpl.java +++ b/src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetMasterDataFromCentralForVanImpl.java @@ -40,6 +40,3 @@ public class GetMasterDataFromCentralForVanImpl implements GetMasterDataFromCentralForVan { - @Autowired - private DataSyncRepositoryCentral dataSyncRepositoryCentral; - @Autowired private DataSyncRepositoryCentralDownload dataSyncRepositoryCentralDownload;src/main/java/com/iemr/mmu/service/dataSyncActivity/DownloadDataFromServerTransactionalImpl.java (1)
112-113
: Confirm schema change and handle return codes
- All five invocations of
updateProcessedFlagToCentral
now use the"db_iemr_sync"
schema (lines 112, 139, 167, 194, 222 in DownloadDataFromServerTransactionalImpl.java). Please verify that the central API indeed expects processed-flag updates againstdb_iemr_sync
rather thandb_iemr
.- Each call assigns its result to a local
int updateFlag
which is never used. You should either:
- Check and act on the return code (e.g., log failures, throw or abort on
updateFlag <= 0
), or- Drop the unused variable if the return value is not needed.
Locations to update:
- src/main/java/com/iemr/mmu/service/dataSyncActivity/DownloadDataFromServerTransactionalImpl.java: lines 112, 139, 167, 194, 222
Proposed cleanup (optional refactor β drop unused variable):
- int updateFlag = updateProcessedFlagToCentral("db_iemr_sync", "t_indent", ids, ServerAuthorization, token); + updateProcessedFlagToCentral("db_iemr_sync", "t_indent", ids, ServerAuthorization, token);Or handle the return code explicitly:
int updateFlag = updateProcessedFlagToCentral(schema, tableName, ids, auth, token); if (updateFlag <= 0) { logger.error("Processed-flag update failed for {} (ids={}): return={}", tableName, ids, updateFlag); throw new SyncException("Failed to mark records as processed in central"); }src/main/java/com/iemr/mmu/repo/nurse/anc/BenPersonalHabitRepo.java (2)
42-45
: Index shift from adding numberperWeek β verify all Object[] consumers are updatedAdding
numberperWeek
betweennumberperDay
andDate(tobaccoUseDuration)
changes the indices for downstream mappers. I spot the corresponding adjustments in BenPersonalHabit.getPersonalDetails(...) and constructors, but please re-check any other consumers (e.g., service-layer DTO builders) that read from this query to prevent off-by-one bugs at runtime.Optionally consider Spring Data projections (DTO/interface-based) to avoid fragile index-based mappings.
54-56
: Added numberperWeek to habit-detail SELECT β mapping looks consistentThe field order aligns with its usage in BenPersonalHabit.getPersonalDetails(...) (indices 0..21). Good catch on placing
numberperWeek
right afternumberperDay
.Projection-based mapping would make this more resilient to future column insertions.
src/main/java/com/iemr/mmu/controller/dataSyncLayerCentral/MMUDataSyncVanToServer.java (1)
39-44
: HttpServletRequest injected but unused; keep API minimal or wire it through intentionally
HttpServletRequest
andCookieUtil
are added but not used. If the goal is to let RestTemplateUtil fetch JWT from cookies, RequestContextHolder already provides the request without adding a parameter here. Otherwise, if you intend to read something from the request (e.g., client IP, headers), please use it, or drop it to keep the endpoint signature stable.Apply this diff to remove the unused parameter/imports:
@@ -import com.iemr.mmu.utils.CookieUtil; +// import com.iemr.mmu.utils.CookieUtil; // not used @@ -import jakarta.servlet.http.HttpServletRequest; +// import jakarta.servlet.http.HttpServletRequest; // not used @@ - public String dataSyncToServer(@RequestBody String requestOBJ, - @RequestHeader(value = "Authorization") String Authorization, HttpServletRequest request) { + public String dataSyncToServer(@RequestBody String requestOBJ, + @RequestHeader(value = "Authorization") String Authorization) {Also applies to: 63-66
src/main/java/com/iemr/mmu/service/dataSyncActivity/DownloadDataFromServerImpl.java (1)
189-196
: Sentinel "datasync" path correctly engages RestTemplateUtilβs non-Bearer behaviorPassing
"datasync"
ensures Authorization is forwarded as-is and avoids cookie header injection. Consider centralizing the sentinel into a shared constant to avoid stringly-typed usage across the codebase.You can define a public constant (e.g.,
RestTemplateUtil.DATASYNC_SENTINEL = "datasync"
) and reuse it here and elsewhere.src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java (2)
90-112
: Column-name validator misses common function patterns
isValidColumnNamesList
only special-casesdate_format(...)
. Elsewhere in the project and JPQL, you also useDate(column)
and sometimes aliases. If future callers pass expressions likeDate(...)
or aliased columns, this will fail validation. Either extend the validator (supportdate(
and optionalAS <alias>
), or enforce plain column lists in callers.I can provide a hardened parser if you plan to allow a small, safe subset of SQL functions.
115-152
: Type-safety nit: parameters treated as strings for numeric columns
vanSerialNo
andvanID
are passed asString
and bound as such. If DB columns are numeric, prefer numeric types to avoid implicit casts that can hinder index usage.- public int checkRecordIsAlreadyPresentOrNot(String schemaName, String tableName, String vanSerialNo, String vanID, + public int checkRecordIsAlreadyPresentOrNot(String schemaName, String tableName, long vanSerialNo, long vanID, String vanAutoIncColumnName, int syncFacilityID) { @@ - params.add(vanSerialNo); + params.add(vanSerialNo); @@ - params.add(vanID); + params.add(vanID);(Cascade this change to call sites if feasible.)
src/main/java/com/iemr/mmu/data/anc/BenPersonalHabit.java (2)
594-596
: Null and format handling for numberperWeek from DTO mapCasting via
new Short(String)
is fine given the guard, but considerShort.parseShort(...)
for clarity and to avoid boxing patterns; also handle invalid numeric strings defensively if the payload is not strictly validated upstream.- benPersonalHabit.setNumberperWeek(new Short(tobaccoInfo.get("numberperWeek"))); + benPersonalHabit.setNumberperWeek(Short.parseShort(tobaccoInfo.get("numberperWeek")));
753-756
: Key naming inconsistency nit in mapsWithin the same method, tobacco keys use mixed casing (
OtherTobaccoUseType
vs earlierotherTobaccoUseType
). This predates the change but will continue to surprise API consumers. Consider normalizing keys in a follow-up to reduce front-end conditionals.src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (5)
42-42
: Remove unused constant SERVER_COLUMNS_NOT_REQUIREDThe constant
SERVER_COLUMNS_NOT_REQUIRED
is set tonull
and only used once at line 472. Consider removing this constant and directly passingnull
at the usage site, or provide a meaningful value if it serves a specific purpose.- private static final String SERVER_COLUMNS_NOT_REQUIRED = null; // Renamed for clarity
At line 472, directly use
null
:- SERVER_COLUMNS_NOT_REQUIRED, queryUpdate, syncDataListUpdate); + null, queryUpdate, syncDataListUpdate);
329-337
: Improve robustness of SQL function parsingThe current logic for extracting column names from SQL functions like
date_format()
is fragile. It assumes a specific format and may fail with nested functions or different SQL expressions.Consider using a more robust approach with regex or a dedicated parser:
- if (key.startsWith("date_format(") && key.endsWith(")")) { - int start = key.indexOf("(") + 1; - int end = key.indexOf(","); - if (end > start) { - cleanKey = key.substring(start, end).trim(); - } else { - // Fallback if format is unexpected - cleanKey = key.substring(start, key.indexOf(")")).trim(); - } + // Handle SQL functions more generically + if (key.contains("(") && key.endsWith(")")) { + // Extract the column name from SQL functions like date_format(column, format) + Pattern pattern = Pattern.compile("\\w+\\s*\\(\\s*([^,)]+)"); + Matcher matcher = pattern.matcher(key); + if (matcher.find()) { + cleanKey = matcher.group(1).trim(); + } else { + logger.warn("Unable to extract column name from SQL expression: {}", key); + cleanKey = key; // Use original as fallback + }
123-130
: Redundant table lookup in sync logicThe code iterates through
dataToBesync
to find a map with a matchingtableName
, butsyncTableName
is already known fromsyncUploadDataDigester.getTableName()
. This lookup appears unnecessary and could lead to confusion.Simplify the logic by directly checking if the table exists in any group:
- for (Map<String, Object> map : dataToBesync) { - if (map.get("tableName") != null - && map.get("tableName").toString().equalsIgnoreCase(syncTableName)) { - syncSuccess = syncTablesInGroup(syncUploadDataDigester.getSchemaName(), syncTableName, - syncUploadDataDigester); - foundInGroup = true; - break; - } - } + // Check if the table exists in any predefined group + for (List<String> tables : TABLE_GROUPS.values()) { + if (tables.stream().anyMatch(t -> t.equalsIgnoreCase(syncTableName))) { + syncSuccess = syncTablesInGroup(syncUploadDataDigester.getSchemaName(), syncTableName, + syncUploadDataDigester); + foundInGroup = true; + break; + } + }
355-394
: Consider extracting facility-based processing logicThe switch statement for facility-based 'Processed' status updates is quite long and could be extracted into a separate method for better maintainability.
Extract the facility processing logic:
private void applyFacilityBasedProcessing(Map<String, Object> record, String tableName, Integer facilityID) { if (facilityID == null) { return; } switch (tableName.toLowerCase()) { case "t_indent": case "t_indentorder": if (matchesFacilityID(record, "FromFacilityID", facilityID)) { record.put("Processed", "P"); } break; case "t_indentissue": if (matchesFacilityID(record, "ToFacilityID", facilityID)) { record.put("Processed", "P"); } break; case "t_stocktransfer": if (matchesFacilityID(record, "TransferToFacilityID", facilityID)) { record.put("Processed", "P"); } break; case "t_itemstockentry": if (matchesFacilityID(record, "FacilityID", facilityID)) { record.put("Processed", "P"); } break; default: // No specific facility ID logic for other tables break; } } private boolean matchesFacilityID(Map<String, Object> record, String fieldName, Integer expectedID) { if (record.containsKey(fieldName) && record.get(fieldName) instanceof Number) { return ((Number) record.get(fieldName)).intValue() == expectedID; } return false; }Then replace lines 350-395 with:
applyFacilityBasedProcessing(cleanRecord, syncTableName, facilityIDFromDigester);
261-261
: Validate or Parameterize thewhereClause
ingetBatchForBenDetails
Although schema, table, and column names are validated before building the query, the
whereClause
parameter is injected directly into the SQL string without any validation or parameter binding. This design introduces a latent SQLβinjection riskβtoday the clause is a hardcoded constant, but the public method signature accepts any string.β’ In DataSyncRepositoryCentral.java (around line 218):
String query = String.format( "SELECT %s FROM %s.%s %s LIMIT ? OFFSET ?", columnNames, schema, table, whereClause ); // NOSONARβ
whereClause
isnβt validated against a whitelist or parameterized.β’ In GetDataFromVanAndSyncToDBImpl.java (line 261):
String problematicWhereClause = " WHERE Processed <> 'P' AND VanID IS NOT NULL ";β Current usage is safe, but passing arbitrary clauses remains possible.
Recommendations:
- Restrict the
whereClause
API:
β’ Change the method to accept only specific filter enums or a builder object instead of raw SQL.
β’ Or validate the clause against a strict regex/whitelist of allowed columns and operators.- Use preparedβstatement placeholders for any dynamic values within the clause (e.g.,
"WHERE Processed <> ? AND VanID IS NOT NULL"
with params["P"]
).- Remove the
NOSONAR
suppression once the clause is validated/parameterized.
π Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
π‘ Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
π Files selected for processing (26)
pom.xml
(3 hunks)src/main/java/com/iemr/mmu/controller/dataSyncActivity/StartSyncActivity.java
(2 hunks)src/main/java/com/iemr/mmu/controller/dataSyncLayerCentral/MMUDataSyncVanToServer.java
(2 hunks)src/main/java/com/iemr/mmu/data/anc/BenPersonalHabit.java
(9 hunks)src/main/java/com/iemr/mmu/data/benFlowStatus/BeneficiaryFlowStatus.java
(3 hunks)src/main/java/com/iemr/mmu/data/doctor/CancerGynecologicalExamination.java
(4 hunks)src/main/java/com/iemr/mmu/data/quickConsultation/BenClinicalObservations.java
(1 hunks)src/main/java/com/iemr/mmu/repo/benFlowStatus/BeneficiaryFlowStatusRepo.java
(1 hunks)src/main/java/com/iemr/mmu/repo/location/DistrictBlockMasterRepo.java
(1 hunks)src/main/java/com/iemr/mmu/repo/nurse/BenVisitDetailRepo.java
(1 hunks)src/main/java/com/iemr/mmu/repo/nurse/anc/BenPersonalHabitRepo.java
(2 hunks)src/main/java/com/iemr/mmu/service/cancerScreening/CSNurseServiceImpl.java
(3 hunks)src/main/java/com/iemr/mmu/service/cancerScreening/CSServiceImpl.java
(2 hunks)src/main/java/com/iemr/mmu/service/common/transaction/CommonNurseServiceImpl.java
(2 hunks)src/main/java/com/iemr/mmu/service/dataSyncActivity/DataSyncRepository.java
(4 hunks)src/main/java/com/iemr/mmu/service/dataSyncActivity/DownloadDataFromServerImpl.java
(3 hunks)src/main/java/com/iemr/mmu/service/dataSyncActivity/DownloadDataFromServerTransactionalImpl.java
(5 hunks)src/main/java/com/iemr/mmu/service/dataSyncActivity/UploadDataToServerImpl.java
(5 hunks)src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java
(2 hunks)src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentralDownload.java
(1 hunks)src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java
(1 hunks)src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetMasterDataFromCentralForVanImpl.java
(3 hunks)src/main/java/com/iemr/mmu/service/ncdscreening/NCDScreeningServiceImpl.java
(1 hunks)src/main/java/com/iemr/mmu/utils/JwtUserIdValidationFilter.java
(4 hunks)src/main/java/com/iemr/mmu/utils/RestTemplateUtil.java
(3 hunks)src/main/resources/application.properties
(2 hunks)
π§° Additional context used
𧬠Code graph analysis (9)
src/main/java/com/iemr/mmu/controller/dataSyncLayerCentral/MMUDataSyncVanToServer.java (1)
src/main/java/com/iemr/mmu/utils/response/OutputResponse.java (1)
OutputResponse
(37-206)
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentralDownload.java (1)
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetMasterDataFromCentralForVanImpl.java (1)
Service
(38-75)
src/main/java/com/iemr/mmu/controller/dataSyncActivity/StartSyncActivity.java (1)
src/main/java/com/iemr/mmu/utils/response/OutputResponse.java (1)
OutputResponse
(37-206)
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java (1)
src/main/java/com/iemr/mmu/data/syncActivity_syncLayer/SyncUploadDataDigester.java (1)
SyncUploadDataDigester
(27-121)
src/main/java/com/iemr/mmu/data/quickConsultation/BenClinicalObservations.java (1)
src/main/java/com/iemr/mmu/data/doctor/CancerGynecologicalExamination.java (1)
Entity
(44-451)
src/main/java/com/iemr/mmu/service/dataSyncActivity/UploadDataToServerImpl.java (1)
src/main/java/com/iemr/mmu/utils/RestTemplateUtil.java (1)
RestTemplateUtil
(35-70)
src/main/java/com/iemr/mmu/service/cancerScreening/CSNurseServiceImpl.java (3)
src/main/java/com/iemr/mmu/service/cancerScreening/CSServiceImpl.java (1)
Service
(70-1251)src/main/java/com/iemr/mmu/service/common/transaction/CommonNurseServiceImpl.java (1)
Service
(128-4000)src/main/java/com/iemr/mmu/service/common/transaction/CommonServiceImpl.java (1)
Service
(89-847)
src/main/java/com/iemr/mmu/service/dataSyncActivity/DownloadDataFromServerImpl.java (1)
src/main/java/com/iemr/mmu/utils/RestTemplateUtil.java (1)
RestTemplateUtil
(35-70)
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java (3)
src/main/java/com/iemr/mmu/service/dataSyncActivity/DataSyncRepository.java (1)
Service
(46-140)src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java (1)
Service
(40-240)src/main/java/com/iemr/mmu/data/syncActivity_syncLayer/SyncUploadDataDigester.java (1)
SyncUploadDataDigester
(27-121)
π Additional comments (13)
src/main/resources/application.properties (1)
10-10
: spring.datasource.tomcat.max-wait is valid in Spring Boot 3.2.2
As documented in the Spring Boot 3.2.2 reference (DataSource Configuration β Tomcat-specific settings),spring.datasource.tomcat.max-wait
is a supported property that maps toDataSource#setMaxWait(long)
, so your setting of60000
ms will be applied at runtime. (docs.spring.io)src/main/java/com/iemr/mmu/repo/nurse/BenVisitDetailRepo.java (1)
71-72
: No remaining incorrect List<Objects[]> usages detected β approving changesThe search for
List<Objects[]>
returned no matches, confirming all projections now correctly useList<Object[]>
.src/main/java/com/iemr/mmu/utils/RestTemplateUtil.java (1)
38-69
: Call-site audit complete: ensure βdatasyncβ flag is intentionalWe scanned all 40+ invocations of
RestTemplateUtil.createRequestEntity
:
- Data-sync paths in DownloadDataFromServerImpl (lines 189, 337) and UploadDataToServerImpl (line 344) correctly pass the literal
"datasync"
.- Non-sync flows (TeleConsultationServiceImpl, RegistrarServiceImpl, CommonDoctorServiceImpl, CSCarestreamServiceImpl, etc.) uniformly pass either the real JWT
token
or""
to trigger cookie-based auth.However, in DownloadDataFromServerTransactionalImpl (lines 247 & 271), the third argument is the variable
token
rather than the"datasync"
literal.
β’ Please verify whethertoken
is guaranteed to equal"datasync"
in this context.
β’ If not, update those calls to pass"datasync"
explicitly to avoid unintentional bearer semantics and cookie injection.Relevant locations:
- src/main/java/com/iemr/mmu/service/dataSyncActivity/DownloadDataFromServerTransactionalImpl.java:247
- src/main/java/com/iemr/mmu/service/dataSyncActivity/DownloadDataFromServerTransactionalImpl.java:271
src/main/java/com/iemr/mmu/repo/benFlowStatus/BeneficiaryFlowStatusRepo.java (1)
76-79
: Confirmed column-order alignment β The 21 fields in the@Query
inBeneficiaryFlowStatusRepo.java
line 76β79 exactly line up with the 21 parameters (indices 0β20) of theBeneficiaryFlowStatus(...)
constructor used ingetBeneficiaryFlowStatusForLeftPanel
inBeneficiaryFlowStatus.java
(verified via sed output).To guard against future βoffβbyβoneβ errors, consider an optional refactor:
β’ Switch to a JPQL constructor expression returning a dedicated DTO, for example:@Query( "SELECT new com.iemr.mmu.view.LeftPanelDTO(" + "t.benFlowID, t.beneficiaryRegID, β¦ , t.preferredPhoneNum" + ") FROM BeneficiaryFlowStatus t " + "WHERE t.beneficiaryRegID = :benRegID AND t.benFlowID = :benFlowID" )This will eliminate fragile
Object[]
index casts.
β’ If you choose to keep theObject[]
approach, please add a comment above the@Query
explicitly documenting the expected column order.src/main/java/com/iemr/mmu/service/cancerScreening/CSNurseServiceImpl.java (2)
748-748
: Type fix to List<Object[]> is correctSwitching from List<Objects[]> to List<Object[]> eliminates a generics misuse and compiles cleanly with the repository signature.
66-74
: AES decryption bean wiring β verified and available across all profilesThe AESEncryptionDecryption class in
src/main/java/com/iemr/mmu/utils/AESEncryption/AESEncryptionDecryption.java
is annotated with@Component
, so it will be picked up by Springβs component scan. The main application class (com.iemr.mmu.Application
) uses@SpringBootApplication
in thecom.iemr.mmu
root package, which coverscom.iemr.mmu.utils.AESEncryption
and its subpackages. No@Profile
or conditional annotations were found on AESEncryptionDecryption, so the bean will be instantiated in all deployment profiles.LGTM.
src/main/java/com/iemr/mmu/service/dataSyncActivity/UploadDataToServerImpl.java (2)
162-174
: Nice: batch sync logging improves observabilityThe added info/debug logs around batch counts and acknowledgements are helpful for triaging sync issues.
319-322
: Optional Cleanup: Remove Unusedtoken
ParameterIt looks like the
token
argument on syncDataToServer(...) is never actually used inside the method, which can lead to confusion for callers and future maintainers.β’ In src/main/java/com/iemr/mmu/service/dataSyncActivity/UploadDataToServerImpl.java
β RemoveString token
from the method signature (lines 319β322).
β Drop the lasttoken
argument from both internal calls at lines 169 and 188.β’ Verify that no other code in
syncDataToServer
(for example in anycreateRequestEntity
invocation) references this parameter.If you find that an invariant value (e.g.
"datasync"
) is meant to stay hard-coded, itβs safe to remove the parameter entirely. If instead you do need a dynamic token later, thread it through into the request creation.src/main/java/com/iemr/mmu/service/common/transaction/CommonNurseServiceImpl.java (1)
1088-1092
: Column added: ensure downstream consumers can handle the new key.The new output column "Number Per Week" (key:
numberperWeek
) is additive, but clients parsing a fixed column set may need updates.Do UI/report consumers expect and render
numberperWeek
? If needed, bump response version or notify consumers.src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/DataSyncRepositoryCentral.java (1)
56-77
: Hardcoded schema/table whitelists β confirm coverage for all environmentsWhitelisting is great for safety; please confirm all runtime schemas/tables used by sync flows are captured, especially for multi-tenant or APL vs non-APL deployments. Missing entries will throw IllegalArgumentException at runtime.
Would you like a script to enumerate distinct schema/table values seen in production payloads to validate the whitelist?
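If a quick in-process check is preferred over a separate script, a hedged sketch of comparing the whitelist against information_schema; the schema name, whitelist contents, and class name are assumptions:

```java
import java.util.List;
import java.util.Set;
import org.springframework.jdbc.core.JdbcTemplate;

// Sketch: list tables present in the sync schema that the hardcoded whitelist
// does not cover. Whitelist contents and schema name are assumptions here.
final class WhitelistCoverageCheck {

    static List<String> tablesMissingFromWhitelist(JdbcTemplate jdbcTemplate,
                                                   String schema,
                                                   Set<String> whitelistedTables) {
        List<String> actualTables = jdbcTemplate.queryForList(
                "SELECT table_name FROM information_schema.tables WHERE table_schema = ?",
                String.class, schema);
        return actualTables.stream()
                .filter(table -> !whitelistedTables.contains(table.toLowerCase()))
                .toList();
    }
}
```

Logging the returned list once at startup, or exposing it through an admin endpoint, would surface missing entries before they fail at runtime with IllegalArgumentException.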
src/main/java/com/iemr/mmu/data/anc/BenPersonalHabit.java (3)
340-351
: Accessor methods for numberperWeek β OKGetters/setters are straightforward and consistent with the field type.
651-670
: Constructor and index remapping β consistent with repository SELECT order
- The tobacco constructor adds
numberperWeek
in the expected slot;- The wide constructor used by getPersonalDetails(...) maps indices 6..21 correctly relative to the repo query;
- The header object
personalDetails
mapping for indices 0..5,12,19 matches the SELECT.LGTM.
Also applies to: 706-725, 732-742
96-99
: Missing DB migration fornumberperWeek
columnI ran a repository-wide search for any SQL/XML/YAML/YML migration that adds or alters the
NumberperWeek
column ont_BenPersonalHabit
but found no references. Please manually verify that:
- A migration (e.g., Flyway SQL, Liquibase changelog, or equivalent) exists which adds
NumberperWeek
to thet_BenPersonalHabit
table.- That migration has been applied (via DDL or migration tooling) in all target environments (development, QA, staging, production).
<!-- <exclusions>
<exclusion>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-logging</artifactId>
</exclusion>
</exclusions>
</exclusions> -->
</dependency>
π‘ Verification agent
π§© Analysis chain
Re-enabling Spring Boot default logging while adding slf4j-simple will cause multiple SLF4J bindings
With spring-boot-starter-logging active (Logback binding), adding slf4j-simple introduces two bindings and noisy runtime warnings. Prefer keeping Boot's Logback (especially since logback-ecs-encoder is used) and remove slf4j-simple.
Apply the diffs below (also removes duplicate slf4j blocks later):
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
- <!-- <exclusions>
- <exclusion>
- <groupId>org.springframework.boot</groupId>
- <artifactId>spring-boot-starter-logging</artifactId>
- </exclusion>
- </exclusions> -->
</dependency>
- <dependency>
- <groupId>org.slf4j</groupId>
- <artifactId>slf4j-api</artifactId>
- <version>${slf4j.version}</version>
- </dependency>
- <dependency>
- <groupId>org.slf4j</groupId>
- <artifactId>slf4j-simple</artifactId>
- <version>${slf4j.version}</version>
- </dependency>
- <dependency>
- <groupId>org.slf4j</groupId>
- <artifactId>slf4j-api</artifactId>
- <version>${slf4j.version}</version>
- </dependency>
- <dependency>
- <groupId>org.slf4j</groupId>
- <artifactId>slf4j-simple</artifactId>
- <version>${slf4j.version}</version>
- </dependency>
If you truly need to override the binding, exclude spring-boot-starter-logging from all starters and add a single SLF4J binding, but that would break logback-ecs-encoder usage.
Search for multiple SLF4J bindings and the presence of slf4j.version property:
π Script executed:
#!/bin/bash
rg -n "slf4j-simple|logback-classic|slf4j.version" -C2 pom.xml
Length of output: 689
Remove duplicate SLF4J bindings and rely on Spring Bootβs Logback
To avoid multiple SLF4J bindings (Logback + slf4j-simple) and the resulting runtime warnings, remove your explicit SLF4J dependencies and let Spring Bootβs default starter-logging (Logback + logback-ecs-encoder) handle logging. Based on the grep output, youβve declared two identical blocks of slf4j-api
+ slf4j-simple
(around lines 58β66 and 177β185), and you also have a commented-out <exclusions>
block under spring-boot-starter
(lines 50β56).
Please apply the following diffs to pom.xml
:
--- a/pom.xml
+++ b/pom.xml
@@ -50,9 +50,6 @@
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
- <!-- <exclusions>
- <exclusion>
- <groupId>org.springframework.boot</groupId>
- <artifactId>spring-boot-starter-logging</artifactId>
- </exclusion>
- </exclusions> -->
</dependency>
--- a/pom.xml
+++ b/pom.xml
@@ -56,11 +56,3 @@
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>${slf4j.version}</version>
- </dependency>
- <dependency>
- <groupId>org.slf4j</groupId>
- <artifactId>slf4j-simple</artifactId>
- <version>${slf4j.version}</version>
- </dependency>
--- a/pom.xml
+++ b/pom.xml
@@ -177,11 +177,3 @@
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>${slf4j.version}</version>
- </dependency>
- <dependency>
- <groupId>org.slf4j</groupId>
- <artifactId>slf4j-simple</artifactId>
- <version>${slf4j.version}</version>
- </dependency>
After this change, only Logback (via spring-boot-starter-logging
) will serve as the SLF4J binding, eliminating duplicate bindings and preserving logback-ecs-encoder functionality.
Committable suggestion skipped: line range outside the PR's diff.
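As a quick runtime confirmation that only one binding remains after the cleanup, a small hedged sketch; it only inspects the active ILoggerFactory and makes no assumptions about project classes:

```java
import org.slf4j.LoggerFactory;

// Sketch: print which SLF4J backend is actually bound at runtime. With only
// spring-boot-starter-logging on the classpath this should report a Logback
// LoggerContext rather than an slf4j-simple factory.
public class Slf4jBindingCheck {
    public static void main(String[] args) {
        System.out.println("Active SLF4J backend: "
                + LoggerFactory.getILoggerFactory().getClass().getName());
    }
}
```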
π€ Prompt for AI Agents
In pom.xml around lines 50β56, 58β66 and 177β185: there are two duplicate
explicit slf4j-api + slf4j-simple dependency blocks and a commented-out
exclusions block under spring-boot-starter; remove both slf4j-api/slf4j-simple
dependency sections (the ones around lines 58β66 and 177β185) so slf4j-simple is
no longer on the classpath, and ensure you do not exclude
spring-boot-starter-logging (leave the exclusion commented/removed) so Spring
Bootβs Logback (and logback-ecs-encoder) remains the sole SLF4J binding; finally
scan the POM to confirm no other slf4j-simple declarations remain.
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>${slf4j.version}</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>${slf4j.version}</version>
</dependency>
π‘ Verification agent
π§© Analysis chain
Undefined ${slf4j.version} property
The POM references ${slf4j.version} but no such property is defined locally. Rely on Spring Boot's dependency management (remove explicit versions) or define the property in <properties>.
Minimal change: remove explicit versions and let Boot manage slf4j-api (if you keep it at all):
- <version>${slf4j.version}</version>
Given Boot already brings slf4j-api transitively, the safest is to remove both slf4j-api and slf4j-simple dependencies entirely.
π Script executed:
#!/bin/bash
rg -n "<slf4j.version>|slf4j.version" pom.xml
Length of output: 217
Fix undefined ${slf4j.version}
references in pom.xml
The ${slf4j.version}
property is not defined in this POM, resulting in unresolved references at multiple locations. You must either define this property under <properties>
or remove the explicit <version>
elements to leverage Spring Bootβs dependency management (or remove these dependencies entirely if you rely on Bootβs default logging).
Affected locations:
- pom.xml line 60 (slf4j-api dependency)
- pom.xml line 65 (slf4j-simple dependency)
- pom.xml line 179 (slf4j-api duplicate)
- pom.xml line 184 (slf4j-simple duplicate)
Suggested fixes:
- Remove explicit versions so Bootβs dependency management supplies the correct SLF4J API version (and relies on logback for binding):
--- pom.xml
@@ -58,12 +58,8 @@
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
- <version>${slf4j.version}</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
- <version>${slf4j.version}</version>
</dependency>
@@ -177,12 +173,8 @@
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
- <version>${slf4j.version}</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
- <version>${slf4j.version}</version>
</dependency>
- Or define the property in your <properties> section if you require a specific SLF4J version:
<properties>
<!-- add this (choose an appropriate version) -->
<slf4j.version>1.7.36</slf4j.version>
</properties>
- Alternatively, since Spring Boot's spring-boot-starter-logging already brings in SLF4J API and Logback, you can remove both slf4j-api and slf4j-simple dependencies altogether unless you explicitly need the simple binding.
Please address these changes to avoid build failures and ensure consistent dependency management.
Prompt for AI Agents
In pom.xml around lines 57 to 66 (and duplicates at ~179-184), the pom
references ${slf4j.version} which is undefined and causes unresolved properties;
fix by either removing the explicit <version> elements for slf4j-api and
slf4j-simple so Spring Boot's dependency management supplies the correct SLF4J
version (recommended), or add a concrete slf4j.version property under
<properties> with the desired version, or remove both dependencies entirely if
you rely on spring-boot-starter-logging; apply the same change to the duplicate
entries later in the file to keep consistency.
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>${slf4j.version}</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>${slf4j.version}</version>
</dependency>
Duplicate SLF4J dependencies: remove to avoid conflicts
This second slf4j-api/slf4j-simple block duplicates the earlier one. Remove both blocks as noted to avoid version skew and binding conflicts.
Prompt for AI Agents
In pom.xml around lines 176 to 185, there are duplicate SLF4J dependency entries
(slf4j-api and slf4j-simple) that replicate earlier declarations; remove this
duplicate block so each SLF4J artifact is declared only once to prevent version
skew and logging binding conflicts, ensuring the remaining single declarations
use the intended ${slf4j.version} property.
logger.info("Authorization from controller="+ authorization);
Remove/mask Authorization header from logs.
Logging Authorization/JWT leaks credentials into logs. Mask or drop entirely.
- logger.info("Authorization from controller="+ authorization);
+ // Avoid logging secrets. If absolutely necessary, log only a hash or prefix.
+ if (logger.isDebugEnabled()) {
+ logger.debug("Authorization header received (masked)");
+ }
Committable suggestion
IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
logger.info("Authorization from controller="+ authorization);
// Avoid logging secrets. If absolutely necessary, log only a hash or prefix.
if (logger.isDebugEnabled()) {
    logger.debug("Authorization header received (masked)");
}
Prompt for AI Agents
In src/main/java/com/iemr/mmu/controller/dataSyncActivity/StartSyncActivity.java
around lines 183-184, the code logs the full Authorization header which leaks
JWT/credentials; remove logging of the raw header or replace it with a redacted
value (e.g. log only presence or scheme and a short masked token like "Bearer
*****" or keep first N chars then "...[REDACTED]"); update the logger.info call
to never include the raw authorization string and ensure any future logs follow
the same masking pattern.
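For reference, a minimal sketch of the kind of masking helper the prompt describes; the method name, prefix length, and placement are illustrative assumptions, not code from this PR:

// Hypothetical helper (not in the PR): expose only a short prefix of a credential for logging.
private static String maskToken(String token) {
    if (token == null || token.isEmpty()) {
        return "[absent]";
    }
    int visible = Math.min(6, token.length());
    return token.substring(0, visible) + "...[REDACTED]";
}

// Example usage instead of logging the raw header:
// logger.info("Authorization header received: {}", maskToken(authorization));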
import lombok.Data;

@Entity
@Data
@Table(name = "t_benclinicalobservation")
Refactor suggestion
Avoid Lombok @Data on JPA entities; it changes equals/hashCode semantics and risks PII in toString.
@Data generates equals/hashCode across all fields, which is risky for entities (identity changes post-persist, subtle bugs in Sets/Maps). @Data also generates toString including all fields; this class holds patient data and may leak PII in logs.
Apply this change to narrow Lombok impact:
-import lombok.Data;
+import lombok.Getter;
+import lombok.Setter;
+import lombok.EqualsAndHashCode;
+import lombok.ToString;
@Entity
-@Data
+@Getter
+@Setter
+@EqualsAndHashCode(onlyExplicitlyIncluded = true)
+@ToString(exclude = {"clinicalObservation","otherSymptoms","otherSymptomsSCTCode","otherSymptomsSCTTerm",
+ "significantFindings","processed","createdBy","modifiedBy","vehicalNo",
+ "syncedBy","reservedForChange"})
@Table(name = "t_benclinicalobservation")
Additionally, explicitly include the primary key in equality to keep entity identity stable:
// Add just above the ID field declaration
@EqualsAndHashCode.Include
I can push a follow-up patch across all entity classes where @Data was introduced to apply a consistent, safe pattern. Want me to do that?
Prompt for AI Agents
In
src/main/java/com/iemr/mmu/data/quickConsultation/BenClinicalObservations.java
around lines 37-41, remove the class-level Lombok @Data and replace it with
safer, explicit Lombok annotations: add @Getter, @Setter and @NoArgsConstructor,
add @EqualsAndHashCode(onlyExplicitlyIncluded = true) on the class, and mark the
primary key field with @EqualsAndHashCode.Include immediately above the ID
declaration; do not generate a default toString (omit @ToString) to avoid
logging PII.
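For illustration, a stripped-down sketch of the safer Lombok pattern described above; the field names and Jakarta Persistence imports are assumptions (the real entity has many more columns), so this is a pattern sketch rather than a drop-in replacement for the PR's class:

import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.persistence.Table;
import lombok.EqualsAndHashCode;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;

@Entity
@Getter
@Setter
@NoArgsConstructor
@EqualsAndHashCode(onlyExplicitlyIncluded = true)
@Table(name = "t_benclinicalobservation")
public class BenClinicalObservations {

    // Equality is based on the primary key only, so entity identity stays stable after persist.
    @EqualsAndHashCode.Include
    @Id
    private Long benClinicalObservationsID; // illustrative field name

    // PII-bearing fields get plain accessors; no generated toString, so they never reach logs.
    private String clinicalObservation;
    private String significantFindings;
}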
String[] columnsArr = null;
if (serverColumns != null)
    columnsArr = serverColumns.split(",");

StringBuilder preparedStatementSetter = new StringBuilder();

StringBuilder queryBuilder = new StringBuilder(" UPDATE ");
queryBuilder.append(schemaName).append(".").append(tableName);
queryBuilder.append(" SET ");
queryBuilder.append(preparedStatementSetter);
queryBuilder.append(" WHERE VanSerialNo = ? ");

if (Arrays.asList("t_patientissue", "t_physicalstockentry", "t_stockadjustment", "t_saitemmapping",
        "t_stocktransfer", "t_patientreturn", "t_facilityconsumption", "t_indent",
        "t_indentorder", "t_indentissue", "t_itemstockentry", "t_itemstockexit")
        .contains(tableName.toLowerCase())) {
    queryBuilder.append(" AND SyncFacilityID = ? ");
} else {
    queryBuilder.append(" AND VanID = ? ");
}
return queryBuilder.toString();
Critical bug: Empty SET clause in UPDATE query
The preparedStatementSetter StringBuilder is initialized but never populated with column assignments, resulting in an invalid UPDATE query with an empty SET clause.
The UPDATE query builder is missing the logic to construct the SET clause:
public String getQueryToUpdateDataToServerDB(String schemaName, String serverColumns, String tableName) {
String[] columnsArr = null;
if (serverColumns != null)
columnsArr = serverColumns.split(",");
StringBuilder preparedStatementSetter = new StringBuilder();
+
+ if (columnsArr != null && columnsArr.length > 0) {
+ for (int i = 0; i < columnsArr.length; i++) {
+ preparedStatementSetter.append(columnsArr[i].trim()).append(" = ?");
+ if (i < columnsArr.length - 1) {
+ preparedStatementSetter.append(", ");
+ }
+ }
+ }
StringBuilder queryBuilder = new StringBuilder(" UPDATE ");
queryBuilder.append(schemaName).append(".").append(tableName);
queryBuilder.append(" SET ");
queryBuilder.append(preparedStatementSetter);
queryBuilder.append(" WHERE VanSerialNo = ? ");
Prompt for AI Agents
In
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java
around lines 516 to 536, the preparedStatementSetter StringBuilder is never
populated, producing an UPDATE with an empty SET clause; populate it by
splitting serverColumns (guarding null) into columnsArr, iterate columnsArr and
for each column (trimmed) that is not a key used in WHERE (VanSerialNo, VanID,
SyncFacilityID) append "columnName = ?" to preparedStatementSetter separated by
commas (use a boolean firstFlag or check length to avoid leading/trailing
commas), then use that preparedStatementSetter in the queryBuilder as already
done; also handle the case where no updatable columns remain (throw or return
null/raise error) to avoid generating an invalid query.
// Statement.SUCCESS_NO_INFO
// usually indicates a failure or success without specific row count.
// A common return value for success is 1 (for one row updated/inserted).
if (results[k] < 1) { // Assuming 1 means success, and anything else (0, -2, etc.) means failure
Refactor suggestion
Incorrect assumption about batch update return values
The comment states that a return value of 1 means success, but JDBC batch update can return different values: Statement.SUCCESS_NO_INFO (-2) for successful execution without row count, or the actual number of affected rows (which could be 0 for valid no-op updates).
Update the failure detection logic to properly handle JDBC batch update return codes:
- if (results[k] < 1) { // Assuming 1 means success, and anything else (0, -2, etc.) means failure
+ // Check for explicit failure. SUCCESS_NO_INFO (-2) and positive values indicate success
+ if (results[k] == Statement.EXECUTE_FAILED) {
Import the Statement class at the top of the file:
import java.sql.Statement;
Prompt for AI Agents
In
src/main/java/com/iemr/mmu/service/dataSyncLayerCentral/GetDataFromVanAndSyncToDBImpl.java
around line 547, the code incorrectly treats any return value other than 1 as
failure; update the logic to treat only Statement.EXECUTE_FAILED as failure and
treat non-negative counts or Statement.SUCCESS_NO_INFO as success, and add the
required import java.sql.Statement at the top of the file.
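A short sketch of how the loop could interpret JDBC batch return codes once java.sql.Statement is imported; the jdbcTemplate, query, and batchArgs names are assumed from the surrounding method, and the logging calls are illustrative:

import java.sql.Statement;

int[] results = jdbcTemplate.batchUpdate(query, batchArgs);
for (int k = 0; k < results.length; k++) {
    if (results[k] == Statement.EXECUTE_FAILED) {
        // -3: the driver reports that this row definitely failed.
        logger.error("Batch row {} failed", k);
    } else if (results[k] == Statement.SUCCESS_NO_INFO) {
        // -2: executed successfully but without an affected-row count.
        logger.debug("Batch row {} succeeded without a row count", k);
    }
    // Any non-negative value is the affected-row count; 0 can be a legitimate no-op update.
}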
if (requestOBJ.has("visitDetails") && !requestOBJ.get("visitDetails").isJsonNull()) {
    JsonObject visitWrapperObj = requestOBJ.getAsJsonObject("visitDetails");
    JsonObject visitDetailsObj = visitWrapperObj.getAsJsonObject("visitDetails");

    if (visitDetailsObj.has("fileIDs") && visitDetailsObj.get("fileIDs").isJsonArray()) {
        JsonArray fileIDs = visitDetailsObj.getAsJsonArray("fileIDs");
        StringBuilder fileIDBuilder = new StringBuilder();
        for (JsonElement fileIdElement : fileIDs) {
            if (!fileIdElement.isJsonNull()) {
                fileIDBuilder.append(fileIdElement.getAsString()).append(",");
            }
        }

        if (fileIDBuilder.length() > 0) {
            fileIDBuilder.setLength(fileIDBuilder.length() - 1);

            benVisitDetailRepo.updateFileID(
                fileIDBuilder.toString(),
                commonUtilityClass.getBeneficiaryRegID(),
                commonUtilityClass.getVisitCode()
            );
        }
    }
}
Refactor suggestion
Guard nested visitDetails to prevent NPE; sanitize/trim IDs and avoid trailing commas before DB update.
- visitWrapperObj.getAsJsonObject("visitDetails") can NPE if the inner visitDetails is missing/null, even when the outer wrapper exists.
- Building a CSV with an unconditional trailing comma and then trimming is fine, but using conditional separators avoids edge cases and extra allocations.
- Validate regID/visitCode before issuing the update.
Apply:
- if (requestOBJ.has("visitDetails") && !requestOBJ.get("visitDetails").isJsonNull()) {
- JsonObject visitWrapperObj = requestOBJ.getAsJsonObject("visitDetails");
- JsonObject visitDetailsObj = visitWrapperObj.getAsJsonObject("visitDetails");
-
- if (visitDetailsObj.has("fileIDs") && visitDetailsObj.get("fileIDs").isJsonArray()) {
- JsonArray fileIDs = visitDetailsObj.getAsJsonArray("fileIDs");
- StringBuilder fileIDBuilder = new StringBuilder();
- for (JsonElement fileIdElement : fileIDs) {
- if (!fileIdElement.isJsonNull()) {
- fileIDBuilder.append(fileIdElement.getAsString()).append(",");
- }
- }
-
- if (fileIDBuilder.length() > 0) {
- fileIDBuilder.setLength(fileIDBuilder.length() - 1);
-
- benVisitDetailRepo.updateFileID(
- fileIDBuilder.toString(),
- commonUtilityClass.getBeneficiaryRegID(),
- commonUtilityClass.getVisitCode()
- );
- }
- }
- }
+ if (requestOBJ.has("visitDetails") && !requestOBJ.get("visitDetails").isJsonNull()) {
+ JsonObject visitWrapperObj = requestOBJ.getAsJsonObject("visitDetails");
+ if (visitWrapperObj.has("visitDetails") && !visitWrapperObj.get("visitDetails").isJsonNull()) {
+ JsonObject visitDetailsObj = visitWrapperObj.getAsJsonObject("visitDetails");
+ if (visitDetailsObj.has("fileIDs") && visitDetailsObj.get("fileIDs").isJsonArray()) {
+ JsonArray fileIDs = visitDetailsObj.getAsJsonArray("fileIDs");
+ StringBuilder csv = new StringBuilder();
+ for (JsonElement el : fileIDs) {
+ if (el != null && !el.isJsonNull()) {
+ String id = el.getAsString().trim();
+ if (!id.isEmpty()) {
+ if (csv.length() > 0) csv.append(',');
+ csv.append(id);
+ }
+ }
+ }
+ if (csv.length() > 0
+ && commonUtilityClass.getBeneficiaryRegID() != null
+ && commonUtilityClass.getVisitCode() != null) {
+ benVisitDetailRepo.updateFileID(
+ csv.toString(),
+ commonUtilityClass.getBeneficiaryRegID(),
+ commonUtilityClass.getVisitCode()
+ );
+ }
+ }
+ }
+ }
Very similar file-ID aggregation logic exists in updateNurseNCDScreeningDetails. Consider extracting a small utility (e.g., FileIdUtils.joinNonEmpty(JsonArray) or a shared service method) to keep this behavior consistent across modules and avoid drift.
Committable suggestion
IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if (requestOBJ.has("visitDetails") && !requestOBJ.get("visitDetails").isJsonNull()) {
    JsonObject visitWrapperObj = requestOBJ.getAsJsonObject("visitDetails");
    JsonObject visitDetailsObj = visitWrapperObj.getAsJsonObject("visitDetails");
    if (visitDetailsObj.has("fileIDs") && visitDetailsObj.get("fileIDs").isJsonArray()) {
        JsonArray fileIDs = visitDetailsObj.getAsJsonArray("fileIDs");
        StringBuilder fileIDBuilder = new StringBuilder();
        for (JsonElement fileIdElement : fileIDs) {
            if (!fileIdElement.isJsonNull()) {
                fileIDBuilder.append(fileIdElement.getAsString()).append(",");
            }
        }
        if (fileIDBuilder.length() > 0) {
            fileIDBuilder.setLength(fileIDBuilder.length() - 1);
            benVisitDetailRepo.updateFileID(
                fileIDBuilder.toString(),
                commonUtilityClass.getBeneficiaryRegID(),
                commonUtilityClass.getVisitCode()
            );
        }
    }
}

if (requestOBJ.has("visitDetails") && !requestOBJ.get("visitDetails").isJsonNull()) {
    JsonObject visitWrapperObj = requestOBJ.getAsJsonObject("visitDetails");
    if (visitWrapperObj.has("visitDetails") && !visitWrapperObj.get("visitDetails").isJsonNull()) {
        JsonObject visitDetailsObj = visitWrapperObj.getAsJsonObject("visitDetails");
        if (visitDetailsObj.has("fileIDs") && visitDetailsObj.get("fileIDs").isJsonArray()) {
            JsonArray fileIDs = visitDetailsObj.getAsJsonArray("fileIDs");
            StringBuilder csv = new StringBuilder();
            for (JsonElement el : fileIDs) {
                if (el != null && !el.isJsonNull()) {
                    String id = el.getAsString().trim();
                    if (!id.isEmpty()) {
                        if (csv.length() > 0) csv.append(',');
                        csv.append(id);
                    }
                }
            }
            if (csv.length() > 0
                    && commonUtilityClass.getBeneficiaryRegID() != null
                    && commonUtilityClass.getVisitCode() != null) {
                benVisitDetailRepo.updateFileID(
                    csv.toString(),
                    commonUtilityClass.getBeneficiaryRegID(),
                    commonUtilityClass.getVisitCode()
                );
            }
        }
    }
}
Prompt for AI Agents
In src/main/java/com/iemr/mmu/service/ncdscreening/NCDScreeningServiceImpl.java
around lines 1185 to 1208, guard the nested visitDetails access and sanitize
file IDs before updating the DB: check that visitWrapperObj.has("visitDetails")
and that visitWrapperObj.get("visitDetails") is not null and is a JsonObject
before calling getAsJsonObject; iterate fileIDs using a boolean-first-sep or
StringJoiner to avoid creating/trim trailing commas and skip null/blank entries
and trim each ID; before calling benVisitDetailRepo.updateFileID validate
commonUtilityClass.getBeneficiaryRegID() and getVisitCode() are
non-null/non-empty and only call update when IDs and regID/visitCode are valid;
and extract this aggregation logic into a shared utility (e.g.,
FileIdUtils.joinNonEmpty(JsonArray) or a small service method) so
updateNurseNCDScreeningDetails can reuse it.
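A possible shape for the shared utility suggested above; the class and method names come from the comment's example and are otherwise assumptions, using the Gson types already present in this service:

import com.google.gson.JsonArray;
import com.google.gson.JsonElement;

// Hypothetical utility (not in the PR) that both screening flows could share.
public final class FileIdUtils {

    private FileIdUtils() {
    }

    // Joins non-null, non-blank array entries into a comma-separated string, trimming each value.
    public static String joinNonEmpty(JsonArray fileIDs) {
        if (fileIDs == null) {
            return "";
        }
        StringBuilder csv = new StringBuilder();
        for (JsonElement el : fileIDs) {
            if (el != null && !el.isJsonNull()) {
                String id = el.getAsString().trim();
                if (!id.isEmpty()) {
                    if (csv.length() > 0) {
                        csv.append(',');
                    }
                    csv.append(id);
                }
            }
        }
        return csv.toString();
    }
}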
headers.add(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE + ";charset=utf-8");

if (authorization != null && !authorization.isEmpty()) {
    logger.info("token: {}", jwtToken);
Do not log JWTs or secrets.
logger.info("token: {}", jwtToken); risks leaking sensitive credentials to logs. Remove it or log only a deterministic, non-sensitive marker (e.g., source of token or masked prefix).
Apply this diff:
-logger.info("token: {}", jwtToken);
+// Avoid logging raw tokens; consider logging a non-sensitive marker if needed.
Committable suggestion
IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
logger.info("token: {}", jwtToken);
// Avoid logging raw tokens; consider logging a non-sensitive marker if needed. |
Prompt for AI Agents
In src/main/java/com/iemr/mmu/utils/RestTemplateUtil.java around line 42, the
code currently logs the full JWT (logger.info("token: {}", jwtToken)); which
leaks sensitive credentials; remove this full-token log and instead either omit
logging entirely or log a non-sensitive marker such as the token source or a
masked prefix (e.g., first N characters + "...") so only a deterministic,
non-sensitive value is written to logs; ensure no full token is concatenated or
included in any log messages.
if (authorization != null && !authorization.isEmpty() && !jwtToken.equalsIgnoreCase("datasync")) {
    headers.add(HttpHeaders.AUTHORIZATION, "Bearer " + authorization);
}

if (authorization != null && !authorization.isEmpty() && jwtToken.equalsIgnoreCase("datasync"))
{
    headers.add(HttpHeaders.AUTHORIZATION, authorization);
}
Refactor suggestion
Possible NullPointerException and duplicated auth logic; normalize the "datasync" branch.
jwtToken.equalsIgnoreCase("datasync") will NPE when jwtToken is null. Also, the two auth branches can be simplified and made safer. Introduce a boolean guard and avoid adding a Bearer prefix when the value already contains one.
Apply this diff:
- if (authorization != null && !authorization.isEmpty() && !jwtToken.equalsIgnoreCase("datasync")) {
- headers.add(HttpHeaders.AUTHORIZATION, "Bearer " + authorization);
- }
-
- if(authorization != null && !authorization.isEmpty() && jwtToken.equalsIgnoreCase("datasync"))
- {
- headers.add(HttpHeaders.AUTHORIZATION, authorization);
- }
+ final boolean isDataSync = "datasync".equalsIgnoreCase(jwtToken);
+ if (authorization != null && !authorization.isEmpty()) {
+ if (isDataSync) {
+ // For datasync, pass through the provided header as-is
+ headers.add(HttpHeaders.AUTHORIZATION, authorization);
+ } else {
+ // For regular flows, ensure Bearer prefix exactly once
+ String value = authorization.startsWith("Bearer ") ? authorization : ("Bearer " + authorization);
+ headers.add(HttpHeaders.AUTHORIZATION, value);
+ }
+ }
Committable suggestion
IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if (authorization != null && !authorization.isEmpty() && !jwtToken.equalsIgnoreCase("datasync")) {
    headers.add(HttpHeaders.AUTHORIZATION, "Bearer " + authorization);
}

if (authorization != null && !authorization.isEmpty() && jwtToken.equalsIgnoreCase("datasync"))
{
    headers.add(HttpHeaders.AUTHORIZATION, authorization);
}

final boolean isDataSync = "datasync".equalsIgnoreCase(jwtToken);
if (authorization != null && !authorization.isEmpty()) {
    if (isDataSync) {
        // For datasync, pass through the provided header as-is
        headers.add(HttpHeaders.AUTHORIZATION, authorization);
    } else {
        // For regular flows, ensure Bearer prefix exactly once
        String value = authorization.startsWith("Bearer ") ? authorization : ("Bearer " + authorization);
        headers.add(HttpHeaders.AUTHORIZATION, value);
    }
}
Prompt for AI Agents
In src/main/java/com/iemr/mmu/utils/RestTemplateUtil.java around lines 43 to 51,
the current auth logic risks a NullPointerException using
jwtToken.equalsIgnoreCase("datasync") and duplicates branches; replace with a
single guarded flow: first ensure authorization != null &&
!authorization.isEmpty(), compute boolean isDataSync =
"datasync".equalsIgnoreCase(jwtToken) to avoid NPE, then if isDataSync add
authorization as-is, else add "Bearer " + authorization only if authorization
does not already start with "Bearer " (case-insensitive) to avoid
double-prefixing.
Description
JIRA ID:
Please provide a summary of the change and the motivation behind it. Include relevant context and details.
Type of Change
Additional Information
Please describe how the changes were tested, and include any relevant screenshots, logs, or other information that provides additional context.
Summary by CodeRabbit
New Features
Improvements
Bug Fixes