@@ -83,6 +83,8 @@ protected Optional<DataField> transformDataField(String fieldId, Case useCase) {
return this.transformCaseFieldField(caseField, (com.netgrif.application.engine.objects.petrinet.domain.dataset.CaseField) netField);
} else if (netField instanceof com.netgrif.application.engine.objects.petrinet.domain.dataset.FilterField) {
return this.transformFilterFieldField(caseField, (com.netgrif.application.engine.objects.petrinet.domain.dataset.FilterField) netField);
} else if (netField instanceof com.netgrif.application.engine.objects.petrinet.domain.dataset.StringCollectionField) {
return this.transformStringCollectionField(caseField, (com.netgrif.application.engine.objects.petrinet.domain.dataset.StringCollectionField) netField);
} else {
String string = caseField.getValue().toString();
if (string == null)
@@ -127,6 +129,14 @@ protected Optional<DataField> transformFilterFieldField(com.netgrif.application.
return Optional.of(new com.netgrif.application.engine.adapter.spring.elastic.domain.FilterField(dataField.getValue().toString(),allowedNets, filterMetadata));
}

protected Optional<DataField> transformStringCollectionField(com.netgrif.application.engine.objects.workflow.domain.DataField dataField, com.netgrif.application.engine.objects.petrinet.domain.dataset.StringCollectionField netField) {
if (dataField.getValue() != null && dataField.getValue() instanceof Collection && !((Collection<?>) dataField.getValue()).isEmpty()) {
String[] values = ((Collection<?>) dataField.getValue()).toArray(new String[0]);
return Optional.of(new com.netgrif.application.engine.adapter.spring.elastic.domain.StringCollectionField(values));
}
return Optional.empty();
}
Comment on lines +132 to +138

🧹 Nitpick (assertive)

⚠️ Potential issue

Unsafe cast and null-element handling in transformStringCollectionField

Calling toArray(new String[0]) throws an ArrayStoreException if the collection contains non-String elements, and null elements are not filtered out. Make the conversion type-safe and robust.

Apply this diff:

-    if (dataField.getValue() != null && dataField.getValue() instanceof Collection && !((Collection<?>) dataField.getValue()).isEmpty()) {
-        String[] values = ((Collection<?>) dataField.getValue()).toArray(new String[0]);
-        return Optional.of(new com.netgrif.application.engine.adapter.spring.elastic.domain.StringCollectionField(values));
-    }
-    return Optional.empty();
+    if (!(dataField.getValue() instanceof Collection<?> c) || c.isEmpty()) {
+        return Optional.empty();
+    }
+    List<String> values = c.stream()
+            .filter(Objects::nonNull)
+            .filter(String.class::isInstance)
+            .map(String.class::cast)
+            .toList();
+    if (values.isEmpty()) {
+        return Optional.empty();
+    }
+    return Optional.of(new com.netgrif.application.engine.adapter.spring.elastic.domain.StringCollectionField(values.toArray(new String[0])));

I can add unit tests covering: null/empty collections, mixed-type collections (ensure filtering), and successful mapping.

Suggested change
protected Optional<DataField> transformStringCollectionField(
com.netgrif.application.engine.objects.workflow.domain.DataField dataField,
com.netgrif.application.engine.objects.petrinet.domain.dataset.StringCollectionField netField) {
// If value is not a non-empty Collection, bail out
if (!(dataField.getValue() instanceof Collection<?> c) || c.isEmpty()) {
return Optional.empty();
}
// Filter out nulls and non-String instances
List<String> values = c.stream()
.filter(Objects::nonNull)
.filter(String.class::isInstance)
.map(String.class::cast)
.toList();
// If no valid String elements remain, bail out
if (values.isEmpty()) {
return Optional.empty();
}
// Build the elastic StringCollectionField from the filtered array
return Optional.of(
new com.netgrif.application.engine.adapter.spring.elastic.domain.StringCollectionField(
values.toArray(new String[0])
)
);
}
🤖 Prompt for AI Agents
In application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticCaseMappingService.java around lines 132 to 138, the current transformStringCollectionField does an unsafe cast using toArray(new String[0]) and doesn't handle nulls or non-String elements; change it to iterate the collection safely, filter out nulls and non-String elements, convert remaining elements to String (e.g., via String.valueOf or cast after instanceof), collect into a String[] and only return Optional.of(...) when the resulting array is non-empty; if the input is null/empty or filtering yields no elements, return Optional.empty().
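
To make that test offer concrete, here is a minimal JUnit 5 sketch of the three cases (empty collection, mixed-type collection, plain String collection). It assumes the suggested fix is applied, that ElasticCaseMappingService has an accessible no-arg constructor, that the test lives in the same package (so the protected method is visible), and that the workflow DataField exposes a no-arg constructor plus a setValue(...) mutator; these are assumptions to adapt to the real API, not confirmed signatures.

import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.List;
import org.junit.jupiter.api.Test;

class ElasticCaseMappingServiceStringCollectionTest {

    // Assumed: no-arg constructor; use the real wiring if the service needs dependencies.
    private final ElasticCaseMappingService service = new ElasticCaseMappingService();

    // Assumed helper: wraps a raw value in a workflow DataField via an assumed setValue mutator.
    private com.netgrif.application.engine.objects.workflow.domain.DataField fieldWith(Object value) {
        var field = new com.netgrif.application.engine.objects.workflow.domain.DataField();
        field.setValue(value);
        return field;
    }

    @Test
    void emptyCollectionYieldsEmpty() {
        assertTrue(service.transformStringCollectionField(fieldWith(List.of()), null).isEmpty());
    }

    @Test
    void mixedTypeCollectionKeepsOnlyStrings() {
        // With the fix, the non-String element is filtered instead of causing an ArrayStoreException.
        assertTrue(service.transformStringCollectionField(fieldWith(List.of("a", 42, "b")), null).isPresent());
    }

    @Test
    void stringCollectionIsMapped() {
        assertTrue(service.transformStringCollectionField(fieldWith(List.of("x", "y")), null).isPresent());
    }
}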


protected Optional<DataField> transformEnumerationMapField
(com.netgrif.application.engine.objects.workflow.domain.DataField enumMap, EnumerationMapField netField) {
Map<String, I18nString> options = this.getFieldOptions(enumMap, netField);
@@ -1,6 +1,7 @@
package com.netgrif.application.engine.elastic.service;

import co.elastic.clients.elasticsearch._types.FieldValue;
import co.elastic.clients.elasticsearch._types.mapping.FieldType;
import co.elastic.clients.elasticsearch._types.query_dsl.BoolQuery;
import co.elastic.clients.elasticsearch._types.query_dsl.QueryStringQuery;
import co.elastic.clients.elasticsearch._types.query_dsl.TermsQueryField;
@@ -138,7 +139,7 @@ public Page<Case> search(List<CaseSearchRequest> requests, LoggedUser user, Page
// TODO: impersonation
// LoggedUser loggedOrImpersonated = user.getSelfOrImpersonated();
LoggedUser loggedOrImpersonated = user;
pageable = resolveUnmappedSortAttributes(pageable);
// pageable = resolveUnmappedSortAttributes(pageable);
NativeQuery query = buildQuery(requests, loggedOrImpersonated, pageable, locale, isIntersection);
List<Case> casePage;
long total;
@@ -190,11 +191,22 @@ protected NativeQuery buildQuery(List<CaseSearchRequest> requests, LoggedUser us
BinaryOperator<BoolQuery.Builder> reductionOperation = isIntersection ? (a, b) -> a.must(b.build()._toQuery()) : (a, b) -> a.should(b.build()._toQuery());
BoolQuery.Builder query = singleQueries.stream().reduce(new BoolQuery.Builder(), reductionOperation);

NativeQueryBuilder builder = new NativeQueryBuilder();
return builder
NativeQueryBuilder builder = new NativeQueryBuilder()
.withQuery(query.build()._toQuery())
.withPageable(pageable)
.build();
.withPageable(PageRequest.of(pageable.getPageNumber(), pageable.getPageSize()));

for (org.springframework.data.domain.Sort.Order o : pageable.getSort()) {
builder.withSort(s -> s.field(f -> f
.field(o.getProperty())
.order(o.isAscending()
? co.elastic.clients.elasticsearch._types.SortOrder.Asc
: co.elastic.clients.elasticsearch._types.SortOrder.Desc)
.unmappedType(FieldType.Keyword)
.missing("_last")
));
}
Comment on lines +198 to +207

🛠️ Refactor suggestion

⚠️ Potential issue

Hard-coding unmappedType=Keyword can break date/numeric sorts (e.g., creationDate).

Elastic expects unmapped_type to match the field’s actual mapping. Using Keyword for dates/numbers can yield incorrect ordering or errors. Also handle _score via score sort, not field sort.

Apply this diff to select unmappedType per field and support _score:

-        for (org.springframework.data.domain.Sort.Order o : pageable.getSort()) {
-            builder.withSort(s -> s.field(f -> f
-                    .field(o.getProperty())
-                    .order(o.isAscending()
-                            ? co.elastic.clients.elasticsearch._types.SortOrder.Asc
-                            : co.elastic.clients.elasticsearch._types.SortOrder.Desc)
-                    .unmappedType(FieldType.Keyword)
-                    .missing("_last")
-            ));
-        }
+        for (org.springframework.data.domain.Sort.Order o : pageable.getSort()) {
+            String prop = o.getProperty();
+            if ("_score".equals(prop)) {
+                builder.withSort(s -> s.score(sc -> sc.order(o.isAscending()
+                        ? co.elastic.clients.elasticsearch._types.SortOrder.Asc
+                        : co.elastic.clients.elasticsearch._types.SortOrder.Desc)));
+                continue;
+            }
+            builder.withSort(s -> s.field(f -> f
+                    .field(prop)
+                    .order(o.isAscending()
+                            ? co.elastic.clients.elasticsearch._types.SortOrder.Asc
+                            : co.elastic.clients.elasticsearch._types.SortOrder.Desc)
+                    .unmappedType(resolveSortUnmappedType(prop))
+                    .missing("_last")));
+        }

Then add this helper in the class:

// Heuristic mapping; extend as needed to cover your mapped fields
private FieldType resolveSortUnmappedType(String property) {
    if (property.endsWith(".keyword")) return FieldType.Keyword;
    if ("creationDate".equals(property) || property.endsWith("Date")) return FieldType.Date;
    if ("priority".equals(property) || property.endsWith(".priority")) return FieldType.Long;
    if (property.equals("_id") || property.endsWith("Id") || property.endsWith("Name")) return FieldType.Keyword;
    return FieldType.Keyword; // safe default for string-ish fields
}
🤖 Prompt for AI Agents
In application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticCaseService.java around lines 198 to 207, the code hard-codes unmappedType=Keyword for all sorts and doesn't handle score sorts; change the sort builder to pick an unmappedType based on the field property (use a helper that returns FieldType.Date for date-like fields, FieldType.Long for numeric/priority, FieldType.Keyword for ids/names/strings, etc.) and special-case "_score" to emit a score sort rather than a field sort; also add the suggested resolveSortUnmappedType(String) helper method in the class and call it when building the sort to supply the correct unmappedType per field.
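
For context, the sort that this loop consumes typically originates from a Spring Pageable built by a caller; a small hypothetical example (the field names are placeholders):

import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.data.domain.Sort;

// Hypothetical caller: relevance first, then newest cases; with the suggested helper,
// "creationDate" would resolve to FieldType.Date and "_score" to a score sort.
Pageable pageable = PageRequest.of(0, 20,
        Sort.by(Sort.Order.desc("_score"), Sort.Order.desc("creationDate")));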


return builder.build();
}

protected BoolQuery.Builder buildSingleQuery(CaseSearchRequest request, LoggedUser user, Locale locale) {
@@ -0,0 +1,19 @@
package com.netgrif.application.engine.objects.elastic.domain;


import lombok.Data;
import lombok.EqualsAndHashCode;
import lombok.NoArgsConstructor;

@Data
@NoArgsConstructor
@EqualsAndHashCode(callSuper = true)
public abstract class StringCollectionField extends TextField {

public String[] collectionValue;

public StringCollectionField(String[] values) {
super(values);
this.collectionValue = values;
}
Comment on lines +13 to +18

🧹 Nitpick (assertive)

Encapsulation of collectionValue

Public mutable array invites accidental external mutation. Prefer at least protected to limit exposure (the adapter uses an accessor anyway).

Apply this minimal change:

-    public String[] collectionValue;
+    protected String[] collectionValue;

If feasible, consider returning an unmodifiable copy in the getter to prevent aliasing.

🤖 Prompt for AI Agents
In nae-object-library/src/main/java/com/netgrif/application/engine/objects/elastic/domain/StringCollectionField.java around lines 13–18, the public mutable array collectionValue exposes internal state; change its visibility to protected (or private) and update the constructor to defensively copy the incoming String[] (e.g., Arrays.copyOf) to avoid aliasing. Also adjust or add the getter to return a defensive copy or an unmodifiable list/array to prevent external mutation, and update any callers (the adapter) to use the accessor rather than direct field access.
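
For completeness, a plain-Java sketch of the defensive-copy variant mentioned above (the Lombok annotations from the original class are omitted for brevity):

import java.util.Arrays;

public abstract class StringCollectionField extends TextField {

    protected String[] collectionValue;

    public StringCollectionField(String[] values) {
        super(values);
        // Copy on the way in so later mutation of the caller's array cannot leak into this field.
        this.collectionValue = values == null ? new String[0] : Arrays.copyOf(values, values.length);
    }

    public String[] getCollectionValue() {
        // Copy on the way out so callers cannot mutate internal state through the returned array.
        return Arrays.copyOf(collectionValue, collectionValue.length);
    }
}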

}
@@ -14,6 +14,7 @@ public class BooleanField extends com.netgrif.application.engine.objects.elastic

public BooleanField(Boolean value) {
super(value);
this.booleanValue = value;
}

@Override
@@ -4,17 +4,13 @@
import lombok.NoArgsConstructor;
import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.Version;
import org.springframework.data.elasticsearch.annotations.DateFormat;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;
import org.springframework.data.elasticsearch.annotations.*;

import java.time.LocalDateTime;
import java.util.Map;
import java.util.Set;

import static org.springframework.data.elasticsearch.annotations.FieldType.Flattened;
import static org.springframework.data.elasticsearch.annotations.FieldType.Keyword;
import static org.springframework.data.elasticsearch.annotations.FieldType.*;

@NoArgsConstructor
@Document(indexName = "#{@elasticCaseIndex}")
@@ -29,10 +25,30 @@ public void update(ElasticCase useCase) {
}

@Id
@Field(type = Keyword)
public String getId() {
return super.getId();
}
Comment on lines +28 to 31

🧹 Nitpick (assertive)

Mapping changes: plan safe rollout (templates, reindex, aliases)

  • Update index templates to reflect new/changed mappings.
  • Create a new index (vN+1), _reindex from old, then atomically switch the write/read alias.
  • Backfill and verify dashboards/queries for id vs caseId, and title/authorName .keyword usage.

Also applies to: 33-40, 42-45, 47-50, 67-70, 82-89

🤖 Prompt for AI Agents
In nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/ElasticCase.java around lines 28-31 (and similarly for ranges 33-40, 42-45, 47-50, 67-70, 82-89), the mapping change (e.g., annotating getters with @Field(type = Keyword)) requires a safe rollout: update your Elasticsearch index templates to include the new mappings, create a new index version (vN+1) with the updated template, reindex data from the old index into the new one, atomically switch the application read/write alias to the new index, and then verify/backfill dashboards and queries to use the correct field names (`id` vs `caseId`, `title`/`authorName` and their `.keyword` variants) so consumers are not broken by the mapping change.
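
As a rough illustration of the rollout steps, a hedged sketch using the elasticsearch-java client (co.elastic.clients); the index and alias names are placeholders and the builder calls should be verified against the client version actually in use:

import co.elastic.clients.elasticsearch.ElasticsearchClient;
import co.elastic.clients.elasticsearch.indices.update_aliases.Action;
import java.io.IOException;

class CaseIndexRollout {

    // Placeholder names; real index/alias names depend on the deployment.
    private static final String OLD_INDEX = "cases_v1";
    private static final String NEW_INDEX = "cases_v2";
    private static final String ALIAS = "cases";

    void rollout(ElasticsearchClient client) throws IOException {
        // 1. Create the new index version; its mappings come from the updated index template.
        client.indices().create(c -> c.index(NEW_INDEX));

        // 2. Copy documents from the old index into the new one.
        client.reindex(r -> r
                .source(s -> s.index(OLD_INDEX))
                .dest(d -> d.index(NEW_INDEX)));

        // 3. Atomically repoint the alias the application reads and writes through.
        Action add = Action.of(a -> a.add(ad -> ad.index(NEW_INDEX).alias(ALIAS)));
        Action remove = Action.of(a -> a.remove(rm -> rm.index(OLD_INDEX).alias(ALIAS)));
        client.indices().updateAliases(u -> u.actions(add, remove));
    }
}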


@MultiField(
mainField = @Field(type = Text),
otherFields = {
@InnerField(suffix = "keyword", type = Keyword)
})
public String getTitle() {
return super.getTitle();
}

@Field(type = Keyword)
public String getVisualId() {
return super.getVisualId();
}

@Field(type = Keyword)
public String getCaseId() {
return super.getId();
}

@Version
public Long getVersion() {
return super.getVersion();
@@ -48,7 +64,7 @@ public String getProcessId() {
return super.getProcessId();
}

@Field(type = FieldType.Date, format = DateFormat.date_hour_minute_second_millis)
@Field(type = Date, format = DateFormat.date_hour_minute_second_millis)
public LocalDateTime getCreationDate() {
return super.getCreationDate();
}
Comment on lines +67 to 70

🧹 Nitpick (assertive)

Date format change: ensure backward compatibility

Switching to DateFormat.date_hour_minute_second_millis may break reads/writes if existing indices stored creationDate with date_optional_time. Support both formats if mixed data exists or clients send varying precisions.

Suggested change:

-    @Field(type = Date, format = DateFormat.date_hour_minute_second_millis)
+    @Field(type = Date, format = {
+        DateFormat.date_hour_minute_second_millis,
+        DateFormat.date_optional_time
+    })
     public LocalDateTime getCreationDate() {
🤖 Prompt for AI Agents
In nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/ElasticCase.java around lines 67-70, the @Field annotation currently forces DateFormat.date_hour_minute_second_millis which can break compatibility with existing indices using date_optional_time; change the mapping to accept both formats by switching to DateFormat.custom and providing a pattern that includes both formats separated by "||" (i.e., include the millisecond ISO pattern and the strict_date_optional_time/date_optional_time variants) so reads/writes tolerate mixed precisions and legacy values.

@@ -63,7 +79,11 @@ public String getAuthorRealm() {
return super.getAuthorRealm();
}

@Field(type = Keyword)
@MultiField(
mainField = @Field(type = Text),
otherFields = {
@InnerField(suffix = "keyword", type = Keyword)
})
public String getAuthorName() {
return super.getAuthorName();
}
Comment on lines +82 to 89

🧹 Nitpick (assertive)

AuthorName multi-field mapping — LGTM

Covers full-text search and exact filters/sorts. If you need case-insensitive sorting, consider a normalizer-backed keyword subfield in the index settings.

🤖 Prompt for AI Agents
In nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/ElasticCase.java around lines 82 to 89, the current MultiField mapping on getAuthorName() provides a Text main field and a keyword subfield but does not support case-insensitive sorting; if you need case-insensitive sorts, add an additional InnerField (e.g., suffix "keyword_normalized") of type Keyword that uses a normalizer, and update the Elasticsearch index settings to define that normalizer (lowercase filter) so the new keyword subfield can be used for case-insensitive exact matching and sorting.
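
If case-insensitive sorting is needed, one possible shape is an extra normalizer-backed subfield on the same getter; the normalizer name and its declaration in the index settings (e.g., via @Setting(settingPath = ...)) are assumptions, not existing code:

@MultiField(
        mainField = @Field(type = Text),
        otherFields = {
                @InnerField(suffix = "keyword", type = Keyword),
                // Assumes a "lowercase_normalizer" (lowercase filter) declared in the index settings.
                @InnerField(suffix = "sort", type = Keyword, normalizer = "lowercase_normalizer")
        })
public String getAuthorName() {
    return super.getAuthorName();
}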

@@ -0,0 +1,32 @@
package com.netgrif.application.engine.adapter.spring.elastic.domain;

import lombok.Data;
import lombok.EqualsAndHashCode;
import lombok.NoArgsConstructor;
import org.springframework.data.elasticsearch.annotations.Field;

import static org.springframework.data.elasticsearch.annotations.FieldType.Keyword;
import static org.springframework.data.elasticsearch.annotations.FieldType.Text;

@Data
@NoArgsConstructor
@EqualsAndHashCode(callSuper = true)
public class StringCollectionField extends com.netgrif.application.engine.objects.elastic.domain.StringCollectionField {

public StringCollectionField(String[] values) {
super(values);
}

@Override
@Field(type = Text)
public String[] getFulltextValue() {
return super.getFulltextValue();
}

@Override
@Field(type = Keyword)
public String[] getCollectionValue() {
return super.getCollectionValue();
}
Comment on lines +26 to +30

🧹 Nitpick (assertive)

Plan a reindex if changing existing mappings.
If any existing index used a nested mapping for this field, roll out via an alias: build the new index, reindex, then swap the alias.


}