
NIFI-3248: Improvement of GetSolr Processor #2199

Closed
wants to merge 3 commits

Conversation

JohannesDaniel (Contributor):

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

For all changes:

  • Is there a JIRA ticket associated with this PR? Is it referenced
    in the commit message?

  • Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.

  • Has your PR been rebased against the latest commit within the target branch (typically master)?

  • Is your initial contribution a single, squashed commit?

For code changes:

  • Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
  • Have you written or updated unit tests to verify your changes?
  • If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
  • If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
  • If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
  • If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?

For documentation related changes:

  • Have you ensured that format looks appropriate for the output in which it is rendered?

Note:

Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.

ijokarumawak (Member) left a comment:

@JohannesDaniel Thanks for your contribution! This PR looks great overall, lots of enhancements and unit tests!

I posted several comments just from looking at the changed lines of code. I haven't tested it against a live Solr instance yet, but I will, and I'll share more feedback if I find anything.

Please check the comments and let us know your thoughts. Thanks!

@@ -275,7 +275,7 @@ protected final boolean isBasicAuthEnabled() {
 }

 @Override
-    protected final Collection<ValidationResult> customValidate(ValidationContext context) {
+    protected Collection<ValidationResult> customValidate(ValidationContext context) {
ijokarumawak (Member):

Shouldn't we add another protected method to override in sub-classes?

JohannesDaniel (Contributor, author):

I did, within the GetSolr class.

ijokarumawak (Member):

I imagine customValidate was marked final because the original author wanted to prevent sub-classes from skipping the validation code implemented here. You implemented it within GetSolr and call super.customValidate from there, so it should be fine, but another sub-class could forget to call super.customValidate if we remove the final keyword.

So I thought a safer approach might be to add an abstract method, such as additionalCustomValidate, to SolrProcessor, call it from customValidate, and let sub-classes implement their custom validation there.

JohannesDaniel (Contributor, author):

OK. By doing so, I will also have to add this method to PutSolrContentStream.

ijokarumawak (Member):

Good call. Then the method can be a non-abstract method at SolrProcessor that does nothing.
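
For reference, a minimal sketch of the pattern being discussed: customValidate stays final in SolrProcessor and delegates to a no-op hook. The hook name follows the additionalCustomValidate suggestion above; everything else is an assumption, not necessarily the code as merged.

import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

import org.apache.nifi.components.ValidationContext;
import org.apache.nifi.components.ValidationResult;
import org.apache.nifi.processor.AbstractProcessor;

public abstract class SolrProcessor extends AbstractProcessor {

    @Override
    protected final Collection<ValidationResult> customValidate(ValidationContext context) {
        final List<ValidationResult> problems = new ArrayList<>();
        // ... validation logic shared by all Solr processors ...
        problems.addAll(additionalCustomValidation(context));
        return problems;
    }

    // Non-abstract, no-op hook: GetSolr overrides this, while sub-classes
    // such as PutSolrContentStream can simply inherit the empty default.
    protected Collection<ValidationResult> additionalCustomValidation(ValidationContext context) {
        return Collections.emptyList();
    }
}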

<field name="string_single" type="string" indexed="true" stored="true" />
<field name="string_multi" type="string" indexed="true" stored="true" multiValued="true"/>

<uniqueKey>id</uniqueKey>
ijokarumawak (Member):

What if the Solr schema doesn't have a uniqueKey? Does this processor still work without one?

JohannesDaniel (Contributor, author):

The uniqueKey field has to be part of the sorting. Well-configured Solr indexes always include this kind of field, as many things will not work properly without it. Actually, I have never seen a Solr index without one (and I have seen a lot ... ;)

ijokarumawak (Member):

I agree, most indices have a unique key. I only asked because, according to the Solr documentation, a uniqueKey is not mandatory. In that case I would prefer to state in the NiFi documentation that a unique key is required for this processor to work properly.

@@ -172,157 +203,196 @@ protected void init(final ProcessorInitializationContext context) {

 @Override
 public void onPropertyModified(PropertyDescriptor descriptor, String oldValue, String newValue) {
-    lastEndDatedRef.set(UNINITIALIZED_LAST_END_DATE_VALUE);
+    clearState.set(true);
ijokarumawak (Member):

Probably we'd like to clear state only when the following properties change? It would be bad UX if the state were cleared when a user re-configures the batch size (see the sketch after this list):

  • SOLR_TYPE
  • SOLR_LOCATION
  • COLLECTION
  • SOLR_QUERY
  • DATE_FIELD
  • RETURN_FIELDS
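
A minimal sketch of that selective clearing, assuming a set built from the descriptors listed above and the clearState flag from the diff (not the exact merged code):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.nifi.components.PropertyDescriptor;

// ... inside GetSolr:
private static final Set<PropertyDescriptor> STATE_CLEARING_PROPERTIES =
        new HashSet<>(Arrays.asList(SOLR_TYPE, SOLR_LOCATION, COLLECTION,
                SOLR_QUERY, DATE_FIELD, RETURN_FIELDS));

@Override
public void onPropertyModified(PropertyDescriptor descriptor, String oldValue, String newValue) {
    // Only invalidate stored state for properties that change what is queried;
    // re-configuring e.g. Batch Size keeps the existing state.
    if (STATE_CLEARING_PROPERTIES.contains(descriptor)) {
        clearState.set(true);
    }
}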

JohannesDaniel (Contributor, author):

OK, no problem.

descriptors.add(SOLR_QUERY);
descriptors.add(RETURN_FIELDS);
descriptors.add(SORT_CLAUSE);
ijokarumawak (Member):

Is it safe to remove an existing property? The existing code should not sort results anyway, or it should have stored the last sorted field value to paginate properly when docs with the same date span more than one page. So I think it's safe.

JohannesDaniel (Contributor, author):

This should be safe, as the sorting only affects documents indexed after lastEndDate (documents indexed earlier are excluded by the filter query).

 @InputRequirement(Requirement.INPUT_FORBIDDEN)
-@CapabilityDescription("Queries Solr and outputs the results as a FlowFile")
+@CapabilityDescription("Queries Solr and outputs the results as a FlowFile in the format of XML or using a Record Writer")
+@Stateful(scopes = {Scope.LOCAL}, description = "Stores latest date of Date Field so that the same data will not be fetched multiple times.")
ijokarumawak (Member):

GetSolr used to use a local file to store lastEndDate. We need migration code so that lastEndDate is taken over into managed state when there is no state but the lastEndDate file exists.

JohannesDaniel (Contributor, author):

Do you really think it is required to read the file? Backwards compatibility could also be realized by adding a filter query like fq=dateField:[* TO lastEndDate]. The user would only have to specify the value of lastEndDate, e.g. via a property of the processor.

JohannesDaniel (Contributor, author):

Sorry, this would be the correct filter query:
fq=dateField:[lastEndDate TO NOW]

ijokarumawak (Member):

The exact way to realize backward compatibility is up to you :) I'm fine as long as users can understand how to migrate existing state to the new version of this processor. If it needs to be done manually, then it should be documented.
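
For illustration, a SolrJ sketch of the filter-query approach discussed above; the field name, date value, and the idea of a dedicated processor property are hypothetical, not the merged implementation:

import org.apache.solr.client.solrj.SolrQuery;

SolrQuery query = new SolrQuery("*:*");
String dateField = "last_modified";           // value of the Date Field property
String lastEndDate = "2017-10-01T00:00:00Z";  // carried over manually from the old lastEndDate file

// Only fetch documents indexed at or after the previously recorded end date.
query.addFilterQuery(dateField + ":[" + lastEndDate + " TO NOW]");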

 @InputRequirement(Requirement.INPUT_FORBIDDEN)
-@CapabilityDescription("Queries Solr and outputs the results as a FlowFile")
+@CapabilityDescription("Queries Solr and outputs the results as a FlowFile in the format of XML or using a Record Writer")
+@Stateful(scopes = {Scope.LOCAL}, description = "Stores latest date of Date Field so that the same data will not be fetched multiple times.")
ijokarumawak (Member):

State scope should be CLUSTER, I think. Also, the capability description should mention that this processor is designed to run on the Primary Node only. Please refer to the ListHDFS processor documentation.

Or does this processor work nicely in distributed fashion by utilizing multiple NiFi nodes against a Solr cluster?
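
A sketch of the suggested annotation changes as they might appear on the GetSolr class; the description wording is an assumption:

import org.apache.nifi.annotation.behavior.Stateful;
import org.apache.nifi.annotation.documentation.CapabilityDescription;
import org.apache.nifi.components.state.Scope;

@CapabilityDescription("Queries Solr and outputs the results as a FlowFile in the format of XML "
        + "or using a Record Writer. This processor is intended to be run on the Primary Node only.")
@Stateful(scopes = {Scope.CLUSTER}, description = "Stores the latest date of the Date Field "
        + "so that the same data will not be fetched multiple times.")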

writeLastEndDate();
}
@OnScheduled
public void onScheduled2(final ProcessContext context) throws IOException {
ijokarumawak (Member):

Please change method name appropriately to represent what it does, such as clearState. The annotation explains when it's called.
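
Something along these lines; only the rename and the annotation come from the review, the body shown here is an assumption:

import java.io.IOException;

import org.apache.nifi.annotation.lifecycle.OnScheduled;
import org.apache.nifi.components.state.Scope;
import org.apache.nifi.processor.ProcessContext;

@OnScheduled
public void clearState(final ProcessContext context) throws IOException {
    // The annotation already says *when* this runs; the name now says *what* it does.
    if (clearState.getAndSet(false)) {
        context.getStateManager().clear(Scope.CLUSTER);
    }
}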

.required(false)
.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
.build();

JohannesDaniel (Contributor, author):

This property should make it quite obvious how backwards compatibility can be achieved. Additionally, I will describe it in the documentation. BTW: where can I change the descriptions of processor usage? I did not find them in the nifi-docs folder...


ijokarumawak (Member) left a comment:

@JohannesDaniel As another cycle of review, I posted different comments. Please check those out.

Also, I haven't finished testing whether it covers the original intent of NIFI-3248, specifically the timezone issue and the indexing time lag. Did you confirm those are addressed by this PR?
https://issues.apache.org/jira/browse/NIFI-3248

Thanks!

.required(true)
.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
.allowableValues(MODE_XML, MODE_REC)
.defaultValue(MODE_REC.getValue())
ijokarumawak (Member):

The default value should be MODE_XML, as it was before.


public static final PropertyDescriptor RETURN_TYPE = new PropertyDescriptor
.Builder().name("Return Type")
.displayName("Return Type")
ijokarumawak (Member):

Although I haven't seen a specific guideline or documentation, other processors prefer a lower-case name that looks like a property key or configuration name, such as return_type, so that users can type the name without worrying about spacing or case sensitivity, while displayName is a more verbose, human-readable name.

name becomes more important in the world of MiNiFi, or when another application talks directly with the NiFi API programmatically.

I don't have a strong opinion here, but just wanted to share what those two are for.
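
To make the convention concrete, here is a hypothetical version of the descriptor above following that pattern; the description text is an assumption:

public static final PropertyDescriptor RETURN_TYPE = new PropertyDescriptor
        .Builder().name("return_type")      // stable, machine-friendly key
        .displayName("Return Type")         // verbose, human-readable label
        .description("Whether results are written as XML or via the configured Record Writer.")
        .required(true)
        .allowableValues(MODE_XML, MODE_REC)
        .defaultValue(MODE_XML.getValue())
        .build();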

JohannesDaniel (Contributor, author):

Most of the properties were already available in the prior GetSolr processor. I expected this to be critical for backwards compatibility. For the new properties I chose the same naming pattern.

for (SolrDocument doc : response.getResults()) {
doc.removeFields(dateField);
}
}
flowFile = session.write(flowFile, new QueryResponseOutputStreamCallback(response));
ijokarumawak (Member):

Surprisingly, this processor has not been able to output proper XML when multiple documents are fetched in a single onTrigger call. The resulting FlowFile contains the following text data, which is not well-formed XML and can't be parsed correctly downstream. We should wrap the doc elements in something like a docs element to make it parsable:

<doc boost="1.0">
<field name="id">F8V7067-APL-KIT</field>
<field name="price_c____l_ns">1995</field>
</doc>
<doc boost="1.0">
<field name="id">MA147LL/A</field>
</doc>

Would you be able to fix this method, too?
https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-solr-bundle/nifi-solr-processors/src/main/java/org/apache/nifi/processors/solr/GetSolr.java#L339
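
A sketch of one way to fix the callback, wrapping the documents in a single root element; the surrounding class and the toSolrInputDocument helper are assumed from the linked code, and the docs element name follows the suggestion above:

// Assumed imports: java.io.IOException, java.io.OutputStream,
// java.nio.charset.StandardCharsets,
// org.apache.solr.client.solrj.util.ClientUtils, org.apache.solr.common.SolrDocument
@Override
public void process(final OutputStream out) throws IOException {
    // Opening root element makes the concatenated <doc> elements well-formed XML.
    out.write("<docs>".getBytes(StandardCharsets.UTF_8));
    for (final SolrDocument doc : response.getResults()) {
        final String xml = ClientUtils.toXML(toSolrInputDocument(doc));
        out.write(xml.getBytes(StandardCharsets.UTF_8));
    }
    out.write("</docs>".getBytes(StandardCharsets.UTF_8));
}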

JohannesDaniel (Contributor, author):

Sure. But all users will have to change their workflows after updating NiFi, right?

ijokarumawak (Member):

Yes, that's correct, but it's necessary.

if (context.getProperty(RETURN_TYPE).evaluateAttributeExpressions().getValue().equals(MODE_REC.getValue())
&& !context.getProperty(RECORD_WRITER).isSet()) {
problems.add(new ValidationResult.Builder()
.explanation("for parsing records a record writer has to be configured")
ijokarumawak (Member):

"for parsing records a ..." should probably be "for writing records a ..."? Parsing is done when reading record-formatted data.

.append(dateField)
.append(":[")
.append(context.getStateManager().getState(Scope.CLUSTER).get(STATE_MANAGER_FILTER))
.append(" TO *]");
ijokarumawak (Member):

When I reran the GetSolr processor after clearing its state, I got the following error; we should probably handle the situation where the state doesn't contain STATE_MANAGER_FILTER here:

2017-10-19 22:22:54,827 ERROR [Timer-Driven Process Thread-4] org.apache.nifi.processors.solr.GetSolr GetSolr[id=34a73c1d-015f-1000-6121-dfeaf41fe595] GetSolr[id=34a73c1d-015f-1000-6121-dfeaf41fe595] failed to process due to org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://192.168.99.1:8983/solr/techproducts: Invalid Date String:'null'; rolling back session: {}
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://192.168.99.1:8983/solr/techproducts: Invalid Date String:'null'
        at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:592)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:261)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:250)
        at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:403)
        at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:355)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1291)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1061)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:997)
        at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
        at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
        at org.apache.nifi.processors.solr.GetSolr.onTrigger(GetSolr.java:336)
        at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
        at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1119)
        at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
        at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
        at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

final StringBuilder automatedFilterQuery = (new StringBuilder())
.append(dateField)
.append(":[")
.append(context.getStateManager().getState(Scope.CLUSTER).get(STATE_MANAGER_FILTER))
ijokarumawak (Member):

There are a couple of context.getStateManager().getState(Scope.CLUSTER) calls. These should be consolidated into a variable holding the result of the call, to avoid accessing ZooKeeper too often.
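
For example (sketch; variable names assumed):

// Fetch the state once per onTrigger and reuse it, instead of hitting
// ZooKeeper for every value that is read.
final StateMap stateMap = context.getStateManager().getState(Scope.CLUSTER);
final String filter = stateMap.get(STATE_MANAGER_FILTER);
final String cursorMark = stateMap.get(STATE_MANAGER_CURSOR_MARK);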

public static final String STATE_MANAGER_FILTER = "stateManager_filter";
public static final String STATE_MANAGER_CURSOR_MARK = "stateManager_cursorMark";
public static final AllowableValue MODE_XML = new AllowableValue("XML");
public static final AllowableValue MODE_REC = new AllowableValue("Records");
ijokarumawak (Member):

Just an idea: configuring a schema for the writer manually can be cumbersome. I wonder if it's possible to load the schema from the target collection and then auto-generate a NiFi record schema from it. Do you think that's doable?

JohannesDaniel (Contributor, author):

In principle yes, by using the Schema API. But I don't expect this to be too easy. I suggest we create a separate ticket for this, as it will require some deeper consideration.

JohannesDaniel (Contributor, author):

The difficulty with this is that Solr provides various field types for different kinds of data. For instance, an integer could be derived from an Int, TrieInt (version < 7.0), or Pint (version >= 7.0) field. This requires a comprehensive field-type-to-data-type mapping.
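
Purely for illustration, the kind of lookup table this would imply; the entries here are hypothetical and far from complete:

import java.util.HashMap;
import java.util.Map;

import org.apache.nifi.serialization.record.RecordFieldType;

Map<String, RecordFieldType> solrTypeToRecordType = new HashMap<>();
solrTypeToRecordType.put("int", RecordFieldType.INT);        // TrieIntField (< 7.0)
solrTypeToRecordType.put("pint", RecordFieldType.INT);       // IntPointField (>= 7.0)
solrTypeToRecordType.put("string", RecordFieldType.STRING);
solrTypeToRecordType.put("pdate", RecordFieldType.TIMESTAMP);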

JohannesDaniel (Contributor, author):

Additionally, this requires parsing the response JSON, as response parsing for the Schema API is not really implemented in SolrJ.

JohannesDaniel (Contributor, author):

Hmm, and dynamic fields could become a problem... I think this is not possible.

ijokarumawak (Member):

Thanks for your detailed considerations. I agree it's not an easy task. So as not to lose your informative comments for future work, I've created another JIRA: https://issues.apache.org/jira/browse/NIFI-4514

JohannesDaniel (Contributor, author):

I don't think the timezone and commit issues are relevant any more. GetSolr now takes the timestamp directly from the results. Commit delays won't be a problem, as the state is only updated when new documents are retrieved, regardless of when the query was executed. The same applies to the timezone issue.

ijokarumawak (Member):

@JohannesDaniel Thanks for the updates and the additional documentation. Confirmed that the commit time lag is no longer an issue. All LGTM, +1. I'm going to squash the commits and merge to master. Thanks again for your contribution!

asfgit closed this in c06dee2 on Oct 23, 2017