This repository has been archived by the owner on May 12, 2021. It is now read-only.

METRON-817: Customise output file path patterns for HDFS indexing #505

Closed
wants to merge 7 commits into from

Conversation

@justinleet (Contributor) commented Apr 3, 2017

Contributor Comments

Primarily this affects HdfsWriter by changing the output path from a fixed path (/apps/metron/.../<sensor>) to one that can be defined via a Stellar function. Specifically, the base path (the /apps/metron/.../ portion) is still defined the same way, but the <sensor> portion is dropped and can now be defined by a Stellar function. By default, the original behavior of <sensor> is used. This is defined in the <sensor>.json file as indicated in the new README.md for metron-writer.

Notes

  • This requires tracking things a bit more carefully (and if you're reviewing, please validate that it happens correctly). When the outputFile is closed, we remove the sourceHandler from HdfsWriter's map.
    • I'm slightly concerned about the correctness of the implementation, but it seems necessary to ensure that we don't leave a bunch of SourceHandlers lying around as data changes (and we don't want an enormous number of output files being written to).
    • If there's a cleaner way to manage this, I'd love to hear it and can refactor pretty easily. It throws off the rotation count (because we kill the SourceHandler from the map itself), but I doubt we care about that since it really only shows up in the output filename anyway.
  • This also adds an argument for max open files. This is a flux-level config. I defaulted it to 500, which was chosen as an arbitrary round number that wasn't enormous.
    • If someone has a default with any real reasoning behind it, I'll go ahead and change it.
  • In HdfsWriter, we iterate through the messages, apply the Stellar function, and then call the relevant handler. The entire group of messages is treated as a single pass/fail (the same as the old behavior), rather than handled individually. The try/catch could potentially be moved into the for loop, but I don't think there's an explicit link between the messages and the tuples that we can exploit to fail per message. I don't think it needs to be addressed here, but I'm curious if there's thought on this.
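The per-message path computation described above can be sketched roughly as follows. This is an illustration, not the actual HdfsWriter code: `PATH_FN` is a plain Java function standing in for the compiled Stellar `outputPathFunction`, and the message maps are hypothetical stand-ins for the JSON messages.

```java
import java.util.*;
import java.util.function.Function;

class PathGrouping {
    // Stand-in for the Stellar outputPathFunction; the "ipsrc-" prefix
    // mirrors the FORMAT('ipsrc-%s', ip_src_addr) example from this PR.
    static final Function<Map<String, Object>, String> PATH_FN =
            msg -> "ipsrc-" + msg.getOrDefault("ip_src_addr", "unknown");

    // Group a batch of messages by their computed output path, so each
    // group can be handed to the handler responsible for that path.
    static Map<String, List<Map<String, Object>>> groupByPath(
            List<Map<String, Object>> messages) {
        Map<String, List<Map<String, Object>>> groups = new HashMap<>();
        for (Map<String, Object> message : messages) {
            String path = PATH_FN.apply(message);
            groups.computeIfAbsent(path, p -> new ArrayList<>()).add(message);
        }
        return groups;
    }
}
```

Grouping first and then writing each group through its handler keeps the batch-level pass/fail semantics described above, since any handler failure can still fail the whole batch.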

Testing

Unit tests are added to pretty much cover HdfsWriter, and this can be spun up in a dev environment.

To test in dev

  • Spin up a dev environment
  • Validate that the output matches the old format in HDFS (Nothing has an output function defined)
    [hdfs@node1 vagrant]$ hdfs dfs -ls /apps/metron/indexing/indexed/
    Found 3 items
    drwxrwxr-x   - storm hadoop          0 2017-04-03 13:11 /apps/metron/indexing/indexed/bro
    drwxrwxr-x   - storm hadoop          0 2017-04-03 13:11 /apps/metron/indexing/indexed/error
    drwxrwxr-x   - storm hadoop          0 2017-04-03 13:11 /apps/metron/indexing/indexed/snort
    
  • Edit the indexing config for Bro to include an outputPathFunction in the hdfs section, e.g. in /usr/metron/0.3.1/config/zookeeper/indexing/bro.json
    {
      "hdfs" : {
        "index": "bro",
        "batchSize": 5,
        "enabled" : true,
        "outputPathFunction": "FORMAT('ipsrc-%s', ip_src_addr)"
      },
      "elasticsearch" : {
        "index": "bro",
        "batchSize": 5,
        "enabled" : true
      },
      "solr" : {
        "index": "bro",
        "batchSize": 5,
        "enabled" : true
      }
    }
    
  • Push the configs to ZooKeeper: /usr/metron/0.3.1/bin/zk_load_configs.sh -z node1:2181 -m PUSH -i /usr/metron/0.3.1/config/zookeeper/
  • Let some more data run through and check the output folders, e.g.
    [hdfs@node1 vagrant]$ hdfs dfs -ls /apps/metron/indexing/indexed/
    Found 5 items
    drwxrwxr-x   - storm hadoop          0 2017-04-03 13:11 /apps/metron/indexing/indexed/bro
    drwxrwxr-x   - storm hadoop          0 2017-04-03 13:11 /apps/metron/indexing/indexed/error
    drwxrwxr-x   - storm hadoop          0 2017-04-03 13:14 /apps/metron/indexing/indexed/ipsrc-192.168.138.158
    drwxrwxr-x   - storm hadoop          0 2017-04-03 13:14 /apps/metron/indexing/indexed/ipsrc-192.168.66.1
    drwxrwxr-x   - storm hadoop          0 2017-04-03 13:11 /apps/metron/indexing/indexed/snort
    [hdfs@node1 vagrant]$ hdfs dfs -ls /apps/metron/indexing/indexed/ipsrc-192.168.138.158
    Found 1 items
    -rw-r--r--   1 storm hadoop     223182 2017-04-03 13:14 /apps/metron/indexing/indexed/ipsrc-192.168.138.158/enrichment-null-0-0-1491225291377.json
    

Pull Request Checklist

Thank you for submitting a contribution to Apache Metron (Incubating).
Please refer to our Development Guidelines for the complete guide to follow for contributions.
Please refer also to our Build Verification Guidelines for complete smoke testing guides.

In order to streamline the review of the contribution, we ask that you follow these guidelines and double-check the following:

For all changes:

  • Is there a JIRA ticket associated with this PR? If not one needs to be created at Metron Jira.
  • Does your PR title start with METRON-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
  • Has your PR been rebased against the latest commit within the target branch (typically master)?

For code changes:

  • Have you included steps to reproduce the behavior or problem that is being changed or addressed?

  • Have you included steps or a guide to how the change may be verified and tested manually?

  • Have you ensured that the full suite of tests and checks have been executed in the root incubating-metron folder via:

    mvn -q clean integration-test install && build_utils/verify_licenses.sh 
    
  • Have you written or updated unit tests and or integration tests to verify your changes?

  • If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?

  • Have you verified the basic functionality of the build by building and running locally with Vagrant full-dev environment or the equivalent?

For documentation related changes:

  • Have you ensured that the format looks appropriate for the output in which it is rendered by building and verifying the site-book? If not, run the following commands and then verify the changes via site-book/target/site/index.html:

    cd site-book
    bin/generate-md.sh
    mvn site:site
    

Note:

Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
It is also recommened that travis-ci is set up for your personal repository such that your branches are built there before submitting a pull request.


    StellarCompiler.Expression expression = sourceTypeExpressionMap.computeIfAbsent(stellarFunction, s -> stellarProcessor.compile(stellarFunction));
    VariableResolver resolver = new MapVariableResolver(message);
Contributor:

We should be able to find out the return type from the function metadata/annotation without doing all this work, shouldn't we?

Contributor:

This makes me think of the UI case. We configure the index configuration but have no way of validating it before they save and deploy.

Contributor Author:

Unfortunately, I don't think we can, unless we want to do more work to actually look up the function and validate. On top of it, things like MAP_GET essentially return Object anyway, so we'd still want to check if it's a String afterwards.

Member:

So, is there a reason why this isn't just:

//processor is a StellarProcessor();
VariableResolver resolver = new MapVariableResolver(message);
Object objResult = processor.parse(stellarFunction, resolver, StellarFunctions.FUNCTION_RESOLVER(), Context.EMPTY_CONTEXT());
if(!(objResult instanceof String)) {
  throw new IllegalArgumentException("Stellar Function <" + stellarFunction + "> did not return a String value. Returned: " + objResult);
}
return objResult == null ? "" : (String)objResult;

Contributor Author:

@cestella I'm mostly concerned about the performance of function compile on every single message that comes through indexing.

If we keep the current approach, I would be interested in if there's a way to make things a little cleaner.

In retrospect, I think this should be an LRU cache, so that we don't keep around a given parse forever. Any thoughts on that, assuming performance would be enough of a concern to not just use your proposal?
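An LRU cache along these lines can be built on `LinkedHashMap` in access order. A minimal sketch, with `String` standing in for the compiled `StellarCompiler.Expression` values (illustrative only, not the PR's actual code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Bounded LRU cache: the least-recently-accessed compiled expression is
// evicted once the cache exceeds maxEntries.
class ExpressionLruCache extends LinkedHashMap<String, String> {
    private final int maxEntries;

    ExpressionLruCache(int maxEntries) {
        // accessOrder = true keeps iteration order least-recently-accessed first
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
        // Returning true after an insertion drops the eldest (LRU) entry
        return size() > maxEntries;
    }
}
```

The same `removeEldestEntry` pattern would also fit the max-open-files map of SourceHandlers mentioned in the notes, with the eviction hook closing the evicted handler's output file.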

Member:

Yeah, it's a good concern. We do actually have a cache in the StellarProcessor so that compilations happen once and are cached afterwards. As long as StellarProcessor is a transient member variable, I think you're good to do what I suggested.
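The transient-member pattern suggested here can be sketched as follows; `WriterBolt` and the plain `Object` processor are hypothetical stand-ins. Storm serializes bolts to ship them to workers, so a transient field is skipped during serialization and must be rebuilt lazily on the worker:

```java
import java.io.Serializable;

// Sketch of the transient-member pattern: the heavyweight processor
// (StellarProcessor in the PR; a plain Object here) is not serialized
// with the bolt and is re-created lazily on first use after deserialization.
class WriterBolt implements Serializable {
    private transient Object processor;  // stands in for StellarProcessor

    Object getProcessor() {
        if (processor == null) {
            processor = new Object();  // lazily rebuilt after deserialization
        }
        return processor;
    }
}
```

Because the processor is rebuilt per worker, any cache it holds internally is also per worker, which is what makes the caching inside StellarProcessor effective here.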

Contributor Author:

@cestella Made that change. I did make the check if(objResult != null && !(objResult instanceof String)), to avoid falling into the IAE when objResult is null.

Member:

After looking at this a bit further, while reusing the StellarProcessor is the right answer, it is apparent that we don't practice that everywhere...in fact, we practice it almost literally nowhere. I have created a follow-on PR ( #508 ) to address that problem, which is a substantial performance issue, in fact.

@cestella (Member) commented Apr 5, 2017

+1 by inspection

@asfgit asfgit closed this in 7e21ad3 Apr 10, 2017
@justinleet justinleet deleted the hdfs_path branch October 17, 2017 13:53