
Use files to communicate with analyzer/responder #176

Closed
To-om opened this issue Mar 11, 2019 · 2 comments
To-om commented Mar 11, 2019

Request Type

Feature Request

Problem Description

Analyzers/responders read input from stdin and write the result to stdout. With this behavior, the result of an analyzer can't include files (#120). Moreover, if an error occurs after the analyzer/responder has already started writing its output, it cannot report the error and the output becomes invalid (the JSON format is incorrect).

Instead of using stdin/stdout, analyzers/responders will use files. A job will be stored in a folder with the following structure:

job_folder
  |_ input
  |     |_ input.json    <- input data, equivalent to stdin with Cortex 2.x
  |     \_ attachment    <- optional extra file when the analysis concerns a file
  \_ output
        |_ output.json   <- report of the analysis (generated by the analyzer)
        \_ extra_file(s) <- optional extra files linked to the report (generated by the analyzer)

The job folder is provided to the analyzer/responder as an argument. Currently, only one job is accepted, but in a future release analyzers/responders will accept several jobs at a time (bulk mode) in order to increase performance.

This change doesn't require any rewrite of analyzers/responders; however, they must use the new version of cortexutils (2.0.0).
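The file-based protocol described above can be sketched as a minimal job loop: read `input/input.json`, do the work, then write `output/output.json`. This is an illustrative sketch, not cortexutils itself; the report field names (`success`, `full`, `echo`) are loosely modeled on analyzer report conventions and should be treated as assumptions.

```python
import json
import os
import sys


def run(job_dir):
    """Minimal file-based job loop: read the job input from
    input/input.json and write the report to output/output.json."""
    with open(os.path.join(job_dir, "input", "input.json")) as fp:
        job_input = json.load(fp)

    # Trivial "analysis": echo the observable back in the report.
    report = {"success": True, "full": {"echo": job_input.get("data")}}

    # The analyzer creates the output folder and the report file itself;
    # errors can now be reported by writing a clean {"success": false, ...}
    # report instead of corrupting a half-written stdout stream.
    out_dir = os.path.join(job_dir, "output")
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, "output.json"), "w") as fp:
        json.dump(report, fp)


if __name__ == "__main__" and len(sys.argv) > 1:
    run(sys.argv[1])  # Cortex passes the job folder as the only argument
```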

@To-om To-om added this to the 3.0.0 milestone Mar 11, 2019
@To-om To-om self-assigned this Mar 11, 2019
To-om added a commit that referenced this issue Mar 11, 2019
To-om added a commit that referenced this issue Mar 11, 2019
@To-om To-om closed this as completed Mar 13, 2019
@geekscrapy

Why have this and #120 been closed? It seems like a sensible path to me.

It would allow large files, such as disk images, to be passed to analyzers without having to be piped over stdin.


mdtro commented Aug 6, 2021

Adding some additional clarity on the file-based input changes.

You can expect three files to be created in the configured ${job-directory}, similar to the following:

/${job-directory}/cortex-job-VcKiGXsBZ3Fs1wGYaVK0-1541377302242493258/input/cacerts
/${job-directory}/cortex-job-VcKiGXsBZ3Fs1wGYaVK0-1541377302242493258/input/input.json
/${job-directory}/cortex-job-VcKiGXsBZ3Fs1wGYaVK0-1541377302242493258/input/attachment9152757940259950711

The cacerts file contains the CA certificates configured in the Cortex UI (I entered the certificates in PEM/base64 format).
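An analyzer that makes outbound TLS connections can point its trust store at this file. A minimal sketch using Python's standard `ssl` module, assuming the `input/cacerts` path shown above (the helper name is hypothetical, not a cortexutils API):

```python
import os
import ssl


def load_job_ca_context(job_dir):
    """Build an SSL context from the cacerts file Cortex drops in the
    job's input folder, falling back to the system trust store."""
    cacerts = os.path.join(job_dir, "input", "cacerts")
    if os.path.isfile(cacerts):
        # Trust the CA certificates configured in the Cortex UI
        # (PEM-encoded, as noted above).
        return ssl.create_default_context(cafile=cacerts)
    # No cacerts file for this job: use the default system CAs.
    return ssl.create_default_context()
```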

You can also expect some new fields to be added to the input JSON (and the data field removed) if the dataType is set to file:

  • file - the randomly generated name of the attachment file
  • filename - the original filename
  • contentType - the MIME type of the file (e.g. text/html, application/json)

Here's an example of the input JSON I got for a submission of an HTML document.

{
    "config": {
        "auto_extract_artifacts": false,
        "cacerts": "[removed]",
        "check_pap": true,
        "check_tlp": true,
        "jobCache": 10,
        "jobTimeout": 30,
        "max_pap": 2,
        "max_tlp": 2,
        "proxy": {
            "http": "http://localhost:8080",
            "https": "https://localhost:8080"
        },
        "proxy_http": "http://localhost:8080",
        "proxy_https": "https://localhost:8080"
    },
    "contentType": "text/html",
    "dataType": "file",
    "file": "attachment5329010166297667382",
    "filename": "index.html",
    "message": "",
    "pap": 2,
    "parameters": {},
    "tlp": 2
}
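Putting those fields together, an analyzer can locate the submitted file by joining the job folder with the `file` value from the input JSON. A sketch under the assumptions above; `resolve_attachment` is a hypothetical helper, not part of cortexutils:

```python
import json
import os


def resolve_attachment(job_dir):
    """For a file observable, return (path, original_name, mime_type)
    using the 'file', 'filename', and 'contentType' fields of
    input/input.json. Returns None for non-file observables."""
    with open(os.path.join(job_dir, "input", "input.json")) as fp:
        job = json.load(fp)
    if job.get("dataType") != "file":
        return None
    # 'file' holds only the randomly generated name; the attachment
    # itself sits next to input.json in the job's input folder.
    path = os.path.join(job_dir, "input", job["file"])
    return path, job.get("filename"), job.get("contentType")
```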
