
Upgrade OSSEC to v3.6.0 #5196

Closed
wants to merge 2 commits

Conversation

zenmonkeykstop
Contributor

@zenmonkeykstop zenmonkeykstop commented Apr 16, 2020

Status

Ready for review

Description of Changes

Upgrades OSSEC to version 3.6.0.

Testing

staging environment

(assuming the use of libvirt VMs)

  • check out this branch and run:
make build-debs
make staging
  • make staging completes successfully
  • log into the app server with molecule login -s libvirt-staging-xenial -h app-staging
    • sudo systemctl status ossec.service shows that the ossec agent and daemons are running
    • the log at /var/ossec/logs/ossec.log shows no unexpected errors
    • there are no grsec denies logged in /var/log/kern.log related to ossec binaries
  • log into the monitor server with molecule login -s libvirt-staging-xenial -h mon-staging
    • sudo systemctl status ossec.service shows that the ossec agent and daemons are running
    • the log at /var/ossec/logs/ossec.log shows no unexpected errors
    • the log at /var/ossec/logs/alerts/alerts.log shows alerts being generated for both mon-staging and app-staging.
    • there are no grsec denies logged in /var/log/kern.log related to ossec binaries
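The "shows no unexpected errors" checks above can be partly scripted. Below is a minimal Python sketch (`find_errors` is a hypothetical helper, not part of this PR) that flags ERROR lines in an ossec.log excerpt while skipping known-benign patterns, such as the sendmail/DNS errors expected in staging:

```python
import re

def find_errors(log_text, ignore_patterns=()):
    """Return ossec.log ERROR lines, minus any matching an ignore pattern."""
    flagged = []
    for line in log_text.splitlines():
        if "ERROR" not in line:
            continue
        if any(re.search(p, line) for p in ignore_patterns):
            continue
        flagged.append(line)
    return flagged

# Sample lines modeled on logs quoted elsewhere in this thread.
sample = (
    "2020/04/17 16:10:18 Accepting connections. Using password specified "
    "on file: /var/ossec/etc/authd.pass\n"
    "2020/04/17 16:20:40 ossec-analysisd(1450): ERROR: Syntax error on "
    "regex: '\\(pam_unix\\)$': 9.\n"
)
print(find_errors(sample))  # only the analysisd line is flagged
```

In practice you would feed this the contents of /var/ossec/logs/ossec.log and an ignore list tuned to the environment (staging vs. prod).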

production (HW or VMs)

  • check out this branch and run make build-debs.
  • copy the ossec packages in build/xenial to the monitor server (securedrop-ossec-server*.deb and ossec-server*.deb) and application server (securedrop-ossec-agent*.deb and ossec-agent*.deb) and install them.
  • The OSSEC server and agent services start successfully on the monitor and application server respectively.
  • OSSEC emails start to flow as expected, encrypted with the key specified during installation.
  • The test OSSEC email feature in the Journalist Interface works as expected.

Deployment

  • Will be deployed to new instances as part of installation, and old instances with next upgrade.

Checklist

If you made changes to the system configuration:

If you made non-trivial code changes:

  • I have written a test plan and validated it for this PR

@emkll emkll self-requested a review April 16, 2020 14:35
@emkll emkll added this to Under Review in SecureDrop Team Board Apr 16, 2020
@emkll
Contributor

emkll commented Apr 17, 2020

I understand this is not ready for final review, but took a pass through it (clean install and upgrade install using prod VMs, to ensure email works).

The upgrade on prod VMs worked as expected (though more testing will be performed as part of the final review): emails were flowing and log files didn't show any errors.

A couple of things I've observed during the staging clean install:

re-registration

The registration seems to no longer be idempotent, based on my local testing. Can you reproduce the following error output by running make build-debs, make staging, make staging?

    TASK [ossec : Register OSSEC agent.] *******************************************
    fatal: [app-staging]: FAILED! => {"changed": true, "cmd": ["/var/ossec/bin/agent-auth", "-m", "10.0.1.3", "-p", "1515", "-A", "app-staging", "-P", "/var/ossec/etc/authd.pass"], "delta": "0:00:01.020907", "end": "2020-04-17 16:30:30.387493", "failed_when_result": true, "rc": 0, "start": "2020-04-17 16:30:29.366586", "stderr": "2020/04/17 16:30:29 ossec-authd: INFO: Started (pid: 10782).\n2020/04/17 16:30:29 INFO: Connected to 10.0.1.3 at address 10.0.1.3, port 1515", "stderr_lines": ["2020/04/17 16:30:29 ossec-authd: INFO: Started (pid: 10782).", "2020/04/17 16:30:29 INFO: Connected to 10.0.1.3 at address 10.0.1.3, port 1515"], "stdout": "INFO: Using specified password.\nINFO: Connected to 10.0.1.3:1515\nINFO: Using agent name as: app-staging\nINFO: Send request to manager. Waiting for reply.\nERROR: Invalid IP address: 1024 (from manager)\nERROR: Unable to add agent. (from manager)\nERROR: Unable to create key. Either wrong password or connection not accepted by the manager.\nINFO: Connection closed.", "stdout_lines": ["INFO: Using specified password.", "INFO: Connected to 10.0.1.3:1515", "INFO: Using agent name as: app-staging", "INFO: Send request to manager. Waiting for reply.", "ERROR: Invalid IP address: 1024 (from manager)", "ERROR: Unable to add agent. (from manager)", "ERROR: Unable to create key. Either wrong password or connection not accepted by the manager.", "INFO: Connection closed."]}

Upon closer inspection on the mon server, the error message returned might be misleading: 1024 was likely the agent id for app-staging. These are the relevant logs in ossec.log on mon-staging:

2020/04/17 16:30:28 ossec-authd: INFO: Started (pid: 7196).
2020/04/17 16:30:28 Accepting connections. Using password specified on file: /var/ossec/etc/authd.pass
2020/04/17 16:30:28 IPv4: 0.0.0.0 on port 1515
2020/04/17 16:30:28 Request for TCP listen() succeeded.
2020/04/17 16:30:28 Socket bound for IPv4: 0.0.0.0 on port 1515
2020/04/17 16:30:29 ossec-authd: INFO: New connection from 10.0.1.2
2020/04/17 16:30:29 ossec-authd: INFO: Received request for a new agent (app-staging) from: 10.0.1.2
2020/04/17 16:30:29 ossec-authd: ERROR: Invalid IP address 1024 (duplicated)
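That reading checks out mechanically: 1024 is not a parseable IPv4 address, which is consistent with the manager substituting an agent id into the "Invalid IP address" message. A small Python sketch (the regexes here are illustrative, not OSSEC source):

```python
import re

mon_line = ("2020/04/17 16:30:29 ossec-authd: "
            "ERROR: Invalid IP address 1024 (duplicated)")

# Pull out the value the manager complained about.
m = re.search(r"Invalid IP address (\S+) \(duplicated\)", mon_line)
value = m.group(1)

# Check whether it even looks like a dotted-quad address.
is_ip = re.fullmatch(r"(?:\d{1,3}\.){3}\d{1,3}", value) is not None

print(value, is_ip)  # → 1024 False: not an IP, plausibly an agent id
```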

pax flags and decoder errors

This one is a bit confusing to me, because it only happens on a clean install and was not observed on the upgrade install.

Based on my local testing, it's unclear whether the pax flags are required on app and mon. I removed the functions in the postinst that applied the pax flags (though as a result, mon did have paxctld installed with its default configuration). I did not observe any grsec errors in kern.log or syslog, but did observe some ossec errors on mon while starting the service:

2020/04/17 16:10:18 Accepting connections. Using password specified on file: /var/ossec/etc/authd.pass
2020/04/17 16:10:18 IPv4: 0.0.0.0 on port 1515
2020/04/17 16:10:18 Request for TCP listen() succeeded.
2020/04/17 16:10:18 Socket bound for IPv4: 0.0.0.0 on port 1515
2020/04/17 16:10:19 ossec-authd: INFO: New connection from 10.0.1.2
2020/04/17 16:10:19 ossec-authd: INFO: Received request for a new agent (app-staging) from: 10.0.1.2
2020/04/17 16:10:19 ossec-authd: INFO: Agent key generated for app-staging (requested by 10.0.1.2)
2020/04/17 16:10:19 ossec-authd: INFO: Agent key created for app-staging (requested by 10.0.1.2)
2020/04/17 16:10:20 ossec-authd(1225): INFO: SIGNAL [(15)-(Terminated)] Received. Exit Cleaning...
2020/04/17 16:10:20 ossec-analysisd(1450): ERROR: Syntax error on regex: '\(pam_unix\)$': 9.
2020/04/17 16:10:20 ossec-testrule(1202): ERROR: Configuration error at '/etc/decoder.xml'. Exiting.
2020/04/17 16:12:20 ossec-analysisd(1450): ERROR: Syntax error on regex: '\(pam_unix\)$': 9.
2020/04/17 16:12:20 ossec-testrule(1202): ERROR: Configuration error at '/etc/decoder.xml'. Exiting.
2020/04/17 16:16:43 ossec-analysisd(1450): ERROR: Syntax error on regex: '\(pam_unix\)$': 9.
2020/04/17 16:16:43 ossec-testrule(1202): ERROR: Configuration error at '/etc/decoder.xml'. Exiting.
2020/04/17 16:16:43 ossec-analysisd(1450): ERROR: Syntax error on regex: '\(pam_unix\)$': 9.
2020/04/17 16:16:43 ossec-testrule(1202): ERROR: Configuration error at '/etc/decoder.xml'. Exiting.
2020/04/17 16:16:43 ossec-analysisd(1450): ERROR: Syntax error on regex: '\(pam_unix\)$': 9.
2020/04/17 16:16:43 ossec-testrule(1202): ERROR: Configuration error at '/etc/decoder.xml'. Exiting.
2020/04/17 16:16:44 ossec-analysisd(1450): ERROR: Syntax error on regex: '\(pam_unix\)$': 9.
2020/04/17 16:16:44 ossec-testrule(1202): ERROR: Configuration error at '/etc/decoder.xml'. Exiting.
2020/04/17 16:16:44 ossec-analysisd(1450): ERROR: Syntax error on regex: '\(pam_unix\)$': 9.
2020/04/17 16:16:44 ossec-testrule(1202): ERROR: Configuration error at '/etc/decoder.xml'. Exiting.
2020/04/17 16:16:44 ossec-analysisd(1450): ERROR: Syntax error on regex: '\(pam_unix\)$': 9.
2020/04/17 16:16:44 ossec-testrule(1202): ERROR: Configuration error at '/etc/decoder.xml'. Exiting.
2020/04/17 16:16:44 ossec-analysisd(1450): ERROR: Syntax error on regex: '\(pam_unix\)$': 9.
2020/04/17 16:16:44 ossec-testrule(1202): ERROR: Configuration error at '/etc/decoder.xml'. Exiting.
2020/04/17 16:16:44 ossec-analysisd(1450): ERROR: Syntax error on regex: '\(pam_unix\)$': 9.
2020/04/17 16:16:44 ossec-testrule(1202): ERROR: Configuration error at '/etc/decoder.xml'. Exiting.
2020/04/17 16:20:40 ossec-analysisd(1450): ERROR: Syntax error on regex: '\(pam_unix\)$': 9.
2020/04/17 16:20:40 ossec-testrule(1202): ERROR: Configuration error at '/etc/decoder.xml'. Exiting.

@zenmonkeykstop
Contributor Author

Yup, it's confusing to me that without pax flags set (via either method) the binaries error out when they try to parse their first regex, yet there isn't a grsec error thrown in the logs. Will try to repro the registration issue above first, but after that it may be worth building ossec against a local source build of libpcre2 with JIT disabled, to try to narrow things down.

@zenmonkeykstop
Contributor Author

I didn't replicate the re-registration error above. On a second run of make staging, I see

    TASK [ossec : Check whether Application Server is registered as OSSEC agent.] ***
    ok: [mon-staging]

and then the other tasks don't run.
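That "check first, then skip" behavior is the standard idempotency pattern the role appears to rely on. A generic, hedged Python sketch of the idea (names are hypothetical; the real logic lives in the Ansible ossec role, not in Python):

```python
def ensure_registered(registered_agents, name, register):
    """Run `register` only when `name` is not already in the agent list."""
    if name in registered_agents:
        return "ok"        # already registered: skip, like Ansible's 'ok:'
    register(name)
    return "changed"       # registration actually ran

calls = []
print(ensure_registered(["mon-staging"], "app-staging", calls.append))
# → changed  (first run registers the agent)
print(ensure_registered(["mon-staging", "app-staging"], "app-staging",
                        calls.append))
# → ok       (second run is a no-op)
print(len(calls))  # → 1: agent-auth equivalent ran exactly once
```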

@emkll emkll moved this from Under Review to Ready for Review in SecureDrop Team Board Apr 20, 2020
@zenmonkeykstop
Contributor Author

Pushed an update to this with PCRE2 JIT disabled in the ossec build. This will apparently be slower, but it also removes the need for pax flags to be set.
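For context on the JIT change: the pattern analysisd choked on, \(pam_unix\)$, is unremarkable PCRE-style syntax, which supports the idea that the failure was in JIT compilation/execution under grsecurity rather than in the pattern itself. A quick illustration using Python's re module (a different engine from libpcre2, used here only to show the pattern is well-formed):

```python
import re

# The regex from the 'Syntax error on regex' entries in ossec.log above.
pattern = re.compile(r"\(pam_unix\)$")

print(bool(pattern.search("sshd(pam_unix)")))   # → True
print(bool(pattern.search("unrelated line")))   # → False
```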

@emkll emkll moved this from Ready for Review to Under Review in SecureDrop Team Board Apr 20, 2020
@zenmonkeykstop zenmonkeykstop marked this pull request as ready for review April 21, 2020 02:49
Contributor

@emkll emkll left a comment


Thanks @zenmonkeykstop, changes look good, great find on the JIT option. All the errors observed in my previous review are no longer found in logs. I didn't observe any significant load on the system, but we should keep an eye out for that during 1.3.0 QA, especially on hardware.

See inline for two typos which I think we should fix prior to merge.

Not immediately approving yet; I want to do a bit more research on the new rules introduced in the past few versions of ossec, and to observe the alerts as they flow into my inbox, to ensure no new noisy alerts are being introduced. Will update this ticket later today.

Tested as follows:

staging environment

  • make staging completes successfully
  • log into the app server with molecule login -s libvirt-staging-xenial -h app-staging
    • sudo systemctl status ossec.service shows that the ossec agent and daemons are running
    • the log at /var/ossec/logs/ossec.log shows no unexpected errors
      getting the following entry:
      ERROR: /queue/alerts/execq' not accessible: 'Queue not found'
      This is because we don't use active response
    • there are no grsec denies logged in /var/log/kern.log related to ossec binaries
  • log into the monitor server with molecule login -s libvirt-staging-xenial -h mon-staging
    • sudo systemctl status ossec.service shows that the ossec agent and daemons are running
    • the log at /var/ossec/logs/ossec.log shows no unexpected errors
      NOTE: I am getting sendmail errors and DNS errors for SMTP server, but expected because email does not work out of the box in staging. Not observing any unexpected errors. These errors are not present in prod tests.
    • the log at /var/ossec/logs/alerts/alerts.log shows alerts being generated for both mon-staging and app-staging.
    • there are no grsec denies logged in /var/log/kern.log related to ossec binaries
  • /var/ossec/bin/agent_control -l lists app-staging and mon-staging as active agents

production (Prod VMs, using 1.2.2 VMs)

  • The OSSEC server and agent services start successfully on the monitor and application server respectively.
  • OSSEC emails start to flow as expected, encrypted with the key specified during installation.
  • The test OSSEC email feature in the Journalist Interface works as expected.
  • no unexpected errors in ossec.log

Contributor

@emkll emkll left a comment


Tested changes with the upgrade scenario (and observed same results as previous staging tests), changes look good to merge when CI passes.

Inspected the rules; there are a couple of new rulesets, none of which have resulted in any additional notifications:

  • cimserver
  • dnsmasq
  • esl
  • last_rootlogin * (looks like it uses last, and since we don't have a root user, just admin+sudo, this should not trigger)
  • linux_usbdetect * (didn't get alerts when connecting USB devices to the VM)
  • mhn_cowrie, dionea
  • ms1016
  • ms_ipsec
  • ms_powershell
  • topleveldomain

After reviewing alert changes on the prod install, other than a large number of syscheck changes related to the rules and delayed notifications due to #5201, I think the alerts should not significantly change from an end user perspective.

@emkll
Contributor

emkll commented Apr 21, 2020

I clicked the merge button a couple of hours ago, but due to GitHub issues/outage, it seems like this PR was merged into develop in ab8d4a0. However, this PR is not displaying as merged.

@zenmonkeykstop
Contributor Author

Confirmed that the merge commit and PR commits are present on develop; going to close this PR to avoid confusion.

@zenmonkeykstop
Contributor Author

Lol and now it shows as merged 🙄
