next/298/20240214/v1 #10413

Merged
merged 15 commits into from Feb 14, 2024
11 changes: 5 additions & 6 deletions .github/dependabot.yml
@@ -1,14 +1,13 @@
version: 2
updates:
- package-ecosystem: "cargo"
directory: "/rust"
schedule:
interval: "daily"
commit-message:
prefix: "rust:"
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "daily"
commit-message:
prefix: "github-actions:"
ignore:
- dependency-name: "actions/cache"
versions: ["3.x"]
- dependency-name: "actions/checkout"
versions: ["3.x"]
4 changes: 3 additions & 1 deletion .github/workflows/authors.yml
@@ -3,6 +3,8 @@ name: New Authors Check
on:
pull_request:

permissions: read-all

concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
@@ -27,7 +29,7 @@ jobs:
touch new-authors.txt
while read -r author; do
echo "Checking author: ${author}"
if ! grep -q "^${author}\$" authors.txt; then
if ! grep -qFx "${author}" authors.txt; then
echo "ERROR: ${author} NOT FOUND"
echo "::warning ::New author found: ${author}"
echo "${author}" >> new-authors.txt
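The switch from the regex pattern `"^${author}\$"` to `grep -qFx` matters when author strings contain regex metacharacters; a standalone sketch of the difference (file contents and author are made up for illustration):

```shell
# -F treats the pattern as a fixed string (no regex interpretation),
# -x requires the whole line to match, and -q suppresses output.
# With the old regex form, characters like '+' or '.' in an author
# string are interpreted as regex syntax; -Fx avoids that entirely.
printf '%s\n' 'Jane Doe <jane+ci@example.com>' > authors.txt
author='Jane Doe <jane+ci@example.com>'
if grep -qFx "${author}" authors.txt; then
    echo "known author"
else
    echo "new author"
fi
```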
7 changes: 5 additions & 2 deletions .github/workflows/codeql.yml
@@ -13,6 +13,8 @@ on:
schedule:
- cron: '18 21 * * 1'

permissions: read-all

concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
@@ -39,9 +41,10 @@ jobs:

# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
uses: github/codeql-action/init@v3.24.1
with:
languages: ${{ matrix.language }}
queries: security-extended

- run: |
sudo apt-get update
@@ -59,4 +62,4 @@ jobs:
./configure
make
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
uses: github/codeql-action/analyze@v3.24.1
2 changes: 2 additions & 0 deletions .github/workflows/scan-build.yml
@@ -8,6 +8,8 @@ on:
paths-ignore:
- "doc/**"

permissions: read-all

concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
2 changes: 1 addition & 1 deletion .github/workflows/scorecards-analysis.yml
@@ -51,6 +51,6 @@ jobs:

# Upload the results to GitHub's code scanning dashboard.
- name: "Upload SARIF results"
uses: github/codeql-action/upload-sarif@dc021d495cb77b369e4d9d04a501700fd83b8c51 # v1
uses: github/codeql-action/upload-sarif@bc64d12bb9f349435efba65d373bac054665b85f # v1
with:
sarif_file: results.sarif
18 changes: 14 additions & 4 deletions SECURITY.md
@@ -45,6 +45,15 @@ releases.
Note that we'll be refining the levels based on our experiences with applying them
to actual issues.

## CVE IDs and GitHub Security Advisories (GHSA)

We will request a CVE ID for an issue if appropriate. Note that multiple
issues may share the same CVE ID.

We work with the GitHub CNA through the GitHub Security Advisory (GHSA) facility.

GHSAs will be published at least 2 weeks after the public release addressing
the issue, together with the Redmine security tickets.

## Support Status of affected code

@@ -63,13 +72,14 @@ other data, please clearly state if these can (eventually) enter our public CI/Q

We will assign a severity and will share our assessment with you.

We will create a security ticket, which will be private until a few weeks after
We will create a security ticket, which will be private until at least 2 weeks after
a public release addressing the issue.

We will acknowledge you in the release notes and the release announcement. If you
do not want this, please clearly state this.
We will acknowledge you in the release notes, release announcement, and GHSA. If you
do not want this, please clearly state this. For the GHSA credits, please give us
your GitHub handle.

We will not request a CVE, but if you do please let us know the CVE ID.
Please let us know if you've requested a CVE ID. If you haven't, we can do it.

OISF does not participate in bug bounty programs, or offer any other rewards
for reporting issues.
27 changes: 26 additions & 1 deletion doc/userguide/configuration/suricata-yaml.rst
@@ -505,6 +505,27 @@ the alert.
mode: normal # "normal" or multi
conditional: alerts

In ``normal`` mode a single pcap file "filename" is created in the default-log-dir
or as specified by "dir".

In ``multi`` mode one pcap file is created per thread, which generally performs
better than ``normal`` mode.

In ``multi`` mode the filename supports a few special variables:

- ``%n``: the thread number
- ``%i``: the thread id
- ``%t``: the timestamp (secs or secs.usecs based on 'ts-format')

Example: ``filename: pcap.%n.%t``

.. note:: It is possible to use directories, but Suricata does not create them.
   For example ``filename: pcaps/%n/log.%s`` will log into the pre-existing
   ``pcaps`` directory and per-thread subdirectories.

.. note:: The limit and max-files settings are enforced per thread. So the
   size limit using 8 threads with 1000 MB files and 2000 files is about 16 TiB.
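Putting the settings above together, a ``multi`` mode configuration could look like the following sketch (the directory, limit, and max-files values are illustrative, not recommendations):

```yaml
- pcap-log:
    enabled: yes
    mode: multi              # one capture file per thread
    dir: /var/log/suricata/pcaps
    filename: pcap.%n.%t     # %n = thread number, %t = timestamp
    limit: 1000mb            # enforced per thread
    max-files: 2000          # enforced per thread
    conditional: alerts
```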

Verbose Alerts Log (alert-debug.log)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -2117,7 +2138,11 @@ size of the cache is covered in the YAML file.
To be able to run DPDK on Intel cards, it is required to change the default
Intel driver to either `vfio-pci` or `igb_uio` driver. The process is
described in `DPDK manual page regarding Linux drivers
<https://doc.dpdk.org/guides/linux_gsg/linux_drivers.html>`_.
Intel NICs cap the number of RX/TX descriptors at 4096. This can be changed by
recompiling DPDK with adjusted values of the respective macros for the desired
drivers (e.g. ``IXGBE_MAX_RING_DESC``/``I40E_MAX_RING_DESC``).
DPDK is natively supported by Mellanox and thus their NICs should work
"out of the box".

20 changes: 19 additions & 1 deletion rust/src/applayertemplate/template.rs
@@ -17,17 +17,22 @@

use super::parser;
use crate::applayer::{self, *};
use crate::conf::conf_get;
use crate::core::{AppProto, Flow, ALPROTO_UNKNOWN, IPPROTO_TCP};
use nom7 as nom;
use std;
use std::collections::VecDeque;
use std::ffi::CString;
use std::os::raw::{c_char, c_int, c_void};

static mut TEMPLATE_MAX_TX: usize = 256;

static mut ALPROTO_TEMPLATE: AppProto = ALPROTO_UNKNOWN;

#[derive(AppLayerEvent)]
enum TemplateEvent {}
enum TemplateEvent {
TooManyTransactions,
}

pub struct TemplateTransaction {
tx_id: u64,
@@ -145,7 +150,13 @@ impl TemplateState {
SCLogNotice!("Request: {}", request);
let mut tx = self.new_tx();
tx.request = Some(request);
if self.transactions.len() >= unsafe {TEMPLATE_MAX_TX} {
tx.tx_data.set_event(TemplateEvent::TooManyTransactions as u8);
}
self.transactions.push_back(tx);
if self.transactions.len() >= unsafe {TEMPLATE_MAX_TX} {
return AppLayerResult::err();
}
}
Err(nom::Err::Incomplete(_)) => {
// Not enough data. This parser doesn't give us a good indication
@@ -429,6 +440,13 @@ pub unsafe extern "C" fn rs_template_register_parser() {
if AppLayerParserConfParserEnabled(ip_proto_str.as_ptr(), parser.name) != 0 {
let _ = AppLayerRegisterParser(&parser, alproto);
}
if let Some(val) = conf_get("app-layer.protocols.template.max-tx") {
if let Ok(v) = val.parse::<usize>() {
TEMPLATE_MAX_TX = v;
} else {
SCLogError!("Invalid value for template.max-tx");
}
}
SCLogNotice!("Rust template parser registered.");
} else {
SCLogNotice!("Protocol detector and parser disabled for TEMPLATE.");
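The transaction cap and the conf_get fallback added above can be sketched in isolation like this (names such as `State`, `add_tx`, and `max_tx_from_conf` are illustrative, not Suricata's actual internals):

```rust
use std::collections::VecDeque;

// Default cap, mirroring TEMPLATE_MAX_TX in the diff above.
const DEFAULT_MAX_TX: usize = 256;

/// Parse an optional config value, falling back to the default when
/// the key is absent or the value does not parse as usize.
fn max_tx_from_conf(val: Option<&str>) -> usize {
    val.and_then(|v| v.parse::<usize>().ok())
        .unwrap_or(DEFAULT_MAX_TX)
}

struct State {
    transactions: VecDeque<u32>,
    max_tx: usize,
    too_many_tx: bool, // stands in for the TooManyTransactions event
}

impl State {
    fn new(max_tx: usize) -> Self {
        State { transactions: VecDeque::new(), max_tx, too_many_tx: false }
    }

    /// Push a transaction; when the queue is already at the cap the
    /// event flag is set on the new tx, and once the cap is reached
    /// after the push, parsing reports an error, as in the diff.
    fn add_tx(&mut self, tx: u32) -> Result<(), ()> {
        if self.transactions.len() >= self.max_tx {
            self.too_many_tx = true;
        }
        self.transactions.push_back(tx);
        if self.transactions.len() >= self.max_tx {
            return Err(());
        }
        Ok(())
    }
}
```

The two separate length checks mirror the ordering in the diff: the event is recorded on the transaction that overflows the cap, while the error return stops further parsing on this flow.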
13 changes: 4 additions & 9 deletions src/app-layer-htp-file.c
@@ -48,7 +48,7 @@ extern StreamingBufferConfig htp_sbcfg;
* \retval -2 not handling files on this flow
*/
int HTPFileOpen(HtpState *s, HtpTxUserData *tx, const uint8_t *filename, uint16_t filename_len,
const uint8_t *data, uint32_t data_len, uint64_t txid, uint8_t direction)
const uint8_t *data, uint32_t data_len, uint8_t direction)
{
int retval = 0;
uint16_t flags = 0;
@@ -147,8 +147,8 @@ static int HTPParseAndCheckContentRange(
* \retval -1 error
*/
int HTPFileOpenWithRange(HtpState *s, HtpTxUserData *txud, const uint8_t *filename,
uint16_t filename_len, const uint8_t *data, uint32_t data_len, uint64_t txid,
bstr *rawvalue, HtpTxUserData *htud)
uint16_t filename_len, const uint8_t *data, uint32_t data_len, htp_tx_t *tx, bstr *rawvalue,
HtpTxUserData *htud)
{
SCEnter();
uint16_t flags;
@@ -159,7 +159,7 @@ int HTPFileOpenWithRange(HtpState *s, HtpTxUserData *txud, const uint8_t *filena
HTTPContentRange crparsed;
if (HTPParseAndCheckContentRange(rawvalue, &crparsed, s, htud) != 0) {
// range is invalid, fall back to classic open
return HTPFileOpen(s, txud, filename, filename_len, data, data_len, txid, STREAM_TOCLIENT);
return HTPFileOpen(s, txud, filename, filename_len, data, data_len, STREAM_TOCLIENT);
}
flags = FileFlowToFlags(s->f, STREAM_TOCLIENT);
FileContainer *files = &txud->files_tc;
@@ -179,11 +179,6 @@ int HTPFileOpenWithRange(HtpState *s, HtpTxUserData *txud, const uint8_t *filena
}

// Then, we will try to handle reassembly of different ranges of the same file
// TODO have the caller pass directly the tx
htp_tx_t *tx = htp_list_get(s->conn->transactions, txid - s->tx_freed);
if (!tx) {
SCReturnInt(-1);
}
uint8_t *keyurl;
uint32_t keylen;
if (tx->request_hostname != NULL) {
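The signature change in this file replaces an index computation (`txid - s->tx_freed`) into the live transaction list with a direct `htp_tx_t *` argument, removing a lookup that could fail. The pattern in miniature, with hypothetical types rather than libhtp's real ones:

```c
#include <stddef.h>

typedef struct Tx { int id; } Tx;

/* Before: look the transaction up by adjusted index on every call;
 * the lookup can return NULL once earlier transactions were freed. */
static Tx *GetTxByIndex(Tx **list, size_t count, size_t txid, size_t freed)
{
    size_t idx = txid - freed;
    return (idx < count) ? list[idx] : NULL;
}

/* After: the caller, which already holds the transaction, passes the
 * pointer directly, removing the lookup and its failure mode. */
static int HandleFile(Tx *tx)
{
    if (tx == NULL)
        return -1;
    return tx->id;
}
```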
6 changes: 3 additions & 3 deletions src/app-layer-htp-file.h
@@ -27,10 +27,10 @@

#include "app-layer-htp.h"

int HTPFileOpen(HtpState *, HtpTxUserData *, const uint8_t *, uint16_t, const uint8_t *, uint32_t,
uint64_t, uint8_t);
int HTPFileOpen(
HtpState *, HtpTxUserData *, const uint8_t *, uint16_t, const uint8_t *, uint32_t, uint8_t);
int HTPFileOpenWithRange(HtpState *, HtpTxUserData *, const uint8_t *, uint16_t, const uint8_t *,
uint32_t, uint64_t, bstr *rawvalue, HtpTxUserData *htud);
uint32_t, htp_tx_t *, bstr *rawvalue, HtpTxUserData *htud);
bool HTPFileCloseHandleRange(const StreamingBufferConfig *sbcfg, FileContainer *, const uint16_t,
HttpRangeContainerBlock *, const uint8_t *, uint32_t);
int HTPFileStoreChunk(HtpState *, HtpTxUserData *, const uint8_t *, uint32_t, uint8_t);
12 changes: 6 additions & 6 deletions src/app-layer-htp.c
@@ -1571,7 +1571,7 @@ static int HtpRequestBodyHandleMultipart(HtpState *hstate, HtpTxUserData *htud,
#endif

result = HTPFileOpen(hstate, htud, filename, filename_len, filedata, filedata_len,
HtpGetActiveRequestTxID(hstate), STREAM_TOSERVER);
STREAM_TOSERVER);
if (result == -1) {
goto end;
} else if (result == -2) {
@@ -1633,7 +1633,7 @@ static int HtpRequestBodyHandleMultipart(HtpState *hstate, HtpTxUserData *htud,
filedata_len = 0;
}
result = HTPFileOpen(hstate, htud, filename, filename_len, filedata,
filedata_len, HtpGetActiveRequestTxID(hstate), STREAM_TOSERVER);
filedata_len, STREAM_TOSERVER);
if (result == -1) {
goto end;
} else if (result == -2) {
@@ -1648,7 +1648,7 @@ static int HtpRequestBodyHandleMultipart(HtpState *hstate, HtpTxUserData *htud,
SCLogDebug("filedata_len %u", filedata_len);

result = HTPFileOpen(hstate, htud, filename, filename_len, filedata,
filedata_len, HtpGetActiveRequestTxID(hstate), STREAM_TOSERVER);
filedata_len, STREAM_TOSERVER);
if (result == -1) {
goto end;
} else if (result == -2) {
@@ -1725,7 +1725,7 @@ static int HtpRequestBodyHandlePOSTorPUT(HtpState *hstate, HtpTxUserData *htud,
HTPSetEvent(hstate, htud, STREAM_TOSERVER, HTTP_DECODER_EVENT_FILE_NAME_TOO_LONG);
}
result = HTPFileOpen(hstate, htud, filename, (uint16_t)filename_len, data, data_len,
HtpGetActiveRequestTxID(hstate), STREAM_TOSERVER);
STREAM_TOSERVER);
if (result == -1) {
goto end;
} else if (result == -2) {
@@ -1802,10 +1802,10 @@ static int HtpResponseBodyHandle(HtpState *hstate, HtpTxUserData *htud,
}
if (h_content_range != NULL) {
result = HTPFileOpenWithRange(hstate, htud, filename, (uint16_t)filename_len, data,
data_len, HtpGetActiveResponseTxID(hstate), h_content_range->value, htud);
data_len, tx, h_content_range->value, htud);
} else {
result = HTPFileOpen(hstate, htud, filename, (uint16_t)filename_len, data, data_len,
HtpGetActiveResponseTxID(hstate), STREAM_TOCLIENT);
STREAM_TOCLIENT);
}
SCLogDebug("result %d", result);
if (result == -1) {
7 changes: 4 additions & 3 deletions src/detect-tls-certs.c
@@ -70,6 +70,7 @@ static int g_tls_certs_buffer_id = 0;
struct TlsCertsGetDataArgs {
uint32_t local_id; /**< used as index into thread inspect array */
SSLCertsChain *cert;
const uint8_t flags;
};

typedef struct PrefilterMpmTlsCerts {
@@ -148,7 +149,7 @@ static InspectionBuffer *TlsCertsGetData(DetectEngineThreadCtx *det_ctx,
const SSLState *ssl_state = (SSLState *)f->alstate;
const SSLStateConnp *connp;

if (f->flags & STREAM_TOSERVER) {
if (cbdata->flags & STREAM_TOSERVER) {
connp = &ssl_state->client_connp;
} else {
connp = &ssl_state->server_connp;
@@ -183,7 +184,7 @@ static uint8_t DetectEngineInspectTlsCerts(DetectEngineCtx *de_ctx, DetectEngine
transforms = engine->v2.transforms;
}

struct TlsCertsGetDataArgs cbdata = { 0, NULL };
struct TlsCertsGetDataArgs cbdata = { .local_id = 0, .cert = NULL, .flags = flags };

while (1)
{
@@ -214,7 +215,7 @@ static void PrefilterTxTlsCerts(DetectEngineThreadCtx *det_ctx, const void *pect
const MpmCtx *mpm_ctx = ctx->mpm_ctx;
const int list_id = ctx->list_id;

struct TlsCertsGetDataArgs cbdata = { 0, NULL };
struct TlsCertsGetDataArgs cbdata = { .local_id = 0, .cert = NULL, .flags = flags };

while (1)
{
11 changes: 10 additions & 1 deletion src/runmode-dpdk.c
@@ -475,6 +475,9 @@ static int ConfigSetMempoolSize(DPDKIfaceConfig *iconf, intmax_t entry_int)
if (entry_int <= 0) {
SCLogError("%s: positive memory pool size is required", iconf->iface);
SCReturnInt(-ERANGE);
} else if (entry_int > UINT32_MAX) {
SCLogError("%s: memory pool size cannot exceed %" PRIu32, iconf->iface, UINT32_MAX);
SCReturnInt(-ERANGE);
}

iconf->mempool_size = entry_int;
@@ -495,7 +498,7 @@ static int ConfigSetMempoolCacheSize(DPDKIfaceConfig *iconf, const char *entry_s
SCReturnInt(-EINVAL);
}

uint32_t max_cache_size = MAX(RTE_MEMPOOL_CACHE_MAX_SIZE, iconf->mempool_size / 1.5);
uint32_t max_cache_size = MIN(RTE_MEMPOOL_CACHE_MAX_SIZE, iconf->mempool_size / 1.5);
iconf->mempool_cache_size = GreatestDivisorUpTo(iconf->mempool_size, max_cache_size);
SCReturnInt(0);
}
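The corrected line computes the cache ceiling with `MIN` (the previous `MAX` effectively ignored DPDK's compile-time cap). The divisor search can be sketched as follows, with `GreatestDivisorUpTo` reimplemented for illustration only; the real helper may differ:

```c
#include <stdint.h>

#define MIN(a, b) (((a) < (b)) ? (a) : (b))
#define RTE_MEMPOOL_CACHE_MAX_SIZE 512 /* DPDK's compile-time cap */

/* Largest divisor of `num` that is <= `max`; a per-lcore cache size
 * that divides the mempool size evenly avoids leftover objects. */
static uint32_t GreatestDivisorUpTo(uint32_t num, uint32_t max)
{
    for (uint32_t d = max; d > 1; d--) {
        if (num % d == 0)
            return d;
    }
    return 1;
}

static uint32_t MempoolCacheSize(uint32_t mempool_size)
{
    uint32_t max_cache_size = MIN(RTE_MEMPOOL_CACHE_MAX_SIZE,
            (uint32_t)(mempool_size / 1.5));
    return GreatestDivisorUpTo(mempool_size, max_cache_size);
}
```

With the old `MAX`, a large mempool would request a cache far beyond `RTE_MEMPOOL_CACHE_MAX_SIZE`; `MIN` keeps the result within what DPDK accepts.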
@@ -521,6 +524,9 @@ static int ConfigSetRxDescriptors(DPDKIfaceConfig *iconf, intmax_t entry_int)
if (entry_int <= 0) {
SCLogError("%s: positive number of RX descriptors is required", iconf->iface);
SCReturnInt(-ERANGE);
} else if (entry_int > UINT16_MAX) {
SCLogError("%s: number of RX descriptors cannot exceed %" PRIu16, iconf->iface, UINT16_MAX);
SCReturnInt(-ERANGE);
}

iconf->nb_rx_desc = entry_int;
@@ -533,6 +539,9 @@ static int ConfigSetTxDescriptors(DPDKIfaceConfig *iconf, intmax_t entry_int)
if (entry_int <= 0) {
SCLogError("%s: positive number of TX descriptors is required", iconf->iface);
SCReturnInt(-ERANGE);
} else if (entry_int > UINT16_MAX) {
SCLogError("%s: number of TX descriptors cannot exceed %" PRIu16, iconf->iface, UINT16_MAX);
SCReturnInt(-ERANGE);
}

iconf->nb_tx_desc = entry_int;
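The RX and TX descriptor checks added above follow the same clamp-and-reject shape, which can be factored into one helper; a sketch with illustrative names, not Suricata's actual code:

```c
#include <stdint.h>
#include <stdio.h>

/* Validate a configured descriptor count: it must be positive and fit
 * the NIC API's uint16_t, mirroring the ConfigSetRx/TxDescriptors
 * checks in the diff above. Returns 0 on success, -1 on error. */
static int ValidateDescriptorCount(intmax_t entry_int, uint16_t *out)
{
    if (entry_int <= 0) {
        fprintf(stderr, "positive number of descriptors is required\n");
        return -1;
    }
    if (entry_int > UINT16_MAX) {
        fprintf(stderr, "number of descriptors cannot exceed %u\n",
                (unsigned)UINT16_MAX);
        return -1;
    }
    *out = (uint16_t)entry_int;
    return 0;
}
```

Without the upper-bound check, a value like 70000 would silently truncate when stored into the 16-bit descriptor field; rejecting it up front surfaces the misconfiguration instead.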