Merge branch 'master' into ref/session-partitioning
* master:
  ref(make): Simplify M1 exports in Makefile (#1206)
  fix(metrics): Wait for project states during aggregator shutdown (#1205)
  fix(test): Find librdkafka on Apple M1 (#1204)
  build: Bump sentry-relay in dev dependencies to 0.8.9 (#1202)
  ref(metrics): Tag backdated bucket creations in statsd (#1200)
  feat(metrics): Extract user satisfaction as tag (#1197)
  fix(statsd): Add new metric_type tag to existing metrics (#1199)
  fix: Apply clippy 1.59 suggestions (#1198)
jan-auer committed Mar 9, 2022
2 parents 3745794 + f7e7c36 commit 21f265b
Showing 17 changed files with 427 additions and 102 deletions.
9 changes: 9 additions & 0 deletions CHANGELOG.md
@@ -2,10 +2,19 @@

## Unreleased

**Features**:

- Tag transaction metrics by user satisfaction. ([#1197](https://github.com/getsentry/relay/pull/1197))

**Bug Fixes**:

- Prevent dropping metrics during Relay shutdown if the project is outdated or not cached at time of the shutdown. ([#1205](https://github.com/getsentry/relay/pull/1205))

**Internal**:

- Spread out metric aggregation over the aggregation window to avoid concentrated waves of metrics requests to the upstream every 10 seconds. Relay now applies jitter to `initial_delay` to spread out requests more evenly over time. ([#1185](https://github.com/getsentry/relay/pull/1185))
- Use a randomized Kafka partitioning key for sessions instead of the session ID. ([#1194](https://github.com/getsentry/relay/pull/1194))
- Add new statsd metrics for bucketing efficiency. ([#1199](https://github.com/getsentry/relay/pull/1199), [#1192](https://github.com/getsentry/relay/pull/1192), [#1200](https://github.com/getsentry/relay/pull/1200))

## 22.2.0

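The jitter entry above (#1185) spreads bucket flushes across the flush interval instead of firing them all at `initial_delay`. A minimal sketch of the idea — `jittered_delay` and `FLUSH_INTERVAL` are hypothetical names for illustration, not Relay's actual implementation:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::time::Duration;

/// Illustrative flush window; Relay's actual interval is configurable.
const FLUSH_INTERVAL: Duration = Duration::from_secs(10);

/// Derive a deterministic per-key offset in `[0, FLUSH_INTERVAL)` and add it
/// to `initial_delay`, so buckets with different keys flush at different
/// points of the window instead of in one concentrated wave.
fn jittered_delay<K: Hash>(key: &K, initial_delay: Duration) -> Duration {
    let mut hasher = DefaultHasher::new();
    key.hash(&mut hasher);
    let offset_ms = hasher.finish() % FLUSH_INTERVAL.as_millis() as u64;
    initial_delay + Duration::from_millis(offset_ms)
}
```

Because the offset is derived from the key rather than drawn fresh on every call, a given bucket keeps a stable flush time while the population of buckets spreads out across the window.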
8 changes: 8 additions & 0 deletions Makefile
@@ -144,10 +144,18 @@ clean-target-dir:

.venv/bin/python: Makefile
rm -rf .venv

@# --copies is necessary because OS X make checks the mtime of the symlink
@# target (/usr/local/bin/python), which is always much older than the
@# Makefile, and then proceeds to unconditionally rebuild the venv.
$$RELAY_PYTHON_VERSION -m venv --copies .venv
.venv/bin/pip install -U pip wheel

@# Work around https://github.com/confluentinc/confluent-kafka-python/issues/1190
@if [ "$$(uname -sm)" = "Darwin arm64" ]; then \
echo "Using 'librdkafka' from homebrew to build confluent-kafka"; \
export C_INCLUDE_PATH="$$(brew --prefix librdkafka)/include"; \
fi; \
.venv/bin/pip install -U -r requirements-dev.txt

.git/hooks/pre-commit:
10 changes: 7 additions & 3 deletions README.md
@@ -39,6 +39,9 @@
workspace with multiple features, so when building or running tests
always make sure to pass the `--all` and `--all-features` flags.
The `processing` feature additionally requires a C compiler and CMake.

+To install the development environment, librdkafka must be installed and on the
+path. On macOS, install it with `brew install librdkafka`, since the
+installation script uses `brew --prefix` to determine the correct location.

We use VSCode for development. This repository contains settings files
configuring code style, linters, and useful features. When opening the project
for the first time, make sure to _install the Recommended Extensions_, as they
@@ -55,9 +58,10 @@ development:
also performed in CI.
- `make clean`: Removes all build artifacts, the virtualenv and cached files.

-For a lot of tests you will need Redis and Kafka running in their respective
-default configuration. `sentry devservices` from
-[sentry](https://github.com/getsentry/sentry) does this for you.
+Integration tests require Redis and Kafka running in their default
+configuration. The most convenient way to get all required services is via
+[`sentry devservices`](https://develop.sentry.dev/services/devservices/), which
+requires an up-to-date Sentry development environment.

### Building and Running

2 changes: 1 addition & 1 deletion relay-filter/src/csp.rs
Original file line number Diff line number Diff line change
Expand Up @@ -292,7 +292,7 @@ mod tests {
.iter()
.map(|url| SchemeDomainPort::from(*url))
.collect();
-let actual = matches_any_origin(Some(&(*url).to_string()), &origins[..]);
+let actual = matches_any_origin(Some(*url), &origins[..]);
assert_eq!(*expected, actual, "Could not match {}.", url);
}
}
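The csp.rs change drops an intermediate `String` allocation: with a parameter of type `Option<&str>` (which `matches_any_origin` presumably takes after this commit), the test's `&&str` URLs can be passed as `Some(*url)` directly. A sketch with a hypothetical stand-in function:

```rust
/// Hypothetical stand-in for `matches_any_origin`: checks whether `origin`
/// starts with any of the allowed origin prefixes. Taking `Option<&str>`
/// means callers never need `(*url).to_string()`.
fn matches_any(origin: Option<&str>, allowed: &[&str]) -> bool {
    match origin {
        Some(origin) => allowed.iter().any(|prefix| origin.starts_with(prefix)),
        None => false,
    }
}
```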
2 changes: 1 addition & 1 deletion relay-general/src/pii/processor.rs
@@ -242,7 +242,7 @@ fn apply_regex_to_chunks<'a>(
let mut search_string = String::new();
for chunk in &chunks {
match chunk {
-Chunk::Text { text } => search_string.push_str(&text.replace("\x00", "")),
+Chunk::Text { text } => search_string.push_str(&text.replace('\x00', "")),
Chunk::Redaction { .. } => search_string.push('\x00'),
}
}
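This hunk and the two that follow apply clippy 1.59's `single_char_pattern` suggestion (#1198): a `char` pattern is behaviorally identical to a one-character `&str` pattern but lets the standard library avoid substring search. A minimal before/after-equivalent example:

```rust
/// Same result as `s.replace("\x00", "")`, written with the `char` pattern
/// that clippy's `single_char_pattern` lint suggests.
fn strip_nulls(s: &str) -> String {
    s.replace('\x00', "")
}
```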
2 changes: 1 addition & 1 deletion relay-general/src/processor/selector.rs
@@ -57,7 +57,7 @@ impl fmt::Display for SelectorPathItem {
SelectorPathItem::Index(index) => write!(f, "{}", index),
SelectorPathItem::Key(ref key) => {
if key_needs_quoting(key) {
-write!(f, "'{}'", key.replace("'", "''"))
+write!(f, "'{}'", key.replace('\'', "''"))
} else {
write!(f, "{}", key)
}
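The quoting logic in `SelectorPathItem::Key` escapes embedded single quotes by doubling them, SQL-style. Isolated into a hypothetical helper for illustration:

```rust
/// Sketch of the quoting in `SelectorPathItem::Key`: wrap the key in single
/// quotes and escape any embedded quote by doubling it.
fn quote_key(key: &str) -> String {
    format!("'{}'", key.replace('\'', "''"))
}
```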
2 changes: 1 addition & 1 deletion relay-general/src/protocol/tags.rs
@@ -34,7 +34,7 @@ impl FromValue for TagEntry {
type TagTuple = (Annotated<LenientString>, Annotated<LenientString>);
TagTuple::from_value(value).map_value(|(key, value)| {
TagEntry(
-key.map_value(|x| x.into_inner().replace(" ", "-").trim().to_string()),
+key.map_value(|x| x.into_inner().replace(' ', "-").trim().to_string()),
value.map_value(|x| x.into_inner().trim().to_string()),
)
})
1 change: 1 addition & 0 deletions relay-general/src/store/mod.rs
@@ -26,6 +26,7 @@ pub use normalize::breakdowns::{
get_breakdown_measurements, BreakdownConfig, BreakdownsConfig, SpanOperationsConfig,
};
pub use normalize::normalize_dist;
+pub use transactions::validate_timestamps;

/// The config for store.
#[derive(Serialize, Deserialize, Debug, Default)]
2 changes: 1 addition & 1 deletion relay-general/src/store/normalize.rs
@@ -318,7 +318,7 @@ impl<'a> NormalizeProcessor<'a> {

if !headers.contains("User-Agent") {
headers.insert(
-HeaderName::new("User-Agent".to_owned()),
+HeaderName::new("User-Agent"),
Annotated::new(HeaderValue::new(client.clone())),
);
}
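Dropping `.to_owned()` here (and in test_derive.rs below) suggests `HeaderName::new` now accepts both `&'static str` and `String`, e.g. via an `Into<Cow<'static, str>>` bound. A hypothetical reconstruction of such a constructor — an assumption about the signature, not Relay's actual code:

```rust
use std::borrow::Cow;

/// Hypothetical `HeaderName` whose constructor takes anything convertible
/// into a `Cow<'static, str>`, so both `new("User-Agent")` and
/// `new(String::from("User-Agent"))` compile without extra allocation
/// for the static case.
struct HeaderName(Cow<'static, str>);

impl HeaderName {
    fn new<S: Into<Cow<'static, str>>>(name: S) -> Self {
        HeaderName(name.into())
    }

    fn as_str(&self) -> &str {
        &self.0
    }
}
```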
56 changes: 33 additions & 23 deletions relay-general/src/store/transactions.rs
@@ -1,10 +1,40 @@
use crate::processor::{ProcessValue, ProcessingState, Processor};
-use crate::protocol::{Context, ContextInner, Event, EventType, Span};
+use crate::protocol::{Context, ContextInner, Event, EventType, Span, Timestamp};
use crate::types::{Annotated, Meta, ProcessingAction, ProcessingResult};

/// Rejects transactions based on required fields.
pub struct TransactionsProcessor;

+/// Returns start and end timestamps if they are both set and start <= end.
+pub fn validate_timestamps(
+    transaction_event: &Event,
+) -> Result<(&Timestamp, &Timestamp), ProcessingAction> {
+    match (
+        transaction_event.start_timestamp.value(),
+        transaction_event.timestamp.value(),
+    ) {
+        (Some(start), Some(end)) => {
+            if *end < *start {
+                return Err(ProcessingAction::InvalidTransaction(
+                    "end timestamp is smaller than start timestamp",
+                ));
+            }
+            Ok((start, end))
+        }
+        (_, None) => {
+            // This invariant should be already guaranteed for regular error events.
+            Err(ProcessingAction::InvalidTransaction(
+                "timestamp hard-required for transaction events",
+            ))
+        }
+        (None, _) => {
+            // XXX: Maybe copy timestamp over?
+            Err(ProcessingAction::InvalidTransaction(
+                "start_timestamp hard-required for transaction events",
+            ))
+        }
+    }
+}

impl Processor for TransactionsProcessor {
fn process_event(
&mut self,
@@ -27,27 +57,7 @@ impl Processor for TransactionsProcessor {
.set_value(Some("<unlabeled transaction>".to_owned()))
}

-        match (event.start_timestamp.value(), event.timestamp.value_mut()) {
-            (Some(start), Some(end)) => {
-                if *end < *start {
-                    return Err(ProcessingAction::InvalidTransaction(
-                        "end timestamp is smaller than start timestamp",
-                    ));
-                }
-            }
-            (_, None) => {
-                // This invariant should be already guaranteed for regular error events.
-                return Err(ProcessingAction::InvalidTransaction(
-                    "timestamp hard-required for transaction events",
-                ));
-            }
-            (None, _) => {
-                // XXX: Maybe copy timestamp over?
-                return Err(ProcessingAction::InvalidTransaction(
-                    "start_timestamp hard-required for transaction events",
-                ));
-            }
-        }
+        validate_timestamps(event)?;

let err_trace_context_required = Err(ProcessingAction::InvalidTransaction(
"trace context hard-required for transaction events",
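The refactor extracts the inline timestamp match into `validate_timestamps`, which now returns the validated pair so other callers can reuse it. Stripped of Relay's `Annotated` wrappers, the logic reduces to this simplified model (names and the `f64` timestamps are illustrative):

```rust
/// Simplified error type standing in for `ProcessingAction`.
#[derive(Debug, PartialEq)]
enum TimestampError {
    EndBeforeStart,
    Missing(&'static str),
}

/// Simplified model of `validate_timestamps`: plain `Option`s in,
/// validated (start, end) pair out.
fn validate(start: Option<f64>, end: Option<f64>) -> Result<(f64, f64), TimestampError> {
    match (start, end) {
        (Some(start), Some(end)) if end < start => Err(TimestampError::EndBeforeStart),
        (Some(start), Some(end)) => Ok((start, end)),
        (_, None) => Err(TimestampError::Missing("timestamp")),
        (None, _) => Err(TimestampError::Missing("start_timestamp")),
    }
}
```

Returning the pair on success, rather than `()`, is what lets the processor replace its twenty-line match with a single `validate_timestamps(event)?;`.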
2 changes: 1 addition & 1 deletion relay-general/tests/test_derive.rs
@@ -72,7 +72,7 @@ fn test_enums_processor_calls() {
fn test_simple_newtype() {
let mut processor = RecordingProcessor(vec![]);

-    let mut value = Annotated::new(HeaderName::new("hi".to_string()));
+    let mut value = Annotated::new(HeaderName::new("hi"));
process_value(
&mut value,
&mut processor,
