
OTLPExporter Pipeline issues #1810

Open
cijothomas opened this issue May 23, 2024 · 5 comments

@cijothomas
Member

cijothomas commented May 23, 2024

OTLP exporters are configured via the helper methods offered by the OTLP crate, built around the idea of a "pipeline" (e.g. opentelemetry_otlp::new_pipeline().logging()...). This hides a lot of LoggerProvider details from the user and is reasonably consistent across signals (opentelemetry_otlp::new_pipeline().logging() vs. opentelemetry_otlp::new_pipeline().tracing() vs. opentelemetry_otlp::new_pipeline().metrics()), but I see some issues/challenges with that approach.

  1. It is inconsistent with other exporters. For example, the stdout examples show a different way of configuring the LoggerProvider, via LoggerProvider::builder().with_simple_exporter(exporter).build(), which does not hide LoggerProvider concepts from the user (see the sketch after this list). Most users get started with OpenTelemetry using stdout exporters, so switching to OTLP is a big shift in how things are configured.
  2. Composability is harder. The OTLP pipeline returns a built LoggerProvider, so users lose the ability to customize the LoggerProvider themselves. For example, if a user wants to add a RedactionProcessor, that is not possible (a sketch of this appears further below).
  3. Exporters need to be aware of more things than they should. One possible way to solve the above problem is to expose methods on the OTLPPipeline to allow adding processors, but then the exporter pipeline has to keep mimicking whatever functionality is added to the provider. Even today, the OTLP exporters know about the runtime (tokio etc.), which they should not.
  4. The install_simple/install_batch naming is odd (of course, this is subjective!).
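
For reference, a minimal sketch of the stdout-style setup mentioned in (1). This assumes the opentelemetry_stdout::LogExporterBuilder API referenced later in this issue; the function name is chosen here just for illustration:

use opentelemetry_sdk::logs::LoggerProvider;

// Stdout setup: the exporter is built on its own and handed to the provider
// builder, so nothing about the LoggerProvider is hidden from the user.
fn init_stdout_logs() -> LoggerProvider {
    let exporter = opentelemetry_stdout::LogExporterBuilder::default().build();
    LoggerProvider::builder()
        .with_simple_exporter(exporter)
        .build()
}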

The snippet below uses LoggerProvider::builder(), the approach shown in all examples except OTLP!

fn init_logs() -> Result<opentelemetry_sdk::logs::LoggerProvider, LogError> {
    // Build the OTLP log exporter (tonic/gRPC transport) on its own.
    let exporter = opentelemetry_otlp::new_exporter()
        .tonic()
        .with_endpoint("http://localhost:4317")
        .build_log_exporter()?;
    // Hand the exporter to the LoggerProvider builder explicitly.
    let provider: LoggerProvider = LoggerProvider::builder()
        .with_batch_exporter(exporter, runtime::Tokio)
        .with_resource(RESOURCE.clone())
        .build();

    Ok(provider)
}

The snippet below uses the OTLP pipeline helpers, as shown in almost all OTLP examples.

fn init_logs() -> Result<opentelemetry_sdk::logs::LoggerProvider, LogError> {
    // The pipeline builds the exporter and the LoggerProvider in a single
    // chain, hiding the provider behind install_batch.
    opentelemetry_otlp::new_pipeline()
        .logging()
        .with_resource(RESOURCE.clone())
        .with_exporter(
            opentelemetry_otlp::new_exporter()
                .tonic()
                .with_endpoint("http://localhost:4317"),
        )
        .install_batch(runtime::Tokio)
}

Opening this to get some feedback on these issues. I'd prefer to remove all OTLPPipeline* constructs and make OTLP consistent with the stdout exporter.

Something like:

let otlp_exporter = opentelemetry_otlp::LogExporterBuilder::default()
    .with_tonic(TonicConfig { header: vec![...] })
    // ...further OTLP-specific configuration...
    .build();

let stdout_exporter = opentelemetry_stdout::LogExporterBuilder::default()
    .with_encoder(|writer, data| Ok(serde_json::to_writer_pretty(writer, &data).unwrap()))
    .build();

let provider: LoggerProvider = LoggerProvider::builder()
    .with_batch_exporter(otlp_exporter, runtime::Tokio)
    .with_simple_exporter(stdout_exporter)
    .with_resource(RESOURCE.clone())
    .with_log_limits(LogLimits { max_attributes: 1000 })
    .build();

The above is easily composable and consistent for all exporters. An exporter need not know anything about provider configuration or the runtime; it just does its job: exporting, and accepting configuration on how to export (which protocol, which endpoint, etc.).
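
To make the composability point in (2) concrete, here is a rough sketch. RedactionProcessor is a hypothetical user-defined processor assumed to implement the SDK's LogProcessor trait, and with_log_processor is assumed to be available on the provider builder; none of this is settled API, it only illustrates that the builder keeps the provider open for extension:

// Sketch only: RedactionProcessor is hypothetical and assumed to implement
// opentelemetry_sdk::logs::LogProcessor; with_log_processor is assumed here.
let provider: LoggerProvider = LoggerProvider::builder()
    .with_log_processor(RedactionProcessor::new())       // custom processing stage
    .with_batch_exporter(otlp_exporter, runtime::Tokio)  // OTLP export, batched
    .with_simple_exporter(stdout_exporter)
    .with_resource(RESOURCE.clone())
    .build();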

@lalitb
Member

lalitb commented May 28, 2024

The new_pipeline pattern is more widespread in our exporters, as it is also used by the Zipkin and DataDog (contrib) exporters. I agree that stdout is more relatable, since that is how most other languages have implemented exporters, but it also seems that the pipeline approach is (probably) more Rust-idiomatic, using the builder pattern and config chaining to create the complete setup without any intermediate steps.

@cijothomas
Member Author

the pipeline approach is (probably) more Rust-idiomatic, using the builder pattern

I am not sure which part makes it more Rust idiomatic... Sure, it is popular with other OTel Rust exporters (those exporters most likely followed the approach from the OTLP exporters).

@lalitb
Member

lalitb commented May 28, 2024

I am not sure which part makes it more Rust idiomatic

new_pipeline uses a single chain to configure the pipeline, which is more consistent with how other Rust crates work, compared to creating exporters, processors, a logger provider, and then a logger, especially in scenarios where these intermediate components are only needed to be passed to the next stage in the pipeline and are not used directly by the application. I am not saying this is the right way, just that it is a more Rust-idiomatic approach.

@cijothomas
Member Author

See #1788 (comment) as well, which talks about potentially re-using the "transport" aspects across signals.

@cijothomas
Member Author

@lalitb will explore this more, and share concerns/comments.
