Add a README#4

Merged
olix0r merged 5 commits into master from ver/readme on Jul 11, 2018

Conversation

@olix0r (Member) commented Jul 11, 2018

No description provided.

@olix0r olix0r self-assigned this Jul 11, 2018
@olix0r (Member, Author) commented Jul 11, 2018

We can improve on this over time -- I just wanted to get something in here to make it more welcoming.

Comment thread: README.md (outdated)
* Automatic [Prometheus][prom] metrics export for HTTP and TCP traffic;
* Transparent, zero-config WebSocket proxying;
* Opportunistic TLS;
* [P2C + Peak-EWMA][loadbalancing] HTTP load balancing; and
Contributor:

I would call out that HTTP load balancing is (1) latency-based, (2) fully-automatic, (3) layer 7 using those keywords. I would also mention that there is also automatic layer 4 load balancing for non-HTTP traffic.

Comment thread: README.md (outdated)
* Transparent, zero-config proxying for HTTP, HTTP/2, and arbitrary TCP protocols.
* Automatic [Prometheus][prom] metrics export for HTTP and TCP traffic;
* Transparent, zero-config WebSocket proxying;
* Opportunistic TLS;
Contributor:

I would phrase this as "Experimental automatic TLS support (temporarily opportunistic)"

Comment thread: README.md (outdated)

This proxy is primarily intended to run on Linux in containerized
environments like [Kubernetes][k8s], though it may also work on other
Unix-like systems (like MacOS).
Contributor:

s/MacOS/macOS/.

Comment thread: README.md (outdated)
Unix-like systems (like MacOS).

The proxy supports service discovery via the [`Destination` gRPC
service][linkerd2-proxy-api] and DNS.
Contributor:

I would say "The proxy supports service discovery via DNS and via the linkerd2 Destination gRPC API" or similar.

Comment thread: README.md (outdated)

## License

Conduit is copyright 2018 the Linkerd authors. All rights reserved.

Change to linkerd2-proxy?

@olix0r olix0r merged commit 3c48ba7 into master Jul 11, 2018
@olix0r olix0r deleted the ver/readme branch July 11, 2018 23:01
hawkw pushed a commit that referenced this pull request Jul 27, 2018
pothos referenced this pull request in kinvolk-archives/linkerd2-proxy Jun 13, 2019
profiling: Use fast Rust HTTP server for benchmark
cratelyn added a commit that referenced this pull request Mar 18, 2026
this commit provides a follow-up to #4450, fixing
a bug with existing code identified during review by @Unleased.

> I know this is just keeping the existing behavior, but isn't `tokio::time::interval()` going to fire immediately so we'll retry right away?

\- #4450 (comment)

this commit inserts a call to `tokio::time::Interval::reset()` into the
`Recover` implementation that extracts negative TTLs from
`hickory_resolver` errors.

this means that, upon resolution errors with a negative TTL, we will no
longer immediately retry, and instead wait for the prescribed time
before attempting once more.

introducing test coverage for this is difficult because we cannot create
a `ResolveError` ourselves, and introducing e.g. a trait to inject here
would incur an excessive amount of boilerplate and complexity.

to provide assurance that this is correct, see this small playground
example, in which we poll an `Interval` with and without this call to
`reset()`. note that when calling reset, it will no longer immediately
return `Poll::Ready(_)` upon the first call to `tick()`.

```rust
 #[tokio::main]
 async fn main() {
     let duration = std::time::Duration::from_secs(1);
     let mut interval = tokio::time::interval(duration);
     // interval.reset();

     let start = std::time::Instant::now();
     for i in 1..5 {
         interval.tick().await;
         let elapsed = start.elapsed().as_millis();
         println!("#{i} - {elapsed}ms")
     }
 }
```

```
 ; cargo run
 #1 - 1ms
 #2 - 1001ms
 #3 - 2001ms
 #4 - 3001ms
```

with a reset, to avoid the first poll being ready immediately:

```rust
 #[tokio::main]
 async fn main() {
     let duration = std::time::Duration::from_secs(1);
     let mut interval = tokio::time::interval(duration);
     interval.reset();

     let start = std::time::Instant::now();
     for i in 1..5 {
         interval.tick().await;
         let elapsed = start.elapsed().as_millis();
         println!("#{i} - {elapsed}ms")
     }
 }
```

```
 ; cargo run
 #1 - 1001ms
 #2 - 2001ms
 #3 - 3001ms
 #4 - 4001ms
```

Signed-off-by: katelyn martin <kate@buoyant.io>
cratelyn added a commit that referenced this pull request Mar 20, 2026
* fix(app/core): fix negative ttl immediate return


* nit(app/core): fix comment typo

#4455 (comment)

Co-authored-by: Alejandro Martinez Ruiz <alex@flawedcode.org>

---------

Signed-off-by: katelyn martin <kate@buoyant.io>
Co-authored-by: Alejandro Martinez Ruiz <alex@flawedcode.org>
cratelyn added a commit that referenced this pull request Mar 20, 2026
`linkerd_app_core::control` provides utilities used by the data plane to
communicate with the linkerd control plane. among other features, such as
load balancing and configurable connection timeouts, this includes an
error-recovery strategy that respects DNS records' negative TTLs.

as of today, we do this within an inline, anonymous closure.

this commit pulls this business logic out of an inline closure, and into
an explicit pair of structures.

ResolveRecover is the Recover implementation that selects the proper
backoff strategy when presented with a given boxed error. ResolveBackoff
is the sum type that encompasses either a TTL-driven interval or an
exponential backoff.

see also #4449, which introduces some additional guardrails to prevent
panicking if a negative TTL of zero is encountered.

as part of this code motion, this commit inserts a call to
`tokio::time::Interval::reset()` into the `Recover` implementation that
extracts negative TTLs from `hickory_resolver` errors.

this means that, upon resolution errors with a negative TTL, we will no
longer immediately retry, and instead wait for the prescribed time
before attempting once more.


Signed-off-by: katelyn martin <kate@buoyant.io>
Co-authored-by: Alejandro Martinez Ruiz <alex@flawedcode.org>