Fix typos
Found via `codespell -S CHANGELOG.md -L crate,yur,ue,splitted,ser`
kianmeng committed Nov 10, 2022
1 parent c830a41 commit a930847
Showing 20 changed files with 28 additions and 28 deletions.
2 changes: 1 addition & 1 deletion LICENSE
@@ -167,7 +167,7 @@ A contributor's "essential patent claims" are all patent claims owned or control

Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.

-In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to s ue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.
+In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.

If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.
2 changes: 1 addition & 1 deletion RELEASE.md
@@ -1,6 +1,6 @@
# Release process

-Sozu has a lot of moving pieces and some dependant projects, so
+Sozu has a lot of moving pieces and some dependent projects, so

## Checklist

4 changes: 2 additions & 2 deletions bin/src/command/mod.rs
@@ -771,7 +771,7 @@ impl CommandServer {

match kill(Pid::from_raw(worker.pid), Signal::SIGKILL) {
Ok(()) => {
info!("Worker {} was successfuly killed", id);
info!("Worker {} was successfully killed", id);
worker.run_state = RunState::Stopped;
return Ok(Success::WorkerKilled(id));
}
@@ -812,7 +812,7 @@ impl CommandServer {
// we use to send the response to.
match self.in_flight.remove(&response.id) {
None => {
-// FIXME: this messsage happens a lot at startup because AddCluster
+// FIXME: this message happens a lot at startup because AddCluster
// messages receive responses from each of the HTTP, HTTPS and TCP
// proxys. The clusters list should be merged
debug!("unknown response id: {}", response.id);
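As an aside on the `in_flight` lookup in the hunk above: the pattern is a map from request id to the sender that is waiting for the response. A minimal sketch with standard-library stand-ins (not Sozu's actual types):

```rust
use std::collections::HashMap;
use std::sync::mpsc::Sender;

// Simplified stand-in; Sozu's real response type carries more fields.
struct Response {
    id: String,
}

struct InFlight {
    // request id -> channel on which the original caller awaits the response
    pending: HashMap<String, Sender<Response>>,
}

impl InFlight {
    fn handle_response(&mut self, response: Response) {
        match self.pending.remove(&response.id) {
            // forward the response to whoever issued the request
            Some(tx) => {
                let _ = tx.send(response);
            }
            None => println!("unknown response id: {}", response.id),
        }
    }
}
```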
4 changes: 2 additions & 2 deletions bin/src/command/orders.rs
@@ -1109,7 +1109,7 @@ impl CommandServer {
Query::Metrics(_) => {}
};

-// all theses are passed to the thread
+// all these are passed to the thread
let command_tx = self.command_tx.clone();
let cloned_identifier = request_identifier.clone();

@@ -1282,7 +1282,7 @@ impl CommandServer {
for ref mut worker in self.workers.iter_mut().filter(|worker| {
worker.run_state != RunState::Stopping && worker.run_state != RunState::Stopped
}) {
-// sort out the specificly targeted worker, if provided
+// sort out the specifically targeted worker, if provided
if let Some(id) = worker_id {
if id != worker.id {
continue;
2 changes: 1 addition & 1 deletion bin/src/ctl/command.rs
@@ -427,7 +427,7 @@ impl CommandManager {
}
CommandStatus::Ok => {
if id == response.id {
println!("Successfull metrics command: {}", response.message);
println!("Successful metrics command: {}", response.message);
}
break;
}
4 changes: 2 additions & 2 deletions command/src/channel.rs
@@ -115,7 +115,7 @@ impl<Tx: Debug + Serialize, Rx: Debug + DeserializeOwned> Channel<Tx, Rx> {
self.readiness & self.interest
}

-/// Checks wether we want and can read or write, and calls the appropriate handler.
+/// Checks whether we want and can read or write, and calls the appropriate handler.
pub fn run(&mut self) {
let interest = self.interest & self.readiness;

@@ -331,7 +331,7 @@ impl<Tx: Debug + Serialize, Rx: Debug + DeserializeOwned> Channel<Tx, Rx> {
}
}

-/// Checks wether the channel is blocking or nonblocking, writes the message.
+/// Checks whether the channel is blocking or nonblocking, writes the message.
///
/// If the channel is nonblocking, you have to flush using `channel.run()` afterwards
pub fn write_message(&mut self, message: &Tx) -> bool {
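A hedged usage sketch of the two doc comments above: write the message, then let `run()` drive the flush when the channel is nonblocking. Only `write_message` and `run` come from the code shown here; the import path is an assumption.

```rust
use std::fmt::Debug;

use serde::{de::DeserializeOwned, Serialize};
use sozu_command_lib::channel::Channel; // import path assumed

fn send_and_flush<Tx, Rx>(channel: &mut Channel<Tx, Rx>, message: &Tx) -> bool
where
    Tx: Debug + Serialize,
    Rx: Debug + DeserializeOwned,
{
    // write_message returns a bool, as in the signature above
    let accepted = channel.write_message(message);
    // on a nonblocking channel the message may still sit in the send buffer,
    // so flush by letting the channel run its read/write handlers
    channel.run();
    accepted
}
```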
4 changes: 2 additions & 2 deletions command/src/config.rs
@@ -412,7 +412,7 @@ pub struct FileClusterFrontendConfig {
pub hostname: Option<String>,
/// creates a path routing rule where the request URL path has to match this
pub path: Option<String>,
-/// declares wether the path rule is Prefix (default), Regex, or Equals
+/// declares whether the path rule is Prefix (default), Regex, or Equals
pub path_type: Option<PathRuleType>,
pub method: Option<String>,
pub certificate: Option<String>,
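To illustrate the `path_type` semantics documented above, here is a hypothetical matcher over the three variants named in the comment; the import path, the matching logic, and the `regex` dependency are assumptions, not Sozu's router:

```rust
use sozu_command_lib::proxy::PathRuleType; // import path assumed

fn path_matches(rule_type: &PathRuleType, rule: &str, request_path: &str) -> bool {
    match rule_type {
        // Prefix (the default): the request path only has to start with the rule
        PathRuleType::Prefix => request_path.starts_with(rule),
        // Equals: the whole path must match exactly
        PathRuleType::Equals => request_path == rule,
        // Regex: compile the rule and match it against the path
        PathRuleType::Regex => regex::Regex::new(rule)
            .map(|re| re.is_match(request_path))
            .unwrap_or(false),
    }
}
```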
@@ -1397,7 +1397,7 @@ impl Config {

let stringified_path = saved_state_path_raw
.to_str()
-.ok_or_else(|| anyhow::Error::msg("Unvalid character format, expected UTF8"))?
+.ok_or_else(|| anyhow::Error::msg("Invalid character format, expected UTF8"))?
.to_string();

Ok(Some(stringified_path))
2 changes: 1 addition & 1 deletion command/src/proxy.rs
@@ -413,7 +413,7 @@ pub struct HttpFrontend {
}

impl HttpFrontend {
-/// `is_cluster_id` chech if the frontend is dedicated to the given cluster_id
+/// `is_cluster_id` check if the frontend is dedicated to the given cluster_id
pub fn is_cluster_id(&self, cluster_id: &str) -> bool {
matches!(&self.route, Route::ClusterId(id) if id == cluster_id)
}
2 changes: 1 addition & 1 deletion doc/configure_cli.md
@@ -59,7 +59,7 @@ sozu --config /etc/sozu/config.toml frontend https add --address 0.0.0.0:443 --h

## Check the status of sozu

-It shows a list of workers and show informations about their statuses.
+It shows a list of workers and show information about their statuses.

```bash
sozu --config /etc/sozu/config.toml status
4 changes: 2 additions & 2 deletions doc/how_to_use.md
@@ -14,7 +14,7 @@ However, if you built the project from source, `sozu` and `sozuctl` are placed i

> `cargo build --release` puts the resulting binary in `target/release` instead of `target/debug`.
-You can find a working `config.toml` exemple [here][cfg].
+You can find a working `config.toml` example [here][cfg].

To start the reverse proxy:

@@ -28,7 +28,7 @@ You can edit the reverse proxy's configuration with the `config.toml` file. You

You can use `sozuctl` to interact with the reverse proxy.

-Checkout sozuctl [documentation](../ctl/README.md) for more informations.
+Checkout sozuctl [documentation](../ctl/README.md) for more information.

## Logging

2 changes: 1 addition & 1 deletion doc/lifetime_of_a_session.md
@@ -26,7 +26,7 @@ us whenever something happens to those file descriptors
At the end of the day, sockets are just raw file descriptors. We use the mio
`TcpListener`, `TcpStream` wrappers around these file descriptors. A `TcpListener`
listens for connections on a specific port. For each new connection it creates a
-`TcpStream` on which subsequent trafic will be redirected (both from and to the client).
+`TcpStream` on which subsequent traffic will be redirected (both from and to the client).

This is all what we use mio for. "Subscribing" to file descriptors events.

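To make the mio usage described in this excerpt concrete, here is a minimal, self-contained sketch of the subscribe-and-accept pattern (mio 0.8-style API; this is not Sōzu's event loop, and the address and token are arbitrary):

```rust
use mio::net::TcpListener;
use mio::{Events, Interest, Poll, Token};

fn main() -> std::io::Result<()> {
    let mut listener = TcpListener::bind("127.0.0.1:8080".parse().unwrap())?;
    let mut poll = Poll::new()?;
    // subscribe to readability events on the listener's file descriptor
    poll.registry()
        .register(&mut listener, Token(0), Interest::READABLE)?;

    let mut events = Events::with_capacity(64);
    loop {
        poll.poll(&mut events, None)?;
        for event in events.iter() {
            if event.token() == Token(0) {
                // a new client connected: accept() yields a TcpStream
                // on which the client's traffic will flow
                let (_stream, addr) = listener.accept()?;
                println!("new connection from {}", addr);
            }
        }
    }
}
```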
2 changes: 1 addition & 1 deletion doc/managing_workers.md
@@ -1,7 +1,7 @@
# How are Sōzu's workers managed?

Sōzu's main process starts and manages _workers_, which are subinstances of itself.
-This core feature makes Sōzu pretty efficient, but raises the question of managing state accross a whole cluster of processes.
+This core feature makes Sōzu pretty efficient, but raises the question of managing state across a whole cluster of processes.

How do we solve this challenge? Unix sockets and channels.

2 changes: 1 addition & 1 deletion lib/src/https_rustls/configuration.rs
@@ -444,7 +444,7 @@ impl ProxyConfiguration<Session> for Proxy {
)
.map_err(|register_error| {
error!(
"error registering fron socket({:?}): {:?}",
"error registering from socket({:?}): {:?}",
frontend_sock, register_error
);
AcceptError::RegisterError
2 changes: 1 addition & 1 deletion lib/src/lib.rs
@@ -222,7 +222,7 @@ use self::retry::RetryPolicy;

pub type ClusterId = String;

-/// Anything that can be registered in mio (subscibe to kernel events)
+/// Anything that can be registered in mio (subscribe to kernel events)
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Protocol {
HTTP,
6 changes: 3 additions & 3 deletions lib/src/protocol/http/parser/mod.rs
@@ -460,7 +460,7 @@ fn is_hostname_char(i: u8) -> bool {
// but is it important here, since we will match this to
// the list of accepted clusters?
// BTW each label between dots has a max of 63 chars,
-// and the whole domain shuld not be larger than 253 chars
+// and the whole domain should not be larger than 253 chars
//
// this tolerant parser also allows underscore, which is wrong
// in domain names but accepted by some proxies and web servers
@@ -475,7 +475,7 @@ fn is_hostname_char(i: u8) -> bool {
// but is it important here, since we will match this to
// the list of accepted clusters?
// BTW each label between dots has a max of 63 chars,
-// and the whole domain shuld not be larger than 253 chars
+// and the whole domain should not be larger than 253 chars
b"-.".contains(&i)
}
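A self-contained sketch of the length limits mentioned in the comment above (each dot-separated label at most 63 bytes, the whole name at most 253 bytes); this is an illustrative helper, not Sozu's parser:

```rust
fn hostname_length_ok(host: &str) -> bool {
    // 253 bytes for the full name, 63 per label, no empty labels
    host.len() <= 253
        && host
            .split('.')
            .all(|label| !label.is_empty() && label.len() <= 63)
}
```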

@@ -902,7 +902,7 @@ impl Header {
Some(cookie) => {
let cookie_length = cookie.get_full_length();
// We already know the position of the cookie in the chain, so we avoid
-// a string comparision and directly check against where we are in the cookies
+// a string comparison and directly check against where we are in the cookies
if current_cookie == sozu_balance_position {
moves.push(BufferMove::Delete(cookie_length));
} else if sozu_balance_is_last {
2 changes: 1 addition & 1 deletion lib/src/protocol/proxy_protocol/parser.rs
@@ -173,7 +173,7 @@ mod test {
0x0D, 0x0A, 0x0D, 0x0A, 0x00, 0x0D, 0x0A, 0x51, 0x55, 0x49, 0x54,
0x0A, // MAGIC header
0x20, // Version 2 and command LOCAL
-0x00, // family AF_UNSPEC and transport protocol unknow
+0x00, // family AF_UNSPEC and transport protocol unknown
0x00, 0x00, // address sizes = 0
];

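For reference, the test bytes above follow the PROXY protocol version 2 layout: a 12-byte magic, one version/command byte, one family/protocol byte, then a 16-bit address length in network byte order. A hypothetical decoder for just that prefix (not Sozu's parser):

```rust
fn decode_v2_prefix(header: &[u8]) -> Option<(u8, u8, u8, u8, u16)> {
    const MAGIC: &[u8] = &[
        0x0D, 0x0A, 0x0D, 0x0A, 0x00, 0x0D, 0x0A, 0x51, 0x55, 0x49, 0x54, 0x0A,
    ];
    if header.len() < 16 || &header[..12] != MAGIC {
        return None;
    }
    let version = header[12] >> 4; // 0x2 = version 2
    let command = header[12] & 0x0F; // 0x0 = LOCAL, 0x1 = PROXY
    let family = header[13] >> 4; // 0x0 = AF_UNSPEC
    let transport = header[13] & 0x0F; // 0x0 = unspecified
    let address_len = u16::from_be_bytes([header[14], header[15]]);
    Some((version, command, family, transport, address_len))
}
```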
2 changes: 1 addition & 1 deletion lib/src/protocol/proxy_protocol/send.rs
@@ -230,7 +230,7 @@ mod send_test {
}

// Get connection from the session and connect to the backend
-// When connections are etablish we send the proxy protocol header
+// When connections are establish we send the proxy protocol header
fn start_middleware(addr_client: SocketAddr, addr_backend: SocketAddr, barrier: Arc<Barrier>) {
let listener = TcpListener::bind(addr_client).expect("could not accept session connection");

2 changes: 1 addition & 1 deletion lib/src/router/pattern_trie.rs
@@ -207,7 +207,7 @@ impl<V: Debug + Clone> TrieNode<V> {
if let Some(pos) = pos {
if let Ok(s) = str::from_utf8(&partial_key[pos + 1..partial_key.len() - 1]) {
let len = self.regexps.len();
-// FIXME: we might have multipe entries with the same regex
+// FIXME: we might have multiple entries with the same regex
self.regexps.retain(|(r, _)| r.as_str() != s);
if len > self.regexps.len() {
return RemoveResult::Ok;
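The removal logic above relies on a small idiom: retain every entry whose regex does not match the key, then compare lengths to learn whether anything was actually removed. In isolation, with plain strings standing in for the compiled regexes:

```rust
// Illustrative only: Sozu stores (regex, value) pairs, here we use plain strings.
fn remove_by_key(entries: &mut Vec<(String, u32)>, key: &str) -> bool {
    let before = entries.len();
    entries.retain(|(k, _)| k.as_str() != key);
    entries.len() < before
}
```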
4 changes: 2 additions & 2 deletions lib/src/tcp.rs
@@ -1121,7 +1121,7 @@ impl Listener {
#[derive(Debug)]
pub struct ClusterConfiguration {
proxy_protocol: Option<ProxyProtocolConfig>,
-// Uncomment this when implementing new load balancing algorythms
+// Uncomment this when implementing new load balancing algorithms
// load_balancing: LoadBalancingAlgorithms,
}

@@ -1682,7 +1682,7 @@ mod tests {
Channel::generate(1000, 10000).with_context(|| "should create a channel")?;

// this thread should call a start() function that performs the same logic and returns Result<()>
-// any error coming from this start() would be mapped and logged within the tread
+// any error coming from this start() would be mapped and logged within the thread
thread::spawn(move || {
setup_test_logger!();
info!("starting event loop");
2 changes: 1 addition & 1 deletion lib/src/tls.rs
@@ -1,6 +1,6 @@
//! # Tls module
//!
-//! This module p certificate: (), certificate_chain: (), key: (), versions: () certificate: (), certificate_chain: (), key: (), versions: () rovides traits and structures to handle tls. It provides a unified
+//! This module p certificate: (), certificate_chain: (), key: (), versions: () certificate: (), certificate_chain: (), key: (), versions: () provides traits and structures to handle tls. It provides a unified
//! certificate resolver for rustls and openssl.
use std::{
borrow::ToOwned,
