Syncing documentation #260

Merged
merged 37 commits into from
Oct 12, 2019
Changes from all commits
37 commits
6a86f50
Add missing asserts (#226)
Aug 11, 2019
58630ed
Implement heartbeat functionality (#224)
futile Aug 12, 2019
3cd0310
Turn arranging usize into u16
kstrafe Aug 31, 2019
d8f8bd1
Merge pull request #228 from BourgondAries/arrange-u16
jstnlef Sep 1, 2019
68bdf8f
ordered: Fix spelling
kstrafe Aug 28, 2019
c1675ac
ordered: Loop the expected index value
kstrafe Aug 28, 2019
1280d6f
error: Add dyn to trait object
kstrafe Aug 28, 2019
74a2d4b
ordering: Wrap the expected index acceptability condition
kstrafe Aug 31, 2019
ce03a18
ordering: Change the ordered stream expected index to default to 0
kstrafe Aug 31, 2019
d93fc01
Merge pull request #229 from BourgondAries/ordered-fail-2
jstnlef Sep 1, 2019
edf29f1
ensure that `self.remote_ack_sequence_num` is always increasing
jstnlef Sep 1, 2019
d583916
Fix exact indexing values for half-window calculations (#230)
kstrafe Sep 2, 2019
a7baf72
Sequenced loop u16 (#231)
kstrafe Sep 2, 2019
1d8917d
Merge pull request #233 from jstnlef/remote_ack_seq_must_always_increase
jstnlef Sep 2, 2019
5e2add0
Initial commit build and test script
TimonPost Sep 3, 2019
118ae6a
Removed example config
TimonPost Sep 3, 2019
a67ddbd
Disconnect the connection after sending N un-acked packets (#234)
kstrafe Sep 7, 2019
67dac59
Improved documentation (#219)
TimonPost Sep 7, 2019
1868937
book: Fix typo in protocols.md (#241)
palash25 Sep 9, 2019
8e74dec
book: General minor improvements (#240)
kstrafe Sep 9, 2019
b0a14ab
Dependency maintenance (#243)
TimonPost Sep 10, 2019
ca89ee3
0.3.1 (#245)
TimonPost Sep 16, 2019
328e2a1
Temporarily set blocking mode to false when forgetting packets (#250)
kstrafe Sep 22, 2019
5f7de27
Fix spelling of error enum entry (#253)
kstrafe Sep 22, 2019
2050ac5
Perform acknowledgment after all fragments are received(#251)
kstrafe Sep 23, 2019
5c63a4e
Set the fragment ordering guarantee when queueing packets (#249)
kstrafe Sep 23, 2019
98c0747
Ensure we don't read out-of-bounds on malformed headers (#252)
kstrafe Sep 24, 2019
fce428b
0.3.2 (#254)
TimonPost Sep 24, 2019
78a5563
Clippy Fixes (#257)
fraillt Sep 27, 2019
b0121dc
Iterable VirtualConnection process_* functions result. (#256)
fraillt Oct 2, 2019
8acde72
Codebase improvements (#258)
fraillt Oct 10, 2019
364400e
Merge remote-tracking branch 'upstream/master'
Oct 12, 2019
7585201
Merge remote-tracking branch 'upstream/master'
Oct 12, 2019
3175608
doc strings
Oct 11, 2019
f8a970e
upgraded contribution guide with rules
Oct 11, 2019
c042b18
updated more comments
Oct 11, 2019
34df82a
round 1
Oct 12, 2019
22 changes: 11 additions & 11 deletions README.md
@@ -15,8 +15,8 @@
[s6]: https://tokei.rs/b1/github/amethyst/laminar?category=code
[s7]: https://codecov.io/gh/amethyst/laminar/branch/master/graphs/badge.svg

Laminar is a semi-reliable UDP-based protocol for multiplayer games. This library implements wrappers around the UDP-protocol,
and provides a lightweight, message-based interface which provides certain guarantees like reliability and ordering.
Laminar is an application-level transport protocol which provides configurable reliability and ordering guarantees built on top of UDP.
It focuses on fast-paced FPS games and provides a lightweight, message-based interface.

Laminar was designed to be used within the [Amethyst][amethyst] game engine but is usable without it.

@@ -103,25 +103,25 @@ _Send packets_
```rust
use laminar::{Socket, Packet};

// create the socket
// Creates the socket
let mut socket = Socket::bind("127.0.0.1:12345")?;
let packet_sender = socket.get_packet_sender();
// this will start the socket, which will start a poll mechanism to receive and send messages.
// Starts the socket, which will start a poll mechanism to receive and send messages.
let _thread = thread::spawn(move || socket.start_polling());

// our data
// Bytes to send
let bytes = vec![...];

// You can create packets with different reliabilities
// Creates packets with different reliabilities
let unreliable = Packet::unreliable(destination, bytes);
let reliable = Packet::reliable_unordered(destination, bytes);

// We can specify on which stream and how to order our packets, checkout our book and documentation for more information
// Specifies on which stream and how to order our packets; check out our book and documentation for more information
let unreliable_sequenced = Packet::unreliable_sequenced(destination, bytes, Some(1));
let reliable_sequenced = Packet::reliable_sequenced(destination, bytes, Some(2));
let reliable_ordered = Packet::reliable_ordered(destination, bytes, Some(3));

// send the created packets
// Sends the created packets
packet_sender.send(unreliable_sequenced).unwrap();
packet_sender.send(reliable).unwrap();
packet_sender.send(unreliable_sequenced).unwrap();
@@ -133,13 +133,13 @@ _Receive Packets_
```rust
use laminar::{SocketEvent, Socket};

// create the socket
// Creates the socket
let socket = Socket::bind("127.0.0.1:12346")?;
let event_receiver = socket.get_event_receiver();
// this will start the socket, which will start a poll mechanism to receive and send messages.
// Starts the socket, which will start a poll mechanism to receive and send messages.
let _thread = thread::spawn(move || socket.start_polling());

// wait until a socket event occurs
// Waits until a socket event occurs
let result = event_receiver.recv();

match result {
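For completeness, a hedged sketch of what handling the received events can look like; the `SocketEvent` variant names and the `payload()`/`addr()` accessors are assumed from laminar 0.3 and may differ:

```rust
use laminar::SocketEvent;

// standalone sketch: branch on the event that `event_receiver.recv()` produced
fn handle(event: SocketEvent) {
    match event {
        // a data packet arrived from a remote endpoint
        SocketEvent::Packet(packet) => {
            println!("received {} bytes from {}", packet.payload().len(), packet.addr());
        }
        // a remote endpoint was seen for the first time
        SocketEvent::Connect(addr) => println!("connected: {}", addr),
        // a remote endpoint went silent for longer than the idle timeout
        SocketEvent::Timeout(addr) => println!("timed out: {}", addr),
        // any other event kinds are ignored in this sketch
        _ => {}
    }
}
```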
36 changes: 36 additions & 0 deletions docs/CONTRIBUTING.md
@@ -34,10 +34,46 @@ Some code guidelines to keep in mind when contributing to laminar or amethyst-net
- Keep comments small
- Don’t create unnecessary comments. They must add value
- Comments should explain the “why” not the “what”
- All `///` comments should start capitalized and end with a dot.
- Function doc comments should use third-person verbs like 'Returns', 'Creates', 'Instantiates', etc.
- All `//` comments that explain code inside functions should start lowercase and have no dot.
- Referenced types, functions, and variables should be put inside `code markup` (see the sketch after this list).
2. Hard Coding
- Don't hard code values anywhere
- Use the `NetworkConfig` type for common network settings; use consts or parameter input
- Use of lazy_static is acceptable but first make sure you can’t fix the issue in other ways
3. Code markup
- Keep files small. It is better to have several small files with small pieces of logic than one file with 1000 lines of logic and multiple types/structs. Note that this refers to logic; tests are not counted
- No panics/unwraps in the main codebase, but they are accepted in tests
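
For illustration, a minimal sketch of these comment conventions (the field type is a stand-in for the example, not the crate's real definition):

```rust
use std::collections::HashMap;

struct AcknowledgmentHandler {
    // packets sent but not yet acknowledged, keyed by sequence number
    sent_packets: HashMap<u16, Vec<u8>>,
}

impl AcknowledgmentHandler {
    /// Returns the number of packets that have not yet been acknowledged.
    ///
    /// Referenced items such as `sent_packets` are wrapped in code markup.
    pub fn packets_in_flight(&self) -> u16 {
        // count the entries still waiting for an ack
        self.sent_packets.len() as u16
    }
}
```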

## Import Reordering
All imports are semantically grouped and ordered. The order is:
Contributor: Do we have a way to enforce this? It seems awfully pedantic if we don't have a programmatic way of both performing the transform and validating it.

Owner (author): I don't believe so. However, as noted, the IntelliJ Rust plugin is able to do this, so once in a while we can run that to validate everything is ordered correctly. I have no idea how to enforce it in any other way.


- standard library (`use std::...`)
- external crates (`use rand::...`)
- current crate (`use crate::...`)
- parent module (`use super::...`)
- current module (`use self::...`)
- module declaration (`mod ...`)

There must be an empty line between groups. An example:

```rust
use crossterm_utils::{csi, write_cout, Result};

use crate::sys::{get_cursor_position, show_cursor};

use super::Cursor;
```

#### CLion Tips

The CLion IDE does this for you (_Menu_ -> _Code_ -> _Optimize Imports_). Be aware that CLion sorts
imports within a group differently than `rustfmt` does. It's effectively a two-step operation
to get proper grouping & sorting:

* _Menu_ -> _Code_ -> _Optimize Imports_ - group & semantically order imports
* `cargo fmt` - fix ordering within the group

The second step can be automated via _CLion_ -> _Preferences_ ->
_Languages & Frameworks_ -> _Rust_ -> _Rustfmt_ -> _Run rustfmt on save_.
4 changes: 2 additions & 2 deletions docs/md_book/src/reliability/reliability.md
@@ -100,11 +100,11 @@ Basically this is almost TCP-like, but with sequencing instead of ordering.
```rust
use laminar::Packet;

// You can create packets with different reliabilities
// Creates packets with different reliabilities
let unreliable = Packet::unreliable(destination, bytes);
let reliable = Packet::reliable_unordered(destination, bytes);

// We can specify on which stream and how to order our packets, checkout our book and documentation for more information
// Specifies on which stream and how to order our packets; check out our book and documentation for more information
let unreliable = Packet::unreliable_sequenced(destination, bytes, Some(1));
let reliable_sequenced = Packet::reliable_sequenced(destination, bytes, Some(2));
let reliable_ordered = Packet::reliable_ordered(destination, bytes, Some(3));
Expand Down
2 changes: 1 addition & 1 deletion src/bin/laminar-tester.rs
@@ -149,7 +149,7 @@ fn run_server(server_config: ServerConfiguration) -> Result<()> {
fn run_client(config: ClientConfiguration) -> Result<()> {
let socket = Socket::bind(config.listen_host)?;

// See which test we want to run
// see which test we want to run
match config.test_name.as_str() {
"steady-stream" => {
test_steady_stream(config, socket);
Expand Down
8 changes: 4 additions & 4 deletions src/config.rs
@@ -7,14 +7,14 @@ use crate::net::constants::{DEFAULT_MTU, FRAGMENT_SIZE_DEFAULT, MAX_FRAGMENTS_DE
pub struct Config {
/// Make the underlying UDP socket block when true, otherwise non-blocking.
pub blocking_mode: bool,
/// Value which can specify the amount of time that can pass without hearing from a client before considering them disconnected
/// Value which can specify the amount of time that can pass without hearing from a client before considering them disconnected.
pub idle_connection_timeout: Duration,
/// Value which specifies at which interval (if at all) a heartbeat should be sent, if no other packet was sent in the meantime.
/// If None, no heartbeats will be sent (the default).
pub heartbeat_interval: Option<Duration>,
/// Value which can specify the maximum size a packet can be in bytes. This value is inclusive of fragmenting; if a packet is fragmented, the total size of the fragments cannot exceed this value.
///
/// Recommended value: 16384
/// Recommended value: 16384.
pub max_packet_size: usize,
/// Value which can specify the maximal allowed fragments.
///
@@ -28,7 +28,7 @@ pub struct Config {
///
/// This is the maximum size of each fragment. It defaults to `1450` bytes, due to the default MTU on most network devices being `1500`.
pub fragment_size: u16,
/// Value which can specify the size of the buffer that queues up fragments ready to be reassembled once all fragments have arrived.```
/// Value which can specify the size of the buffer that queues up fragments ready to be reassembled once all fragments have arrived.
pub fragment_reassembly_buffer_size: u16,
/// Value that specifies the size of the buffer the UDP data will be read into. Defaults to `1450` bytes.
pub receive_buffer_max_size: usize,
@@ -53,7 +53,7 @@ pub struct Config {
/// connection.
///
/// When we send a reliable packet, it is stored locally until an acknowledgement comes back to
/// us, if that store grows to a size
/// us, if that store grows to a size.
pub max_packets_in_flight: u16,
}
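
A minimal sketch of overriding a few of these settings, assuming `Config` implements `Default` and that `Socket::bind_with_config` accepts a custom configuration (both as in laminar 0.3.x). The imports follow the grouping described in the contribution guide above.

```rust
use std::time::Duration;

use laminar::{Config, Socket};

fn main() {
    // start from the defaults and override only the settings we care about
    let config = Config {
        heartbeat_interval: Some(Duration::from_millis(500)),
        idle_connection_timeout: Duration::from_secs(5),
        ..Config::default()
    };
    let _socket = Socket::bind_with_config("127.0.0.1:12345", config)
        .expect("could not bind the socket");
}
```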

24 changes: 12 additions & 12 deletions src/infrastructure/acknowledgment.rs
@@ -8,15 +8,15 @@ const DEFAULT_SEND_PACKETS_SIZE: usize = 256;

/// Responsible for handling the acknowledgment of packets.
pub struct AcknowledgmentHandler {
// Local sequence number which we'll bump each time we send a new packet over the network
// Local sequence number which we'll bump each time we send a new packet over the network.
sequence_number: SequenceNumber,
// The last acked sequence number of the packets we've sent to the remote host.
remote_ack_sequence_num: SequenceNumber,
// Using a Hashmap to track every packet we send out so we can ensure that we can resend when
// Using a `HashMap` to track every packet we send out so we can ensure that we can resend when
// dropped.
sent_packets: HashMap<u16, SentPacket>,
// However, we can only reasonably ack up to REDUNDANT_PACKET_ACKS_SIZE + 1 packets on each
// message we send so this should be that large
// However, we can only reasonably ack up to `REDUNDANT_PACKET_ACKS_SIZE + 1` packets on each
// message we send so this should be that large.
received_packets: SequenceBuffer<ReceivedPacket>,
}

@@ -31,7 +31,7 @@ impl AcknowledgmentHandler {
}
}

/// Get the current number of not yet acknowledged packets
/// Returns the current number of not-yet-acknowledged packets.
pub fn packets_in_flight(&self) -> u16 {
self.sent_packets.len() as u16
}
@@ -46,14 +46,14 @@
self.received_packets.sequence_num().wrapping_sub(1)
}

/// Returns the ack_bitfield corresponding to which of the past 32 packets we've
/// Returns the `ack_bitfield` corresponding to which of the past 32 packets we've
/// successfully received.
pub fn ack_bitfield(&self) -> u32 {
let most_recent_remote_seq_num: u16 = self.remote_sequence_num();
let mut ack_bitfield: u32 = 0;
let mut mask: u32 = 1;

// Iterate the past REDUNDANT_PACKET_ACKS_SIZE received packets and set the corresponding
// iterate the past `REDUNDANT_PACKET_ACKS_SIZE` received packets and set the corresponding
Contributor: Curious why we wouldn't capitalize these as well?

Owner (author): Because there are also very small `//` comments, and those are often not a complete sentence. I do think we should be consistent and either capitalize everything or use lowercase everywhere; this PR has everything lowercase. It is a matter of preference.

// bit for each packet which exists in the buffer.
for i in 1..=REDUNDANT_PACKET_ACKS_SIZE {
let sequence = most_recent_remote_seq_num.wrapping_sub(i);
@@ -76,18 +76,18 @@ impl AcknowledgmentHandler {
remote_ack_seq: u16,
mut remote_ack_field: u32,
) {
// We must ensure that self.remote_ack_sequence_num is always increasing (with wrapping)
// ensure that `self.remote_ack_sequence_num` is always increasing (with wrapping)
if sequence_greater_than(remote_ack_seq, self.remote_ack_sequence_num) {
self.remote_ack_sequence_num = remote_ack_seq;
}

self.received_packets
.insert(remote_seq_num, ReceivedPacket {});

// The current remote_ack_seq was (clearly) received so we should remove it.
// the current `remote_ack_seq` was (clearly) received so we should remove it
self.sent_packets.remove(&remote_ack_seq);

// The remote_ack_field is going to include whether or not the past 32 packets have been
// The `remote_ack_field` is going to include whether or not the past 32 packets have been
// received successfully. If so, we have no need to resend old packets.
for i in 1..=REDUNDANT_PACKET_ACKS_SIZE {
let ack_sequence = remote_ack_seq.wrapping_sub(i);
@@ -98,7 +98,7 @@
}
}

/// Enqueue the outgoing packet for acknowledgment.
/// Enqueues the outgoing packet for acknowledgment.
pub fn process_outgoing(
&mut self,
packet_type: PacketType,
Expand All @@ -116,7 +116,7 @@ impl AcknowledgmentHandler {
},
);

// Bump the local sequence number for the next outgoing packet.
// bump the local sequence number for the next outgoing packet
self.sequence_number = self.sequence_number.wrapping_add(1);
}
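
As a worked illustration of the acknowledgment bitfield described above, a standalone sketch (not the crate's code) that marks which of the 32 preceding sequence numbers were received:

```rust
// build a 32-bit ack field for the 32 sequence numbers preceding
// `latest_received`, setting one bit per sequence number that was seen
fn ack_bitfield<F: Fn(u16) -> bool>(latest_received: u16, seen: F) -> u32 {
    let mut field = 0u32;
    let mut mask = 1u32;
    for i in 1..=32u16 {
        let sequence = latest_received.wrapping_sub(i);
        if seen(sequence) {
            field |= mask;
        }
        mask <<= 1;
    }
    field
}

fn main() {
    // suppose only sequence numbers 99 and 97 were received before 100
    let received = [99u16, 97];
    let field = ack_bitfield(100, |seq| received.contains(&seq));
    assert_eq!(field, 0b101);
}
```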

13 changes: 6 additions & 7 deletions src/infrastructure/arranging/ordering.rs
@@ -130,12 +130,12 @@ impl<'a, T> ArrangingSystem for OrderingSystem<T> {
/// # Remarks
/// - See [super-module](../index.html) for more information about streams.
pub struct OrderingStream<T> {
// the id of this stream.
// The id of this stream.
_stream_id: u8,
// the storage for items that are waiting for older items to arrive.
// the items will be stored by key and value where the key is the incoming index and the value is the item value.
// Storage with items that are waiting for older items to arrive.
// Items are stored by key and value where the key is the incoming index and the value is the item value.
storage: HashMap<u16, T>,
// the next expected item index.
// Next expected item index.
expected_index: u16,
// unique identifier which should be used for ordering on a different stream e.g. the remote endpoint.
unique_item_identifier: u16,
@@ -220,15 +220,14 @@ impl<T> OrderingStream<T> {
}

fn is_u16_within_half_window_from_start(start: u16, incoming: u16) -> bool {
// Check (with wrapping) if the incoming value lies within the next u16::max_value()/2 from
// start.
// check (with wrapping) if the incoming value lies within the `next u16::max_value()/2` from start
incoming.wrapping_sub(start) <= u16::max_value() / 2 + 1
}

impl<T> Arranging for OrderingStream<T> {
type ArrangingItem = T;

/// Will order the given item based on the ordering algorithm.
/// Orders the given item based on the ordering algorithm.
///
/// With every ordering operation an `incoming_index` is given. We also keep a local record of the `expected_index`.
///
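A deliberately simplified, standalone sketch of the ordering idea described above: an item is released only when its index matches the next expected one, while newer items wait in storage (the real stream also hands back parked items once the gap fills, and applies the half-window check for wrapping indices):

```rust
use std::collections::HashMap;

struct Orderer<T> {
    storage: HashMap<u16, T>,
    expected_index: u16,
}

impl<T> Orderer<T> {
    fn new() -> Self {
        Orderer { storage: HashMap::new(), expected_index: 0 }
    }

    fn arrange(&mut self, incoming_index: u16, item: T) -> Option<T> {
        if incoming_index == self.expected_index {
            // exactly the item we were waiting for
            self.expected_index = self.expected_index.wrapping_add(1);
            Some(item)
        } else {
            // too new: park it until the missing items arrive
            self.storage.insert(incoming_index, item);
            None
        }
    }
}

fn main() {
    let mut orderer = Orderer::new();
    assert_eq!(orderer.arrange(1, "b"), None); // arrives early, parked
    assert_eq!(orderer.arrange(0, "a"), Some("a")); // expected, released
}
```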
8 changes: 4 additions & 4 deletions src/infrastructure/arranging/sequencing.rs
@@ -43,7 +43,7 @@ impl<T> ArrangingSystem for SequencingSystem<T> {
self.streams.len()
}

/// Try to get an [`SequencingStream`](./struct.SequencingStream.html) by `stream_id`.
/// Tries to get a [`SequencingStream`](./struct.SequencingStream.html) by `stream_id`.
/// When the stream does not exist, it will be inserted by the given `stream_id` and returned.
fn get_or_create_stream(&mut self, stream_id: u8) -> &mut Self::Stream {
self.streams
@@ -73,7 +73,7 @@ pub struct SequencingStream<T> {
_stream_id: u8,
// the highest seen item index.
top_index: u16,
// I need `PhantomData`, otherwise, I can't use a generic in the `Arranging` implementation because `T` is not constrained.
// Needs `PhantomData`; otherwise the generic `T` can't be used in the `Arranging` implementation because it is not constrained.
phantom: PhantomData<T>,
// unique identifier which should be used for ordering on another stream e.g. the remote endpoint.
unique_item_identifier: u16,
@@ -107,15 +107,15 @@ impl<T> SequencingStream<T> {
}

fn is_u16_within_half_window_from_start(start: u16, incoming: u16) -> bool {
// Check (with wrapping) if the incoming value lies within the next u16::max_value()/2 from
// check (with wrapping) if the incoming value lies within the next u16::max_value()/2 from
// start.
incoming.wrapping_sub(start) <= u16::max_value() / 2 + 1
}

impl<T> Arranging for SequencingStream<T> {
type ArrangingItem = T;

/// Will arrange the given item based on a sequencing algorithm.
/// Arranges the given item based on a sequencing algorithm.
///
/// With every sequencing operation a `top_index` is given.
///
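A deliberately simplified, standalone sketch of the sequencing idea, using the same half-window test as `is_u16_within_half_window_from_start` above; for example, `incoming = 10` with `start = 65_500` wraps to a difference of 46, which is inside the window:

```rust
// only items "newer" than the highest index seen so far (within half of the
// u16 range, to allow wrapping) are kept; older ones are discarded
// (duplicate handling omitted in this sketch)
struct Sequencer {
    top_index: u16,
}

impl Sequencer {
    fn arrange<T>(&mut self, incoming_index: u16, item: T) -> Option<T> {
        let is_newer =
            incoming_index.wrapping_sub(self.top_index) <= u16::max_value() / 2 + 1;
        if is_newer {
            self.top_index = incoming_index;
            Some(item)
        } else {
            None
        }
    }
}

fn main() {
    let mut seq = Sequencer { top_index: 0 };
    assert_eq!(seq.arrange(2, "new"), Some("new"));
    assert_eq!(seq.arrange(1, "old"), None); // older than the newest seen item
}
```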
6 changes: 3 additions & 3 deletions src/infrastructure/congestion.rs
@@ -6,7 +6,7 @@ use crate::{
Config,
};

/// Type that is responsible for keeping track of congestion information.
/// Keeps track of congestion information.
pub struct CongestionHandler {
rtt_measurer: RttMeasurer,
congestion_data: SequenceBuffer<CongestionData>,
@@ -23,15 +23,15 @@ impl CongestionHandler {
}
}

/// Process incoming sequence number.
/// Processes incoming sequence number.
///
/// This will calculate the RTT-time and smooth down the RTT-value to prevent huge RTT-spikes.
pub fn process_incoming(&mut self, incoming_seq: u16) {
let congestion_data = self.congestion_data.get_mut(incoming_seq);
self.rtt_measurer.calculate_rrt(congestion_data);
}

/// Process outgoing sequence number.
/// Processes outgoing sequence number.
///
/// This will insert an entry which is used for keeping track of the sending time.
/// Once we process incoming sequence numbers we can calculate the `RTT` time.
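The smoothing mentioned here is typically an exponential moving average; a standalone sketch (the factor `0.1` is illustrative, not the crate's constant):

```rust
// blend each new sample into the running value so a single spike
// cannot dominate the RTT estimate
fn smooth_rtt(current: f32, sample: f32, smoothing_factor: f32) -> f32 {
    current + (sample - current) * smoothing_factor
}

fn main() {
    let mut rtt = 50.0_f32;
    // the 300 ms outlier only nudges the estimate instead of replacing it
    for &sample in [48.0_f32, 52.0, 300.0, 51.0].iter() {
        rtt = smooth_rtt(rtt, sample, 0.1);
    }
    println!("smoothed rtt: {:.1} ms", rtt);
}
```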