
Rust-based stream server + visualization #4

Merged: 14 commits merged into main on Sep 19, 2023

Conversation

@ryan-summers ryan-summers commented Nov 19, 2021

This PR adds a livestream viewer: a Rust-based receiver and backend server along with a JS-based frontend. The server listens on localhost:8080.

npm must be installed first. Once it is, run `cargo run --bin webserver --release` to get going.

Sample output: [screenshot]

This server is capable of running continuously. However, if larger captures (e.g. > 100 ms) are requested, data will start being dropped at the receiver because of the excessive serialization/transfer time required to send it to the frontend.

This is intended as an initial prototype and is not necessarily pretty, but demonstrates the feasibility and limitations of this approach.

@ryan-summers ryan-summers changed the title Feature/stream server Rust-based stream server + visualization Nov 23, 2021
@ryan-summers ryan-summers marked this pull request as ready for review November 23, 2021 12:22

ryan-summers commented Dec 6, 2021

I've opened #6 and #7 for known issues with this PR, notably:

  • Capture windows may not be respected
  • Exceptionally long capture periods (> 150 ms) may not display at all

I've also spawned #5, #8, and #9 to track improvements to this implementation.

@jordens (Member) left a comment


I very much like this overall.
There are a couple of things to sort through to make this workable.

@@ -8,6 +8,7 @@ use std::time::Duration;
pub struct StreamReceiver {
socket: UdpSocket,
buf: [u8; 2048],
timeout: Option<Duration>,

The StreamReceiver should not have to bother with the timeout. This should be up a level.
async_std::io::timeout(timeout, receiver::next_frame()) or even on the loop doing next_frame(), depending on context.
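The "timeout belongs to the caller, not the receiver" pattern can be sketched with std alone, using a channel in place of the async socket. `Frame`, `next_frame`, and `demo` are illustrative names, not the PR's actual API; the real code would use `async_std::io::timeout` around the receiver's future as suggested above.

```rust
use std::sync::mpsc::{self, Receiver};
use std::thread;
use std::time::Duration;

// Hypothetical stand-in for a stream frame.
#[derive(Debug, PartialEq)]
struct Frame(u32);

// The caller owns the timeout; the producer just delivers frames.
fn next_frame(rx: &Receiver<Frame>, timeout: Duration) -> Option<Frame> {
    rx.recv_timeout(timeout).ok()
}

fn demo() -> (Option<Frame>, Option<Frame>) {
    let (tx, rx) = mpsc::channel();
    // Producer sends a single frame, then goes quiet.
    thread::spawn(move || {
        tx.send(Frame(1)).ok();
    });
    let first = next_frame(&rx, Duration::from_millis(500));
    // No more frames are coming, so this wait times out at the caller.
    let second = next_frame(&rx, Duration::from_millis(50));
    (first, second)
}

fn main() {
    let (first, second) = demo();
    assert_eq!(first, Some(Frame(1)));
    assert_eq!(second, None);
    println!("timeout handled by the caller, not the receiver");
}
```

The receiver type stays timeout-free; different call sites can then choose different deadlines without the struct carrying an `Option<Duration>`.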

/// # Args
/// * `batch_size` - The size of each batch in samples.
/// * `data` - The binary data composing the stream frame.
pub fn new(batch_size: usize, data: &[u8]) -> Result<AdcDacData, FormatError> {

Let's keep the lazy conversion as in the Python API. If nobody is looking at the data we don't need to convert it, and the buck stops in the receiver. timebase gets the lazy conversion on get_data(); the traces should do the same.
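A minimal sketch of the lazy conversion being asked for: keep the raw bytes off the wire and only convert to f32 when get_data() is called, caching the result. The `Trace` struct and i16 little-endian sample format here are illustrative assumptions, not the PR's actual types.

```rust
// Lazy-conversion sketch: raw bytes in, conversion deferred until first use.
struct Trace {
    raw: Vec<u8>,                // raw i16 little-endian samples as received
    converted: Option<Vec<f32>>, // cache filled on first access
}

impl Trace {
    fn new(raw: Vec<u8>) -> Self {
        Trace { raw, converted: None }
    }

    /// Convert on first call; later calls serve the cached result.
    fn get_data(&mut self) -> &[f32] {
        if self.converted.is_none() {
            let data = self
                .raw
                .chunks_exact(2)
                .map(|c| i16::from_le_bytes([c[0], c[1]]) as f32)
                .collect();
            self.converted = Some(data);
        }
        self.converted.as_deref().unwrap()
    }
}

fn main() {
    let mut trace = Trace::new(vec![0x01, 0x00, 0xff, 0xff]);
    // Nothing has been converted yet:
    assert!(trace.converted.is_none());
    // First access triggers the conversion.
    assert_eq!(trace.get_data(), &[1.0, -1.0]);
    println!("converted lazily on get_data()");
}
```

If no client ever polls the traces, the conversion cost is never paid, which is exactly the property the Python API already has.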

/// Get the earliest element in the buffer along with its location.
pub fn get_earliest_element(&self) -> (usize, T) {
if self.data.len() != self.max_size {
(0, self.data[0])

If data is empty, this crashes.
I hit that frequently.
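A total version of the accessor avoids the crash by returning None for an empty buffer. The field names mirror the snippet above, but the struct here is a simplified stand-in, not the PR's actual implementation.

```rust
// Safe variant of get_earliest_element: empty buffer yields None instead of
// panicking on data[0].
struct RingBuffer<T> {
    data: Vec<T>,
    index: usize,
    max_size: usize,
}

impl<T: Copy> RingBuffer<T> {
    fn get_earliest_element(&self) -> Option<(usize, T)> {
        if self.data.is_empty() {
            return None; // the case that crashed in practice
        }
        if self.data.len() != self.max_size {
            // Not yet full: the oldest element is the first one written.
            Some((0, self.data[0]))
        } else {
            // Full: the oldest element is the one the write index points at.
            Some((self.index, self.data[self.index]))
        }
    }
}

fn main() {
    let empty = RingBuffer::<i32> { data: vec![], index: 0, max_size: 4 };
    assert_eq!(empty.get_earliest_element(), None);

    let full = RingBuffer { data: vec![10, 20, 30], index: 1, max_size: 3 };
    assert_eq!(full.get_earliest_element(), Some((1, 20)));
    println!("no panic on the empty buffer");
}
```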

self.data[self.index..][..data.len()].copy_from_slice(data);
}

self.index = (self.index + data.len()) % self.max_size;

I seem to be hitting max_size == 0 through some clicking around.
Overall this should just be one of the usual queuing patterns. Isn't there an existing impl somewhere that we don't have to carry?
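One of the usual queuing patterns the comment alludes to can lean on std's VecDeque instead of a hand-rolled index: push to the back, evict from the front once capacity is reached. The `Ring` type here is a sketch, not a drop-in replacement for the PR's buffer.

```rust
use std::collections::VecDeque;

// Bounded FIFO with overwrite semantics, built on VecDeque.
struct Ring<T> {
    buf: VecDeque<T>,
    max_size: usize,
}

impl<T> Ring<T> {
    fn new(max_size: usize) -> Self {
        // Reject the max_size == 0 state the review stumbled into.
        assert!(max_size > 0, "ring capacity must be nonzero");
        Ring { buf: VecDeque::with_capacity(max_size), max_size }
    }

    fn push(&mut self, item: T) {
        if self.buf.len() == self.max_size {
            self.buf.pop_front(); // overwrite semantics: drop the oldest
        }
        self.buf.push_back(item);
    }

    /// Oldest element, or None when empty.
    fn earliest(&self) -> Option<&T> {
        self.buf.front()
    }
}

fn main() {
    let mut ring = Ring::new(3);
    for i in 0..5 {
        ring.push(i);
    }
    assert_eq!(ring.earliest(), Some(&2)); // 0 and 1 were evicted
    assert_eq!(ring.buf.len(), 3);
    println!("oldest element: {:?}", ring.earliest());
}
```

VecDeque handles the wraparound arithmetic, so the modular-index and empty/zero-capacity edge cases disappear from the application code.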

/// Get all available data traces.
///
/// # Note
/// There is no guarantee that the data will be complete. Poll the current trigger state to ensure

Where is the current trigger state?


// Route configuration and queries.
webapp.at("/traces").get(get_traces);
webapp.at("/capture").post(configure_capture);

Suggested change:
- webapp.at("/capture").post(configure_capture);
+ webapp.at("/configure").post(configure_capture);

webapp.at("/").serve_dir("frontend/dist").unwrap();

// Start up the webapp.
webapp.listen("127.0.0.1:8080").await?;

Suggested change:
- webapp.listen("127.0.0.1:8080").await?;
+ webapp.listen("tcp://0.0.0.0:8080").await?;

}

let samples: f32 = SAMPLE_RATE_HZ * config.capture_duration_secs;
if samples > usize::MAX as f32 {

usize::MAX isn't really a practical limit.

Suggested change:
- if samples > usize::MAX as f32 {
+ if samples > 100e6 {
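The suggested bound can be sketched as a validation step: compute the requested sample count and reject it at a practical limit rather than usize::MAX. SAMPLE_RATE_HZ and MAX_SAMPLES below are placeholder values, not the server's real constants.

```rust
// Placeholder constants; the real rate comes from the device configuration.
const SAMPLE_RATE_HZ: f32 = 1e6;
const MAX_SAMPLES: f32 = 100e6;

fn requested_samples(capture_duration_secs: f32) -> Result<usize, String> {
    let samples = SAMPLE_RATE_HZ * capture_duration_secs;
    if !samples.is_finite() || samples < 0.0 {
        return Err("invalid capture duration".to_string());
    }
    if samples > MAX_SAMPLES {
        // No allocation could approach usize::MAX anyway; fail at a size the
        // backend can realistically serialize and ship to the frontend.
        return Err(format!("capture too large: {samples} samples"));
    }
    Ok(samples as usize)
}

fn main() {
    assert_eq!(requested_samples(0.1), Ok(100_000));
    assert!(requested_samples(1_000.0).is_err());
    println!("capture bounds validated");
}
```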

for (var i = 0; i < traces.length; i += 1) {
const line = createLineStrip(this.cg, times, traces[i].data, {
colors: colors[i],
widths: 3,

Default width no good? This should be just as wide as e.g. the axis lines.

Suggested change:
- widths: 3,

// The stored data.
data: Vec<T>,

// The maximum number of data points stored. Once this level is hit, data will begin

Hmm. So many comments here but none in the frontend (the jsx).

@jordens jordens merged commit f3a5c59 into main Sep 19, 2023
@ryan-summers ryan-summers deleted the feature/stream-server branch September 19, 2023 14:19