Rust-based stream server + visualization #4
Conversation
I very much like this overall.
There are a couple of things to sort through to make this workable.
```diff
@@ -8,6 +8,7 @@ use std::time::Duration;
 pub struct StreamReceiver {
     socket: UdpSocket,
     buf: [u8; 2048],
+    timeout: Option<Duration>,
```
The `StreamReceiver` should not have to bother with the timeout. This should be up a level: `async_std::io::timeout(timeout, receiver::next_frame())`, or even on the loop doing `next_frame()`, depending on context.
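The shape of this suggestion can be sketched synchronously with only standard-library types. The receiver internals here are placeholders, and `next_frame_with_deadline` is a hypothetical helper; the point is just that the deadline lives with the caller rather than inside the receiver:

```rust
use std::time::{Duration, Instant};

// Placeholder receiver: a real one would wrap a UdpSocket. Note that it
// carries no timeout field of its own.
struct StreamReceiver {
    frames: Vec<u32>,
}

impl StreamReceiver {
    // Returns the next frame if one is ready, None otherwise.
    fn next_frame(&mut self) -> Option<u32> {
        self.frames.pop()
    }
}

// The caller owns the deadline, analogous to wrapping an async
// `next_frame()` in `async_std::io::timeout` at the call site.
fn next_frame_with_deadline(rx: &mut StreamReceiver, timeout: Duration) -> Option<u32> {
    let deadline = Instant::now() + timeout;
    loop {
        if let Some(frame) = rx.next_frame() {
            return Some(frame);
        }
        if Instant::now() >= deadline {
            return None;
        }
    }
}
```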
```rust
/// # Args
/// * `batch_size` - The size of each batch in samples.
/// * `data` - The binary data composing the stream frame.
pub fn new(batch_size: usize, data: &[u8]) -> Result<AdcDacData, FormatError> {
```
Let's keep the lazy conversion as in the Python API. If there is nobody looking at the data we don't need to convert it, and the buck stops in the receiver. `timebase` gets the lazy conversion on `get_data()`. The traces should also do that.
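A minimal sketch of the lazy pattern being asked for, using the names from the diff above: `new()` only validates and stores the raw bytes, and the conversion runs in `get_data()`, so frames nobody looks at are never converted. The i16 little-endian sample format is an assumption for illustration:

```rust
pub struct AdcDacData {
    batch_size: usize,
    data: Vec<u8>,
}

#[derive(Debug, PartialEq)]
pub struct FormatError;

impl AdcDacData {
    pub fn new(batch_size: usize, data: &[u8]) -> Result<AdcDacData, FormatError> {
        // Validate cheaply up front; defer the expensive conversion.
        if batch_size == 0 || data.len() % 2 != 0 {
            return Err(FormatError);
        }
        Ok(AdcDacData { batch_size, data: data.to_vec() })
    }

    /// Conversion only happens when somebody actually asks for the data.
    pub fn get_data(&self) -> Vec<f32> {
        self.data
            .chunks_exact(2)
            .map(|c| i16::from_le_bytes([c[0], c[1]]) as f32)
            .collect()
    }
}
```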
```rust
/// Get the earliest element in the buffer along with its location.
pub fn get_earliest_element(&self) -> (usize, T) {
    if self.data.len() != self.max_size {
        (0, self.data[0])
```
If `data` is empty, this crashes. I hit that frequently.
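One way to make the empty case explicit instead of panicking on `self.data[0]` is to return an `Option`. The field names mirror the snippet above; the rest of the buffer type is assumed:

```rust
struct Buffer<T> {
    data: Vec<T>,
    max_size: usize,
    index: usize,
}

impl<T: Copy> Buffer<T> {
    /// Get the earliest element and its location, or None if the buffer
    /// holds no data yet.
    fn get_earliest_element(&self) -> Option<(usize, T)> {
        if self.data.is_empty() {
            return None;
        }
        if self.data.len() != self.max_size {
            // Buffer not yet full: the oldest element is at the start.
            Some((0, self.data[0]))
        } else {
            // Buffer full: the oldest element sits at the write index.
            Some((self.index, self.data[self.index]))
        }
    }
}
```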
```rust
        self.data[self.index..][..data.len()].copy_from_slice(data);
    }

    self.index = (self.index + data.len()) % self.max_size;
```
I seem to be hitting `max_size == 0` through some clicking around.

Overall this should just be one of the usual queuing patterns. Isn't there an existing impl somewhere that we don't have to carry?
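One of the usual queuing patterns, sketched on the standard library's `VecDeque` so there is no hand-rolled index arithmetic to carry (the `RingBuffer` name and API are illustrative, not from the PR):

```rust
use std::collections::VecDeque;

// A bounded FIFO: pushing beyond capacity evicts the oldest element.
struct RingBuffer<T> {
    data: VecDeque<T>,
    max_size: usize,
}

impl<T> RingBuffer<T> {
    fn new(max_size: usize) -> Self {
        RingBuffer { data: VecDeque::with_capacity(max_size), max_size }
    }

    fn push(&mut self, value: T) {
        // A zero-capacity buffer just drops everything, instead of
        // hitting a modulo-by-zero in hand-rolled index math.
        if self.max_size == 0 {
            return;
        }
        if self.data.len() == self.max_size {
            self.data.pop_front();
        }
        self.data.push_back(value);
    }

    fn earliest(&self) -> Option<&T> {
        self.data.front()
    }
}
```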
```rust
/// Get all available data traces.
///
/// # Note
/// There is no guarantee that the data will be complete. Poll the current trigger state to ensure
```
Where is the current trigger state?
src/bin/webserver.rs (Outdated)
```rust
// Route configuration and queries.
webapp.at("/traces").get(get_traces);
webapp.at("/capture").post(configure_capture);
```
Suggested change:

```diff
-webapp.at("/capture").post(configure_capture);
+webapp.at("/configure").post(configure_capture);
```
src/bin/webserver.rs (Outdated)
```rust
webapp.at("/").serve_dir("frontend/dist").unwrap();

// Start up the webapp.
webapp.listen("127.0.0.1:8080").await?;
```
Suggested change:

```diff
-webapp.listen("127.0.0.1:8080").await?;
+webapp.listen("tcp://0.0.0.0:8080").await?;
```
```rust
}

let samples: f32 = SAMPLE_RATE_HZ * config.capture_duration_secs;
if samples > usize::MAX as f32 {
```
`usize::MAX` isn't really a practical limit.

Suggested change:

```diff
-if samples > usize::MAX as f32 {
+if samples > 100e6 {
```
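The check above can be pulled into a small validator. The sample rate and the 100e6 cap here are illustrative assumptions, not values from the PR; the point is rejecting impractical captures well before `usize::MAX`:

```rust
// Assumed values for illustration only.
const SAMPLE_RATE_HZ: f32 = 500e3;
const MAX_SAMPLES: f32 = 100e6;

/// Convert a requested capture duration into a sample count, rejecting
/// requests that exceed a practical limit.
fn requested_samples(capture_duration_secs: f32) -> Result<usize, &'static str> {
    let samples = SAMPLE_RATE_HZ * capture_duration_secs;
    if !samples.is_finite() || samples < 0.0 {
        return Err("invalid capture duration");
    }
    if samples > MAX_SAMPLES {
        return Err("capture too large");
    }
    Ok(samples as usize)
}
```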
```js
for (var i = 0; i < traces.length; i += 1) {
    const line = createLineStrip(this.cg, times, traces[i].data, {
        colors: colors[i],
        widths: 3,
```
Default width no good? This should be just as wide as e.g. the axis lines.
```rust
// The stored data.
data: Vec<T>,

// The maximum number of data points stored. Once this level is hit, data will begin
```
Hmm. So many comments here but none in the frontend (the jsx).
This PR adds a livestream viewer based on a Rust-based receiver + backend server along with a JS-based frontend. The server is started on `localhost:8080`. NPM must be installed first. Once installed, just run `cargo run --bin webserver --release` to get going.

Sample output: (screenshot omitted)
This server is capable of running continuously. However, if larger captures (e.g. > 100ms) are requested, data will start being dropped at the receiver because of excessive serialization/transfer times required to send data to the frontend.
This is intended as an initial prototype and is not necessarily pretty, but demonstrates the feasibility and limitations of this approach.