Send files #11

Closed
wants to merge 7 commits into from

Conversation

@sigmaSd (Contributor) commented Oct 21, 2020

This PR adds the ability to send files with the "?send" command.

Notes:

  • Sending large files will hang (since the file is first read into memory) with no indication of what's happening; this needs to improve (see the chunked-reading sketch below)
  • It's based on the handle_utf8 branch; I'll fix this later, I just put it up for discussion

It works well! And the errors are actually quite nice.
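
For the first note, here is a minimal sketch of what chunked reading could look like (the 16 KB chunk size and the send_chunk callback are hypothetical stand-ins, not this PR's actual code):

use std::fs::File;
use std::io::{BufReader, Read};

// Stream a file in fixed-size chunks instead of reading it fully into
// memory. `send_chunk` stands in for whatever network call ends up
// being used (e.g. something like network.send_all).
fn send_file(path: &str, mut send_chunk: impl FnMut(&[u8])) -> std::io::Result<()> {
    let mut reader = BufReader::new(File::open(path)?);
    let mut buf = [0u8; 16 * 1024]; // 16 KB per chunk (arbitrary choice)
    loop {
        let n = reader.read(&mut buf)?;
        if n == 0 {
            break; // EOF
        }
        send_chunk(&buf[..n]);
    }
    Ok(())
}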

@sigmaSd (Contributor, Author) commented Oct 22, 2020

Hi @lemunozm,

I'm trying to implement the ability to send data in chunks; the problem is that not all the data sent is received.

Basically, if you count how many times network.send_all is called and how many times the UserData event is received, you find a huge difference.

Maybe you have some insight into this issue: sigmaSd@df4c5c8

@sigmaSd (Contributor, Author) commented Oct 22, 2020

It's as if there is a maximum capacity for sending data: after sending a big file (like 10 MB), not even messages get sent. Receiving still works, though.

@lemunozm (Owner) commented

Hi @sigmaSd, thanks for your work on this feature.

Maybe there is an encoding error with large data in message-io. I will investigate it if you don't find any problem in the PR.

I will have time (and a computer) next week to do deep reviews and test all the things we have talked about that are pending.

Thanks again!

@sigmaSd (Contributor, Author) commented Oct 22, 2020

Ok!

For now, this is what I found:

@lemunozm (Owner) commented

Thanks for the info :)

I have some pending tests in message-io, some of them related to large messages. If I remember correctly, there is an input buffer of 16 KB that could be related.

Did you notice a threshold beyond which the data starts getting lost?

@sigmaSd (Contributor, Author) commented Oct 22, 2020

For the second problem, I think sending large data makes mio panic (stack overflow), which makes the controller mutex panic and poisons the lock. You can reproduce it with:

fn main() {
    use std::io::Read;
    use std::io::Write;

    let c = mio::net::TcpListener::bind(([0, 0, 0, 0], 0).into()).unwrap();

    let la = c.local_addr().unwrap();

    let mut a = mio::net::TcpStream::connect(la).unwrap();
    // ~6 MB array allocated on the stack: this is what overflows it.
    let data = [1; 1024 * 6000];
    a.write_all(&data).unwrap();

    // A second ~6 MB stack array for the receiving side.
    let mut b = [0; 1024 * 6000];

    let mut r = c.accept().unwrap().0;

    loop {
        match r.read(&mut b) {
            Ok(n) => {
                if n == 0 {
                    break;
                }
            }
            // Non-blocking socket: WouldBlock is swallowed, so we spin.
            Err(_) => (),
        }
    }
    dbg!(b.last());
}

The annoying problem is the first one: the stream gets exhausted somehow and no errors are shown.
I'm not sure about the threshold for this one; if I try a 10 MB file it writes 1 MB, and if I try a 200 MB file it writes 40 KB.

@sigmaSd (Contributor, Author) commented Oct 22, 2020

The second problem is just that the variable is too large for the stack; it needs to be on the heap, so this is not a real issue.
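
For reference, the fix is just to move the buffers to the heap; in the snippet above only the two buffer lines need to change:

// Vec stores its contents on the heap, so the ~6 MB buffers no longer
// overflow the (typically 8 MB default) thread stack:
let data = vec![1u8; 1024 * 6000];
let mut b = vec![0u8; 1024 * 6000];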

@sigmaSd (Contributor, Author) commented Oct 22, 2020

Maybe it's something with mio. Consider this:

//use std::net::*; // toggle between std and mio to compare behavior
use mio::net::*;
fn main() {
    use std::io::Read;
    use std::io::Write;

    let c = TcpListener::bind(std::net::SocketAddr::new([0, 0, 0, 0].into(), 0)).unwrap();

    let la = c.local_addr().unwrap();

    let mut a = TcpStream::connect(la).unwrap();
    let data = [1; 1024];
    //let data = [1; 1];
    // Writer thread: sends 1 KB blocks in a tight loop forever.
    std::thread::spawn(move || loop {
        a.write_all(&data).unwrap();
    });

    let mut b = [0; 1024];

    let mut r = c.accept().unwrap().0;

    loop {
        match r.read(&mut b) {
            Ok(n) => {
                dbg!(n);
                if n == 0 {
                    break;
                }
            }
            // Read errors (including WouldBlock) are ignored; keep polling.
            Err(_) => {}
        }
    }
    dbg!(b.last());
}

With mio, it stops after ~2800 writes; with std, it runs indefinitely, as expected.

@sigmaSd (Contributor, Author) commented Oct 22, 2020

Wrong lead: it was just the writer thread crashing on WouldBlock.
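
For context: on a non-blocking stream, write_all returns ErrorKind::WouldBlock once the kernel send buffer fills up, and the unwrap() in the spawned thread turns that into a panic. Since write_all cannot report how much was written before the error, a WouldBlock-aware sender has to track its own offset. A minimal sketch using only std types (write_nonblocking is a hypothetical helper):

use std::io::{ErrorKind, Write};

// Write all of `data`, resuming from the right offset after WouldBlock.
fn write_nonblocking<W: Write>(w: &mut W, data: &[u8]) -> std::io::Result<()> {
    let mut written = 0;
    while written < data.len() {
        match w.write(&data[written..]) {
            Ok(0) => return Err(ErrorKind::WriteZero.into()),
            Ok(n) => written += n,
            Err(e) if e.kind() == ErrorKind::WouldBlock => {
                // Send buffer full: in real code, wait for a WRITABLE
                // event here instead of busy-spinning.
                std::thread::yield_now();
            }
            Err(e) => return Err(e),
        }
    }
    Ok(())
}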

@lemunozm (Owner) commented Oct 23, 2020

Thanks @sigmaSd. Mio uses non-blocking streams: sending big data saturates the output stream, and the OS notifies the sender to wait until the data can be sent.

I have to make some non-trivial changes in message-io to fix it :S

Sorry for the inconvenience.
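
A minimal sketch of that waiting mechanism, assuming the mio 0.7 API with the "os-poll" and "net" features enabled (this is not message-io's actual implementation):

use std::io::{ErrorKind, Write};
use mio::net::TcpStream;
use mio::{Events, Interest, Poll, Token};

// Instead of spinning on WouldBlock, register the stream for WRITABLE
// events and let the OS say when the send buffer has room again.
fn send_all(stream: &mut TcpStream, data: &[u8]) -> std::io::Result<()> {
    let mut poll = Poll::new()?;
    poll.registry().register(stream, Token(0), Interest::WRITABLE)?;
    let mut events = Events::with_capacity(8);

    let mut written = 0;
    while written < data.len() {
        match stream.write(&data[written..]) {
            Ok(0) => return Err(ErrorKind::WriteZero.into()),
            Ok(n) => written += n,
            Err(e) if e.kind() == ErrorKind::WouldBlock => {
                // Block until the OS reports the socket writable again.
                poll.poll(&mut events, None)?;
            }
            Err(e) => return Err(e),
        }
    }
    Ok(())
}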

@sigmaSd (Contributor, Author) commented Oct 23, 2020

I did some debugging with gdb.

After saturating the network with a 10 MB file:

https://github.com/lemunozm/message-io/blob/master/src/network.rs#L62 still gets called correctly
https://github.com/lemunozm/message-io/blob/master/src/network_adapter.rs#L344 still gets called

But somehow the closure here doesn't get called anymore: https://github.com/lemunozm/termchat/blob/master/src/application.rs#L60

@sigmaSd (Contributor, Author) commented Oct 23, 2020

The issue is here: https://github.com/lemunozm/message-io/blob/master/src/encoding.rs#L100. decoded_data always becomes None.

@lemunozm (Owner) commented

Hi @sigmaSd, I see that this PR contains parts of #10. Could you rebase to clean up the PR?
Thanks!

@sigmaSd (Contributor, Author) commented Oct 27, 2020

Yes, sure, but I'd prefer that you take a look at the file-sending issue first, since there might be an issue in message-io that should be fixed before that.

To recap:

@sigmaSd (Contributor, Author) commented Oct 27, 2020

Also, I'm using this branch: https://github.com/sigmaSd/termchat/tree/send_files

@sigmaSd (Contributor, Author) commented Oct 27, 2020

For the second issue, I think this is the problem: https://github.com/lemunozm/message-io/blob/master/src/encoding.rs#L11

Serializing the data length manually is problematic: when the data length is greater than u16::MAX, all that padding logic won't work correctly (it silently overflows).
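
To illustrate, a minimal sketch of a length-prefixed frame with a fixed 4-byte u32 header (encode_frame/decode_frame are hypothetical names, not message-io's encoding):

use std::convert::{TryFrom, TryInto};

// Frame layout: 4-byte little-endian u32 length, then the payload.
// A u32 prefix supports payloads up to ~4 GB, so a 10 MB file no
// longer overflows the way a u16 prefix silently does.
fn encode_frame(payload: &[u8]) -> Vec<u8> {
    let len = u32::try_from(payload.len()).expect("payload too large for u32 prefix");
    let mut frame = Vec::with_capacity(4 + payload.len());
    frame.extend_from_slice(&len.to_le_bytes());
    frame.extend_from_slice(payload);
    frame
}

// Returns (payload, remaining bytes) once a full frame has arrived,
// or None if more data is still needed.
fn decode_frame(buf: &[u8]) -> Option<(&[u8], &[u8])> {
    let len_bytes: [u8; 4] = buf.get(..4)?.try_into().ok()?;
    let len = u32::from_le_bytes(len_bytes) as usize;
    let payload = buf.get(4..4 + len)?;
    Some((payload, &buf[4 + len..]))
}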

@sigmaSd (Contributor, Author) commented Oct 29, 2020

I opened an issue on message-io (lemunozm/message-io#3); I'll reopen this PR later.
