
Remote cube loading bugs discussion #31

Closed
Optiligence opened this issue Sep 9, 2014 · 20 comments

Comments

@Optiligence
Member

When one loads a remote dataset, some cubes eventually don't get loaded.
At the same time, "FTP thread should exit" is displayed in the console.
If a download request was not successful, it should be retried, e.g. up to 3 times in total.
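
A bounded retry along these lines might look like the following sketch. This is Python for illustration only; KNOSSOS itself is C++/Qt, and `fetch_cube` and `download` are hypothetical names, not existing KNOSSOS functions:

```python
def fetch_cube(url, download, max_attempts=3):
    """Try to download one cube, retrying on failure.

    `download` is a hypothetical callable that returns the cube's raw
    bytes or raises an exception when the transfer fails.
    """
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return download(url)
        except IOError as exc:
            last_error = exc
            print(f"cube download failed (attempt {attempt}/{max_attempts}): {url}")
    # Give up after max_attempts; the caller can mark the cube for
    # re-queueing, e.g. when it re-enters the supercube.
    raise last_error
```

With three attempts in total, a transient failure is absorbed without turning a dead connection into an endless retry storm.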

@Optiligence Optiligence added the bug label Sep 9, 2014
@Optiligence Optiligence added this to the v4.1 milestone Sep 9, 2014
@jmrk84
Member

jmrk84 commented Sep 9, 2014

Related, but maybe not the same bug: my problem reported last night by email is still not resolved. It is actually really strange what's going on. It worked in one network, but does not work in two others. It looks like the network traffic of different cubes gets mixed up.
[screenshot attachment: untitled-1]

@my-tien
Member

my-tien commented Sep 9, 2014

@jmrk84's bug: As far as I can see, the suggested possibility that cube deletion causes this appears unlikely. If cube files were deleted that should not have been removed, the result would simply be missing cubes. But we see cubes at the wrong positions, which suggests that some file contents don't correspond to their filenames. That would also fit with cube traffic getting mixed up over the network.

@Optiligence
Member Author

If you can reproduce it that easily, please try with Knossos 4.0 and Knossos 3.

@Optiligence
Member Author

Also, did you try playing with the segmentation alpha slider, or disabling segmentation completely?

@jmrk84
Member

jmrk84 commented Sep 9, 2014

I doubt it has anything to do with the segmentation; it worked perfectly in the hotel network. However, for Fabi it works in this network here as well, which makes it hard to understand. I can reproduce it.

@Optiligence
Member Author

@my-tien If you read from invalid file handles, who knows what's going to happen.
@jmrk84 Please rule it out. Reliable information is what counts.

@Optiligence
Member Author

@orenshatz They stay black forever if one does not move them out of the supercube and back in.
The 3 retries were just a suggestion in case it is in fact a download problem.
After my short time inspecting the code, I couldn't find out under what circumstance "FTP thread should exit" is reached; maybe you can shed some light on this.

@orenshatz
Contributor

It's probably because of the loader timeout on the FTP thread, which means the FTP download got stuck indefinitely; probably a server request timeout.
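
A per-request timeout is the usual guard against such a stuck transfer: the download fails fast and can be rescheduled instead of blocking the loader thread indefinitely. A minimal sketch, in Python for illustration only (the real KNOSSOS loader uses its own FTP code; `download_with_timeout` is a hypothetical helper):

```python
import socket
import urllib.error
import urllib.request

def download_with_timeout(url, timeout_s=10.0):
    """Fetch one cube, but fail fast if the server stalls.

    Returns the payload bytes, or None on a timed-out or failed
    transfer so the caller can schedule a retry instead of hanging.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            return resp.read()
    except (socket.timeout, urllib.error.URLError):
        return None  # stuck or broken transfer: treat as retryable
```

Returning a sentinel rather than raising keeps the distinction between "cube failed, retry later" and a hard programming error visible at the call site.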

@orenshatz
Contributor

How do we proceed on this?
I have never had this problem while working on 4.0 so far.
Can you send me a conf file where it usually happens, so I can debug this?

@jmrk84
Member

jmrk84 commented Sep 10, 2014

I'll look at it back in Heidelberg tomorrow; it was reproducible with all streaming datasets and all compression types.

@orenshatz
Contributor

But hang on, this was not an issue with 3.0 back then, right?
I am just trying to figure out whether this is machine-specific or version-specific, or simply wasn't tested thoroughly so far.
Norbert, can you try to reproduce this with 3.0 under the same conditions as when you see it with 4.0?

@Optiligence
Member Author

It appears pretty quickly if you slowly drag the dataset around.
I downloaded the 3.4.2 installer, but I am still not able to load a remote dataset.

@jmrk84
Member

jmrk84 commented Sep 10, 2014

I also tried it briefly with 3.4 but forgot how to set it up for remote (i.e. it never showed anything but black).

@orenshatz
Contributor

But the conf file should be enough...(?)

@jmrk84
Member

jmrk84 commented Sep 11, 2014

I tested it again on this machine on the institute network, and it is fully reproducible with 4.0.1 and 4.1 alpha on my laptop on Win64. It works just fine on my workstation. It must be machine-specific.

@jmrk84
Member

jmrk84 commented Sep 11, 2014

However, that some cubes remain black with streaming even after a while is reproducible on all machines and needs a fix. Ideally, one detects that there is a currently working streaming dataset (to avoid crazy retries) and then performs a second try in case a cube failed (but not an infinite loop of retries that brings everything down in case of some general issue). It would also be interesting to understand why some cubes fail at all; this should not happen with TCP/IP... maybe we have an issue with overly strict timeouts now?
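
The policy proposed above (one retry per cube, gated on the stream being generally healthy) could be sketched as follows. Everything here is illustrative, in Python rather than KNOSSOS's C++, and none of these names exist in the codebase:

```python
class StreamingRetryPolicy:
    """Retry a failed cube once, but only while downloads generally succeed.

    `recent_results` tracks the outcomes of the last few downloads; if most
    of them failed (e.g. no internet connection at all), we stop retrying
    instead of hammering the server with attempts that always fail.
    """

    def __init__(self, window=20, min_success_ratio=0.5):
        self.window = window
        self.min_success_ratio = min_success_ratio
        self.recent_results = []   # True = success, False = failure
        self.retried = set()       # cube ids already given their one retry

    def record(self, success):
        """Remember the outcome of one download, keeping a sliding window."""
        self.recent_results.append(success)
        del self.recent_results[:-self.window]

    def stream_is_healthy(self):
        if not self.recent_results:
            return True  # no evidence yet: assume the stream works
        ok = sum(self.recent_results)
        return ok / len(self.recent_results) >= self.min_success_ratio

    def should_retry(self, cube_id):
        # one retry per cube, and only while the stream looks alive
        if cube_id in self.retried or not self.stream_is_healthy():
            return False
        self.retried.add(cube_id)
        return True
```

The sliding window is one simple way to operationalize "a currently working streaming dataset"; the window size and success ratio are made-up tuning knobs.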

@orenshatz
Contributor

But I never had this staying-black behavior before; can you please try with 3.0?
Maybe something has changed on the server?

What do you mean by "working streaming dataset"?

@jmrk84
Member

jmrk84 commented Sep 11, 2014

"Working streaming dataset": a dataset that currently streams, i.e. data actually comes in, as opposed to somebody sitting somewhere without a working internet connection trying a streaming dataset, where millions of retries are attempted that always fail.
I tried 3.4.2 but was not able to use it, since the obvious way (loading a streaming conf with the python launcher and then starting knossos) did not work. I have not attempted to debug this so far.

@orenshatz
Contributor

Okay.
I would say that so far the total failure was on Win8.1 and on your 64-bit laptop - is it also Win8?
Maybe a different bug should be opened for the total failure, and this one kept only for the partial load.

@jmrk84
Member

jmrk84 commented Sep 11, 2014

My laptop is Win8.1 64-bit and the only machine where this problem has shown up so far. I agree that we should have two separate reports; I just thought initially that they might be related, but I don't think so anymore.

@Optiligence Optiligence changed the title Streaming cubes are not loaded properly Remote cube loading bugs discussion Sep 18, 2014
@Optiligence Optiligence removed this from the v4.1 milestone Sep 18, 2014