This repository has been archived by the owner on Jul 8, 2022. It is now read-only.

Statistics are showing wrong resolution(um/voxel) #294

Open
chevtche opened this issue May 31, 2016 · 17 comments

chevtche commented May 31, 2016

In channel.cpp (at line 392) the calculation
`Vector3f voxelSize = info.boundingBox.getSize() / info.voxels`
is wrong.

info.boundingBox is the bounding box of the circuit, but in Livre we add a cutoff distance around this bounding box when voxelizing. info.voxels therefore counts voxels over a larger region than info.boundingBox covers, so the quotient does not give the real voxel size.

chevtche added the bug label May 31, 2016

bilgili commented May 31, 2016

This is not "wrong", it is "invalid" for the Fivox data source only. The cutoff distance is only meaningful for this data source. We have to think about this carefully.

@chevtche (Contributor, Author)

The resolution in um/voxel is only used by Fivox. I don't think it's used for any other data source?


bilgili commented Jun 30, 2016

What does this have to do with the cutoff distance? :)

@chevtche (Contributor, Author)

Because now our total bounding box is equal to the circuit bounding box + 2 * cutoff distance.


bilgili commented Jun 30, 2016

But what does this have to do with Livre? :) Livre has no concept of a cutoff distance. This problem belongs in Fivox: separate the cutoff distance from the volume size!

@chevtche (Contributor, Author)

Livre has nothing to do with resolution values like um/voxel; it's pure Fivox stuff.
That's why we can simply remove this statistic.


bilgili commented Jun 30, 2016

ok with that :)


eile commented Jul 1, 2016

Not ok with that. Livre does (and has to) know the resolution values. They are not only for the overlay but, more importantly, for the camera synchronization.


bilgili commented Jul 1, 2016

Through the data source, yes. But the report over there is pure wrong: it is only valid for some use cases. The synchronization of reference frames between applications is not present, since we have concentrated on the camera only.


eile commented Jul 1, 2016

"pure wrong" is a string statement. It's useful for data sources which have a real-world size. Maybe it should not be printed for the ones which don't.

I don't get 'reference frames'. The ZeroEQ LookOut is defined in meters, therefore the real-world size is important. We can assume a default of a one-meter cube for the undefined ones.


bilgili commented Jul 1, 2016

I am wrong, you are right. Re-checking the code, I had mixed it up with worldSize. I thought it was the normalized space, but we have a separate value for defining the space; if it were the normalized space, it would be wrong. By 'reference frame' I meant this one.


chevtche commented Jul 4, 2016

Ok, to summarize the problems I found:

  1. info.boundingBox is the bounding box of the circuit. It is needed for camera synchronization,
    and currently only for that. info.voxels is the total number of voxels (including the ones
    added by the cutoff distance), so it doesn't match info.boundingBox. Because of that,
    info.boundingBox.getSize() / info.voxels doesn't always give the right resolution.

  2. This resolution information is only valid for some data sources, like BBIC and
    Fivox. MemoryDataSource, UVF and Cubist don't provide any real-world measurements.
    Assuming anything for a data source like UVF is just wrong (a vein with a 2 m radius?).

  3. What do we want to show exactly? The printed resolution is actually the maximum
    possible resolution, not the current one. Since Livre uses out-of-core LODs, blocks with
    different resolutions are loaded at the same time.

To me the right thing to do seems to be to add a Vector3f resolution field to the VolumeInformation class and initialize it to negative values. If a data source provides this information we show it; if not, we don't show anything.


eile commented Jul 4, 2016

  1. Sounds correct, except that it is not used correctly for camera sync today. This bug is so old that it grew roots.

  2. In the absence of real-world data, one has to assume a default. This can't be wrong, as you are already doing it in any case. What is technically wrong today is that Livre scales everything to be in meters, even when it is not. (UVF might have the info?)

  3. The real-world size of a single voxel at the highest LOD.


chevtche commented Jul 4, 2016

  1. Yes. It's actually not used at all for camera sync yet :(

  2. "This can't be wrong, as you are doing it already in any case": we are not doing that... The only thing that can be computed is the meters per Livre unit, and for that we need a valid data bounding box provided by Brion (or BBIC?). The UVF, memory and Cubist data sources are completely agnostic of real-world units.

  "Livre scales everything to be in meters": I don't understand this statement. Livre itself is agnostic of real-world units. The resolution printing is the only place where real-world units are used. Even during camera synchronization the real-world unit cancels out, because we are only interested in a ratio.

Ahmet told me that UVF doesn't have the required information, and for UVF it would be very wrong to assume anything. If we use a one-meter cube as the default and look at a human cell volume, the resolution print will tell the user that the cell is one meter big. What is the point of lying like that? It can only confuse people IMHO...

  3. Yes. Maybe we can say that explicitly?


eile commented Jul 4, 2016

  1. When running Livre in a VR setting, the user sees data at a certain scale. Today all data is scaled to be two meters.

  2. um/voxel says how many um a voxel is? What do you have in mind?


chevtche commented Jul 4, 2016

  1. Ok. That's something new I was not aware of... But I think it's another problem.

After a discussion with Juan, Daniel and Jafet, we agreed that the best solution would be to compute the transformation matrix between the data space and the application space in the data source and use it to multiply the camera.


chevtche commented Jul 4, 2016

  1. Maybe write "maximum resolution"... but I am not sure it is actually needed.

3 participants