
Move internal DiscretisedDensity origin to vendor defined origin #223

Open
ashgillman opened this issue Aug 14, 2018 · 6 comments
@ashgillman (Collaborator)

For latest info on this, see:
https://github.com/UCL/STIR/wiki/Proposals-for-new-features-major-changes#vendor-based-coordinate-system

I will attempt to keep it in line with the latest developments and discussion.

The proposal is to move STIR's reference system for images in line with the vendor's system. This would mean that reconstructed images can be directly compared with vendor reconstructions. The main change required is for DiscretisedDensity to be able to express voxel locations in DICOM-standard LPS space. However, this in turn requires the internal origin of DiscretisedDensity to align with the vendor origin.
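The gantry-to-bed mapping this implies can be sketched as below. This is a hypothetical illustration only: the names (`vendor_origin_z_in_gantry`, `bed_position`) and the assumption that the mapping is a pure z translation are mine, not STIR API.

```cpp
#include <cassert>

// Hypothetical sketch of the gantry <-> bed mapping: Bed space is Gantry
// space translated along z by the vendor origin's position in the gantry
// plus the current bed position. All names are illustrative, not STIR API.
struct Point { float x, y, z; };

float bed_to_gantry_z_offset(float vendor_origin_z_in_gantry, float bed_position)
{
  return vendor_origin_z_in_gantry + bed_position;
}

Point point_gantry_to_bed(const Point& p, float vendor_origin_z_in_gantry, float bed_position)
{
  // a pure bed translation only changes the axial (z) component
  return { p.x, p.y, p.z - bed_to_gantry_z_offset(vendor_origin_z_in_gantry, bed_position) };
}

Point point_bed_to_gantry(const Point& p, float vendor_origin_z_in_gantry, float bed_position)
{
  return { p.x, p.y, p.z + bed_to_gantry_z_offset(vendor_origin_z_in_gantry, bed_position) };
}
```

The two functions are exact inverses, which is the property the proposal's "plus inverse functions" bullet would require.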

Proposed changes

  • ProjDataInfo.h

    • Private
      • float bed_position
      • TODO: float bed_height? Or Coordinate bed_position?
    • Public
      • ONE OF:
        • LOR ProjDataInfo::lor_gantry_to_bed(LOR lor) Maps a LOR (i.e., encoded
          via s, ϕ, t, θ) from Gantry space with origin at middle of scanner
          to Bed space with origin at vendor origin.
        • BasicCoordinate ProjDataInfo::point_gantry_to_bed(BasicCoordinate point) Maps a point (i.e., encoded by x, y, z) from Gantry space with
          origin at middle of scanner to Bed space with origin at vendor
          origin.
        • Initially, these functions will simply apply an offset to t and
          z based on Scanner::default_vendor_origin_in_gantry_space
          and ProjDataInfo::bed_position.
        • Plus inverse functions.
      • OR
        • float ProjDataInfo::get_s_bed(Bin bin)
        • float ProjDataInfo::get_phi_bed(Bin bin)
        • float ProjDataInfo::get_t_bed(Bin bin)
        • float ProjDataInfo::get_m_bed(Bin bin)
        • float ProjDataInfo::get_theta_bed(Bin bin)
        • etc.
      • OR
        • LOR ProjDataInfo::get_LOR(Bin bin)
        • LOR ProjDataInfo::get_LOR_in_bed_space(Bin bin)
        • float LOR::get_s()
        • etc.
  • Everything inheriting will have to be checked for z offsets.

  • Scanner.h

    • BasicCoordinate default_vendor_origin_in_gantry_space
  • ProjMatrixByBinUsingRayTracing

    • calculate_proj_matrix_elems_for_one_bin()
      • Where s, ϕ, etc. are calculated, we will then convert this into the bed space
  • Other ProjMatrixByBin

    • Likely will have to change on a case-by-case basis.
  • DiscretisedDensity

    • New member: CartesianCoordinate DiscretisedDensity::get_DICOM_LPS_for_indices(BasicCoordinate idx).
  • ScatterEstimationByBin

    • This does its own manual projection in integral_between_2_points:

      ```cpp
      const float z_to_middle =
        (image.get_max_index() + image.get_min_index())*voxel_size.z()/2.F;
      origin.z() -= z_to_middle;
      /* TODO replace with image.get_index_coordinates_for_physical_coordinates */
      ProjMatrixElemsForOneBin lor;
      RayTraceVoxelsOnCartesianGrid(lor,
                                    (scatter_point-origin)/voxel_size, // should be in voxel units
                                    (detector_coord-origin)/voxel_size, // should be in voxel units
                                    voxel_size, // should be in mm
      #ifdef NEWSCALE
                                    1.F // normalise to mm
      #else
                                    1/voxel_size.x() // normalise to some kind of 'pixel units'
      #endif
                                    );
      lor.sort();
      float sum = 0; // add up values along LOR
      {
        ProjMatrixElemsForOneBin::iterator element_ptr = lor.begin();
        bool we_have_been_within_the_image = false;
        while (element_ptr != lor.end())
          {
            const BasicCoordinate<3,int> coords = element_ptr->get_coords();
            if (coords[1] >= image.get_min_index() &&
                coords[1] <= image.get_max_index() &&
                coords[2] >= image[coords[1]].get_min_index() &&
                coords[2] <= image[coords[1]].get_max_index() &&
                coords[3] >= image[coords[1]][coords[2]].get_min_index() &&
                coords[3] <= image[coords[1]][coords[2]].get_max_index())
              {
                we_have_been_within_the_image = true;
                sum += image[coords] * element_ptr->get_value();
              }
            else if (we_have_been_within_the_image)
              {
                // we jump out of the loop as we are now at the other side of
                // the image
                // break;
              }
            ++element_ptr;
          }
      }
      ```

      and uses a z offset. Also, beforehand in sample_scatter_points():

      ```cpp
      const float z_to_middle =
        (image.get_max_index() + image.get_min_index())*voxel_size.z()/2.F;
      origin.z() -= z_to_middle;
      this->scatter_volume = voxel_size[1]*voxel_size[2]*voxel_size[3];
      if (this->random)
        { // Initialize Pseudo Random Number generator using time
          srand((unsigned)time(NULL));
        }
      this->scatt_points_vector.resize(0); // make sure we don't keep scatter points from a previous run
      this->scatt_points_vector.reserve(1000);
      // coord[] is in voxel units
      for (coord[1]=min_index[1]; coord[1]<=max_index[1]; ++coord[1])
        for (coord[2]=min_index[2]; coord[2]<=max_index[2]; ++coord[2])
          for (coord[3]=min_index[3]; coord[3]<=max_index[3]; ++coord[3])
            if (attenuation_map[coord] >= this->attenuation_threshold)
              {
                ScatterPoint scatter_point;
                scatter_point.coord = convert_int_to_float(coord);
                if (random)
                  scatter_point.coord +=
                    CartesianCoordinate3D<float>(random_point(-.5,.5),
                                                 random_point(-.5,.5),
                                                 random_point(-.5,.5));
                scatter_point.coord = voxel_size*scatter_point.coord + origin;
                scatter_point.mu_value = attenuation_map[coord];
                this->scatt_points_vector.push_back(scatter_point);
              }
      ```
    • These could use something like:
      • proj_data_info_ptr->image_bed_to_gantry(scatter_point)/voxel_size,
      • proj_data_info_ptr->image_bed_to_gantry(detector_coord)/voxel_size,
        eliminating the need for origin.
  • Symmetries

    • TODO
  • zoom.cxx: zooms around the image origin and will have to be changed.
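The proposed DiscretisedDensity::get_DICOM_LPS_for_indices could, under the simplifying assumption that image axes are aligned with the patient axes up to a per-axis sign, look something like this sketch. All types and member names here are illustrative stand-ins, not the actual STIR classes.

```cpp
#include <cassert>

// Hypothetical sketch of the proposed
// DiscretisedDensity::get_DICOM_LPS_for_indices(): map voxel indices to
// DICOM LPS (left/posterior/superior, in mm) coordinates. Assumes the image
// axes are patient-aligned up to a per-axis sign; all types and members
// below are illustrative stand-ins, not STIR classes.
struct Coord3 { float x, y, z; };

struct ImageGeometry
{
  Coord3 origin;      // LPS position (mm) of voxel (0,0,0), w.r.t. the vendor origin
  Coord3 voxel_size;  // voxel dimensions in mm
  Coord3 axis_sign;   // +1/-1: whether each index axis runs with or against the LPS axis
};

Coord3 get_DICOM_LPS_for_indices(const ImageGeometry& g, int ix, int iy, int iz)
{
  return { g.origin.x + g.axis_sign.x * g.voxel_size.x * ix,
           g.origin.y + g.axis_sign.y * g.voxel_size.y * iy,
           g.origin.z + g.axis_sign.z * g.voxel_size.z * iz };
}
```

A full solution would replace the per-axis sign with a direction matrix (as DICOM's Image Orientation (Patient) does), but the sign-only version matches the "flipping/swapping only" restriction discussed later in this thread.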

@ashgillman (Collaborator, Author)

See also SyneRBI/SIRF#193 and SyneRBI/SIRF#198

@ashgillman (Collaborator, Author)

My initial approach will be as follows:

  1. Do a normal recon.
  2. Redefine the origin in DiscretisedDensity.
  3. Introduce a shift to the image (this will eventually become a shift to the vendor origin, but just a small shift for now).
  4. Modify the projector so that the resulting image is identical in voxel space.

In order to achieve (2/3), I need to modify the relevant VoxelsOnCartesianGrid constructor.
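The invariant behind steps 2-4 can be stated with a toy sketch: the projector converts a physical point to (float) index coordinates via the image origin, so shifting the origin while applying the same shift to the projector's physical points leaves voxel indices unchanged. This is illustrative code only, not STIR's actual projector.

```cpp
#include <cassert>

// Toy sketch of the invariant behind steps 2-4: if the image origin is
// shifted and the projector applies the same shift to its physical points
// before the physical-to-index conversion, the resulting voxel indices
// are unchanged. Illustrative only.
float index_for_physical_z(float phys_z, float origin_z, float voxel_size_z)
{
  return (phys_z - origin_z) / voxel_size_z;
}
```

Checking `index_for_physical_z(z + shift, origin + shift, v) == index_for_physical_z(z, origin, v)` for a few values is essentially the "resulting image is identical in voxel space" test of step 4.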

@ashgillman (Collaborator, Author)

Question:
What should the "default" frame-of-reference point be? i.e., if we are unable to determine the vendor frame-of-reference point, we can either default to "middle of first ring" or "middle of scanner". The former means that images for unsupported scanners would be unchanged. The latter is probably a bit nicer, as it means we have the same frame of reference as our projection data.

@ashgillman (Collaborator, Author)

@rijobro @KrisThielemans

Following some discussion today:

https://github.com/InsightSoftwareConsortium/ITK/blob/master/Modules/Core/Common/src/itkSpatialOrientationAdapter.cxx#L34
I looked through this function, which the OrientImageFilter uses to determine input orientation (I also checked the code; it does set the input, which is phrased incorrectly in the documentation for the filter). It looks like it will encode an identity Direction matrix to RAI.

I can't find any documentation on what ITK officially uses. I did find some emails saying LPS, e.g., here: https://itk.org/pipermail/insight-users/2009-April/029874.html

Then I also found this quote:

> The InsightSNAP tool in InsightApplications (which has a great IO loader by the way), uses the following: When it says LPS it means the x axis is RUNNING FROM left to right, the y axis is running from Posterior to Anterior, the z axis is running from Superior to Inferior.
>
> This is exactly the same as what DICOM means when it says RAI.

here https://itk.org/Wiki/Proposals:Orientation#Current_ITK_Usage_and_sources_of_confusion

Perhaps they are saying "ITK is LPS, but ITK encodes the negative of the identity direction matrix to LPS because we make our direction matrix point the direction that counting comes from"
That would be bizarre, but I think it might be the case.

@ashgillman (Collaborator, Author)

ashgillman commented Sep 12, 2018

Indeed
https://github.com/InsightSoftwareConsortium/ITK/blob/master/Modules/Core/Common/include/itkSpatialOrientation.h#L35-L43

> Coordinate orientation codes have a place-value organization such that an ImageDimension-al sequence of subcodes says both which varies fastest through which varies slowest, but also which end of the frame of reference is considered zero for each of the coordinates. For example, 'RIP' means Right to Left varies fastest, then Inferior to Superior, and Posterior to Anterior varies the slowest.

I've never liked what a south-easterly breeze meant, and that feeling translates into ITK's Direction matrices.

@ashgillman (Collaborator, Author)

@KrisThielemans @rijobro and I had further discussions on this topic today.

  • DiscretisedDensity should be defined w.r.t. vendor defined origin.
  • DiscretisedDensity should convert bed coords and LPS coords using only direction flipping/swapping.
  • ProjDataInfo should represent LOR image-space locations directly in Bed coordinates.
  • DiscretisedDensity should convert Bed coordinates to Index coordinates (float) for projector.
  • DiscretisedDensity axes should be defined w.r.t. vendor axes, and may not be aligned with the gantry.
    • Therefore, DiscretisedDensity needs to encode axes directions.
  • Projectors will still require sampling aligned with gantry axes, these will need to be checked.
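The "direction flipping/swapping only" conversion between bed and LPS coordinates in the second bullet amounts to a signed axis permutation (no scaling or shear). A minimal sketch, where both the encoding (source axis plus sign per output axis) and the example permutation are hypothetical, not a statement of STIR's convention:

```cpp
#include <array>
#include <cassert>

// Sketch of a conversion that uses only direction flipping/swapping: bed
// coordinates relate to LPS coordinates through a signed axis permutation
// (no scaling or shear). The encoding below (source axis + sign per output
// axis) and the example permutation are hypothetical, not STIR's convention.
struct SignedPermutation
{
  std::array<int, 3> from_axis;  // which input axis feeds each output axis
  std::array<float, 3> sign;     // +1 or -1 for each output axis
};

std::array<float, 3> apply(const SignedPermutation& p, const std::array<float, 3>& v)
{
  return { p.sign[0] * v[p.from_axis[0]],
           p.sign[1] * v[p.from_axis[1]],
           p.sign[2] * v[p.from_axis[2]] };
}
```

Because a signed permutation is invertible and preserves distances, DiscretisedDensity can store it explicitly ("needs to encode axes directions") and apply it losslessly in either direction.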
