Merge pull request #155 from qurit/lukework
Add Sinogram PET System Matrix, PET overhaul, other recon changes
lukepolson committed Apr 12, 2024
2 parents c1c4528 + 8399353 commit 14c5671
Showing 33 changed files with 3,332 additions and 801 deletions.
Binary file added docs/source/images/example_scanner_sm.png
422 changes: 0 additions & 422 deletions docs/source/notebooks/t_PETGATE.ipynb

This file was deleted.

465 changes: 465 additions & 0 deletions docs/source/notebooks/t_PETGATE_LM.ipynb


414 changes: 414 additions & 0 deletions docs/source/notebooks/t_PETGATE_SINO.ipynb


3 changes: 2 additions & 1 deletion docs/source/notebooks/t_PETSIRD.ipynb
@@ -16,8 +16,9 @@
"**It should be strongly emphasized that this data format is still considered a work-in-progress**. While the data format is still relatively new, PyTomography will continue to add updated support throughout its evolution. The sample PETSIRD data file used in this tutorial was generated during the ETSI hackathon and is the output of a GATE simulation. This tutorial will demonstrate how to reconstruction this data in PyTomography.\n",
"\n",
"A few caveats:\n",
"* Currently, sinogram is reconstruction is not available for PETSIRD data as the system geometry is not specified.\n",
"* The establishment of *all valid detector pairs* has not yet been formulated in the PETSIRD format as of yet. For now, it is assumed that all combinations of detector pairs are valid: this is important when computing normalization/sensitivity factors.\n",
"* Correction for scatter/randoms has not yet been formulated. Incorporation of these corrections can be expected in future PyTomography versions."
"* Correction for scatter/randoms has not yet been formulated. Incorporation of these corrections can be expected in future PyTomography versions.\n"
]
},
{
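The "all detector pairs valid" caveat above matters when computing the normalization/sensitivity image. A minimal sketch of the idea in plain torch — the scanner size, `H` matrix, and voxel count below are toy stand-ins for the true geometry, not the PETSIRD reader's API:

import itertools
import torch

# Under the "all pairs valid" assumption, every unordered pair of
# detectors defines one line of response (LOR).
n_detectors = 8
all_pairs = torch.tensor(list(itertools.combinations(range(n_detectors), 2)))
n_lors = all_pairs.shape[0]  # n_detectors * (n_detectors - 1) / 2 = 28

# The sensitivity image is conceptually the backprojection of ones along
# every valid LOR: sens = H^T 1. Here H is a random stand-in for the
# geometric system matrix of shape (n_LORs, n_voxels).
n_voxels = 16
H = torch.rand(n_lors, n_voxels)
sensitivity = H.T @ torch.ones(n_lors)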
8 changes: 5 additions & 3 deletions docs/source/notebooks/t_dicomdata.ipynb
@@ -116,7 +116,7 @@
{
"data": {
"text/plain": [
"<matplotlib.colorbar.Colorbar at 0x7f3a27683b10>"
"<matplotlib.colorbar.Colorbar at 0x7f583f9d7e90>"
]
},
"execution_count": 5,
@@ -237,7 +237,7 @@
{
"data": {
"text/plain": [
"<matplotlib.colorbar.Colorbar at 0x7f39d65f43d0>"
"<matplotlib.colorbar.Colorbar at 0x7f59a8d2d090>"
]
},
"execution_count": 10,
@@ -437,7 +437,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We can compare this reconstruction to one done by the vendor:"
"We can compare this reconstruction to one done by the vendor:\n",
"\n",
"* Note: we divide the vendor reconstruction by 90 because this particualr vendor multiplies by the number of acquisition angles when saving the data. Although we have 180 projections, there are two heads, so 90 different angles. (Other vendors multiply may multiply by 180: this is inconsistent between vendors)"
]
},
{
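A minimal sketch of the rescaling described in the note above, assuming the vendor volume has already been loaded into a tensor (the shape below is a placeholder):

import torch

n_projections = 180  # total projections acquired
n_heads = 2          # dual-head SPECT camera
n_angles = n_projections // n_heads  # 90 distinct acquisition angles

vendor_recon = torch.rand(128, 128, 128)  # placeholder for the loaded vendor volume
vendor_recon_rescaled = vendor_recon / n_angles  # undo the vendor's angle scaling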
53 changes: 27 additions & 26 deletions docs/source/notebooks/t_dicommultibed.ipynb


1,497 changes: 1,497 additions & 0 deletions docs/source/notebooks/t_examplesystemmatrix.ipynb


16 changes: 8 additions & 8 deletions docs/source/notebooks/t_fbp.ipynb
@@ -82,7 +82,7 @@
{
"data": {
"text/plain": [
"<matplotlib.colorbar.Colorbar at 0x7ff8e2ee8690>"
"<matplotlib.colorbar.Colorbar at 0x7fece9b46a10>"
]
},
"execution_count": 3,
@@ -123,7 +123,7 @@
"angles = np.arange(0,360.,3.)\n",
"radii = 15 * np.ones(len(angles))\n",
"object_meta = SPECTObjectMeta(dr=(0.4,0.4,0.4), shape=object_truth[0].shape)\n",
"proj_meta = SPECTProjMeta(projection_shape=object_truth[0,0].shape, angles=angles, radii=radii)"
"proj_meta = SPECTProjMeta(projection_shape=object_truth[0,0].shape, angles=angles, radii=radii, dr=(0.4,0.4))"
]
},
{
@@ -324,10 +324,10 @@
"metadata": {},
"outputs": [],
"source": [
" def filtered_back_projection(proj):\n",
" fbp = FilteredBackProjection(proj, angles)\n",
" object_prediction = fbp()\n",
" return object_prediction"
"def filtered_back_projection(proj):\n",
" fbp = FilteredBackProjection(proj, angles)\n",
" object_prediction = fbp()\n",
" return object_prediction"
]
},
{
@@ -393,7 +393,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"As can be seen, FBP works well so long as the imaging system is representative of a radon transform. In the case of SPECT imaging, this does not hold."
"As can be seen, FBP works well so long as the imaging system is representative of a radon transform. In the case of SPECT imaging, this does not hold (attenuation correction has not been done, so the object is darker in the center)."
]
},
{
@@ -416,7 +416,7 @@
"metadata": {},
"outputs": [],
"source": [
" def filtered_back_projection_hamming(proj):\n",
"def filtered_back_projection_hamming(proj):\n",
" filter = HammingFilter(wl=0.02, wh=0.93)\n",
" fbp = FilteredBackProjection(proj, angles, filter=filter)\n",
" object_prediction = fbp()\n",
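A sketch of the updated FBP workflow assembled from the calls shown in this diff — note that SPECTProjMeta now takes a `dr` argument for the projection pixel spacing. The import paths and the random `proj` tensor are assumptions for illustration:

import numpy as np
import torch
from pytomography.metadata.SPECT import SPECTObjectMeta, SPECTProjMeta
from pytomography.algorithms import FilteredBackProjection
from pytomography.utils import HammingFilter

angles = np.arange(0, 360., 3.)
radii = 15 * np.ones(len(angles))
object_meta = SPECTObjectMeta(dr=(0.4, 0.4, 0.4), shape=(128, 128, 128))
proj_meta = SPECTProjMeta(projection_shape=(128, 128), angles=angles,
                          radii=radii, dr=(0.4, 0.4))  # dr is the new argument

proj = torch.rand(1, len(angles), 128, 128)  # placeholder projection data
fbp = FilteredBackProjection(proj, angles, filter=HammingFilter(wl=0.02, wh=0.93))
object_prediction = fbp()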
30 changes: 9 additions & 21 deletions docs/source/notebooks/t_siminddata.ipynb


10 changes: 3 additions & 7 deletions docs/source/notebooks/t_uncertainty_spect.ipynb


8 changes: 7 additions & 1 deletion docs/source/usage.md
@@ -7,10 +7,16 @@ The tutorials use data that can be downloaded [here](https://drive.google.com/dr
* {doc}`notebooks/t_dicommultibed`
* {doc}`notebooks/t_uncertainty_spect`

## PET (Sinogram)
* {doc}`notebooks/t_PETGATE_SINO`

## PET (Listmode)
* {doc}`notebooks/t_PETGATE`
* {doc}`notebooks/t_PETGATE_LM`
* {doc}`notebooks/t_PETSIRD`
* Usage of Deep Image Prior Recon (DIPRecon): [this link](https://github.com/lukepolson/PyTomographyPETDIPExample)

## Developer Guide
* {doc}`notebooks/t_examplesystemmatrix`

## Misc
* {doc}`notebooks/t_fbp`
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "hatchling.build"

[project]
name = "pytomography"
version = "2.1.0"
version = "2.1.1"

authors = [
{ name="Luke Polson", email="lukepolson@outlook.com" },
2 changes: 1 addition & 1 deletion src/pytomography/algorithms/__init__.py
@@ -1,5 +1,5 @@
"""This module contains all the available reconstruction algorithms in PyTomography.
"""
from .preconditioned_gradient_ascent import PreconditionedGradientAscentAlgorithm, OSEM, OSMAPOSL, BSREM, KEM, RBIEM, RBIMAP, SART, PGAAMultiBedSPECT
from .preconditioned_gradient_ascent import PreconditionedGradientAscentAlgorithm, OSEM, OSMAPOSL, BSREM, KEM, RBIEM, RBIMAP, SART, PGAAMultiBedSPECT, MLEM
from .fbp import FilteredBackProjection
from .dip_recon import DIPRecon
2 changes: 1 addition & 1 deletion src/pytomography/algorithms/dip_recon.py
@@ -85,5 +85,5 @@ def __call__(
self.object_prediction = nn.ReLU()(x_network)
# evaluate callback
if self.callback is not None:
self._compute_callback(n_iter = _)
self._compute_callback(n_iter = _, n_subset=None)
return self.object_prediction
2 changes: 1 addition & 1 deletion src/pytomography/algorithms/fbp.py
@@ -24,7 +24,7 @@ def __init__(
) -> None:
self.proj = projections
self.object_meta = SPECTObjectMeta(dr=(1,1,1),shape=(self.proj.shape[2], self.proj.shape[2], self.proj.shape[3]))
self.proj_meta = SPECTProjMeta(projection_shape=self.proj.shape[2:],angles=angles)
self.proj_meta = SPECTProjMeta(projection_shape=self.proj.shape[2:],angles=angles,dr=(1,1))
self.filter = filter
# Radon transform equivalent of the SPECT system matrix
self.system_matrix = SPECTSystemMatrix(
59 changes: 37 additions & 22 deletions src/pytomography/algorithms/preconditioned_gradient_ascent.py
@@ -28,7 +28,7 @@
) -> None:
self.likelihood = likelihood
if object_initial is None:
self.object_prediction = self.likelihood.system_matrix._get_object_initial()
self.object_prediction = self.likelihood.system_matrix._get_object_initial(likelihood.projections.device)
else:
self.object_prediction = object_initial.to(pytomography.device).to(pytomography.dtype)
self.prior = prior
@@ -78,7 +78,7 @@ def _compute_callback(self, n_iter: int, n_subset: int):
def __call__(
self,
n_iters: int,
n_subsets: int,
n_subsets: int = 1,
n_subset_specific: int | None = None,
callback: Callback | None = None,
):
@@ -103,14 +103,18 @@ def __call__(
if n_subset_specific is not None:
if n_subset_specific!=k:
continue
if n_subsets==1:
subset_idx = None
else:
subset_idx = k
if bool(self.prior):
self.prior.set_object(torch.clone(self.object_prediction).to(pytomography.device))
self.prior.set_beta_scale(self.likelihood.system_matrix.get_weighting_subset(k))
self.prior.set_beta_scale(self.likelihood.system_matrix.get_weighting_subset(subset_idx))
self.prior_gradient = self.prior(derivative_order=1)
else:
self.prior_gradient = 0
likelihood_gradient = self.likelihood.compute_gradient(self.object_prediction, k, self.norm_BP_subset_method)
preconditioner = self._compute_preconditioner(self.object_prediction, j, k)
likelihood_gradient = self.likelihood.compute_gradient(self.object_prediction, subset_idx, self.norm_BP_subset_method)
preconditioner = self._compute_preconditioner(self.object_prediction, j, subset_idx)
self.object_prediction += preconditioner * (likelihood_gradient - self.prior_gradient)
# Get rid of small negative values
self.object_prediction[self.object_prediction<0] = 0
@@ -213,7 +217,10 @@ def _compute_uncertainty_matrix(
else:
return torch.zeros((1, *self.likelihood.system_matrix.proj_meta.shape)).to(pytomography.device)
else:
subset_idx = (n-1)%self.n_subsets
if self.n_subsets==1:
subset_idx = None
else:
subset_idx = (n-1)%self.n_subsets
object_current_update = data_storage_callback.objects[n].to(pytomography.device)
object_previous_update = data_storage_callback.objects[n-1].to(pytomography.device)
FP_previous_update = data_storage_callback.projections_predicted[n-1].to(pytomography.device)
@@ -235,7 +242,8 @@
torch.cuda.empty_cache()
term1 = self._compute_uncertainty_matrix(1*mask-term1, data_storage_callback, n-1, include_additive_term=include_additive_term)
# Additive step after recursion (computation of B_sigma^k and B_n^k)
subset_indices_array = self.likelihood.system_matrix.subset_indices_array[subset_idx]
if subset_idx is not None:
subset_indices_array = self.likelihood.system_matrix.subset_indices_array[subset_idx]
object_previous_update = data_storage_callback.objects[n-1].to(pytomography.device)
FP_previous_update = data_storage_callback.projections_predicted[n-1].to(pytomography.device)
term2 = mask * object_previous_update * self._linear_preconditioner_factor(None, subset_idx)
@@ -249,9 +257,15 @@
term2 = likelihood_grad_gf(term2)
term_return = torch.zeros((1, *self.likelihood.system_matrix.proj_meta.shape)).to(pytomography.device)
term_return += term1
term_return[0,subset_indices_array] += term2[0]
if subset_idx is not None:
term_return[0,subset_indices_array] += term2[0]
else:
term_return[0,:] += term2[0]
if include_additive_term:
term_return[1,subset_indices_array] += term2_additive[0]
if subset_idx is not None:
term_return[1,subset_indices_array] += term2_additive[0]
else:
term_return[1,:] += term2_additive[0]
# Delete to save memory
del(term1)
del(term2)
@@ -289,7 +303,7 @@ def _linear_preconditioner_factor(self, n_iter: int, n_subset: int) -> torch.Ten
Returns:
torch.Tensor: linear preconditioner factor
"""
return 1/(self.likelihood.norm_BPs[n_subset] + pytomography.delta)
return 1/(self.likelihood._get_normBP(n_subset) + pytomography.delta)

class OSMAPOSL(PreconditionedGradientAscentAlgorithm):
r"""Implementation of the ordered subset maximum a posteriori one step late algorithm :math:`f^{n+1} = f^{n} + \frac{f^n}{H_n^T+\nabla_f V(f^n)} \left[ \nabla_{f} L(g^n|f^{n}) - \nabla_f V(f^n) \right]`
@@ -322,7 +336,7 @@ def _compute_preconditioner(self, object: torch.Tensor, n_iter: int, n_subset: i
Returns:
torch.Tensor: preconditioner factor.
"""
return object/(self.likelihood.norm_BPs[n_subset] + self.prior_gradient + pytomography.delta)
return object/(self.likelihood._get_normBP(n_subset) + self.prior_gradient + pytomography.delta)

class RBIEM(LinearPreconditionedGradientAscentAlgorithm):
r"""Implementation of the rescaled block iterative expectation maximum algorithm
@@ -355,8 +369,8 @@ def _compute_preconditioner(self, object: torch.Tensor, n_iter: int, n_subset: i
Returns:
torch.Tensor: preconditioner factor.
"""
norm_BP = self.likelihood.norm_BPs[n_subset]
norm_BP_allsubsets = torch.stack(self.likelihood.norm_BPs).sum(axis=0)
norm_BP = self.likelihood._get_normBP(n_subset)
norm_BP_allsubsets = self.likelihood._get_normBP(n_subset, return_sum=True)
rm = torch.max(norm_BP / (norm_BP_allsubsets + pytomography.delta))
return object/(norm_BP_allsubsets*rm + pytomography.delta)

@@ -391,8 +405,8 @@ def _compute_preconditioner(self, object: torch.Tensor, n_iter: int, n_subset: i
Returns:
torch.Tensor: preconditioner factor.
"""
norm_BP = self.likelihood.norm_BPs[n_subset]
norm_BP_allsubsets = torch.stack(self.likelihood.norm_BPs).sum(axis=0)
norm_BP = self.likelihood._get_normBP(n_subset)
norm_BP_allsubsets = self.likelihood._get_normBP(n_subset, return_sum=True)
rm = torch.max((norm_BP + self.prior_gradient) / (norm_BP_allsubsets + self.prior_gradient + pytomography.delta))
return object/(norm_BP_allsubsets*rm + self.prior_gradient + pytomography.delta)

@@ -429,7 +443,7 @@ def _linear_preconditioner_factor(self, n_iter: int, n_subset: int):
torch.Tensor: linear preconditioner factor
"""
relaxation_factor = self.relaxation_sequence(n_iter)
norm_BP = torch.stack(self.likelihood.norm_BPs).sum(axis=0)
norm_BP = self.likelihood._get_normBP(n_subset, return_sum=True)
norm_BP_weight = self.likelihood.system_matrix.get_weighting_subset(n_subset)
return relaxation_factor/(norm_BP_weight * norm_BP + pytomography.delta)

@@ -459,6 +473,9 @@ def __call__(
object_prediction = super(KEM, self).__call__(*args, **kwargs)
return self.likelihood.system_matrix.kem_transform.forward(object_prediction)

class MLEM(OSEM):
def __call__(self, n_iters, callback=None):
return super(MLEM, self).__call__(n_iters, n_subsets=1, callback=callback)

class SART(OSEM):
r"""Implementation of the SART algorithm (OSEM with SARTWeightedNegativeMSELikelihood)
@@ -500,7 +517,6 @@ def __call__(
n_iters: int,
n_subsets: int,
callback: Callback | Sequence[Callback] | None = None,
stitching_method: str = 'midslice'
) -> torch.Tensor:
"""Perform reconstruction of each bed position for specified iteraitons and subsets, and return the stitched image
@@ -520,8 +536,8 @@
self.recons.append(recon_algo(1, n_subsets, n_subset_specific=j))
self.object_prediction = dicom.stitch_multibed(
recons=torch.cat(self.recons),
files_NM = self.files_NM,
method=stitching_method)
files_NM = self.files_NM
)
self._compute_callback(i,j)
return self.object_prediction

Expand All @@ -540,8 +556,7 @@ def _compute_callback(self, n_iter: int, n_subset: int):
else:
self.object_prediction = dicom.stitch_multibed(
recons=torch.cat(self.recons),
files_NM = self.files_NM,
method='TEM')
files_NM = self.files_NM)
super()._compute_callback(n_iter=n_iter, n_subset=n_subset)

def _finalize_callback(self):
@@ -576,7 +591,7 @@ def compute_uncertainty(
subiteration_number = len(data_storage_callbacks[0].objects) - 1
# Crop mask to FOV region
recons = [data_storage_callback.objects[subiteration_number].to(pytomography.device) for data_storage_callback in data_storage_callbacks]
stitching_weights, zs = dicom.stitch_multibed(torch.cat(recons), self.files_NM, method='TEM', return_stitching_weights=True)
stitching_weights, zs = dicom.stitch_multibed(torch.cat(recons), self.files_NM, return_stitching_weights=True)
uncertainty_abs = []
total_counts = 0
for k in range(len(recons)):
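The changes in this file also make `n_subsets` default to 1 and pass `subset_idx=None` through the likelihood when only one subset is used, which is what lets the new MLEM class reuse the OSEM machinery. A hedged usage sketch — `system_matrix` and `projections` are assumed to be already constructed (e.g. as in the tutorials above), and PoissonLogLikelihood is assumed as the likelihood:

from pytomography.algorithms import OSEM, MLEM
from pytomography.likelihoods import PoissonLogLikelihood

# Assumed to exist: a system matrix and measured projections built as in
# one of the tutorials above.
likelihood = PoissonLogLikelihood(system_matrix, projections)

recon_osem = OSEM(likelihood)(n_iters=4, n_subsets=8)  # ordered subsets
recon_mlem = MLEM(likelihood)(n_iters=40)              # same as n_subsets=1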
