List-mode objective function: minor refactor and extra functionality #1418

Merged: 15 commits, May 14, 2024
Changes from all commits
23 changes: 20 additions & 3 deletions .github/workflows/build-test.yml
@@ -350,7 +350,10 @@ jobs:
fi
# don't run all of them in Debug mode as it takes too long
if test ${BUILD_TYPE} = Debug; then
EXCLUDE_Debug="test_data_processor_projectors|test_export_array|test_ArcCorrection"
# Also excluding test_PoissonLogLikelihoodWithLinearModelForMeanAndListModeDataWithProjMatrixByBin
# as it is a bit slow. Moreover it fails on MacOS (Debug)
# https://github.com/UCL/STIR/pull/1418#issuecomment-2109518132
EXCLUDE_Debug="test_data_processor_projectors|test_export_array|test_ArcCorrection|test_PoissonLogLikelihoodWithLinearModelForMeanAndListModeWithProjMatrixByBin"
EXCLUDE="${EXCLUDE_Debug}${EXCLUDE:+"|"}${EXCLUDE}"
fi
# prepend -E
@@ -360,6 +363,19 @@ jobs:
# execute tests
ctest --output-on-failure -C ${BUILD_TYPE} ${EXCLUDE}

- name: Upload ctest log files for debugging
uses: actions/upload-artifact@v4
if: failure()
with:
name: ctest_log_files-${{ matrix.os }}-${{ matrix.compiler }}${{ matrix.compiler_version }}-cuda${{ matrix.cuda_version }}-${{ matrix.BUILD_TYPE }}-pp=${{ matrix.parallelproj }}-ROOT=${{ matrix.ROOT }}
path: |
${{ github.workspace }}/build/**/*.log
${{ github.workspace }}/build/**/*.hv
${{ github.workspace }}/build/**/*.v
${{ github.workspace }}/build/**/*.hs
${{ github.workspace }}/build/**/*.s
retention-days: 7

- name: C++ examples with STIR_LOCAL
shell: bash
run: |
@@ -432,11 +448,12 @@ jobs:
fi

- name: Upload recon_test_pack log files for debugging
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
if: failure()
with:
name: recon_test_pack_log_files-${{ matrix.os }}-${{ matrix.compiler }}${{ matrix.compiler_version }}-cuda${{ matrix.cuda_version }}-${{ matrix.BUILD_TYPE }}-pp=${{ matrix.parallelproj }}-ROOT=${{ matrix.ROOT }}
path: ${{ github.workspace }}/recon_test_pack/**/*.log
path: |
${{ github.workspace }}/recon_test_pack/**/*.log
${{ github.workspace }}/recon_test_pack/**/my_*v
${{ github.workspace }}/recon_test_pack/**/my_*s
retention-days: 7
1 change: 1 addition & 0 deletions .gitignore
@@ -57,6 +57,7 @@ install_manifest.txt
~$*
*~
*.bak
\#*#

# Ignore Visual Studio User-specific files
*.suo
28 changes: 28 additions & 0 deletions documentation/release_6.1.htm
@@ -52,13 +52,34 @@ <h3>New functionality</h3>
<ul>
<li>
Add TOF capability of the parallelproj projector (see <a href=https://github.com/UCL/STIR/pull/1356>PR #1356</a>)
</li>
<li>
Read TOF bin order from interfile header (see <a href=https://github.com/UCL/STIR/pull/1389> PR #1389</a>)
</li>
<li>
<code>PoissonLogLikelihoodWithLinearModelForMeanAndListModeDataWithProjMatrixByBin</code> can now compute the objective-function value
and also supports <code>accumulate_Hessian_times_input</code>.
<br>
<a href=https://github.com/UCL/STIR/pull/1418> PR #1418</a>
</li>
<li>
<code>GeneralisedObjectiveFunction</code> has two new members to compute the full gradient
(<code>compute_gradient</code> and <code>compute_gradient_without_penalty</code>). Previously,
only subset gradients were available.
<br>
<a href=https://github.com/UCL/STIR/pull/1418> PR #1418</a>
</li>
</ul>


<h3>Changed functionality</h3>
<ul>
<li>
<code>PoissonLogLikelihoodWithLinearModelForMeanAndListModeDataWithProjMatrixByBin</code> now computes the gradient
multi-threaded (if OpenMP is enabled), even if caching of the list-mode data to file is not enabled.
<br>
<a href=https://github.com/UCL/STIR/pull/1418> PR #1418</a>
</li>
<li>
Accumulation in computation of priors now uses doubles, which could result in slightly better precision. <a href=https://github.com/UCL/STIR/pull/1410>Part of PR #1410</a>.
</li>
@@ -127,6 +148,13 @@ <h4>recon_test_pack changes</h4>
</ul>

<h4>C++ tests</h4>
<ul>
<li>
Objective functions and priors now have a numerical test for <code>accumulate_Hessian_times_input</code>
<br>
<a href=https://github.com/UCL/STIR/pull/1418> PR #1418</a>
</li>
</ul>

</body>

30 changes: 26 additions & 4 deletions src/include/stir/recon_buildblock/GeneralisedObjectiveFunction.h
@@ -2,7 +2,7 @@
//
/*
Copyright (C) 2003- 2009, Hammersmith Imanet Ltd
Copyright (C) 2018, University College London
Copyright (C) 2018, 2020, 2024 University College London
Copyright (C) 2016, University of Hull
This file is part of STIR.

@@ -17,6 +17,7 @@

\author Nikos Efthimiou
\author Kris Thielemans
\author Robert Twyman Skelly
\author Sanida Mustafovic

*/
@@ -98,8 +99,8 @@ class GeneralisedObjectiveFunction : public RegisteredObject<GeneralisedObjectiv
//! Has to be called before using this object
virtual Succeeded set_up(shared_ptr<TargetT> const& target_sptr);

//! This should compute the sub-gradient of the objective function at the \a current_estimate
/*! The subgradient is the gradient of the objective function restricted to the
//! Compute the subset-gradient of the objective function at \a current_estimate
/*! The subset-gradient is the gradient of the objective function restricted to the
subset specified. What this means depends on how this function is implemented later
on in the hierarchy.

@@ -112,12 +113,30 @@
*/
virtual void compute_sub_gradient(TargetT& gradient, const TargetT& current_estimate, const int subset_num);

//! This should compute the sub-gradient of the unregularised objective function at the \a current_estimate
//! This should compute the subset-gradient of the unregularised objective function at \a current_estimate
/*!
\warning The derived class should overwrite any data in \a gradient.
*/
virtual void compute_sub_gradient_without_penalty(TargetT& gradient, const TargetT& current_estimate, const int subset_num) = 0;

//! Compute the gradient of the objective function at the \a current_estimate
/*! Computed as the <i>difference</i> of
<code>compute_gradient_without_penalty</code>
and
<code>get_prior_ptr()-&gt;compute_gradient()</code>.

\warning Any data in \a gradient will be overwritten.
*/
virtual void compute_gradient(TargetT& gradient, const TargetT& current_estimate);

//! Compute the gradient of the unregularised objective function at the \a current_estimate
/*!
Computed by summing subset-gradients.

\warning Any data in \a gradient will be overwritten.
*/
virtual void compute_gradient_without_penalty(TargetT& gradient, const TargetT& current_estimate);

//! Compute the value of the unregularised sub-objective function at the \a current_estimate
/*! Implemented in terms of actual_compute_objective_function_without_penalty. */
virtual double compute_objective_function_without_penalty(const TargetT& current_estimate, const int subset_num);
@@ -157,6 +176,9 @@ class GeneralisedObjectiveFunction : public RegisteredObject<GeneralisedObjectiv
*/
double compute_objective_function(const TargetT& current_estimate);

//! Alias for compute_objective_function(const TargetT&)
double compute_value(const TargetT& current_estimate) { return compute_objective_function(current_estimate); }

//! Fill any elements that we cannot estimate with a fixed value
/*! In many cases, it is easier to use a larger target than what we can
actually estimate. For instance, using a rectangular image while we estimate
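
A minimal usage sketch of the new full-gradient members and the compute_value alias declared above. This is illustrative only and not part of the diff: it assumes an objective function that has already been set_up() on matching data, and that get_empty_copy() is available to allocate an output image of the same geometry.

#include "stir/recon_buildblock/GeneralisedObjectiveFunction.h"
#include "stir/DiscretisedDensity.h"
#include "stir/shared_ptr.h"

using namespace stir;

// Sketch: obj_fun must already be fully set up (data, projector, optional prior).
void gradient_example(GeneralisedObjectiveFunction<DiscretisedDensity<3, float>>& obj_fun,
                      const DiscretisedDensity<3, float>& current_estimate)
{
  // output image with the same geometry as the estimate (get_empty_copy() assumed)
  shared_ptr<DiscretisedDensity<3, float>> gradient_sptr(current_estimate.get_empty_copy());

  // new: full gradient over all subsets, including the penalty term
  obj_fun.compute_gradient(*gradient_sptr, current_estimate);

  // new: full gradient without the penalty (sum of the subset gradients)
  obj_fun.compute_gradient_without_penalty(*gradient_sptr, current_estimate);

  // new alias for compute_objective_function(const TargetT&)
  const double value = obj_fun.compute_value(current_estimate);
  (void)value; // silence unused-variable warnings in this sketch
}
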
src/include/stir/recon_buildblock/PoissonLogLikelihoodWithLinearModelForMeanAndListModeDataWithProjMatrixByBin.h
@@ -3,7 +3,7 @@
/*
Copyright (C) 2003- 2011, Hammersmith Imanet Ltd
Copyright (C) 2015, Univ. of Leeds
Copyright (C) 2016, 2022 UCL
Copyright (C) 2016, 2022, 2024 UCL
Copyright (C) 2021, University of Pennsylvania
SPDX-License-Identifier: Apache-2.0

@@ -74,17 +74,28 @@ class PoissonLogLikelihoodWithLinearModelForMeanAndListModeDataWithProjMatrixByB

PoissonLogLikelihoodWithLinearModelForMeanAndListModeDataWithProjMatrixByBin();

//! Computes the value of the objective function at the \a current_estimate.
/*!
\warning If <code>add_sensitivity = false</code> and <code>use_subset_sensitivities = false</code>, this will return an error
because the value would not be correct. Try <code>use_subset_sensitivities = true</code>.
*/
double actual_compute_objective_function_without_penalty(const TargetT& current_estimate, const int subset_num) override;

//! Computes the gradient of the objective function at the \a current_estimate overwriting \a gradient.
/*!
\warning If <code>add_sensitivity = false</code> and <code>use_subset_sensitivities = false</code>, this will return an error
because the gradient would not be correct. Try <code>use_subset_sensitivities = true</code>.
*/

void actual_compute_subset_gradient_without_penalty(TargetT& gradient,
const TargetT& current_estimate,
const int subset_num,
const bool add_sensitivity) override;

Succeeded actual_accumulate_sub_Hessian_times_input_without_penalty(TargetT& output,
const TargetT& current_image_estimate,
const TargetT& input,
const int subset_num) const override;

TargetT* construct_target_ptr() const override;

int set_num_subsets(const int new_num_subsets) override;
@@ -103,13 +114,6 @@ class PoissonLogLikelihoodWithLinearModelForMeanAndListModeDataWithProjMatrixByB
#endif

protected:
/*! \todo this function is not implemented yet and currently calls error() */
double actual_compute_objective_function_without_penalty(const TargetT& current_estimate, const int subset_num) override
{ // TODO
error("compute_objective_function_without_penalty Not implemented yet");
return 0;
}

Succeeded set_up_before_sensitivity(shared_ptr<const TargetT> const& target_sptr) override;

void add_subset_sensitivity(TargetT& sensitivity, const int subset_num) const override;
@@ -148,20 +152,44 @@ class PoissonLogLikelihoodWithLinearModelForMeanAndListModeDataWithProjMatrixByB
bool skip_balanced_subsets;

private:
//! Cache of the listmode file
//! Cache of the current "batch" in the listmode file
/*! \todo Move this higher-up in the hierarchy as it doesn't depend on ProjMatrixByBin
*/
std::vector<BinAndCorr> record_cache;
mutable std::vector<BinAndCorr> record_cache;

//! This function caches the listmode file, or reads it. It is run during set_up()
//! This function loads the next "batch" of data from the listmode file.
/*!
This function will either use read_listmode_batch or load_listmode_cache_file.

\param[in] ibatch the batch number to be read.
\return \c true if there are no more events to read after this call, \c false otherwise
\todo Move this function higher-up in the hierarchy as it doesn't depend on ProjMatrixByBin
*/
bool load_listmode_batch(unsigned int ibatch) const;

//! This function reads the next "batch" of data from the listmode file.
/*!
This function keeps on reading from the current position in the list-mode data and stores
prompts events and additive terms in \c record_cache. It also updates \c end_time_per_batch
such that we know when each batch starts/ends.

\param[in] ibatch the batch number to be read.
\return \c true if there are no more events to read after this call, \c false otherwise
\todo Move this function higher-up in the hierarchy as it doesn't depend on ProjMatrixByBin
\warning This function has to be called in sequence.
*/
bool read_listmode_batch(unsigned int ibatch) const;
//! This function caches the list-mode batches to file. It is run during set_up()
/*! \todo Move this function higher-up in the hierarchy as it doesn't depend on ProjMatrixByBin
*/
Succeeded cache_listmode_file();

Succeeded load_listmode_cache_file(unsigned int file_id);
//! Reads the "batch" of data from the cache
bool load_listmode_cache_file(unsigned int file_id) const;
Succeeded write_listmode_cache_file(unsigned int file_id) const;

unsigned int num_cache_files;
mutable std::vector<double> end_time_per_batch;
};

END_NAMESPACE_STIR
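
A sketch of how the newly supported value and Hessian-times-vector computations of this class might be called, following the warnings in the header above. Not part of the diff: accumulate_Hessian_times_input() and the objective-function value call are the usual STIR entry points and are assumptions here, and obj_fun is assumed to be set up with use_subset_sensitivities enabled.

#include "stir/recon_buildblock/PoissonLogLikelihoodWithLinearModelForMeanAndListModeDataWithProjMatrixByBin.h"
#include "stir/DiscretisedDensity.h"

using namespace stir;

typedef DiscretisedDensity<3, float> target_type;

// Sketch: obj_fun must already be set up, with use_subset_sensitivities = true
// (see the warnings above); `output` is accumulated into, so it is zero-filled first.
double value_and_hessian_example(
    PoissonLogLikelihoodWithLinearModelForMeanAndListModeDataWithProjMatrixByBin<target_type>& obj_fun,
    const target_type& current_estimate,
    const target_type& input,
    target_type& output)
{
  // value of the (unpenalised) objective function, newly implemented for list-mode data
  const double value = obj_fun.compute_objective_function_without_penalty(current_estimate);

  // Hessian-times-vector, newly supported by this class (returns Succeeded, ignored here)
  output.fill(0.F);
  obj_fun.accumulate_Hessian_times_input(output, current_estimate, input);
  return value;
}
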
2 changes: 1 addition & 1 deletion src/include/stir/recon_buildblock/ProjMatrixByBin.inl
@@ -149,7 +149,7 @@ ProjMatrixByBin::get_tof_value(const float d1, const float d2) const
if ((d1_n >= 4.f && d2_n >= 4.f) || (d1_n <= -4.f && d2_n <= -4.f))
return 0.F;
else
return 0.5f * (erf_interpolation(d2_n) - erf_interpolation(d1_n));
return static_cast<float>(0.5 * (erf_interpolation(d2_n) - erf_interpolation(d1_n)));
}

END_NAMESPACE_STIR
8 changes: 4 additions & 4 deletions src/include/stir/recon_buildblock/ProjMatrixElemsForOneBin.h
@@ -178,14 +178,14 @@ class ProjMatrixElemsForOneBin

//******************** projection operations ********************//

//! back project a single bin
//! back project a single bin (accumulates)
void back_project(DiscretisedDensity<3, float>&, const Bin&) const;

//! forward project into a single bin
//! forward project into a single bin (accumulates)
void forward_project(Bin&, const DiscretisedDensity<3, float>&) const;
//! back project related bins
//! back project related bins (accumulates)
void back_project(DiscretisedDensity<3, float>&, const RelatedBins&) const;
//! forward project related bins
//! forward project related bins (accumulates)
void forward_project(RelatedBins&, const DiscretisedDensity<3, float>&) const;

private:
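
The documentation change above makes explicit that these single-bin projection operations accumulate into their output rather than overwrite it. A small illustrative sketch (not part of the diff; get_bin(), set_bin_value() and fill() are the usual STIR accessors and are assumed here):

#include "stir/recon_buildblock/ProjMatrixElemsForOneBin.h"
#include "stir/Bin.h"
#include "stir/DiscretisedDensity.h"

using namespace stir;

// Sketch: `row` holds the already-computed matrix elements for one bin.
void single_bin_projection_example(const ProjMatrixElemsForOneBin& row,
                                   DiscretisedDensity<3, float>& image)
{
  // forward_project *adds* into the bin value, so start from zero
  Bin bin = row.get_bin();
  bin.set_bin_value(0.F);
  row.forward_project(bin, image);

  // back_project likewise *adds* into the image; zero-fill first when a fresh
  // back-projection (rather than an accumulation) is wanted
  image.fill(0.F);
  row.back_project(image, bin);
}
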
7 changes: 6 additions & 1 deletion src/include/stir/recon_buildblock/distributable.h
@@ -188,7 +188,10 @@ void distributable_computation(const shared_ptr<ForwardProjectorByBin>& forward_

\param has_add if \c true, the additive term in \c record_cache is taken into account
\param accumulate if \c true, add to \c output_image_ptr, otherwise fill it with zeroes before doing anything.
\param double_out_ptr if non-zero, receives the value accumulated over all events by the call-back
\param call_back
!*/
template <typename CallBackT>
void LM_distributable_computation(const shared_ptr<ProjMatrixByBin> PM_sptr,
const shared_ptr<ProjDataInfo>& proj_data_info_sptr,
DiscretisedDensity<3, float>* output_image_ptr,
Expand All @@ -197,7 +200,9 @@ void LM_distributable_computation(const shared_ptr<ProjMatrixByBin> PM_sptr,
const int subset_num,
const int num_subsets,
const bool has_add,
const bool accumulate);
const bool accumulate,
double* double_out_ptr,
CallBackT&& call_back);

/*! \name Tag-names currently used by stir::distributable_computation and related functions
\ingroup distributable