Merged
16 changes: 14 additions & 2 deletions docs/api-reference/index.md
@@ -28,8 +28,6 @@
smoothing
types
uncertainty

external.powgen
```

## ESSdream
@@ -43,6 +41,7 @@
:toctree: ../generated/functions

instrument_view
DreamGeant4Workflow
```

### Submodules
@@ -56,3 +55,16 @@
data
io
```

## SNS powder

```{eval-rst}
.. currentmodule:: ess.snspowder

.. autosummary::
:toctree: ../generated/modules
:template: module-template.rst
:recursive:

powgen
```
93 changes: 35 additions & 58 deletions docs/user-guide/dream/dream-data-reduction.ipynb
@@ -17,92 +17,70 @@
"source": [
"import scipp as sc\n",
"import scippneutron as scn\n",
"import sciline\n",
"from ess import dream, powder\n",
"from ess.powder.types import *\n",
"from ess.dream.io.geant4 import providers as geant4_providers"
"from ess.powder.types import *"
]
},
{
"cell_type": "markdown",
"id": "1252feab-12d2-46ac-bf74-70b32344473d",
"id": "dcaf1d53-2a81-4a31-8379-1fb3791aaeab",
"metadata": {},
"source": [
"## Define reduction parameters\n",
"## Create and configure the workflow\n",
"\n",
"We define a dictionary containing the reduction parameters.\n",
"The keys are types defined in [essdiffraction.types](../generated/modules/ess.diffraction.types.rst)."
"We begin by creating the Dream (Geant4) workflow object, which is a skeleton for reducing Dream data with pre-configured steps."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "502e77cc-0253-4e71-9d97-81ff560ef99d",
"id": "b2d15f12-69d7-4c5d-9d4d-f1c13fc29103",
"metadata": {},
"outputs": [],
"source": [
"params = {\n",
" Filename[SampleRun]: dream.data.simulated_diamond_sample(),\n",
" Filename[VanadiumRun]: dream.data.simulated_vanadium_sample(),\n",
" Filename[EmptyCanRun]: dream.data.simulated_empty_can(),\n",
" CalibrationFilename: None,\n",
" NeXusDetectorName: \"mantle\",\n",
" # The upper bounds mode is not yet implemented.\n",
" UncertaintyBroadcastMode: UncertaintyBroadcastMode.drop,\n",
" # Edges for binning in d-spacing\n",
" DspacingBins: sc.linspace(\"dspacing\", 0.0, 2.3434, 201, unit=\"angstrom\"),\n",
" # Mask in time-of-flight to crop to valid range\n",
" TofMask: lambda x: (x < sc.scalar(0.0, unit=\"ns\"))\n",
" | (x > sc.scalar(86e6, unit=\"ns\")),\n",
" TwoThetaMask: None,\n",
" WavelengthMask: None,\n",
"}\n",
"\n",
"# Not available in simulated data\n",
"sample = sc.DataGroup(position=sc.vector([0.0, 0.0, 0.0], unit=\"mm\"))\n",
"params[RawSample[SampleRun]] = sample\n",
"params[RawSample[VanadiumRun]] = sample\n",
"\n",
"source = sc.DataGroup(position=sc.vector([-3.478, 0.0, -76550], unit=\"mm\"))\n",
"params[RawSource[SampleRun]] = source\n",
"params[RawSource[VanadiumRun]] = source\n",
"\n",
"charge = sc.scalar(1.0, unit=\"µAh\")\n",
"params[AccumulatedProtonCharge[SampleRun]] = charge\n",
"params[AccumulatedProtonCharge[VanadiumRun]] = charge"
"workflow = dream.DreamGeant4Workflow()"
]
},
{
"cell_type": "markdown",
"id": "21cb87f2-4ff7-436e-b603-cc8f60c73e7a",
"id": "1252feab-12d2-46ac-bf74-70b32344473d",
"metadata": {},
"source": [
"## Create pipeline using Sciline\n",
"\n",
"We use the `powder` and `geant4` providers to build our pipeline."
"We then need to set the missing parameters, which are specific to each experiment\n",
"(the keys are types defined in [essdiffraction.powder.types](../generated/modules/ess.diffraction.powder.types.rst)):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "98e99f33-6f4b-4b60-acaf-added3c6e1b0",
"id": "502e77cc-0253-4e71-9d97-81ff560ef99d",
"metadata": {},
"outputs": [],
"source": [
"providers = (\n",
" *geant4_providers,\n",
" *powder.providers,\n",
")\n",
"\n",
"pipeline = sciline.Pipeline(providers, params=params)\n",
"pipeline = powder.with_pixel_mask_filenames(pipeline, [])"
"workflow[Filename[SampleRun]] = dream.data.simulated_diamond_sample()\n",
"workflow[Filename[VanadiumRun]] = dream.data.simulated_vanadium_sample()\n",
"workflow[Filename[EmptyCanRun]] = dream.data.simulated_empty_can()\n",
"workflow[CalibrationFilename] = None\n",
"workflow[NeXusDetectorName] = \"mantle\"\n",
"# The upper bounds mode is not yet implemented.\n",
"workflow[UncertaintyBroadcastMode] = UncertaintyBroadcastMode.drop\n",
"# Edges for binning in d-spacing\n",
"workflow[DspacingBins] = sc.linspace(\"dspacing\", 0.0, 2.3434, 201, unit=\"angstrom\")\n",
"# Mask in time-of-flight to crop to valid range\n",
"workflow[TofMask] = lambda x: (x < sc.scalar(0.0, unit=\"ns\")) | (x > sc.scalar(86e6, unit=\"ns\"))\n",
"workflow[TwoThetaMask] = None\n",
"workflow[WavelengthMask] = None\n",
Member: Can those be None by default?

Member (Author): They could, but I thought this would make it obvious what the syntax is if you want to add other masks. I guess this goes back to the discussion on whether these notebooks should only show what is needed, and we need another page that describes everything that can be done in the workflow.
"# No pixel masks\n",
"workflow = powder.with_pixel_mask_filenames(workflow, [])"
]
},
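The time-of-flight mask set above follows a convention worth spelling out: the mask function returns True for values to exclude. A minimal standalone sketch of the same logic, using numpy in place of scipp (hypothetical, for illustration only):

```python
import numpy as np

# A mask function returns True where data should be EXCLUDED.
# Here, times outside the valid 0-86 ms window (expressed in ns) are masked,
# mirroring the TofMask lambda in the cell above.
def tof_mask(t):
    return (t < 0.0) | (t > 86e6)

t = np.array([-1.0, 0.0, 5e6, 86e6, 9e7])  # ns
mask = tof_mask(t)
# → [True, False, False, False, True]: only the out-of-range times are masked.
```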
{
"cell_type": "markdown",
"id": "21fb4492-e836-41d3-a2d4-9678df43b9f9",
"metadata": {},
"source": [
"## Use the workflow\n",
"\n",
"We can visualize the graph for computing the final normalized result for intensity as a function of d-spacing:"
]
},
@@ -113,7 +91,7 @@
"metadata": {},
"outputs": [],
"source": [
"pipeline.visualize(IofDspacing, graph_attr={\"rankdir\": \"LR\"})"
"workflow.visualize(IofDspacing, graph_attr={\"rankdir\": \"LR\"})"
]
},
{
@@ -131,7 +109,7 @@
"metadata": {},
"outputs": [],
"source": [
"result = pipeline.compute(IofDspacing)\n",
"result = workflow.compute(IofDspacing)\n",
"result"
]
},
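The computed result is essentially a 1-D histogram of intensity versus d-spacing on the 201-edge grid configured earlier. A hypothetical standalone numpy sketch of that binning, with synthetic events in place of the real workflow computation:

```python
import numpy as np

rng = np.random.default_rng(1)
dspacing = rng.uniform(0.0, 2.3434, 10_000)  # angstrom, synthetic events

# Same linear edges as DspacingBins above: 201 edges -> 200 bins.
edges = np.linspace(0.0, 2.3434, 201)
counts, _ = np.histogram(dspacing, bins=edges)
# counts has one entry per d-spacing bin: shape (200,).
```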
@@ -186,7 +164,7 @@
"metadata": {},
"outputs": [],
"source": [
"intermediates = pipeline.compute(\n",
"intermediates = workflow.compute(\n",
" (\n",
" DataWithScatteringCoordinates[SampleRun],\n",
" MaskedData[SampleRun],\n",
@@ -215,8 +193,8 @@
"source": [
"## Grouping by scattering angle\n",
"\n",
"The above pipeline focuses the data by merging all instrument pixels to produce a 1d d-spacing curve.\n",
"If instead we want to group into $2\\theta$ bins, we can alter the pipeline parameters by adding some binning in $2\\theta$:"
"The above workflow focuses the data by merging all instrument pixels to produce a 1d d-spacing curve.\n",
"If instead we want to group into $2\\theta$ bins, we can alter the workflow parameters by adding some binning in $2\\theta$:"
]
},
{
@@ -226,7 +204,7 @@
"metadata": {},
"outputs": [],
"source": [
"pipeline[TwoThetaBins] = sc.linspace(\n",
"workflow[TwoThetaBins] = sc.linspace(\n",
" dim=\"two_theta\", unit=\"rad\", start=0.8, stop=2.4, num=17\n",
")"
]
@@ -238,7 +216,7 @@
"metadata": {},
"outputs": [],
"source": [
"grouped_dspacing = pipeline.compute(IofDspacingTwoTheta)\n",
"grouped_dspacing = workflow.compute(IofDspacingTwoTheta)\n",
"grouped_dspacing"
]
},
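Conceptually, the grouped result is a 2-D histogram over (two_theta, dspacing): the 17 two_theta edges set above define 16 angular bins, each holding its own d-spacing curve. A hypothetical standalone numpy sketch of such a grouping, with synthetic events in place of real detector data:

```python
import numpy as np

rng = np.random.default_rng(0)
two_theta = rng.uniform(0.8, 2.4, 1000)    # rad, synthetic scattering angles
dspacing = rng.uniform(0.0, 2.3434, 1000)  # angstrom, synthetic d-spacings

# Same edge grids as TwoThetaBins (17 edges) and DspacingBins (201 edges) above.
counts, tt_edges, d_edges = np.histogram2d(
    two_theta,
    dspacing,
    bins=[np.linspace(0.8, 2.4, 17), np.linspace(0.0, 2.3434, 201)],
)
# counts has shape (16, 200): one row of d-spacing bins per two_theta bin.
```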
@@ -286,8 +264,7 @@
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
"pygments_lexer": "ipython3"
}
},
"nbformat": 4,