diff --git a/PLUGINS.md b/PLUGINS.md new file mode 100644 index 0000000..42d5dbc --- /dev/null +++ b/PLUGINS.md @@ -0,0 +1,72 @@ +# py4DGUI Plugins + +Over time, we have substantially pared down the functionality available in the browser, removing things such as pre-processing, file conversion, and data analysis. This has allowed the browser code to become much cleaner and more focused on its core functionality of visualizing 4D-STEM data. In doing so, we have made the browser more robust and maintainable. +With the introduction of plugins in version 1.3.0, we hope to make the browser's capabilities easily extensible without complicating the core implementation. + +## Known Plugins +We hope to maintain a list of existing plugins here. If you produce a browser plugin, feel free to message `sezelt` or create a PR to be added to this list. + +### Pre-packaged plugins +Parts of what used to be "core" functionality are now implemented using the plugin interface to separate them from the core browser code. These are packaged with py4DGUI and always available: +* `Calibration`: Allows for the calibration of the scale bars using known physical distances. **Note:** This plugin is currently considered "badly behaved" because of the way it accesses the detector ROI objects directly. An abstract interface for this behavior will be created in the future, but for now this plugin should not be considered an "example" to follow. +* `tcBF`: Allows for the computation of tilt-corrected brightfield images. This also accesses detector ROIs directly and should be considered "badly behaved". + +### External plugins +* [EMPAD2 Raw File Reader](https://github.com/sezelt/empad2): This reader was previously part of the core browser code, where it added an extra menu when the external package was installed. It adds the ability to import the "concatenated" raw binary data from the TFS EMPAD-G2 detector. This plugin conforms to the guidelines. + +# Creating a Plugin + +The py4D_browser plugin mechanics are inspired by [Nion Swift](https://nionswift.readthedocs.io/en/stable/api/plugins.html), particularly how plugins are installed, discovered, and loaded. + +Plugins should create a module in the `py4d_browser_plugin` namespace and should define a class with the `plugin_id` attribute: + +```python +class ExamplePlugin: + + # required for py4DGUI to recognize this as a plugin. + plugin_id = "my.plugin.identifier" + + ######## optional flags ######## + display_name = "Example Plugin" + + # Plugins may add a top-level menu on their own, or can opt to have + # a submenu located under Plugins>[display_name], which is created before + # initialization and its QMenu object passed as `plugin_menu` + uses_plugin_menu = False + + # If the plugin only needs a single action button, the browser can opt + # to have that menu item created automatically under Plugins>[Display Name] + # and its QAction object passed as `plugin_action` + uses_single_action = False + + def __init__(self, parent, **kwargs): + self.parent = parent + + def close(self): + pass # perform any shutdown activities + +``` + +On loading, the class is initialized using +```python +ExamplePlugin(parent=self, [...]) +``` +where `self` is the `DataViewer` instance (the main window object). All arguments will always be passed as keywords, including any additional arguments that are provided as a result of setting the various optional flags.
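+For example, with the current loader (see `src/py4D_browser/plugins.py`) a plugin that sets `uses_single_action = True` is constructed roughly as follows; the variable names here are purely illustrative, and hooks that were not requested are passed as `None`: + +```python +# sketch of how load_plugins() invokes a discovered plugin class +action = QAction(ExamplePlugin.display_name)  # placed under Plugins>[Display Name] +plugin = ExamplePlugin(parent=self, plugin_menu=None, plugin_action=action) +``` + +Because unused hooks are still passed as keywords, accepting `**kwargs` in `__init__` keeps a plugin compatible with flags it does not set.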
Plugins are loaded as the last step after constructing the `DataViewer`, before its `show()` method is called. + +The current implementation of the plugin interface is thus extremely simple: the plugin object gets a reference to the main window, and can in theory do whatever arbitrarily stupid things it wants with it, and there are no guarantees on compatibility between different versions of the browser and plugins. Swift solves this using the API Broker, which interposes all actions taken by the plugin. While we may adopt such an interface in version 2.0, for now we simply have the following design guidelines that should ensure compatibility: + +* If the plugin adds menu items, it should only add items to its own menu (not to ones already existing in the GUI). The plugin is permitted to add a menu to the top bar on its own, or (preferably) can set the `uses_plugin_menu` attribute, which will initialize a menu under Plugins>[display_name] that gets passed to the initializer as `plugin_menu`. +* If the plugin adds a single menu item, it can have the browser create and insert that action item automatically by setting `uses_single_action`. The `QAction` object will be passed in as `plugin_action`. +* The plugin should *never* render an image to the views directly. To display images, plugins should always call `set_virtual_image` or `set_diffraction_image` using raw, unscaled data. If the plugin needs to produce a customized display, it cannot do that in the existing views and must create its own window. +* The plugin should not retain references to any objects in the `DataViewer`, as that may prevent objects from being freed at the right times. For example, do not do something like `self.current_datacube = self.parent.datacube`, since the browser cannot free that memory after a dataset is closed until the reference is released. +* The plugin is allowed to read/write the QSettings of the GUI, but should only do so in a top-level section with the same name as `plugin_id`, e.g. `value = self.parent.settings.value(self.plugin_id + "/my_setting", default_value)`. + +## Accessing the detectors + +With version 1.3.0, there is a new API for accessing the ROI selections made using the detectors on the two views. Plugins should only interact with the detectors via this API, as the implementation details of the ROI objects themselves are considered internal and subject to change. Calling `get_diffraction_detector` or `get_virtual_image_detector` yields a `DetectorInfo` object containing the properties of the current detector and the information (either a slice or a mask array) needed to produce the selection it represents. For example, a rectangular selection provides both a `slice` pair and an equivalent boolean `mask`, while a point selection provides a `point` coordinate. + +## Namespace packages + +Namespace packages are a way to split a package across multiple sources, which can be provided by different distributions. This allows the py4DGUI to import this special namespace and have all plugins, regardless of their source, appear under that import. Details can be found in [PEP 420](https://peps.python.org/pep-0420/). + +In order to create a plugin, create a directory called `py4d_browser_plugin` under your `src` directory, and then create a directory for your plugin within that folder. _Do not place an `__init__.py` file in the `py4d_browser_plugin` folder, or the import mechanism will be broken for all plugins._ \ No newline at end of file diff --git a/README.md b/README.md index c7132b6..b9792c9 100644 --- a/README.md +++ b/README.md @@ -21,12 +21,17 @@ Run `py4DGUI` in your terminal to open the GUI.
Then just drag and drop a 4D-STE * The information in the bottom bar contains the details of the virtual detector used to generate the images, and can be entered into py4DSTEM to generate the same image. * The FFT pane can be switched between displaying the FFT of the virtual image and displaying the [exit wave power cepstrum](https://doi.org/10.1016/j.ultramic.2020.112994). * Virtual images can be exported either as the scaled and clipped displays shown in the GUI or as raw data. The exact datatype stored in the raw TIFF image depends on both the datatype of the dataset and the type of virtual image being displayed (in particular, integer datatypes are converted internally to floating point to prevent overflows when generating any synthesized virtual images). -* If the [EMPAD-G2 Raw Reader](https://github.com/sezelt/empad2) is installed in the same environment, an extra menu will appear that allows the concatenated binary format data to be background subtracted and calibrated in the GUI. You can also save the calibrated data as an HDF5 file for later analysis. ![Demonstration](/images/demo.gif) The keyboard map in the Help menu was made using [this tool](https://archie-adams.github.io/keyboard-shortcut-map-maker/) and the map file is in the top level of this repo. +## Plugins + +As of version 1.3.0, we now support a simple means for loading plugins that extend the functionality of the browser. Details on creating a plugin can be found in [this document](PLUGINS.md). + +The [EMPAD-G2 Raw Reader](https://github.com/sezelt/empad2), which was previously implemented in the browser code itself, is now implemented as a plugin, which can serve as an example. + ## About ![py4DSTEM logo](/images/py4DSTEM_logo.png) diff --git a/pyproject.toml b/pyproject.toml index ef30fc1..7793480 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta" [project] name = "py4D_browser" -version = "1.2.1" +version = "1.3.0" authors = [ { name="Steven Zeltmann", email="steven.zeltmann@lbl.gov" }, ] @@ -34,9 +34,6 @@ py4DGUI = "py4D_browser.runGUI:launch" "Homepage" = "https://github.com/py4dstem/py4D-browser" "Bug Tracker" = "https://github.com/py4dstem/py4D-browser/issues" -[tool.pyright] -venv = "py4dstem" - [tool.setuptools] include-package-data = true diff --git a/src/py4D_browser/dialogs.py b/src/py4D_browser/dialogs.py index d86352e..399528f 100644 --- a/src/py4D_browser/dialogs.py +++ b/src/py4D_browser/dialogs.py @@ -116,350 +116,3 @@ def get_next_rect(self, current, direction): return i, self.N // i raise ValueError("Factor finding failed, frustratingly.") - - -class CalibrateDialog(QDialog): - def __init__(self, datacube, parent, diffraction_selector_size=None): - super().__init__(parent=parent) - - self.datacube = datacube - self.parent = parent - self.diffraction_selector_size = diffraction_selector_size - - layout = QVBoxLayout(self) - - ####### LAYOUT ######## - - realspace_box = QGroupBox("Real Space") - layout.addWidget(realspace_box) - realspace_layout = QHBoxLayout() - realspace_box.setLayout(realspace_layout) - - realspace_left_layout = QGridLayout() - realspace_layout.addLayout(realspace_left_layout) - - realspace_left_layout.addWidget(QLabel("Pixel Size"), 0, 0, Qt.AlignRight) - self.realspace_pix_box = QLineEdit() - self.realspace_pix_box.setValidator(QDoubleValidator()) - realspace_left_layout.addWidget(self.realspace_pix_box, 0, 1) - - realspace_left_layout.addWidget(QLabel("Full Width"), 1, 0, Qt.AlignRight) - self.realspace_fov_box = QLineEdit() 
- realspace_left_layout.addWidget(self.realspace_fov_box, 1, 1) - - realspace_right_layout = QHBoxLayout() - realspace_layout.addLayout(realspace_right_layout) - self.realspace_unit_box = QComboBox() - self.realspace_unit_box.addItems(["Å", "nm"]) - self.realspace_unit_box.setMinimumContentsLength(5) - realspace_right_layout.addWidget(self.realspace_unit_box) - - diff_box = QGroupBox("Diffraction") - layout.addWidget(diff_box) - diff_layout = QHBoxLayout() - diff_box.setLayout(diff_layout) - - diff_left_layout = QGridLayout() - diff_layout.addLayout(diff_left_layout) - - diff_left_layout.addWidget(QLabel("Pixel Size"), 0, 0, Qt.AlignRight) - self.diff_pix_box = QLineEdit() - diff_left_layout.addWidget(self.diff_pix_box, 0, 1) - - diff_left_layout.addWidget(QLabel("Full Width"), 1, 0, Qt.AlignRight) - self.diff_fov_box = QLineEdit() - diff_left_layout.addWidget(self.diff_fov_box, 1, 1) - - diff_left_layout.addWidget(QLabel("Selection Radius"), 2, 0, Qt.AlignRight) - self.diff_selection_box = QLineEdit() - diff_left_layout.addWidget(self.diff_selection_box, 2, 1) - self.diff_selection_box.setEnabled(self.diffraction_selector_size is not None) - - diff_right_layout = QHBoxLayout() - diff_layout.addLayout(diff_right_layout) - self.diff_unit_box = QComboBox() - self.diff_unit_box.setMinimumContentsLength(5) - self.diff_unit_box.addItems( - [ - "mrad", - "Å⁻¹", - # "nm⁻¹", - ] - ) - diff_right_layout.addWidget(self.diff_unit_box) - - button_layout = QHBoxLayout() - button_layout.addStretch() - cancel_button = QPushButton("Cancel") - cancel_button.pressed.connect(self.close) - button_layout.addWidget(cancel_button) - done_button = QPushButton("Done") - done_button.pressed.connect(self.set_and_close) - button_layout.addWidget(done_button) - layout.addLayout(button_layout) - - ######### CALLBACKS ######## - self.realspace_pix_box.textEdited.connect(self.realspace_pix_box_changed) - self.realspace_fov_box.textEdited.connect(self.realspace_fov_box_changed) - self.diff_pix_box.textEdited.connect(self.diffraction_pix_box_changed) - self.diff_fov_box.textEdited.connect(self.diffraction_fov_box_changed) - self.diff_selection_box.textEdited.connect( - self.diffraction_selection_box_changed - ) - - def realspace_pix_box_changed(self, new_text): - pix_size = float(new_text) - - fov = pix_size * self.datacube.R_Ny - self.realspace_fov_box.setText(f"{fov:g}") - - def realspace_fov_box_changed(self, new_text): - fov = float(new_text) - - pix_size = fov / self.datacube.R_Ny - self.realspace_pix_box.setText(f"{pix_size:g}") - - def diffraction_pix_box_changed(self, new_text): - pix_size = float(new_text) - - fov = pix_size * self.datacube.Q_Ny - self.diff_fov_box.setText(f"{fov:g}") - - if self.diffraction_selector_size: - sel_size = pix_size * self.diffraction_selector_size - self.diff_selection_box.setText(f"{sel_size:g}") - - def diffraction_fov_box_changed(self, new_text): - fov = float(new_text) - - pix_size = fov / self.datacube.Q_Ny - self.diff_pix_box.setText(f"{pix_size:g}") - - if self.diffraction_selector_size: - sel_size = pix_size * self.diffraction_selector_size - self.diff_selection_box.setText(f"{sel_size:g}") - - def diffraction_selection_box_changed(self, new_text): - if self.diffraction_selector_size: - sel_size = float(new_text) - - pix_size = sel_size / self.diffraction_selector_size - fov = pix_size * self.datacube.Q_Nx - self.diff_pix_box.setText(f"{pix_size:g}") - self.diff_fov_box.setText(f"{fov:g}") - - sel_size = pix_size * self.diffraction_selector_size - 
self.diff_selection_box.setText(f"{sel_size:g}") - - def set_and_close(self): - - print("Old calibration") - print(self.datacube.calibration) - - realspace_text = self.realspace_pix_box.text() - if realspace_text != "": - realspace_pix = float(realspace_text) - self.datacube.calibration.set_R_pixel_size(realspace_pix) - self.datacube.calibration.set_R_pixel_units( - self.realspace_unit_box.currentText().replace("Å", "A") - ) - - diff_text = self.diff_pix_box.text() - if diff_text != "": - diff_pix = float(diff_text) - self.datacube.calibration.set_Q_pixel_size(diff_pix) - translation = { - "mrad": "mrad", - "Å⁻¹": "A^-1", - "nm⁻¹": "1/nm", - } - self.datacube.calibration.set_Q_pixel_units( - translation[self.diff_unit_box.currentText()] - ) - - self.parent.update_scalebars() - - print("New calibration") - print(self.datacube.calibration) - - self.close() - - -class ManualTCBFDialog(QDialog): - def __init__(self, parent): - super().__init__(parent=parent) - - self.parent = parent - - layout = QVBoxLayout(self) - - ####### LAYOUT ######## - - params_box = QGroupBox("Parameters") - layout.addWidget(params_box) - - params_layout = QGridLayout() - params_box.setLayout(params_layout) - - params_layout.addWidget(QLabel("Rotation [deg]"), 0, 0, Qt.AlignRight) - self.rotation_box = QLineEdit() - self.rotation_box.setValidator(QDoubleValidator()) - params_layout.addWidget(self.rotation_box, 0, 1) - - params_layout.addWidget(QLabel("Transpose x/y"), 1, 0, Qt.AlignRight) - self.transpose_box = QCheckBox() - params_layout.addWidget(self.transpose_box, 1, 1) - - params_layout.addWidget(QLabel("Max Shift [px]"), 2, 0, Qt.AlignRight) - self.max_shift_box = QLineEdit() - self.max_shift_box.setValidator(QDoubleValidator()) - params_layout.addWidget(self.max_shift_box, 2, 1) - - params_layout.addWidget(QLabel("Pad Images"), 3, 0, Qt.AlignRight) - self.pad_checkbox = QCheckBox() - params_layout.addWidget(self.pad_checkbox, 3, 1) - - button_layout = QHBoxLayout() - button_layout.addStretch() - cancel_button = QPushButton("Cancel") - cancel_button.pressed.connect(self.close) - button_layout.addWidget(cancel_button) - done_button = QPushButton("Reconstruct") - done_button.pressed.connect(self.reconstruct) - button_layout.addWidget(done_button) - layout.addLayout(button_layout) - - def reconstruct(self): - datacube = self.parent.datacube - - # tcBF requires an area detector for generating the mask - detector_shape = ( - self.parent.detector_shape_group.checkedAction().text().replace("&", "") - ) - if detector_shape not in [ - "Rectangular", - "Circle", - ]: - self.parent.statusBar().showMessage( - "tcBF requires a selection of the BF disk" - ) - return - - if detector_shape == "Rectangular": - # Get slices corresponding to ROI - slices, _ = self.parent.virtual_detector_roi.getArraySlice( - self.parent.datacube.data[0, 0, :, :], - self.parent.diffraction_space_widget.getImageItem(), - ) - slice_y, slice_x = slices - - mask = np.zeros( - (self.parent.datacube.Q_Nx, self.parent.datacube.Q_Ny), dtype=np.bool_ - ) - mask[slice_x, slice_y] = True - - elif detector_shape == "Circle": - R = self.parent.virtual_detector_roi.size()[0] / 2.0 - - x0 = self.parent.virtual_detector_roi.pos()[0] + R - y0 = self.parent.virtual_detector_roi.pos()[1] + R - - mask = make_detector( - (self.parent.datacube.Q_Nx, self.parent.datacube.Q_Ny), - "circle", - ((x0, y0), R), - ) - else: - raise ValueError("idk how we got here...") - - if self.max_shift_box.text() == "": - self.parent.statusBar().showMessage("Max Shift must be specified") - 
return - - rotation = np.radians(float(self.rotation_box.text() or 0.0)) - transpose = self.transpose_box.checkState() - max_shift = float(self.max_shift_box.text()) - - x, y = np.meshgrid( - np.arange(datacube.Q_Nx), np.arange(datacube.Q_Ny), indexing="ij" - ) - - mask_comx = np.sum(mask * x) / np.sum(mask) - mask_comy = np.sum(mask * y) / np.sum(mask) - - pix_coord_x = x - mask_comx - pix_coord_y = y - mask_comy - - q_pix = np.hypot(pix_coord_x, pix_coord_y) - # unrotated shifts in scan pixels - shifts_pix_x = pix_coord_x / np.max(q_pix * mask) * max_shift - shifts_pix_y = pix_coord_y / np.max(q_pix * mask) * max_shift - - R = np.array( - [ - [np.cos(rotation), -np.sin(rotation)], - [np.sin(rotation), np.cos(rotation)], - ] - ) - T = np.array([[0.0, 1.0], [1.0, 0.0]]) - - if transpose: - R = T @ R - - shifts_pix = np.stack([shifts_pix_x, shifts_pix_y], axis=2) @ R - shifts_pix_x, shifts_pix_y = shifts_pix[..., 0], shifts_pix[..., 1] - - # generate image to accumulate reconstruction - pad = self.pad_checkbox.checkState() - pad_width = int( - np.maximum(np.abs(shifts_pix_x).max(), np.abs(shifts_pix_y).max()) - ) - - reconstruction = ( - np.zeros((datacube.R_Nx + 2 * pad_width, datacube.R_Ny + 2 * pad_width)) - if pad - else np.zeros((datacube.R_Nx, datacube.R_Ny)) - ) - - qx = np.fft.fftfreq(reconstruction.shape[0]) - qy = np.fft.fftfreq(reconstruction.shape[1]) - - qx_operator, qy_operator = np.meshgrid(qx, qy, indexing="ij") - qx_operator = qx_operator * -2.0j * np.pi - qy_operator = qy_operator * -2.0j * np.pi - - # loop over images and shift - img_indices = np.argwhere(mask) - for mx, my in tqdm( - img_indices, - desc="Shifting images", - file=StatusBarWriter(self.parent.statusBar()), - mininterval=1.0, - ): - if mask[mx, my]: - img_raw = datacube.data[:, :, mx, my] - - if pad: - img = np.zeros_like(reconstruction) + img_raw.mean() - img[ - pad_width : img_raw.shape[0] + pad_width, - pad_width : img_raw.shape[1] + pad_width, - ] = img_raw - else: - img = img_raw - - reconstruction += np.real( - np.fft.ifft2( - np.fft.fft2(img) - * np.exp( - qx_operator * shifts_pix_x[mx, my] - + qy_operator * shifts_pix_y[mx, my] - ) - ) - ) - - # crop away padding so the image lines up with the original - if pad: - reconstruction = reconstruction[pad_width:-pad_width, pad_width:-pad_width] - - self.parent.set_virtual_image(reconstruction, reset=True) diff --git a/src/py4D_browser/empad2_reader.py b/src/py4D_browser/empad2_reader.py deleted file mode 100644 index 8d8a9ec..0000000 --- a/src/py4D_browser/empad2_reader.py +++ /dev/null @@ -1,80 +0,0 @@ -import empad2 -from PyQt5.QtWidgets import QFileDialog, QMessageBox, QApplication -import numpy as np -from py4D_browser.utils import StatusBarWriter - - -def set_empad2_sensor(self, sensor_name): - self.empad2_calibrations = empad2.load_calibration_data(sensor=sensor_name) - self.statusBar().showMessage(f"{sensor_name} calibrations loaded", 5_000) - - -def load_empad2_background(self): - if self.empad2_calibrations is not None: - filename = raw_file_dialog(self) - self.empad2_background = empad2.load_background( - filepath=filename, calibration_data=self.empad2_calibrations - ) - self.statusBar().showMessage("Background data loaded", 5_000) - else: - QMessageBox.warning( - self, "No calibrations loaded!", "Please select a sensor first" - ) - - -def load_empad2_dataset(self): - if self.empad2_calibrations is not None: - dummy_data = False - if self.empad2_background is None: - continue_wo_bkg = QMessageBox.question( - self, - "Load without background?", - 
"Background data has not been loaded. Do you want to continue loading data?", - ) - if continue_wo_bkg == QMessageBox.No: - return - else: - self.empad2_background = { - "even": np.zeros((128, 128), dtype=np.float32), - "odd": np.zeros((128, 128), dtype=np.float32), - } - dummy_data = True - - filename = raw_file_dialog(self) - self.datacube = empad2.load_dataset( - filename, - self.empad2_background, - self.empad2_calibrations, - _tqdm_args={ - "desc": "Loading", - "file": StatusBarWriter(self.statusBar()), - "mininterval": 1.0, - }, - ) - - if dummy_data: - self.empad2_background = None - - self.update_diffraction_space_view(reset=True) - self.update_real_space_view(reset=True) - - self.setWindowTitle(filename) - - else: - QMessageBox.warning( - self, "No calibrations loaded!", "Please select a sensor first" - ) - - -def raw_file_dialog(browser): - filename = QFileDialog.getOpenFileName( - browser, - "Open EMPAD-G2 Data", - "", - "EMPAD-G2 Data (*.raw);;Any file(*)", - ) - if filename is not None and len(filename[0]) > 0: - return filename[0] - else: - print("File was invalid, or something?") - raise ValueError("Could not read file") diff --git a/src/py4D_browser/main_window.py b/src/py4D_browser/main_window.py index df13f24..d1b0ee2 100644 --- a/src/py4D_browser/main_window.py +++ b/src/py4D_browser/main_window.py @@ -21,7 +21,7 @@ from functools import partial from pathlib import Path import importlib -import os +import os, sys import platformdirs from py4D_browser.utils import pg_point_roi, VLine, LatchingButton @@ -48,16 +48,16 @@ class DataViewer(QMainWindow): export_datacube, export_virtual_image, show_keyboard_map, - show_calibration_dialog, reshape_data, + set_datacube, update_scalebars, - reconstruct_tcBF_auto, - reconstruct_tcBF_manual, ) from py4D_browser.update_views import ( set_virtual_image, set_diffraction_image, + get_diffraction_detector, + get_virtual_image_detector, _render_virtual_image, _render_diffraction_image, update_diffraction_space_view, @@ -73,13 +73,7 @@ class DataViewer(QMainWindow): update_tooltip, ) - HAS_EMPAD2 = importlib.util.find_spec("empad2") is not None - if HAS_EMPAD2: - from py4D_browser.empad2_reader import ( - set_empad2_sensor, - load_empad2_background, - load_empad2_dataset, - ) + from py4D_browser.plugins import load_plugins def __init__(self, argv): super().__init__() @@ -129,6 +123,9 @@ def __init__(self, argv): self.settings.value("last_state/window_size", QtCore.QSize(1000, 800)), ) + # (Potentially) load plugins + self.load_plugins() + self.show() # If a file was passed on the command line, open it @@ -204,32 +201,6 @@ def setup_menus(self): partial(self.export_virtual_image, method, "diffraction") ) - # EMPAD2 menu - if self.HAS_EMPAD2: - self.empad2_calibrations = None - self.empad2_background = None - - self.empad2_menu = QMenu("&EMPAD-G2", self) - self.menu_bar.addMenu(self.empad2_menu) - - sensor_menu = self.empad2_menu.addMenu("&Sensor") - calibration_action_group = QActionGroup(self) - calibration_action_group.setExclusive(True) - from empad2 import SENSORS - - for name, sensor in SENSORS.items(): - menu_item = sensor_menu.addAction(sensor["display-name"]) - calibration_action_group.addAction(menu_item) - menu_item.setCheckable(True) - menu_item.triggered.connect(partial(self.set_empad2_sensor, name)) - - self.empad2_menu.addAction("Load &Background...").triggered.connect( - self.load_empad2_background - ) - self.empad2_menu.addAction("Load &Dataset...").triggered.connect( - self.load_empad2_dataset - ) - # Scaling Menu 
self.scaling_menu = QMenu("&Scaling", self) self.menu_bar.addMenu(self.scaling_menu) @@ -531,23 +502,10 @@ def setup_menus(self): partial(self.update_diffraction_space_view, False) ) - # Processing menu - self.processing_menu = QMenu("&Processing", self) + # Plugins menu + self.processing_menu = QMenu("&Plugins", self) self.menu_bar.addMenu(self.processing_menu) - calibrate_action = QAction("&Calibrate...", self) - calibrate_action.triggered.connect(self.show_calibration_dialog) - self.processing_menu.addAction(calibrate_action) - - tcBF_action_manual = QAction("tcBF (Manual)...", self) - tcBF_action_manual.triggered.connect(self.reconstruct_tcBF_manual) - self.processing_menu.addAction(tcBF_action_manual) - - tcBF_action_auto = QAction("tcBF (Automatic)", self) - tcBF_action_auto.triggered.connect(self.reconstruct_tcBF_auto) - self.processing_menu.addAction(tcBF_action_auto) - # tcBF_action_auto.setEnabled(False) - # Help menu self.help_menu = QMenu("&Help", self) self.menu_bar.addMenu(self.help_menu) @@ -624,7 +582,12 @@ def setup_views(self): rightside.addWidget(self.real_space_widget) rightside.addWidget(self.fft_widget) rightside.setOrientation(QtCore.Qt.Vertical) - rightside.setStretchFactor(0, 2) + # set a sensible ratio for the sizes + full_height = ( + self.real_space_widget.size().height() + self.fft_widget.size().height() + ) + rightside.setSizes([int(full_height * 2 / 3), int(full_height / 3)]) + layout.addWidget(rightside, 1) widget = QWidget() diff --git a/src/py4D_browser/menu_actions.py b/src/py4D_browser/menu_actions.py index 74a6ce4..2f67234 100644 --- a/src/py4D_browser/menu_actions.py +++ b/src/py4D_browser/menu_actions.py @@ -6,8 +6,7 @@ import numpy as np import matplotlib.pyplot as plt from py4D_browser.help_menu import KeyboardMapMenu -from py4D_browser.dialogs import CalibrateDialog, ResizeDialog, ManualTCBFDialog -from py4D_browser.utils import make_detector +from py4D_browser.dialogs import ResizeDialog from py4DSTEM.io.filereaders import read_arina @@ -31,18 +30,9 @@ def load_data_arina(self): filename = self.show_file_dialog() dataset = read_arina(filename) - # Try to reshape the data to be square - N_patterns = dataset.data.shape[1] - Nxy = np.sqrt(N_patterns) - if np.abs(Nxy - np.round(Nxy)) <= 1e-10: - Nxy = int(Nxy) - dataset.data = dataset.data.reshape( - Nxy, Nxy, dataset.data.shape[2], dataset.data.shape[3] - ) - else: - self.statusBar().showMessage( - f"The scan appears to not be square! 
Found {N_patterns} patterns", 5_000 - ) + # Warn if the data is not square + if dataset.data.shape[1] == 1: + self.statusBar().showMessage(f"Arina data was loaded as 3D, please reshape...") self.datacube = dataset self.diffraction_scale_bar.pixel_size = self.datacube.calibration.get_Q_pixel_size() @@ -113,6 +103,17 @@ def load_file(self, filepath, mmap=False, binning=1): self.setWindowTitle(filepath) +def set_datacube(self, datacube, window_title): + self.datacube = datacube + + self.update_scalebars() + + self.update_diffraction_space_view(reset=True) + self.update_real_space_view(reset=True) + + self.setWindowTitle(window_title) + + def update_scalebars(self): realspace_translation = { @@ -239,85 +240,6 @@ def show_keyboard_map(self): keymap.open() -def reconstruct_tcBF_auto(self): - # tcBF requires an area detector for generating the mask - detector_shape = self.detector_shape_group.checkedAction().text().replace("&", "") - if detector_shape not in [ - "Rectangular", - "Circle", - ]: - self.statusBar().showMessage("tcBF requires a selection of the BF disk", 5_000) - return - - if ( - self.datacube.calibration.get_R_pixel_units == "pixels" - or self.datacube.calibration.get_Q_pixel_units == "pixels" - ): - self.statusBar().showMessage("tcBF requires caibrated data", 5_000) - return - - if detector_shape == "Rectangular": - # Get slices corresponding to ROI - slices, _ = self.virtual_detector_roi.getArraySlice( - self.datacube.data[0, 0, :, :], self.diffraction_space_widget.getImageItem() - ) - slice_y, slice_x = slices - - mask = np.zeros((self.datacube.Q_Nx, self.datacube.Q_Ny), dtype=np.bool_) - mask[slice_x, slice_y] = True - - elif detector_shape == "Circle": - R = self.virtual_detector_roi.size()[0] / 2.0 - - x0 = self.virtual_detector_roi.pos()[0] + R - y0 = self.virtual_detector_roi.pos()[1] + R - - mask = make_detector( - (self.datacube.Q_Nx, self.datacube.Q_Ny), "circle", ((x0, y0), R) - ) - else: - raise ValueError("idk how we got here...") - - # do tcBF! - self.statusBar().showMessage("Reconstructing... 
(This may take a while)") - self.app.processEvents() - - tcBF = py4DSTEM.process.phase.Parallax( - energy=300e3, - datacube=self.datacube, - ) - tcBF.preprocess( - dp_mask=mask, - plot_average_bf=False, - vectorized_com_calculation=False, - store_initial_arrays=False, - ) - tcBF.reconstruct( - plot_aligned_bf=False, - plot_convergence=False, - ) - - self.set_virtual_image(tcBF.recon_BF, reset=True) - - -def reconstruct_tcBF_manual(self): - dialog = ManualTCBFDialog(parent=self) - dialog.show() - - -def show_calibration_dialog(self): - # If the selector has a size, figure that out - if hasattr(self, "virtual_detector_roi") and self.virtual_detector_roi is not None: - selector_size = self.virtual_detector_roi.size()[0] / 2.0 - else: - selector_size = None - - dialog = CalibrateDialog( - self.datacube, parent=self, diffraction_selector_size=selector_size - ) - dialog.open() - - def show_file_dialog(self) -> str: filename = QFileDialog.getOpenFileName( self, diff --git a/src/py4D_browser/plugins.py b/src/py4D_browser/plugins.py new file mode 100644 index 0000000..5c41886 --- /dev/null +++ b/src/py4D_browser/plugins.py @@ -0,0 +1,109 @@ +import pkgutil +import importlib +import inspect +import traceback + +from PyQt5.QtWidgets import QMenu, QAction + +__all__ = ["load_plugins", "unload_plugins"] + + +def load_plugins(self): + """ + The py4D_browser plugin mechanics are inspired by Nion Swift: + https://nionswift.readthedocs.io/en/stable/api/plugins.html + + Plugins should create a module in the py4d_browser_plugin namespace + and should define a class with the `plugin_id` attribute + + On loading the class is initialized using + ExamplePlugin(parent=self) + with additional arguments potentially passed as kwargs + + + """ + + import py4d_browser_plugin + + self.loaded_plugins = [] # we need to hold on to these objects to keep them alive + + for module_info in pkgutil.iter_modules(getattr(py4d_browser_plugin, "__path__")): + + try: + module = importlib.import_module( + py4d_browser_plugin.__name__ + "." + module_info.name + ) + except Exception as e: + print( + f"Attempting to import plugin {module_info.name} raised exception:\n{e}" + ) + print(traceback.print_exc()) + continue + + for name, member in inspect.getmembers(module, inspect.isclass): + plugin_id = getattr(member, "plugin_id", None) + + if plugin_id: + print(f"Loading plugin: {plugin_id} \tfrom: {name}") + try: + plugin_menu = ( + QMenu(getattr(member, "display_name", "DEFAULT_NAME")) + if getattr(member, "uses_plugin_menu", False) + else None + ) + if plugin_menu: + self.processing_menu.addMenu(plugin_menu) + + plugin_action = ( + QAction(getattr(member, "display_name", "DEFAULT_NAME")) + if getattr(member, "uses_single_action", False) + else None + ) + if plugin_action: + self.processing_menu.addAction(plugin_action) + + self.loaded_plugins.append( + { + "plugin": member( + parent=self, + plugin_menu=plugin_menu, + plugin_action=plugin_action, + ), + "menu": plugin_menu, + "action": plugin_action, + } + ) + except Exception as exc: + print(f"Failed to load plugin.\n{exc}") + print(traceback.print_exc()) + + +def unload_plugins(self): + # NOTE: This is currently not actually called! + for plugin in self.loaded_plugins: + plugin["plugin"].close() + + +class ExamplePlugin: + + # required for py4DGUI to recognize this as a plugin. 
+ plugin_id = "my.plugin.identifier" + + ######## optional flags ######## + display_name = "Example Plugin" + + # Plugins may add a top-level menu on their own, or can opt to have + # a submenu located under Plugins>[display_name], which is created before + # initialization and its QMenu object passed as `plugin_menu` + uses_plugin_menu = False + + # If the plugin only needs a single action button, the browser can opt + # to have that menu item created automatically under Plugins>[Display Name] + # and its QAction object passed as `plugin_action` + uses_single_action = False + + def __init__(self, parent, **kwargs): + self.parent = parent + + def close(self): + pass # perform any shutdown activities diff --git a/src/py4D_browser/update_views.py b/src/py4D_browser/update_views.py index 0dc256a..93f1b44 100644 --- a/src/py4D_browser/update_views.py +++ b/src/py4D_browser/update_views.py @@ -7,124 +7,244 @@ from PyQt5.QtGui import QCursor import os + from py4D_browser.utils import ( pg_point_roi, make_detector, complex_to_Lab, StatusBarWriter, + DetectorShape, + DetectorMode, + DetectorInfo, + RectangleGeometry, + CircleGeometry, + AnnulusGeometry, + PointGeometry, ) +def get_diffraction_detector(self) -> DetectorInfo: + """ + Get the current detector and its position on the diffraction view. + Returns a DetectorInfo dictionary, which contains the shape and + response mode of the detector and information on the selection + it represents. The selection is described using one (or more) of + the `slice`, `mask`, and `point` entries, depending on the detector + type. The selections are expressed in data coordinates. + """ + shape = DetectorShape(self.detector_shape_group.checkedAction().text()) + mode = DetectorMode(self.detector_mode_group.checkedAction().text()) + + match shape: + case DetectorShape.POINT: + roi_state = self.virtual_detector_point.saveState() + y0, x0 = roi_state["pos"] + xc, yc = int(x0 + 1), int(y0 + 1) + + # Normalize coordinates + xc = np.clip(xc, 0, self.datacube.Q_Nx - 1) + yc = np.clip(yc, 0, self.datacube.Q_Ny - 1) + + return DetectorInfo( + shape=shape, + mode=mode, + point=[xc, yc], + geometry=PointGeometry(x=xc, y=yc), + ) + + case DetectorShape.RECTANGULAR: + slices, _ = self.virtual_detector_roi.getArraySlice( + self.datacube.data[0, 0, :, :].T, + self.diffraction_space_widget.getImageItem(), + ) + slice_y, slice_x = slices + + mask = np.zeros(self.datacube.Qshape, dtype=np.bool_) + mask[slice_x, slice_y] = True + + return DetectorInfo( + shape=shape, + mode=mode, + slice=[slice_x, slice_y], + mask=mask, + geometry=RectangleGeometry( + xmin=slice_x.start, + xmax=slice_x.stop, + ymin=slice_y.start, + ymax=slice_y.stop, + ), + ) + case DetectorShape.CIRCLE: + R = self.virtual_detector_roi.size()[0] / 2.0 + + x0 = self.virtual_detector_roi.pos()[1] + R + y0 = self.virtual_detector_roi.pos()[0] + R + + mask = make_detector( + (self.datacube.Q_Nx, self.datacube.Q_Ny), "circle", ((x0, y0), R) + ) + + return DetectorInfo( + shape=shape, + mode=mode, + mask=mask, + geometry=CircleGeometry(x=x0, y=y0, R=R), + ) + + case DetectorShape.ANNULUS: + inner_pos = self.virtual_detector_roi_inner.pos() + inner_size = self.virtual_detector_roi_inner.size() + R_inner = inner_size[0] / 2.0 + x0 = inner_pos[1] + R_inner + y0 = inner_pos[0] + R_inner + + outer_size = self.virtual_detector_roi_outer.size() + R_outer = outer_size[0] / 2.0 + + if R_inner <= R_outer: + R_inner -= 1 + + mask = make_detector( + (self.datacube.Q_Nx, self.datacube.Q_Ny), + "annulus", + ((x0, y0), (R_inner, 
R_outer)), + ) + + return DetectorInfo( + shape=shape, + mode=mode, + mask=mask, + geometry=AnnulusGeometry(x=x0, y=y0, R_inner=R_inner, R_outer=R_outer), + ) + + case _: + raise ValueError("Detector could not be determined") + + +def get_virtual_image_detector(self) -> DetectorInfo: + """ + Get the current detector and its position on the diffraction view. + Returns a DetectorInfo dictionary, which contains the shape and + response mode of the detector and information on the selection + it represents. The selection is described using one (or more) of + the `slice`, `mask`, and `point` entries, depending on the detector + type. The selections are expressed in data coordinates. + """ + shape = DetectorShape(self.rs_detector_shape_group.checkedAction().text()) + mode = DetectorMode(self.realspace_detector_mode_group.checkedAction().text()) + + match shape: + case DetectorShape.POINT: + roi_state = self.real_space_point_selector.saveState() + y0, x0 = roi_state["pos"] + xc, yc = int(x0 + 1), int(y0 + 1) + + # Normalize coordinates + xc = np.clip(xc, 0, self.datacube.R_Nx - 1) + yc = np.clip(yc, 0, self.datacube.R_Ny - 1) + + return DetectorInfo( + shape=shape, + mode=mode, + point=[xc, yc], + geometry=PointGeometry(x=xc, y=yc), + ) + + case DetectorShape.RECTANGULAR: + slices, _ = self.real_space_rect_selector.getArraySlice( + np.zeros((self.datacube.Rshape)).T, + self.real_space_widget.getImageItem(), + ) + slice_y, slice_x = slices + + mask = np.zeros(self.datacube.Rshape, dtype=np.bool_) + mask[slice_x, slice_y] = True + + return DetectorInfo( + shape=shape, + mode=mode, + slice=[slice_x, slice_y], + mask=mask, + geometry=RectangleGeometry( + xmin=slice_x.start, + xmax=slice_x.stop, + ymin=slice_y.start, + ymax=slice_y.stop, + ), + ) + + case _: + raise ValueError("Detector could not be determined") + + def update_real_space_view(self, reset=False): - detector_shape = self.detector_shape_group.checkedAction().text().replace("&", "") - assert detector_shape in [ - "Point", - "Rectangular", - "Circle", - "Annulus", - ], detector_shape - - detector_mode = self.detector_mode_group.checkedAction().text().replace("&", "") - assert detector_mode in [ - "Integrating", - "Maximum", - "CoM", - "CoM X", - "CoM Y", - "iCoM", - ], detector_mode + if self.datacube is None: + return + + detector = self.get_diffraction_detector() # If a CoM method is checked, ensure linear scaling scaling_mode = self.vimg_scaling_group.checkedAction().text().replace("&", "") - if detector_mode == "CoM" and scaling_mode != "Linear": + if ( + detector["mode"] in (DetectorMode.CoM, DetectorMode.CoMx, DetectorMode.CoMy) + and scaling_mode != "Linear" + ): self.statusBar().showMessage("Warning! Setting linear scaling for CoM image") self.vimg_scale_linear_action.setChecked(True) scaling_mode = "Linear" - if self.datacube is None: - return - # We will branch through certain combinations of detector shape and mode. # If we happen across a special case that can be handled directly, we - # compute vimg. 
If we encounter a case that needs a more complicated - # computation we compute the mask and then do the virtual image later - mask = None - if detector_shape == "Rectangular": - # Get slices corresponding to ROI - slices, transforms = self.virtual_detector_roi.getArraySlice( - self.datacube.data[0, 0, :, :].T, - self.diffraction_space_widget.getImageItem(), - ) - slice_y, slice_x = slices - - # update the label: - self.diffraction_space_view_text.setText( - f"Diffraction Slice: [{slice_x.start}:{slice_x.stop},{slice_y.start}:{slice_y.stop}]" - ) - - if detector_mode == "Integrating": - vimg = np.sum(self.datacube.data[:, :, slice_x, slice_y], axis=(2, 3)) - elif detector_mode == "Maximum": - vimg = np.max(self.datacube.data[:, :, slice_x, slice_y], axis=(2, 3)) - else: - mask = np.zeros((self.datacube.Q_Nx, self.datacube.Q_Ny), dtype=np.bool_) - mask[slice_x, slice_y] = True - - elif detector_shape == "Circle": - R = self.virtual_detector_roi.size()[0] / 2.0 - - x0 = self.virtual_detector_roi.pos()[1] + R - y0 = self.virtual_detector_roi.pos()[0] + R - - self.diffraction_space_view_text.setText( - f"Diffraction Circle: Center ({x0:.0f},{y0:.0f}), Radius {R:.0f}" - ) + # compute vimg. If we don't encounter a special case, the image is calculated + # in the next block using the mask + vimg = None + match detector["shape"]: + case DetectorShape.RECTANGULAR: + # Get slices corresponding to ROI + slice_x, slice_y = detector["slice"] + + # update the label: + self.diffraction_space_view_text.setText( + f"Diffraction Slice: [{slice_x.start}:{slice_x.stop},{slice_y.start}:{slice_y.stop}]" + ) - mask = make_detector( - (self.datacube.Q_Nx, self.datacube.Q_Ny), "circle", ((x0, y0), R) - ) - elif detector_shape == "Annulus": - inner_pos = self.virtual_detector_roi_inner.pos() - inner_size = self.virtual_detector_roi_inner.size() - R_inner = inner_size[0] / 2.0 - x0 = inner_pos[1] + R_inner - y0 = inner_pos[0] + R_inner + if detector["mode"] is DetectorMode.INTEGRATING: + vimg = np.sum(self.datacube.data[:, :, slice_x, slice_y], axis=(2, 3)) + elif detector["mode"] is DetectorMode.MAXIMUM: + vimg = np.max(self.datacube.data[:, :, slice_x, slice_y], axis=(2, 3)) - outer_size = self.virtual_detector_roi_outer.size() - R_outer = outer_size[0] / 2.0 + case DetectorShape.CIRCLE: + # This has no direct methods, so vimg will be made with mask + circle_geometry: CircleGeometry = detector["geometry"] + self.diffraction_space_view_text.setText( + f"Diffraction Circle: Center ({circle_geometry['x']:.0f},{circle_geometry['y']:.0f}), Radius {circle_geometry['R']:.0f}" + ) - if R_inner <= R_outer: - R_inner -= 1 + case DetectorShape.ANNULUS: + # No direct computation, so vimg gets made with mask + annulus_geometry: AnnulusGeometry = detector["geometry"] - self.diffraction_space_view_text.setText( - f"Diffraction Annulus: Center ({x0:.0f},{y0:.0f}), Radii ({R_inner:.0f},{R_outer:.0f})" - ) + self.diffraction_space_view_text.setText( + f"Diffraction Annulus: Center ({annulus_geometry['x']:.0f},{annulus_geometry['y']:.0f}), Radii ({annulus_geometry['R_inner']:.0f},{annulus_geometry['R_outer']:.0f})" + ) - mask = make_detector( - (self.datacube.Q_Nx, self.datacube.Q_Ny), - "annulus", - ((x0, y0), (R_inner, R_outer)), - ) - elif detector_shape == "Point": - roi_state = self.virtual_detector_point.saveState() - y0, x0 = roi_state["pos"] - xc, yc = int(x0 + 1), int(y0 + 1) + case DetectorShape.POINT: + xc, yc = detector["point"] + vimg = self.datacube.data[:, :, xc, yc] - # Set the diffraction space image - # Normalize 
coordinates - xc = np.clip(xc, 0, self.datacube.Q_Nx - 1) - yc = np.clip(yc, 0, self.datacube.Q_Ny - 1) - vimg = self.datacube.data[:, :, xc, yc] + self.diffraction_space_view_text.setText(f"Diffraction: Point [{xc},{yc}]") - self.diffraction_space_view_text.setText(f"Diffraction: Point [{xc},{yc}]") + case _: + raise ValueError("Detector shape not recognized") - else: - raise ValueError("Detector shape not recognized") + if vimg is None: + mask = detector["mask"] - if mask is not None: + # Debug mode for displaying the mask if "MASK_DEBUG" in os.environ: self.set_diffraction_image(mask.astype(np.float32), reset=reset) return + mask = mask.astype(np.float32) vimg = np.zeros((self.datacube.R_Nx, self.datacube.R_Ny)) iterator = py4DSTEM.tqdmnd( @@ -134,15 +254,20 @@ def update_real_space_view(self, reset=False): mininterval=0.1, ) - if detector_mode == "Integrating": + if detector["mode"] is DetectorMode.INTEGRATING: for rx, ry in iterator: vimg[rx, ry] = np.sum(self.datacube.data[rx, ry] * mask) - elif detector_mode == "Maximum": + elif detector["mode"] is DetectorMode.MAXIMUM: for rx, ry in iterator: vimg[rx, ry] = np.max(self.datacube.data[rx, ry] * mask) - elif "CoM" in detector_mode: + elif detector["mode"] in ( + DetectorMode.CoM, + DetectorMode.CoMx, + DetectorMode.CoMy, + DetectorMode.ICOM, + ): ry_coord, rx_coord = np.meshgrid( np.arange(self.datacube.Q_Ny), np.arange(self.datacube.Q_Nx) ) @@ -157,13 +282,13 @@ def update_real_space_view(self, reset=False): CoMx -= np.mean(CoMx) CoMy -= np.mean(CoMy) - if detector_mode == "CoM": + if detector["mode"] is DetectorMode.CoM: vimg = CoMx + 1.0j * CoMy - elif detector_mode == "CoM X": + elif detector["mode"] is DetectorMode.CoMx: vimg = CoMx - elif detector_mode == "CoM Y": + elif detector["mode"] is DetectorMode.CoMy: vimg = CoMy - elif detector_mode == "iCoM": + elif detector["mode"] is DetectorMode.ICOM: dpc = py4DSTEM.process.phase.DPC(verbose=False) dpc.preprocess( force_com_measured=[CoMx, CoMy], @@ -291,53 +416,33 @@ def update_diffraction_space_view(self, reset=False): if self.datacube is None: return - detector_shape = ( - self.rs_detector_shape_group.checkedAction().text().replace("&", "") - ) - assert detector_shape in [ - "Point", - "Rectangular", - ], detector_shape + detector = self.get_virtual_image_detector() - detector_response = ( - self.realspace_detector_mode_group.checkedAction().text().replace("&", "") - ) - assert detector_response in ["Integrating", "Maximum"], detector_response - - if detector_shape == "Point": - roi_state = self.real_space_point_selector.saveState() - y0, x0 = roi_state["pos"] - xc, yc = int(x0 + 1), int(y0 + 1) + match detector["shape"]: + case DetectorShape.POINT: + xc, yc = detector["point"] - # Set the diffraction space image - # Normalize coordinates - xc = np.clip(xc, 0, self.datacube.R_Nx - 1) - yc = np.clip(yc, 0, self.datacube.R_Ny - 1) + self.real_space_view_text.setText(f"Virtual Image: Point [{xc},{yc}]") - self.real_space_view_text.setText(f"Virtual Image: Point [{xc},{yc}]") + DP = self.datacube.data[xc, yc] - DP = self.datacube.data[xc, yc] - elif detector_shape == "Rectangular": - # Get slices corresponding to ROI - slices, _ = self.real_space_rect_selector.getArraySlice( - np.zeros((self.datacube.Rshape)).T, self.real_space_widget.getImageItem() - ) - slice_y, slice_x = slices + case DetectorShape.RECTANGULAR: + slice_x, slice_y = detector["slice"] - # update the label: - self.real_space_view_text.setText( - f"Virtual Image: Slice 
[{slice_x.start}:{slice_x.stop},{slice_y.start}:{slice_y.stop}]" - ) + self.real_space_view_text.setText( + f"Virtual Image: Slice [{slice_x.start}:{slice_x.stop},{slice_y.start}:{slice_y.stop}]" + ) - if detector_response == "Integrating": - DP = np.sum(self.datacube.data[slice_x, slice_y], axis=(0, 1)) - elif detector_response == "Maximum": - DP = np.max(self.datacube.data[slice_x, slice_y], axis=(0, 1)) - else: - raise ValueError("Detector response problem") + match detector["mode"]: + case DetectorMode.INTEGRATING: + DP = np.sum(self.datacube.data[slice_x, slice_y], axis=(0, 1)) + case DetectorMode.MAXIMUM: + DP = np.max(self.datacube.data[slice_x, slice_y], axis=(0, 1)) + case _: + raise ValueError("Unsupported detector response") - else: - raise ValueError("Detector shape not recognized") + case _: + raise ValueError("Unsupported detector shape...") self.set_diffraction_image(DP, reset=reset) diff --git a/src/py4D_browser/utils.py b/src/py4D_browser/utils.py index b27ad7f..0fabc26 100644 --- a/src/py4D_browser/utils.py +++ b/src/py4D_browser/utils.py @@ -5,6 +5,97 @@ from PyQt5.QtCore import Qt, QObject from PyQt5.QtWidgets import QDialog, QHBoxLayout, QVBoxLayout, QSpinBox +from typing import NotRequired, TypedDict +from enum import Enum + + +class DetectorShape(Enum): + RECTANGULAR = "rectangular" + POINT = "point" + CIRCLE = "circle" + ANNULUS = "annulus" + + @classmethod + def _missing_(cls, value): + if isinstance(value, str): + value = value.replace("&", "").lower() + for member in cls: + if member.value == value: + return member + return None + + +class DetectorMode(Enum): + INTEGRATING = "integrating" + MAXIMUM = "maximum" + CoM = "com" + CoMx = "comx" + CoMy = "comy" + ICOM = "icom" + + # Strip GUI-related cruft from strings to map to internal representations + @classmethod + def _missing_(cls, value): + if isinstance(value, str): + value = value.replace("&", "").replace(" ", "").lower() + for member in cls: + if member.value == value: + return member + return None + + +RectangleGeometry = TypedDict( + "RectangleGeometry", + { + "xmin": float, + "xmax": float, + "ymin": float, + "ymax": float, + }, +) +CircleGeometry = TypedDict( + "CircleGeometry", + { + "x": float, + "y": float, + "R": float, + }, +) +AnnulusGeometry = TypedDict( + "AnnulusGeometry", + { + "x": float, + "y": float, + "R_inner": float, + "R_outer": float, + }, +) +PointGeometry = TypedDict( + "PointGeometry", + { + "x": float, + "y": float, + }, +) + +DetectorInfo = TypedDict( + "DetectorInfo", + { + "shape": DetectorShape, + "mode": DetectorMode, + # Geometry is intended for display purposes only + "geometry": RectangleGeometry + | CircleGeometry + | AnnulusGeometry + | PointGeometry, + # The are provided based on the detector shape, and should + # be used for any image computation: + "slice": NotRequired[list[slice]], + "mask": NotRequired[np.ndarray], + "point": NotRequired[list[int]], + }, +) + class StatusBarWriter: def __init__(self, statusBar): diff --git a/src/py4d_browser_plugin/README b/src/py4d_browser_plugin/README new file mode 100644 index 0000000..a129472 --- /dev/null +++ b/src/py4d_browser_plugin/README @@ -0,0 +1 @@ +Placeholder to define the py4d_browser_plugin namespace \ No newline at end of file diff --git a/src/py4d_browser_plugin/calibration_plugin/__init__.py b/src/py4d_browser_plugin/calibration_plugin/__init__.py new file mode 100644 index 0000000..d73bfad --- /dev/null +++ b/src/py4d_browser_plugin/calibration_plugin/__init__.py @@ -0,0 +1 @@ +from .calibration_plugin import 
CalibrationPlugin diff --git a/src/py4d_browser_plugin/calibration_plugin/calibration_plugin.py b/src/py4d_browser_plugin/calibration_plugin/calibration_plugin.py new file mode 100644 index 0000000..9471489 --- /dev/null +++ b/src/py4d_browser_plugin/calibration_plugin/calibration_plugin.py @@ -0,0 +1,233 @@ +from py4DSTEM import DataCube, data +import pyqtgraph as pg +import numpy as np +from tqdm import tqdm +from PyQt5.QtWidgets import QFrame, QPushButton, QApplication, QLabel +from PyQt5.QtCore import pyqtSignal +from PyQt5.QtCore import Qt, QObject +from PyQt5.QtGui import QDoubleValidator +from PyQt5.QtWidgets import ( + QDialog, + QHBoxLayout, + QVBoxLayout, + QSpinBox, + QLineEdit, + QComboBox, + QGroupBox, + QGridLayout, + QCheckBox, + QWidget, +) +from py4D_browser.utils import ( + DetectorShape, + DetectorInfo, + RectangleGeometry, + CircleGeometry, +) + + +class CalibrationPlugin(QWidget): + + # required for py4DGUI to recognize this as a plugin. + plugin_id = "py4DGUI.internal.calibration" + + uses_single_action = True + display_name = "Calibrate..." + + def __init__(self, parent, plugin_action, **kwargs): + super().__init__() + + self.parent = parent + + plugin_action.triggered.connect(self.launch_dialog) + + def close(self): + pass + + def launch_dialog(self): + parent = self.parent + # If the selector has a size, figure that out + detector_info: DetectorInfo = parent.get_diffraction_detector() + + match detector_info["shape"]: + case DetectorShape.CIRCLE: + circle_geometry: CircleGeometry = detector_info["geometry"] + selector_size = circle_geometry["R"] + case _: + selector_size = None + parent.statusBar().showMessage( + "Use a Circle selection to calibrate based on a known spacing...", + 5_000, + ) + + dialog = CalibrateDialog( + parent.datacube, parent=parent, diffraction_selector_size=selector_size + ) + dialog.open() + + +class CalibrateDialog(QDialog): + def __init__(self, datacube, parent, diffraction_selector_size=None): + super().__init__(parent=parent) + + self.datacube = datacube + self.parent = parent + self.diffraction_selector_size = diffraction_selector_size + + layout = QVBoxLayout(self) + + ####### LAYOUT ######## + + realspace_box = QGroupBox("Real Space") + layout.addWidget(realspace_box) + realspace_layout = QHBoxLayout() + realspace_box.setLayout(realspace_layout) + + realspace_left_layout = QGridLayout() + realspace_layout.addLayout(realspace_left_layout) + + realspace_left_layout.addWidget(QLabel("Pixel Size"), 0, 0, Qt.AlignRight) + self.realspace_pix_box = QLineEdit() + self.realspace_pix_box.setValidator(QDoubleValidator()) + realspace_left_layout.addWidget(self.realspace_pix_box, 0, 1) + + realspace_left_layout.addWidget(QLabel("Full Width"), 1, 0, Qt.AlignRight) + self.realspace_fov_box = QLineEdit() + realspace_left_layout.addWidget(self.realspace_fov_box, 1, 1) + + realspace_right_layout = QHBoxLayout() + realspace_layout.addLayout(realspace_right_layout) + self.realspace_unit_box = QComboBox() + self.realspace_unit_box.addItems(["Å", "nm"]) + self.realspace_unit_box.setMinimumContentsLength(5) + realspace_right_layout.addWidget(self.realspace_unit_box) + + diff_box = QGroupBox("Diffraction") + layout.addWidget(diff_box) + diff_layout = QHBoxLayout() + diff_box.setLayout(diff_layout) + + diff_left_layout = QGridLayout() + diff_layout.addLayout(diff_left_layout) + + diff_left_layout.addWidget(QLabel("Pixel Size"), 0, 0, Qt.AlignRight) + self.diff_pix_box = QLineEdit() + diff_left_layout.addWidget(self.diff_pix_box, 0, 1) + + 
diff_left_layout.addWidget(QLabel("Full Width"), 1, 0, Qt.AlignRight) + self.diff_fov_box = QLineEdit() + diff_left_layout.addWidget(self.diff_fov_box, 1, 1) + + diff_left_layout.addWidget(QLabel("Selection Radius"), 2, 0, Qt.AlignRight) + self.diff_selection_box = QLineEdit() + diff_left_layout.addWidget(self.diff_selection_box, 2, 1) + self.diff_selection_box.setEnabled(self.diffraction_selector_size is not None) + + diff_right_layout = QHBoxLayout() + diff_layout.addLayout(diff_right_layout) + self.diff_unit_box = QComboBox() + self.diff_unit_box.setMinimumContentsLength(5) + self.diff_unit_box.addItems( + [ + "mrad", + "Å⁻¹", + # "nm⁻¹", + ] + ) + diff_right_layout.addWidget(self.diff_unit_box) + + button_layout = QHBoxLayout() + button_layout.addStretch() + cancel_button = QPushButton("Cancel") + cancel_button.pressed.connect(self.close) + button_layout.addWidget(cancel_button) + done_button = QPushButton("Done") + done_button.pressed.connect(self.set_and_close) + button_layout.addWidget(done_button) + layout.addLayout(button_layout) + + ######### CALLBACKS ######## + self.realspace_pix_box.textEdited.connect(self.realspace_pix_box_changed) + self.realspace_fov_box.textEdited.connect(self.realspace_fov_box_changed) + self.diff_pix_box.textEdited.connect(self.diffraction_pix_box_changed) + self.diff_fov_box.textEdited.connect(self.diffraction_fov_box_changed) + self.diff_selection_box.textEdited.connect( + self.diffraction_selection_box_changed + ) + + def realspace_pix_box_changed(self, new_text): + pix_size = float(new_text) + + fov = pix_size * self.datacube.R_Ny + self.realspace_fov_box.setText(f"{fov:g}") + + def realspace_fov_box_changed(self, new_text): + fov = float(new_text) + + pix_size = fov / self.datacube.R_Ny + self.realspace_pix_box.setText(f"{pix_size:g}") + + def diffraction_pix_box_changed(self, new_text): + pix_size = float(new_text) + + fov = pix_size * self.datacube.Q_Ny + self.diff_fov_box.setText(f"{fov:g}") + + if self.diffraction_selector_size: + sel_size = pix_size * self.diffraction_selector_size + self.diff_selection_box.setText(f"{sel_size:g}") + + def diffraction_fov_box_changed(self, new_text): + fov = float(new_text) + + pix_size = fov / self.datacube.Q_Ny + self.diff_pix_box.setText(f"{pix_size:g}") + + if self.diffraction_selector_size: + sel_size = pix_size * self.diffraction_selector_size + self.diff_selection_box.setText(f"{sel_size:g}") + + def diffraction_selection_box_changed(self, new_text): + if self.diffraction_selector_size: + sel_size = float(new_text) + + pix_size = sel_size / self.diffraction_selector_size + fov = pix_size * self.datacube.Q_Nx + self.diff_pix_box.setText(f"{pix_size:g}") + self.diff_fov_box.setText(f"{fov:g}") + + sel_size = pix_size * self.diffraction_selector_size + self.diff_selection_box.setText(f"{sel_size:g}") + + def set_and_close(self): + + print("Old calibration") + print(self.datacube.calibration) + + realspace_text = self.realspace_pix_box.text() + if realspace_text != "": + realspace_pix = float(realspace_text) + self.datacube.calibration.set_R_pixel_size(realspace_pix) + self.datacube.calibration.set_R_pixel_units( + self.realspace_unit_box.currentText().replace("Å", "A") + ) + + diff_text = self.diff_pix_box.text() + if diff_text != "": + diff_pix = float(diff_text) + self.datacube.calibration.set_Q_pixel_size(diff_pix) + translation = { + "mrad": "mrad", + "Å⁻¹": "A^-1", + "nm⁻¹": "1/nm", + } + self.datacube.calibration.set_Q_pixel_units( + translation[self.diff_unit_box.currentText()] + ) + + 
self.parent.update_scalebars() + + print("New calibration") + print(self.datacube.calibration) + + self.close() diff --git a/src/py4d_browser_plugin/placeholder/__init__.py b/src/py4d_browser_plugin/placeholder/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/src/py4d_browser_plugin/tcBF_plugin/__init__.py b/src/py4d_browser_plugin/tcBF_plugin/__init__.py new file mode 100644 index 0000000..ddedc8f --- /dev/null +++ b/src/py4d_browser_plugin/tcBF_plugin/__init__.py @@ -0,0 +1 @@ +from .tcBF_plugin import tcBFPlugin diff --git a/src/py4d_browser_plugin/tcBF_plugin/tcBF_plugin.py b/src/py4d_browser_plugin/tcBF_plugin/tcBF_plugin.py new file mode 100644 index 0000000..9a1c876 --- /dev/null +++ b/src/py4d_browser_plugin/tcBF_plugin/tcBF_plugin.py @@ -0,0 +1,238 @@ +import numpy as np +from tqdm import tqdm +from PyQt5.QtWidgets import QPushButton, QLabel +from PyQt5.QtCore import Qt +from PyQt5.QtGui import QDoubleValidator +from PyQt5.QtWidgets import ( + QDialog, + QHBoxLayout, + QVBoxLayout, + QLineEdit, + QGroupBox, + QGridLayout, + QCheckBox, + QWidget, +) +from py4D_browser.utils import ( + DetectorShape, + DetectorInfo, + RectangleGeometry, + CircleGeometry, + StatusBarWriter, +) +import py4DSTEM + + +class tcBFPlugin(QWidget): + + # required for py4DGUI to recognize this as a plugin. + plugin_id = "py4DGUI.internal.tcBF" + + uses_plugin_menu = True + display_name = "Tilt-Corrected BF" + + def __init__(self, parent, plugin_menu, **kwargs): + super().__init__() + + self.parent = parent + + manual_action = plugin_menu.addAction("Manual tcBF...") + manual_action.triggered.connect(self.launch_manual) + + auto_action = plugin_menu.addAction("Automatic tcBF") + auto_action.triggered.connect(self.launch_auto) + + def close(self): + pass # perform any shutdown activities + + def launch_manual(self): + dialog = ManualTCBFDialog(parent=self.parent) + dialog.show() + + def launch_auto(self): + parent = self.parent + + detector: DetectorInfo = self.parent.get_diffraction_detector() + + if detector["shape"] is DetectorShape.POINT: + parent.statusBar().showMessage("tcBF requires an area detector!", 5_000) + return + + if ( + parent.datacube.calibration.get_R_pixel_units() == "pixels" + or parent.datacube.calibration.get_Q_pixel_units() == "pixels" + ): + parent.statusBar().showMessage("Auto tcBF requires calibrated data", 5_000) + return + + # do tcBF! + parent.statusBar().showMessage("Reconstructing...
(This may take a while)") + parent.qtapp.processEvents() + + tcBF = py4DSTEM.process.phase.Parallax( + energy=300e3, + datacube=parent.datacube, + ) + tcBF.preprocess( + dp_mask=detector["mask"], + plot_average_bf=False, + vectorized_com_calculation=False, + store_initial_arrays=False, + ) + tcBF.reconstruct( + plot_aligned_bf=False, + plot_convergence=False, + ) + + parent.set_virtual_image(tcBF.recon_BF, reset=True) + + +class ManualTCBFDialog(QDialog): + def __init__(self, parent): + super().__init__(parent=parent) + + self.parent = parent + + layout = QVBoxLayout(self) + + ####### LAYOUT ######## + + params_box = QGroupBox("Parameters") + layout.addWidget(params_box) + + params_layout = QGridLayout() + params_box.setLayout(params_layout) + + params_layout.addWidget(QLabel("Rotation [deg]"), 0, 0, Qt.AlignRight) + self.rotation_box = QLineEdit() + self.rotation_box.setValidator(QDoubleValidator()) + params_layout.addWidget(self.rotation_box, 0, 1) + + params_layout.addWidget(QLabel("Transpose x/y"), 1, 0, Qt.AlignRight) + self.transpose_box = QCheckBox() + params_layout.addWidget(self.transpose_box, 1, 1) + + params_layout.addWidget(QLabel("Max Shift [px]"), 2, 0, Qt.AlignRight) + self.max_shift_box = QLineEdit() + self.max_shift_box.setValidator(QDoubleValidator()) + params_layout.addWidget(self.max_shift_box, 2, 1) + + params_layout.addWidget(QLabel("Pad Images"), 3, 0, Qt.AlignRight) + self.pad_checkbox = QCheckBox() + params_layout.addWidget(self.pad_checkbox, 3, 1) + + button_layout = QHBoxLayout() + button_layout.addStretch() + cancel_button = QPushButton("Cancel") + cancel_button.pressed.connect(self.close) + button_layout.addWidget(cancel_button) + done_button = QPushButton("Reconstruct") + done_button.pressed.connect(self.reconstruct) + button_layout.addWidget(done_button) + layout.addLayout(button_layout) + + def reconstruct(self): + datacube = self.parent.datacube + + # tcBF requires an area detector for generating the mask + detector: DetectorInfo = self.parent.get_diffraction_detector() + + if detector["shape"] is DetectorShape.POINT: + self.parent.statusBar().showMessage( + "tcBF requires an area detector!", 5_000 + ) + return + + mask = detector["mask"] + + if self.max_shift_box.text() == "": + self.parent.statusBar().showMessage("Max Shift must be specified") + return + + rotation = np.radians(float(self.rotation_box.text() or 0.0)) + transpose = self.transpose_box.checkState() + max_shift = float(self.max_shift_box.text()) + + x, y = np.meshgrid( + np.arange(datacube.Q_Nx), np.arange(datacube.Q_Ny), indexing="ij" + ) + + mask_comx = np.sum(mask * x) / np.sum(mask) + mask_comy = np.sum(mask * y) / np.sum(mask) + + pix_coord_x = x - mask_comx + pix_coord_y = y - mask_comy + + q_pix = np.hypot(pix_coord_x, pix_coord_y) + # unrotated shifts in scan pixels + shifts_pix_x = pix_coord_x / np.max(q_pix * mask) * max_shift + shifts_pix_y = pix_coord_y / np.max(q_pix * mask) * max_shift + + R = np.array( + [ + [np.cos(rotation), -np.sin(rotation)], + [np.sin(rotation), np.cos(rotation)], + ] + ) + T = np.array([[0.0, 1.0], [1.0, 0.0]]) + + if transpose: + R = T @ R + + shifts_pix = np.stack([shifts_pix_x, shifts_pix_y], axis=2) @ R + shifts_pix_x, shifts_pix_y = shifts_pix[..., 0], shifts_pix[..., 1] + + # generate image to accumulate reconstruction + pad = self.pad_checkbox.checkState() + pad_width = int( + np.maximum(np.abs(shifts_pix_x).max(), np.abs(shifts_pix_y).max()) + ) + + reconstruction = ( + np.zeros((datacube.R_Nx + 2 * pad_width, datacube.R_Ny + 2 * pad_width)) + if 
pad + else np.zeros((datacube.R_Nx, datacube.R_Ny)) + ) + + qx = np.fft.fftfreq(reconstruction.shape[0]) + qy = np.fft.fftfreq(reconstruction.shape[1]) + + qx_operator, qy_operator = np.meshgrid(qx, qy, indexing="ij") + qx_operator = qx_operator * -2.0j * np.pi + qy_operator = qy_operator * -2.0j * np.pi + + # loop over images and shift + img_indices = np.argwhere(mask) + for mx, my in tqdm( + img_indices, + desc="Shifting images", + file=StatusBarWriter(self.parent.statusBar()), + mininterval=1.0, + ): + if mask[mx, my]: + img_raw = datacube.data[:, :, mx, my] + + if pad: + img = np.zeros_like(reconstruction) + img_raw.mean() + img[ + pad_width : img_raw.shape[0] + pad_width, + pad_width : img_raw.shape[1] + pad_width, + ] = img_raw + else: + img = img_raw + + reconstruction += np.real( + np.fft.ifft2( + np.fft.fft2(img) + * np.exp( + qx_operator * shifts_pix_x[mx, my] + + qy_operator * shifts_pix_y[mx, my] + ) + ) + ) + + # crop away padding so the image lines up with the original + if pad: + reconstruction = reconstruction[pad_width:-pad_width, pad_width:-pad_width] + + self.parent.set_virtual_image(reconstruction, reset=True)