Merged

81 commits
ce62471
adding dense NN and cnn_dense NN
arthurmccray Dec 2, 2025
267775d
refactoring and adding docstrings to core ml code
arthurmccray Dec 17, 2025
89929c3
changing setting device to cpu resets to gpu0 as well
arthurmccray Jan 5, 2026
5c1750b
adding torchinfo for summaries of models. also changing denseNN to no…
arthurmccray Jan 6, 2026
7f58cce
fixing the activation function caching for CNNs
arthurmccray Jan 6, 2026
64fb316
cleaning up HSiren and CNNDense networks
arthurmccray Jan 6, 2026
02d7242
minor changes to autoencoder formatting
arthurmccray Jan 8, 2026
c49994a
bugfix that was causing torch to be initialized to gpu by default. re…
arthurmccray Jan 9, 2026
a8630aa
Merge branch 'dev' into ml
arthurmccray Jan 12, 2026
cd7f661
Merge branch 'dev' into ml
arthurmccray Jan 15, 2026
9559139
Add grid visualization to Dataset3d.show() with ncols and returnfig o…
bobleesj Jan 17, 2026
0cb23cb
Add NumPy-style docstrings and enhance Dataset3d.show()
bobleesj Jan 17, 2026
82119ff
center grid prior to rotating
gvarnavi Jan 18, 2026
9c25168
chore: update lock file
quantem-bot Jan 19, 2026
849a96e
updating config str handling
arthurmccray Jan 20, 2026
5d9a785
refactoring "block" to "layer" for DenseNN
arthurmccray Jan 20, 2026
7813b54
Merge pull request #131 from electronmicroscopy/ml
cedriclim1 Jan 20, 2026
8b1c8bd
add radial averaging on different bins
gvarnavi Jan 20, 2026
c967d41
Merge remote-tracking branch 'origin/dev' into diffractive_imaging
gvarnavi Jan 20, 2026
5be78df
Merge pull request #152 from electronmicroscopy/chore/update-lockfile…
gvarnavi Jan 20, 2026
ce3ec69
Merge remote-tracking branch 'origin/dev' into diffractive_imaging
gvarnavi Jan 20, 2026
fdca2dc
fixing "gpu" bug in config and adding pytest
arthurmccray Jan 20, 2026
c8743cf
adding flag to prevent pytest warnings for test_autoserialize
arthurmccray Jan 20, 2026
8f6a6b4
adding mps fallback for selecting device of "gpu"
arthurmccray Jan 20, 2026
5be5642
minor typing changes, before removing streaming parallax
gvarnavi Jan 21, 2026
c829286
remove streaming parallax
gvarnavi Jan 21, 2026
1d17127
test mps-cuda bug
gvarnavi Jan 21, 2026
f96f2c6
Merge pull request #153 from electronmicroscopy/hotfix-config
gvarnavi Jan 21, 2026
eafb601
Refactor: move show_2d import to module level
bobleesj Jan 22, 2026
c1f1451
adding pytest-cov to test deps
gvarnavi Jan 22, 2026
2fd0201
reverting rotation bug
gvarnavi Jan 22, 2026
faa9680
dataset3d show - support min, max, step, negative step
bobleesj Jan 23, 2026
540e373
consistent suptitle spacing for show() in dataset3d
bobleesj Jan 23, 2026
33597e5
simply padding logic for grid
bobleesj Jan 23, 2026
a3f463e
Merge pull request #154 from electronmicroscopy/diffractive_imaging
smribet Jan 23, 2026
fc1c30f
add tests - show function in dataset3d
bobleesj Jan 23, 2026
2686710
Merge pull request #150 from bobleesj/show3d
arthurmccray Jan 23, 2026
8ba1e92
adding workflow to regenerate lock on PR into main
gvarnavi Jan 23, 2026
f7a79fd
no need for widget
gvarnavi Jan 23, 2026
748c8ff
perhaps --no-workspace was wrong
gvarnavi Jan 23, 2026
a8455db
Merge pull request #156 from electronmicroscopy/actions-fix
quantem-bot Jan 23, 2026
464cf10
remove macos-15-intel
gvarnavi Jan 23, 2026
07ca2df
Merge pull request #159 from electronmicroscopy/actions-fix
quantem-bot Jan 23, 2026
a93c5f3
headless backend to pass on windows
gvarnavi Jan 23, 2026
9f43f47
Merge remote-tracking branch 'origin/dev' into actions-fix
gvarnavi Jan 23, 2026
6501adf
Merge pull request #161 from electronmicroscopy/actions-fix
quantem-bot Jan 23, 2026
e365839
Bump version to 0.1.8
quantem-bot Jan 23, 2026
a7a8fd7
Merge pull request #163 from electronmicroscopy/release/0.1.8
gvarnavi Jan 23, 2026
d865a63
Merge pull request #164 from electronmicroscopy/main
gvarnavi Jan 23, 2026
4d08112
chore: update lock file
quantem-bot Feb 2, 2026
502ec63
Merge pull request #170 from electronmicroscopy/chore/update-lockfile…
gvarnavi Feb 2, 2026
1711c22
Fix type hints for modify_in_place overloads in Dataset
arthurmccray Feb 3, 2026
38367e7
Merge pull request #171 from electronmicroscopy/fix/dataset-type-hints
gvarnavi Feb 4, 2026
377ef6f
chore: update lock file
quantem-bot Feb 9, 2026
9bccb0a
Merge pull request #173 from electronmicroscopy/chore/update-lockfile…
gvarnavi Feb 14, 2026
82fde9a
chore: update lock file
quantem-bot Feb 16, 2026
a95bc3c
Merge pull request #175 from electronmicroscopy/chore/update-lockfile…
gvarnavi Feb 16, 2026
b6a43d3
updates to ptycho viz and inr
arthurmccray Feb 2, 2026
e49e257
chore: update lock file
quantem-bot Feb 23, 2026
b96f3b8
Merge pull request #181 from electronmicroscopy/chore/update-lockfile…
gvarnavi Feb 24, 2026
20f210b
Merge remote-tracking branch 'upstream/dev' into fitting_models
arthurmccray Feb 24, 2026
57e1dab
Merge pull request #176 from arthurmccray/cherrypick-075bb7e
cedriclim1 Feb 24, 2026
57563fb
dataset4dstem.from_file consistent with read_4dstem args
arthurmccray Feb 25, 2026
b2f5253
refactor in progress, base and rendering largely done, working not pe…
arthurmccray Feb 26, 2026
c7f32a5
adding state saving to ModelDiffraction
arthurmccray Feb 27, 2026
2a9d640
switching to state_dict saving
arthurmccray Feb 27, 2026
a9b4eaa
first version of FitBase
arthurmccray Feb 27, 2026
b79e32c
cleaning up FitBase and ModelDiffraction
arthurmccray Feb 28, 2026
e0aeb0b
moving more stuff to FitBase
arthurmccray Feb 28, 2026
2c67dfe
reorganizing classes -- no functional change
arthurmccray Feb 28, 2026
fc283c7
splitting off ModelDiffractionVisualizations into separate file
arthurmccray Feb 28, 2026
59aba08
adding hard constraints like force_center for DiskTemplate
arthurmccray Mar 2, 2026
7443869
adding docstrings and cleaning
arthurmccray Mar 2, 2026
136f626
adding visualizations and overlays
arthurmccray Mar 3, 2026
6429e25
adding turning on/off individual components and parameters
arthurmccray Mar 3, 2026
5588609
fixing center disk duplication and a couple viz bugs
arthurmccray Mar 3, 2026
1fbc76c
adding back parameter bounds
arthurmccray Mar 3, 2026
20b2c58
updating colormaps
arthurmccray Mar 3, 2026
f68742e
more consistent hard constraints of ranges
arthurmccray Mar 3, 2026
b787564
cleaning up naming of Components
arthurmccray Mar 3, 2026
451fba6
Merge branch 'dev' into fitting_models
arthurmccray Mar 3, 2026
16 changes: 12 additions & 4 deletions .github/workflows/check-pr-main-version.yml
@@ -17,16 +17,24 @@ jobs:

     steps:
       - name: Checkout PR branch (head)
-        uses: actions/checkout@v4
+        uses: actions/checkout@v6
         with:
           ref: ${{ github.head_ref }}
           fetch-depth: 0

       - name: Install uv
-        uses: astral-sh/setup-uv@v5
+        uses: astral-sh/setup-uv@v7

+      - name: Ensure uv.lock is up to date
+        run: |
+          if ! uv lock --check; then
+            echo "❌ uv.lock is out of date."
+            echo "Please update uv.lock before versioning can proceed."
+            exit 1
+          fi
+
       - name: Install the project
-        run: uv sync --all-extras --dev --locked
+        run: uv sync --locked --group test

       - name: Get PR branch version
         id: pr_version
@@ -86,7 +94,7 @@ jobs:
           echo "Version matches — bumping version and creating new release branch"

           uv version --bump patch
-          uv sync --locked
+          uv sync --locked --group test
           NEXT_VERSION=$(uv version --short)

           if [[ "$NEXT_VERSION" == "$PR_VERSION" ]]; then
83 changes: 83 additions & 0 deletions .github/workflows/check-uv-lock.yml
@@ -0,0 +1,83 @@
name: Check uv.lock is up to date

on:
  pull_request:
    branches:
      - main

permissions:
  contents: write
  pull-requests: write

jobs:
  check-uv-lock:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout PR branch
        uses: actions/checkout@v6
        with:
          ref: ${{ github.head_ref }}
          fetch-depth: 0

      - name: Install uv
        uses: astral-sh/setup-uv@v7

      - name: Check lock file
        id: lockcheck
        run: |
          if uv lock --check; then
            echo "up_to_date=true" >> $GITHUB_OUTPUT
          else
            echo "up_to_date=false" >> $GITHUB_OUTPUT
          fi

      - name: Exit early if lock is up to date
        if: steps.lockcheck.outputs.up_to_date == 'true'
        run: echo "uv.lock is up to date ✔"

      - name: Regenerate lock file
        if: steps.lockcheck.outputs.up_to_date == 'false'
        run: |
          if [[ "${GITHUB_HEAD_REF}" == release/* ]]; then
            echo "❌ Release branch — uv.lock must already be up to date."
            exit 1
          fi

          uv lock

      - name: Commit updated lock file
        if: steps.lockcheck.outputs.up_to_date == 'false'
        run: |
          SAFE_REF=$(echo "${GITHUB_HEAD_REF}" | tr '/' '-')

          git config user.name "quantem-bot"
          git config user.email "quantembot@gmail.com"

          git checkout -b lockfix/${SAFE_REF}
          git commit -am "Update uv.lock"
          git push origin lockfix/${SAFE_REF}

      - name: Create PR for updated lock file
        if: steps.lockcheck.outputs.up_to_date == 'false'
        env:
          GH_TOKEN: ${{ secrets.QUANTEM_BOT_PAT }}
        run: |
          SAFE_REF=$(echo "${GITHUB_HEAD_REF}" | tr '/' '-')

          if gh pr list --head "lockfix/${SAFE_REF}" --json number --jq 'length > 0'; then
            echo "Lockfix PR already exists — skipping."
            exit 1
          fi

          gh pr create \
            --base "${{ github.base_ref }}" \
            --head "lockfix/${SAFE_REF}" \
            --title "Update uv.lock" \
            --body "This PR updates \`uv.lock\` to match the current dependency specification."

      - name: Fail original PR
        if: steps.lockcheck.outputs.up_to_date == 'false'
        run: |
          echo "❌ uv.lock was out of date — a fix PR has been opened."
          exit 1
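The workflow above reduces to a four-way decision: pass when the lock is current, hard-fail on `release/*` branches, otherwise regenerate the lock, push a `lockfix/` branch, open a fix PR (unless one is already open), and fail the original PR. A torch-free plain-Python sketch of that flow; the function name and the action strings are illustrative stand-ins, not part of the repository:

```python
def plan_lock_check(up_to_date: bool, head_ref: str, fix_pr_exists: bool) -> list[str]:
    """Mirror the branching of check-uv-lock.yml as a list of planned actions."""
    if up_to_date:
        return ["pass"]  # 'Exit early if lock is up to date'
    if head_ref.startswith("release/"):
        # Release branches must already carry an up-to-date uv.lock.
        return ["fail"]
    safe_ref = head_ref.replace("/", "-")  # SAFE_REF=$(echo ... | tr '/' '-')
    actions = ["uv lock", f"push lockfix/{safe_ref}"]
    if fix_pr_exists:
        actions.append("skip pr creation")  # gh pr list found an open lockfix PR
    else:
        actions.append(f"open pr lockfix/{safe_ref}")
    actions.append("fail")  # the original PR always fails when the lock was stale
    return actions
```

Note the asymmetry the sketch makes visible: a stale lock never passes in the same run, even after the bot pushes a fix, because the final step always exits 1.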
4 changes: 2 additions & 2 deletions .github/workflows/deploy.yml
@@ -13,10 +13,10 @@ jobs:
       UV_PUBLISH_TOKEN: ${{ secrets.PYPI_TOKEN }}

     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v6

       - name: Install uv
-        uses: astral-sh/setup-uv@v5
+        uses: astral-sh/setup-uv@v7

       - name: Verify lock file is up to date
         run: uv lock --check
2 changes: 1 addition & 1 deletion .github/workflows/sync-dev-main.yml
@@ -14,7 +14,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout repo
-        uses: actions/checkout@v4
+        uses: actions/checkout@v6

       - name: Open PR from main to dev
         env:
4 changes: 2 additions & 2 deletions .github/workflows/update-lock-file-cron.yml
@@ -13,12 +13,12 @@ jobs:
   update-deps:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v6
        with:
          ref: dev

      - name: Install uv
-        uses: astral-sh/setup-uv@v5
+        uses: astral-sh/setup-uv@v7

      - name: Update dependencies
        env:
8 changes: 4 additions & 4 deletions .github/workflows/uv-pytests.yml
@@ -13,23 +13,23 @@ jobs:
     strategy:
       matrix:
         # Test other OSs main branch PRs
-        os: ${{ github.base_ref == 'main' && fromJSON('["ubuntu-latest", "windows-latest", "macos-latest", "macos-15-intel"]') || fromJSON('["ubuntu-latest"]') }}
+        os: ${{ github.base_ref == 'main' && fromJSON('["ubuntu-latest", "windows-latest", "macos-latest"]') || fromJSON('["ubuntu-latest"]') }}
         python-version:
           - "3.11"
           - "3.12"
           - "3.13"
           - "3.14"

     steps:
       - uses: actions/checkout@v4
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v6

       - name: Install uv and set python version
-        uses: astral-sh/setup-uv@v5
+        uses: astral-sh/setup-uv@v7
         with:
           python-version: ${{ matrix.python-version }}

       - name: Install the project
-        run: uv sync --locked --all-extras --group test
+        run: uv sync --locked --group test

       - name: Run pytests
         run: uv run pytest tests
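GitHub Actions expressions have no ternary operator, so the `os:` line above uses the `cond && fromJSON(a) || fromJSON(b)` idiom: PRs targeting `main` fan out across three OS runners (with `macos-15-intel` dropped by this PR), while everything else tests on Ubuntu only. The same selection as a Python sketch; the function name is a hypothetical stand-in:

```python
def pr_test_os_matrix(base_ref: str) -> list[str]:
    """Reproduce the `cond && fromJSON(a) || fromJSON(b)` matrix selection."""
    if base_ref == "main":
        # Main-branch PRs exercise the full OS matrix.
        return ["ubuntu-latest", "windows-latest", "macos-latest"]
    # Everything else (e.g. PRs into dev) runs on Linux only.
    return ["ubuntu-latest"]
```

The `&&`/`||` trick works here because `fromJSON(...)` always yields a truthy (non-empty) array; with a falsy middle operand the idiom would silently pick the wrong branch.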
4 changes: 3 additions & 1 deletion pyproject.toml
@@ -29,7 +29,7 @@ members = ["widget"]

 [project]
 name = "quantem"
-version = "0.1.7"
+version = "0.1.8"
 description = "quantitative electron microscopy analysis toolkit."
 keywords = ["EM","TEM","STEM","4DSTEM"]
 readme = "README.md"
@@ -53,6 +53,7 @@ dependencies = [
     "torchmetrics>=1.7.3",
     "optuna>=4.5.0",
     "hdf5plugin>=6.0.0",
+    "torchinfo>=1.8.0",
 ]

 [project.optional-dependencies]
@@ -72,6 +73,7 @@ only-include = [
 [dependency-groups]
 test = [
     "pytest>=8.3.5",
+    "pytest-cov>=7.0.0",
 ]
 dev = [
     { include-group = "test" },
62 changes: 41 additions & 21 deletions src/quantem/core/config.py
@@ -270,6 +270,7 @@ def update(
     """
     for k, v in new.items():
+        k, v = check_key_val(k, v)
         k = canonical_name(k, old)

         if isinstance(v, Mapping):
@@ -480,11 +481,10 @@ def check_key_val(key: str, val: Any, deprecations: dict = deprecations) -> tuple:
             new_val = "cpu"
         else:
             new_val, gpu_id = validate_device(new_val)
-            if config["has_torch"]:
-                if torch.cuda.is_available():
-                    torch.cuda.set_device(f"cuda:{gpu_id}")
+            if "cuda" in new_val:
+                torch.cuda.set_device(gpu_id)
             if config["has_cupy"]:
                 cp.cuda.runtime.setDevice(gpu_id)
     return key, new_val


@@ -503,33 +503,42 @@ def validate_device(dev: str | int | torch.device | None = None) -> tuple[str, int]:
         dev = torch.device(
             "cuda" if torch.cuda.is_available() else "mps" if torch.mps.is_available() else "cpu"
         )
-    elif isinstance(dev, str) and dev.lower() == "gpu":
-        if torch.cuda.is_available():
-            dev = torch.device("cuda")
-        elif torch.mps.is_available():
-            dev = torch.device("mps")
-        else:
-            raise RuntimeError("Requested 'gpu' device, but neither CUDA nor MPS is available.")
-
-    # Convert to torch.device early
-    if isinstance(dev, (str, int)):
-        if isinstance(dev, int):
+    elif isinstance(dev, str):
+        if "cuda" in dev.lower():
+            dev = torch.device(dev)
+        elif "gpu" in dev.lower():
             if torch.cuda.is_available():
-                dev = torch.device(f"cuda:{dev}")
+                dev = torch.device("cuda")
+            elif torch.mps.is_available():
+                dev = torch.device("mps")
             else:
-                dev = torch.device("cpu")
+                raise RuntimeError("gpu requested but cuda and mps are not available.")
+        elif dev.lower() == "mps":
+            dev = torch.device("mps")
+        elif dev.lower() == "cpu":
+            dev = torch.device("cpu")
         else:
-            dev = torch.device(dev)
+            raise ValueError(
+                f"Requested unknown device type: {dev} (must be 'cuda', 'mps', or 'cpu')"
+            )
+    elif isinstance(dev, int):
+        if dev < 0:
+            raise ValueError(f"Requested negative GPU index: {dev} (must be >= 0)")
+        if torch.cuda.is_available():
+            dev = torch.device(f"cuda:{dev}")
+        else:
+            raise RuntimeError(f"Requested GPU index '{dev}' device, but cuda is not available.")
     elif not isinstance(dev, torch.device):
         raise TypeError(f"Unsupported device type: {type(dev)} ({dev})")

     # Normalize to supported device types
     if dev.type == "cuda":
         if not torch.cuda.is_available():
             raise RuntimeError("CUDA device requested but not available.")
         index = dev.index if dev.index is not None else torch.cuda.current_device()
+        if index >= NUM_DEVICES:
+            raise RuntimeError(
+                f"CUDA device index {index} is out of range for {NUM_DEVICES} available devices."
+            )
         return f"cuda:{index}", index

     elif dev.type == "mps":
@@ -560,7 +569,18 @@ def write(path: Path | str = PATH / "config.yaml") -> None:
         yaml.dump(config, f)


-def set_device(dev: str | int) -> None:
+def set_device(dev: str | int | "torch.device") -> None:
+    """Set the current device. Accepts a torch-style string, an integer index, or a
+    torch.device object.
+
+    Examples
+    --------
+    >>> set_device("cuda:0")
+    >>> set_device(0)
+    >>> set_device(torch.device("cuda:0"))
+    >>> set_device("mps")
+    >>> set_device("cpu")
+    >>> set_device("gpu")
+    """
     set({"device": dev})


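The reworked `validate_device` dispatches on the input type: strings accept `cuda*`, `gpu` (cuda with an mps fallback), `mps`, and `cpu`; integers become cuda indices; everything else must already be a `torch.device`. A torch-free sketch of that branch structure, with availability injected as booleans instead of queried from torch; the function name and the simplified single-string return value are assumptions for illustration (the real function returns a `(device_str, gpu_id)` tuple and range-checks indices against `NUM_DEVICES`):

```python
def normalize_device(dev, cuda_ok: bool = False, mps_ok: bool = False) -> str:
    """Sketch of the string/int normalization added to validate_device."""
    if isinstance(dev, str):
        d = dev.lower()
        if "cuda" in d:
            if not cuda_ok:
                raise RuntimeError("CUDA device requested but not available.")
            return d if ":" in d else "cuda:0"  # torch.device("cuda") defaults to index 0
        if "gpu" in d:
            # 'gpu' means: prefer cuda, fall back to mps, else error.
            if cuda_ok:
                return "cuda:0"
            if mps_ok:
                return "mps"
            raise RuntimeError("gpu requested but cuda and mps are not available.")
        if d in ("mps", "cpu"):
            return d
        raise ValueError(f"Requested unknown device type: {dev}")
    if isinstance(dev, int):
        if dev < 0:
            raise ValueError(f"Requested negative GPU index: {dev} (must be >= 0)")
        if not cuda_ok:
            raise RuntimeError(f"Requested GPU index '{dev}', but cuda is not available.")
        return f"cuda:{dev}"
    raise TypeError(f"Unsupported device type: {type(dev)} ({dev})")
```

Treating bare integers as cuda-only (rather than silently falling back to cpu, as the old code did) is the behavioral change this PR makes: an unavailable device now raises instead of being quietly remapped.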