
The repository contains code and data for translating noisy detector hits (discrete electron-impact coordinates) into smooth continuous diffraction patterns predicted by wave optics. The core idea is to train an autoencoder that denoises and converts a rasterised point-cloud image (what a detector registers) into a continuous intensity image (theoretical profile). This is useful for removing detector noise and recovering physically plausible continuous distributions from sparse point data.
Examples in the repo show side-by-side triplets:
- rasterised detector image (points)
- continuous theoretical target (sinc envelope × interference)
- autoencoder reconstruction (denoised continuous)
Image size: 64x64. Latent vector length: 64 (chosen because the 1D continuous profile is sampled at 64 x-values).
Synthetic dataset diffraction_point_and_continuous.npz of 1000 paired images:
- X_points — rasterised point images, shape (1000, 64, 64), float32
- X_cont — continuous theoretical images, shape (1000, 64, 64), float32
- params — array with (slit_width, slit_sep, L, wavelength) per sample
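The arrays can be loaded directly with NumPy. A minimal sketch, assuming the .npz file sits in the working directory:

```python
import numpy as np

# Load the paired dataset (path is an assumption; adjust to where the file lives).
data = np.load("diffraction_point_and_continuous.npz")

X_points = data["X_points"]  # (1000, 64, 64) float32 rasterised detector images
X_cont = data["X_cont"]      # (1000, 64, 64) float32 continuous theoretical targets
params = data["params"]      # per-sample (slit_width, slit_sep, L, wavelength)

print(X_points.shape, X_cont.shape, params.shape)
```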
Download the models pre-trained for 30 epochs from mega.nz:
- Autoencoder: point2cont_autoencoder.keras
- Decoder: point2cont_decoder.keras
- Encoder: point2cont_encoder.keras
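Once downloaded, the .keras files should load with the standard Keras API. A minimal inference sketch, assuming TensorFlow/Keras, a channels-last input of shape (64, 64, 1), and the dataset file from above:

```python
import numpy as np
from tensorflow import keras

# Load the pre-trained networks (filenames as listed above).
autoencoder = keras.models.load_model("point2cont_autoencoder.keras")
encoder = keras.models.load_model("point2cont_encoder.keras")
decoder = keras.models.load_model("point2cont_decoder.keras")

# Denoise one rasterised point image.
X_points = np.load("diffraction_point_and_continuous.npz")["X_points"]
x = X_points[:1].reshape(1, 64, 64, 1).astype("float32")

reconstruction = autoencoder.predict(x)   # (1, 64, 64, 1) continuous estimate
latent = encoder.predict(x)               # (1, 64) latent vector
decoded = decoder.predict(latent)         # same estimate, via the split encoder/decoder
```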
- Generates point clouds sampled from a 1D diffraction intensity PDF (sinc envelope × interference), rasterises them, and applies Gaussian smoothing to simulate detector imaging (sketched after this list).
- Generates continuous target images using the same analytic formula (sinc × cos²), normalised and vertically smoothed.
- Paired dataset for supervised denoising: point image → continuous image.
- Convolutional autoencoder: encoder (Conv2D ×3 + pooling + Dense latent), decoder (Dense + Conv2DTranspose ×3 + sigmoid); an illustrative definition is sketched at the end of this section.
- Trained with MSE loss to reconstruct continuous images from point images.
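The exact generation script is not reproduced here; the following sketch illustrates the described pipeline under assumptions: SciPy for the smoothing, a sinc² envelope times cos² interference term, and illustrative values for the detector extent, hit count, and smoothing widths.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def intensity_profile(x, slit_width, slit_sep, L, wavelength):
    # Double-slit intensity: sinc^2 envelope x cos^2 interference (assumed form).
    u = np.pi * slit_width * x / (wavelength * L)
    v = np.pi * slit_sep * x / (wavelength * L)
    envelope = np.sinc(u / np.pi) ** 2   # np.sinc(t) = sin(pi t)/(pi t)
    return envelope * np.cos(v) ** 2

def make_pair(slit_width, slit_sep, L, wavelength, n_hits=2000, size=64, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    x = np.linspace(-0.02, 0.02, size)   # detector coordinates in metres (assumed range)
    pdf = intensity_profile(x, slit_width, slit_sep, L, wavelength)
    pdf /= pdf.sum()

    # Point image: sample hit columns from the 1D PDF, scatter over random rows,
    # rasterise as counts, then blur to mimic the detector response.
    cols = rng.choice(size, size=n_hits, p=pdf)
    rows = rng.integers(0, size, size=n_hits)
    points = np.zeros((size, size), dtype=np.float32)
    np.add.at(points, (rows, cols), 1.0)
    points = gaussian_filter(points, sigma=1.0)
    points /= points.max()

    # Continuous image: repeat the analytic profile down the columns, normalise,
    # and smooth along the vertical axis as described above.
    cont = np.tile(pdf / pdf.max(), (size, 1)).astype(np.float32)
    cont = gaussian_filter(cont, sigma=(1.0, 0.0))
    return points, cont

# Illustrative parameter values, not the ranges used to build the dataset.
points_img, cont_img = make_pair(slit_width=40e-6, slit_sep=250e-6, L=1.0, wavelength=600e-9)
```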
Training converged well during experiments; the final MSE was 0.0149.
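For reference, an illustrative Keras definition matching the layer description above; the filter counts, optimiser, and batch size are assumptions rather than the repo's exact configuration.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 64  # matches the 64-sample 1D profile

# Encoder: Conv2D x3 with pooling, then a Dense latent vector.
encoder = keras.Sequential([
    keras.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(latent_dim),
], name="encoder")

# Decoder: Dense, then Conv2DTranspose x3 ending in a sigmoid output.
decoder = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    layers.Dense(8 * 8 * 64, activation="relu"),
    layers.Reshape((8, 8, 64)),
    layers.Conv2DTranspose(64, 3, strides=2, activation="relu", padding="same"),
    layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same"),
    layers.Conv2DTranspose(1, 3, strides=2, activation="sigmoid", padding="same"),
], name="decoder")

autoencoder = keras.Sequential([encoder, decoder], name="autoencoder")
autoencoder.compile(optimizer="adam", loss="mse")

# Point images are the input, continuous images are the target.
data = np.load("diffraction_point_and_continuous.npz")
x = data["X_points"][..., None].astype("float32")
y = data["X_cont"][..., None].astype("float32")
autoencoder.fit(x, y, epochs=30, batch_size=32, validation_split=0.1)
```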