
Lightwave Explorer

Nick Karpowicz
Max Planck Institute of Quantum Optics


New!

  • Publication!
  • Tutorials on YouTube!


Latest release: 2024.02

Windows: Download .zip

Mac: Download .zip (Intel native or Rosetta on Apple silicon) or compile it yourself (Apple silicon native)

Linux: Get it on Flathub!

This release adds the following fixes and improvements:

  • FDTD mode now supports importing a user-generated multi-material grid (tutorial upcoming).
  • New compressed file format (produces self-contained .zip files).
  • Saving and loading of files now done entirely through standard system file dialogs.
  • File access now supports XDG portals; Flatpak no longer needs or requests home folder access.
  • Improved interaction with clusters/SLURM script generation.
  • Made the user interface more compact and organized.
  • Added support for dynamic loading of new crystal database files.

What and why

Lightwave Explorer is an open source nonlinear optics simulator, intended to be fast, visual, and flexible, so that students and researchers can play with ultrashort laser pulses and nonlinear optics without having to buy a laser first.

The simulation was written in CUDA in order to run quickly on modern graphics cards. I've subsequently generalized it so that it can run in two other ways: with SYCL on CPUs and Intel GPUs, and with OpenMP on CPUs. Accordingly, I hope that the results come fast enough that even complicated systems can be simulated within a human attention span.


Main goals:

  • Easily extensible database of materials: Everything the program knows about nonlinear materials comes from a human-readable text file giving the appropriate coefficients and tensors. If you want to use a new material, or you've done a measurement in a new range where typical extrapolations from older data aren't valid, it's easy to add and correct. There are fields for the references behind the key parameters, and these references are stored in the saved simulation results for future reference. Especially if you have simulations that you've checked against experiments, I'd be very happy for you to add your crystal definitions to the central database in the project GitHub.
  • Accurate modeling of nonlinear optics using multiple, user-selectable physical models, including the unidirectional nonlinear wave equation and finite-difference time-domain approaches. This accommodates both large systems, where forward propagation is an appropriate assumption, and etalon effects in thin crystals, where reflections cannot be neglected.
  • Efficient code so that complicated systems can be simulated in 3D: Real laser pulses can be messy, and if they weren't so before a nonlinear crystal, there's a good chance they are after (but not always). If things are slow, it's hard to go beyond one dimension on tolerable time scales, and then you miss out on the whole weird world of spatiotemporal couplings. Here you have options for rather fast simulations when there's a symmetry to exploit (e.g. cylindrical symmetry, or invariance along one Cartesian dimension), alongside fully 3D propagation. It runs natively on both GPU and CPU to make use of whatever you have to work with.
  • A graphical interface that lets you see what you're doing: A lot of us think in visual terms. Being able to adjust and scan parameters and immediately see what happens can really make it easier to understand what you're looking at.
  • A flexible sequence mode: By stringing together elements, not just nonlinear crystals but also spherical or parabolic mirrors, apertures, filters, free space propagation and other elements, simulate how one interaction affects another. Sequences of events can be scripted and even programmed with loop functions to see how things change over the course of repeated interactions.
  • Fitting modes: Sometimes the data that we measure depends in an interesting way on a parameter, and we'd like to go back and figure out what that parameter was from the data. Solving this kind of inverse problem can be tough when the parameter lives inside a partial differential equation, but by simulating the whole thing and doing a fit, you have a chance to do it! The fitting algorithm can be used to narrow down a huge space of variables to arrive at your best estimate of what was happening in an experiment, or to adjust your experimental system to maximize output at a given frequency.
  • A Python module for easy postprocessing of the results: I hope that you get something interesting out that you want to plot and maybe publish. In my opinion, one of the nicest platforms for making plots is Python (that's why the documentation is in a Jupyter notebook), so purely out of self-interest I tried to make it easy to load the results in Python. The module also has functions for the typical operations you'll want to apply to the data, to make it easy for all of us. The program also gives you a MATLAB loading script for those who want to use that.
  • Multiplatform: Works on Windows, Linux, and Mac.
  • Command line interface for running on Linux/clusters: the simulation core can be compiled as a command line application to be controlled via the SLURM system. The GUI app can automatically configure the SLURM script, as well. I use this to run it on the clusters of the Max Planck Society, and other institutes and universities likely have similar systems. This lets you do a lot more if your personal resources are limited but you want to run simulations on a large grid or cover a lot of different parameters!

Publications

Lightwave Explorer has been used to perform the nonlinear optics simulations in the following papers!


Installation on a Windows PC

Once you've downloaded the file from the latest release above, just unzip it and run the .exe file inside.

If you want to use SYCL for propagation, you need to install the Intel® oneAPI DPC++/C++ Compiler Runtime for Windows.

The Python module for working with the results is here in this repo; I'd recommend putting it somewhere in your Python path if you're going to work with it a lot; otherwise, just copy it into your working folder.
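
If you're not sure which folders are on your Python path, one way to list the usual site-packages locations (this is plain Python, nothing specific to this repo) is:

python -c "import site; print(site.getsitepackages())"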


Installation on Mac

The Mac version is also available directly from the GitHub releases. The first time you run it, you have to right-click (or command-click) on it and select "Open". You have to do this because Apple expects developers to pay them a subscription to release applications on their platform, and I'd rather not. For the same reason, if you want the Apple silicon (M1, M2, M3, etc.) native version, you need to compile it on your machine using the directions below.

This version makes use of the FFTW library for Fourier transforms and is therefore released under the GNU General Public License v3.

The application bundle contains all the required files. If you want to edit the crystal database or default settings, open the app as a folder (right-click or control-click on the app and select "Show Package Contents"); you will find them in the Resources folder.


Compilation on Windows

You will need Visual Studio 2022. You will also need vcpkg; use it to install dlib (make sure you get the 64-bit version, not the 32-bit one) and miniz.
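
A typical invocation looks like this; the x64-windows triplet is what selects the 64-bit builds (adjust it if your vcpkg is configured differently):

vcpkg install dlib:x64-windows miniz:x64-windows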

Next, install the CUDA development kit from NVIDIA, and Intel OneAPI (including the Math Kernel Library and the DPC++ compiler).

Next, you'll need a compiled version of GTK4. The resulting build should be kept in a folder next to the LightwaveExplorer folder (i.e. both in the same parent folder).


Compiling the GUI app on Linux (Easy CPU-only version)

The easiest version to compile on Linux is the GPL3 version, which doesn't include the CUDA or OneAPI propagators. This means it will only run on the CPU, but if you don't have a compatible GPU anyway, that's no loss: this version uses FFTW for the FFTs, which may be faster on your hardware in any case.

The prerequisite packages are: gcc, cmake, GTK4, and FFTW (plus git to download the repo). Their exact names in your package manager may vary...

If you are on an Ubuntu-based distro, you can use this to grab everything:

sudo apt install gcc git cmake libgtk-4-1 libgtk-4-dev libfftw3-3 libfftw3-dev

On OpenSUSE Tumbleweed, I needed:

sudo zypper install git gcc-c++ cmake gtk4-devel fftw-devel fmt-devel

Once you have that, type the following into the terminal:

git clone https://github.com/NickKarpowicz/LightwaveExplorer
mkdir LightwaveExplorer/build
cd LightwaveExplorer/build
cmake ..
make

It should then spend a bit of time building and finally produce a LightwaveExplorer executable in the build directory.

You can install the application in your default location (probably /usr/local) with the command:

sudo cmake --install .

If you want to install it somewhere else, append --prefix "/where/you/want/it/to/go"
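
For example, to install it under your home directory (the path here is just an illustration):

cmake --install . --prefix "$HOME/.local"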

Installing will also place the CrystalDatabase.txt and DefaultValues.ini text files in the /share/LightwaveExplorer folder, alongside the /bin folder where the binary ends up. You can edit these freely to add crystals or change the values that populate the program's interface when it starts.

Compiling the GUI app on Linux (CUDA and SYCL version)

You'll need everything required to build the GPL3 version above, except for FFTW, which isn't used in this version. I'd recommend building the one above first to make sure it works. Next, install the prerequisites for this version:

  • Intel OneAPI

  • NVIDIA CUDA - this might already be in your package manager, but I'd recommend at least version 11.6.

Now that you have everything, to build the full version you first have to set the OneAPI environment variables, typically with:

. ~/intel/oneapi/setvars.sh

if you installed OneAPI as a normal user or

. /opt/intel/oneapi/setvars.sh

if you installed as root.
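
You can check that the environment is active by confirming that the Intel compiler is on your path, e.g.:

icpx --version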

Then, build the executable with:

git clone https://github.com/NickKarpowicz/LightwaveExplorer
mkdir LightwaveExplorer/build
cd LightwaveExplorer/build
cmake -DMAKEFULL=TRUE -DCMAKE_CXX_COMPILER=icpx -DCMAKE_CUDA_HOST_COMPILER=clang++ -DCMAKE_CUDA_COMPILER=nvcc -DCMAKE_CUDA_ARCHITECTURES=75 ..
make

Replace the CUDA_ARCHITECTURES number with one that matches your GPU. If it doesn't fail, you should now have an executable file named LightwaveExplorer in the build folder. You can install using the same process as the CPU-only version above.
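
If you don't know your GPU's compute capability, recent NVIDIA drivers can report it directly (older drivers may not support this query; NVIDIA's website also lists it per model). A result of 7.5 corresponds to -DCMAKE_CUDA_ARCHITECTURES=75:

nvidia-smi --query-gpu=compute_cap --format=csv,noheader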

Depending on your distro, your version of Clang or GCC might be too new to work with CUDA, in which case you might need to install an older one and specify it in the call to cmake, e.g. -DCMAKE_CUDA_HOST_COMPILER=clang++-15 after installing the corresponding Clang 15 package (exact package and binary names depend on your distro).


Compiling on Mac

The first thing you'll need is Homebrew. If you go to its website, you'll see a command that you have to run in the terminal. Just paste it and follow the instructions.

I also made a build script that you can run in the same way; just copy and paste the command below that matches your system and it will compile everything it needs and put the application in your Applications folder. It will take a while, so go get a coffee!

Apple silicon (M1, M2, etc.) version:

curl -s https://raw.githubusercontent.com/NickKarpowicz/LightwaveExplorer/master/Source/BuildResources/macAutoBuild.sh | zsh -s

Intel version:

curl -s https://raw.githubusercontent.com/NickKarpowicz/LightwaveExplorer/master/Source/BuildResources/macAutoBuildIntel.sh | zsh -s

Compilation on clusters

A script is provided to compile the CUDA command line version on Linux. This is made specifically to work on the clusters of the MPCDF but will likely work with small modifications on other distributions depending on the local environment. The CUDA development kit and Intel OneAPI should be available in advance. With these prerequisites, the following command should work:

curl -s https://raw.githubusercontent.com/NickKarpowicz/LightwaveExplorer/master/Source/BuildResources/compileCommandLineLWEfromRepos.sh | tcsh -s

On other clusters you might have to instead download the script (e.g. with wget) and change it to suit that system before you run it.

If you have the GUI version installed locally, you can set up your calculation and then generate a SLURM script to run on the cluster (it will tell you what to do).
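
Once the generated script is on the cluster, submission follows the usual SLURM workflow (the script name here is just a placeholder):

sbatch lweJob.sh
squeue -u $USER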


Libraries used

Thanks to the original authors for making their work available! They are all freely available, but of course have their own licenses, etc.

  • NVIDIA CUDA: This provides the basic CUDA runtime, compiler, and cuFFT, for running the simulations on NVIDIA GPUs, and is the basis of the fastest version of this code.
  • Intel OneAPI, specifically the Math Kernel Library: This is used for performing fast Fourier transforms when running in CPU mode. The DPC++ compiler allows the program to run on both CPUs and a wider range of GPUs, including the integrated ones on Intel chips. I found that on my rather old laptop, SYCL on the GPU is several times faster than running on the CPU, so it's useful even for systems without dedicated GPUs.
  • Dlib: This library is the basis of the optimization routines. I make use of the global optimization functions for the fitting/optimization modes. The library is available on Github, and their excellent documentation and further information is on the main project website.
  • GTK: The new version of the user interface uses GTK 4; this is why it looks pretty much the same on Windows, Linux, and Mac. It was pretty easy to get working cross-platform, which again is nice for the goal that everybody should be able to reproduce calculations in LWE.
  • FFTW: This is used for fast Fourier transforms in the GPL 3.0 version (i.e. the CPU-only Linux and Mac versions). It is typically among the fastest FFT implementations available on a given CPU.
  • miniz: Nice and easy to use C library for making/reading .zip archives.

Programming note

The code is written in a "trilingual" way: a single core code file is compiled (after some includes and preprocessor definitions) by three different compilers, NVIDIA nvcc, a C++ compiler (Microsoft's, g++, and clang++ have all worked), and Intel dpc++.

Although CUDA was the initial platform and what I use (and test) most extensively, I've added two additional languages for those who don't have an Nvidia graphics card.

One is C++, with multithreading done with OpenMP.

The other language is SYCL. This also allows the simulation to run on the CPU and should allow it to run on Intel's graphics cards, as well as the integrated graphics of many Intel CPUs. The same language should be able to run on AMD cards, but support for the DPC++ toolchain with the HipSYCL backend is quite new, and I don't have an AMD card to test it on.

The different versions use the same algorithm, aside from small differences in their floating-point math and intrinsic functions. So when I make changes or additions, no platform will ever pull ahead of the others (again, reproducibility by anyone is part of the goals here).