
<p align="center"><img src="Source/BuildResources/Icon.svg" width="256" height="256"></p>

<p align="center">Lightwave Explorer</p>

<p align="center">Nick Karpowicz<br> Max Planck Institute of Quantum Optics</p>

New! Publication and tutorials on YouTube!


Latest release: 2024.05

Windows: Download .zip (you might need the latest Microsoft C++ Runtime if it fails to start).

Mac: Download .dmg (Intel native, or via Rosetta on Apple silicon) or compile it yourself (Apple silicon native).

Linux: Get it on Flathub!

This version fixes issues with biaxial crystals and introduces a pair of sequence functions, rotateIntoBiaxial() and rotateFromBiaxial(), to rotate into and out of the frame defined by the minor and major axes of the refractive index ellipse, even for arbitrary crystal orientations.

It also includes a fix for a bug where files could not be loaded if their names were changed.


What and why

Lightwave Explorer is an open-source nonlinear optics simulator, intended to be fast, visual, and flexible, so that students and researchers can play with ultrashort laser pulses and nonlinear optics without having to buy a laser first.

<p style="text-align: center;"><img src="Documentation/Images/flatpakScreenshot.png"></p>

The simulation core was written in CUDA in order to run quickly on modern graphics cards. I've subsequently generalized it so that it can also run in two other ways: SYCL on CPUs and Intel GPUs, and OpenMP on CPUs. Accordingly, I hope the results come fast enough that even complicated systems can be simulated within a human attention span.


Main goals:


Publications

Lightwave Explorer has been used to perform the nonlinear optics simulations in the following papers!


Installation on a Windows PC

Once you've downloaded the file from the latest release above, just unzip it and run the .exe file inside.

If you want to use SYCL for propagation, you need to install the Intel® oneAPI DPC++/C++ Compiler Runtime for Windows.

The Python module for working with the results is here in this repo; I'd recommend putting it somewhere in your Python path if you're going to work with it a lot, otherwise just copy it into your working folder.
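
As a rough sketch of how you might use it once it's on your path (note: the load() function and the attribute names below are assumptions for illustration, not a confirmed interface; check the module's own docstrings or source for the real one):

# Hypothetical usage sketch: function and attribute names are assumptions, not the confirmed interface.
import LightwaveExplorer as lwe
import matplotlib.pyplot as plt

result = lwe.load("mySimulation.txt")          # load a saved simulation by its output file name
plt.plot(result.timeVector, result.Ext[:, 0])  # plot the x-polarized field at one spatial grid point
plt.xlabel("Time (s)")
plt.ylabel("Field (V/m)")
plt.show()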


Installation on Mac

The Mac version is also available directly from the GitHub releases. The first time you run it, you have to right-click (or command-click) on it and select "Open". You have to do this because Apple expects developers to pay a subscription to distribute applications on their platform, and I'd rather not. For the same reason, if you want the Apple-silicon (M1, M2, M3, etc.) native version, you need to compile it on your machine using the directions below.

This version makes use of the FFTW library for Fourier transforms and is therefore released under the GNU General Public License v3.

The application bundle contains all the required files. If you want to edit the crystal database or default settings, open the app as a folder (right-click or control-click on the app and select "Show Package Contents"); you will find them in the Resources folder.


Compilation on Windows

You will need:

Once you've cloned the repo, from that folder, first build the SYCL version as a DLL:

mkdir build
cd build 
cmake -DMAKESYCL=1 .. -G "Visual Studio 17 2022" -A x64 -DCMAKE_TOOLCHAIN_FILE="C:/dev/vcpkg/scripts/buildsystems/vcpkg.cmake" -T "Intel(R) oneAPI DPC++ Compiler 2024"
cmake --build . --config Release

Next build the main application together with the CUDA version:

cmake --fresh .. -G "Visual Studio 17 2022" -A x64 -DCMAKE_TOOLCHAIN_FILE="C:/dev/vcpkg/scripts/buildsystems/vcpkg.cmake" -DCMAKE_CUDA_ARCHITECTURES="75;86"
cmake --build . --config Release

There's also a PowerShell script named WinBuild.ps1 in the BuildResources folder that does all of this, so you can just run ./Source/BuildResources/WinBuild.ps1 from the repo root directory to build the whole thing.


Compiling the GUI app on Linux

You'll need the same set of libraries on Linux; however, there are a few options so that, if you're compiling for your own system, you don't have to install libraries for hardware you won't use.

You will at least need the following (these are their names on Fedora/dnf; they may differ slightly in your distribution's repositories):

fmt-devel, qt6-qtbase-devel, cairo-devel, tbb-devel

Next, choose your FFT libraries:

You'll need at least one CPU FFT library (either FFTW or MKL) to handle the GUI.

You can determine which version you get with flags sent to cmake. Here are some useful combinations:

This builds the full thing, requiring both CUDA and the oneAPI toolkit, using MKL for the CPU FFTs (CUDA architecture set to 86, i.e. the RTX 30 series, and using Clang 17 as the host compiler because, at the time of writing, CUDA won't accept GCC 14 or Clang 18):

cmake -DMAKEFULL=TRUE -DCMAKE_CXX_COMPILER=icpx -DCMAKE_CUDA_HOST_COMPILER=clang++-17 -DCMAKE_CUDA_COMPILER=nvcc -DCMAKE_CUDA_ARCHITECTURES=86 .. -G Ninja

This builds the CUDA version, without oneAPI, using FFTW for the CPU FFTs:

cmake -DMAKECUDA=1 -DUSEFFTW=1 -DCMAKE_CUDA_HOST_COMPILER=clang++-17 -DCMAKE_CUDA_COMPILER=nvcc -DCMAKE_CUDA_ARCHITECTURES=86 .. -G Ninja

This builds the SYCL version, without needing the CUDA toolkit:

cmake -DMAKESYCL=TRUE -DCMAKE_CXX_COMPILER=icpx .. -G Ninja

This will make a CPU-only (FFTW) version that doesn't need CUDA or oneAPI (i.e. it only uses packages that are probably in your distribution's normal repositories):

cmake .. -G Ninja

No matter what configuration you pick, you can now just do

cmake --build . --config Release

and you should have a binary to run. You should either install it (sudo cmake --install .) or copy the files CrystalDatabase.txt and DefaultValues.ini into the build folder and run it from there.
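
For example, from the build directory (a sketch that assumes both files sit in the root of your checkout and that the binary is called LightwaveExplorer; adjust the paths and name if yours differ):

cp ../CrystalDatabase.txt ../DefaultValues.ini .
./LightwaveExplorer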


Compiling on Mac

The first thing you'll need is Homebrew. If you go to brew.sh, you'll see a command that you have to run in the terminal. Just paste it and follow the instructions.

I also made a build script that you can run in the same way; just copy and paste the command below that matches your system and it will compile everything it needs and put the application in your Applications folder. It will take a while, so go get a coffee!

(Please note that what's happening here is a shell script from the internet piped into the terminal. Never do this if you don't trust the developer, and even then it's a good idea to check the contents of the script by pasting the URL into your browser. Essentially, this is like letting me type into your Mac's terminal. I'm using it to compile the code and copy the resulting app, but someone at your terminal can also delete or copy your files.)

Apple silicon (M1, M2, etc.) version:

curl -s https://raw.githubusercontent.com/NickKarpowicz/LightwaveExplorer/master/Source/BuildResources/macAutoBuild.sh | zsh -s

Intel version:

curl -s https://raw.githubusercontent.com/NickKarpowicz/LightwaveExplorer/master/Source/BuildResources/macAutoBuildIntel.sh | zsh -s
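
If you'd rather not pipe a script from the internet straight into your shell, you can also download it first, read it over, and then run it yourself, e.g. for the Apple-silicon script:

curl -O https://raw.githubusercontent.com/NickKarpowicz/LightwaveExplorer/master/Source/BuildResources/macAutoBuild.sh
zsh macAutoBuild.sh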

Compilation on clusters

A script is provided to compile the CUDA command line version on Linux. It is made specifically to work on the clusters of the MPCDF, but will likely work with small modifications on other systems, depending on the local environment. The CUDA development kit and Intel oneAPI should be available in advance. With these prerequisites, the following command should work:

curl -s https://raw.githubusercontent.com/NickKarpowicz/LightwaveExplorer/master/Source/BuildResources/compileCommandLineLWEfromRepos.sh | tcsh -s

On other clusters you might have to instead download the script (e.g. with wget) and modify it to suit that system before you run it.

If you have the GUI version installed locally, you can set up your calculation and then generate a SLURM script to run on the cluster (it will tell you what to do).
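
Once the generated script is on the cluster, submitting it is just one more command (the file name here is only a placeholder; use whatever name the GUI actually wrote):

sbatch mySimulation.slurmScript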


Libraries used

Thanks to the original authors for making their work available! They are all freely available, but of course have their own licenses, etc.


Programming note

The code is written in a "trilingual" way: a single core code file is compiled (after some includes and preprocessor definitions) by three different compilers, namely Nvidia's nvcc, a C++ compiler (Microsoft's, g++, and clang++ have all worked), and Intel's dpc++.

Although CUDA was the initial platform and what I use (and test) most extensively, I've added two additional languages for those who don't have an Nvidia graphics card.

One is in C++, with multithreading done either with OpenMP or with C++ parallel execution policies.

The other language is SYCL. This also allows the simulation to run on the CPU, and should allow it to run on Intel's graphics cards, as well as the integrated graphics of many Intel CPUs. The same code should be able to run on AMD cards, but support for the DPC++ toolchain with the hipSYCL backend is quite new, and I don't have an AMD card to test it on.

The different architectures use the same algorithm, aside from small differences in their floating-point math and intrinsic functions. So when I make changes or additions, no platform will gain an advantage over the others (again, reproducibility by anyone is part of the goal here).