
<p align="center"><img src="Source/BuildResources/Icon.svg" width="256" height="256"></p>

<p align="center">Lightwave Explorer</p>

<p align="center">Nick Karpowicz<br> Max Planck Institute of Quantum Optics</p>

New!

Publication!

Tutorials on YouTube!


Latest release: 2024.05

Windows: Download .zip (You might need the latest Microsoft C++ Runtime if it fails to start).

Mac: Download .dmg (Intel native or Rosetta on Apple silicon) or compile it yourself (Apple silicon native)

Linux: Get it on Flathub!

This version fixes issues with biaxial crystals and introduces a pair of sequence functions, rotateIntoBiaxial() and rotateFromBiaxial(), to rotate into and out of the frame defined by the minor and major axes of the refractive index ellipse, even for arbitrary crystal orientations.

It also includes a fix for a bug where files could not be loaded if their names were changed.


What and why

Lightwave Explorer is an open-source nonlinear optics simulator, intended to be fast, visual, and flexible, so that students and researchers can play with ultrashort laser pulses and nonlinear optics without having to buy a laser first.

<p style="text-align: center;"><img src="Documentation/Images/flatpakScreenshot.png"></p>

The simulation core was written in CUDA in order to run quickly on modern graphics cards. I've since generalized it so that it can also run in two other ways: with SYCL on CPUs and Intel GPUs, and with OpenMP on CPUs. Accordingly, I hope that the results are fast enough that even complicated systems can be simulated within a human attention span.


Main goals:


Publications

Lightwave Explorer has been used to perform the nonlinear optics simulations in the following papers!


Installation on a Windows PC

Once you've downloaded the file from the latest release above, you should just unzip it and run the exe file inside.

If you want to use SYCL for propagation, you need to install the Intel® oneAPI DPC++/C++ Compiler Runtime for Windows.

The Python module for working with the results is here in this repo; I'd recommend putting it somewhere in your Python path if you're going to work with it a lot, otherwise just copy it into your working folder.
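For example, one way to put it on your path for the current PowerShell session looks like this (the folder below is just a placeholder for wherever you keep the module):

$env:PYTHONPATH += ";C:\path\to\folder\containing\the\module"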


Installation on Mac

The Mac version is also available directly from the GitHub releases. The first time you run it, you have to right-click (or Command-click) on it and select "Open". This is necessary because Apple expects developers to pay a subscription to distribute applications on their platform, and I'd rather not. For the same reason, if you want the Apple silicon (M1, M2, M3, etc.) native version, you need to compile it on your machine using the directions below.

This version makes use of the FFTW library for Fourier transforms and is therefore released under the GNU General Public License v3.

The application bundle contains all the required files. If you want to edit the crystal database or default settings, open the app as a folder (right-click or Control-click on the app and select "Show Package Contents"); you will find them in the Resources folder.
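If you prefer the terminal, that folder can be opened directly (assuming the app sits in /Applications under the name LightwaveExplorer.app; adjust the path if yours differs):

open "/Applications/LightwaveExplorer.app/Contents/Resources"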


Compiling the GUI app on Linux

You will at least need the development versions of the following installed: fmt, Qt, Cairo, and TBB (these are the Fedora/dnf names; they may differ slightly in your distribution's repositories):

fmt-devel, qt6-qtbase-devel, cairo-devel, tbb-devel

Next, you need a CPU-based FFT library; FFTW is likely available in your distribution, e.g. fftw-devel.
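On Fedora, for example, everything above can be installed with a single dnf command (package names will differ slightly on other distributions):

sudo dnf install fmt-devel qt6-qtbase-devel cairo-devel tbb-devel fftw-devel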

Next, build with CMake in the usual way:

mkdir build && cd build
cmake ..
cmake --build . --config Release

and you should have a binary to run. Either install it (sudo cmake --install .) or copy the files CrystalDatabase.txt and DefaultValues.ini into the build folder and run it from there.
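For example, running it straight from the build folder without installing might look like this (the source paths and binary name are assumptions; adjust them to your checkout and to whatever the build actually produced):

cp ../CrystalDatabase.txt ../DefaultValues.ini .
./LightwaveExplorer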

The basic build will run on your CPU only.

In order to run on a GPU, the options are either CUDA (Nvidia) or SYCL (Intel, AMD or Nvidia).

CUDA

To enable CUDA, you need additional flags. Here's an example:

cmake -DUSE_CUDA=1 -DCMAKE_CUDA_HOST_COMPILER=clang++-17 -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc -DCMAKE_CUDA_ARCHITECTURES=86 ..
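The value passed to -DCMAKE_CUDA_ARCHITECTURES should match your GPU's compute capability (86 means compute capability 8.6, i.e. an Ampere-generation RTX 30-series card). On reasonably recent drivers you can query it directly:

nvidia-smi --query-gpu=compute_cap --format=csv,noheader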

SYCL

A different set of flags will let you compile to use SYCL. You'll need a SYCL compiler: for Intel, you should use the one in the oneAPI Base Toolkit; for AMD, use the open-source version.

Here's an example for AMD:

cmake -DUSE_SYCL=1 -DBACKEND_ROCM=gfx906 -DROCM_LIB_PATH=/usr/lib/clang/18/amdgcn/bitcode -DCMAKE_CXX_COMPILER=clang++ ..

Here's an example for Intel:

. /opt/intel/oneapi/setvars.sh
cmake -DUSE_SYCL=1 -DCMAKE_CXX_COMPILER=icpx ..

You can also use -DBACKEND_CUDA=1 to use SYCL on an Nvidia GPU.
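For example, combining that flag with the open-source compiler from the AMD example above might look like this (illustrative only; the compiler name and remaining flags are assumptions to adapt to your setup):

cmake -DUSE_SYCL=1 -DBACKEND_CUDA=1 -DCMAKE_CXX_COMPILER=clang++ ..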

Additional compiler flags:


Compiling on Mac

The first thing you'll need is Homebrew. On its homepage you'll see a command that you have to run in the terminal; just paste it and follow the instructions.

I also made a build script that you can run in the same way; just copy and paste the command below that matches your system and it will compile everything it needs and put the application in your Applications folder. It will take a while, so go get a coffee!

(Please note that what's happening here is a shell script from the internet piped into the terminal. Never do this if you don't trust the developer, and even then it's a good idea to check the contents of the script by pasting the URL into your browser. Essentially, this is like letting me type into your Mac's terminal. I'm using it to compile the code and copy the resulting app, but anyone with that kind of access could also delete or copy your files.)

Apple Silicon (M1, M2, etc.) version:

curl -s https://raw.githubusercontent.com/NickKarpowicz/LightwaveExplorer/master/Source/BuildResources/macAutoBuild.sh | zsh -s

Intel version:

curl -s https://raw.githubusercontent.com/NickKarpowicz/LightwaveExplorer/master/Source/BuildResources/macAutoBuildIntel.sh | zsh -s

Compilation on clusters

A script is provided to compile the CUDA command-line version on Linux. It is made specifically to work on the clusters of the MPCDF but will likely work, with small modifications, on other systems depending on the local environment. The CUDA development kit and Intel oneAPI should be available in advance. With these prerequisites, the following command should work:

curl -s https://raw.githubusercontent.com/NickKarpowicz/LightwaveExplorer/master/Source/BuildResources/compileCommandLineLWEfromRepos.sh | tcsh -s

On other clusters you might have to instead download the script (e.g. with wget) and change it to suit that system before you run it.
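For example, something along these lines (the edits you need will depend on the cluster's module system and paths):

wget https://raw.githubusercontent.com/NickKarpowicz/LightwaveExplorer/master/Source/BuildResources/compileCommandLineLWEfromRepos.sh
tcsh compileCommandLineLWEfromRepos.sh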

If you have the GUI version installed locally, you can set up your calculation and then generate a SLURM script to run on the cluster (it will tell you what to do).
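Submitting the generated script then follows the usual SLURM pattern (the filename here is just a placeholder for whatever the GUI writes out):

sbatch lweJob.slurm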


Compilation on Windows

You will need:

If you've cloned the repo, from that folder, first make the SYCL version as a DLL:

mkdir build
cd build 
cmake -DMAKESYCL=1 .. -G "Visual Studio 17 2022" -A x64 -DCMAKE_TOOLCHAIN_FILE="C:/dev/vcpkg/scripts/buildsystems/vcpkg.cmake" -T "Intel(R) oneAPI DPC++ Compiler 2024"
cmake --build . --config Release

Next build the main application together with the CUDA version:

cmake --fresh .. -G "Visual Studio 17 2022" -A x64 -DCMAKE_TOOLCHAIN_FILE="C:/dev/vcpkg/scripts/buildsystems/vcpkg.cmake" -DCMAKE_CUDA_ARCHITECTURES="75;86"
cmake --build . --config Release

There's also a PowerShell script named WinBuild.ps1 in the BuildResources folder that does all of this, so you can just run ./Source/BuildResources/WinBuild.ps1 from the repo root directory to build the whole thing.


Libraries used

Thanks to the original authors for making their work available! They are all freely available, but of course have their own licenses, etc.


Programming note

The code is written in a "trilingual" way: a single core code file is compiled (after some includes and preprocessor definitions) by three different compilers, Nvidia's nvcc, a C++ compiler (Microsoft's, g++, and clang++ have all worked), and Intel's dpc++.

Although CUDA was the initial platform and what I use (and test) most extensively, I've added two additional languages for those who don't have an Nvidia graphics card.

One is in C++, with multithreading handled either by OpenMP or by C++ parallel execution policies.

The other is SYCL. This also allows the simulation to run on the CPU, and should allow it to run on Intel graphics cards, the integrated graphics of many Intel CPUs, and GPUs from AMD.

The different architectures use the same algorithm, aside from small differences in their floating-point math and intrinsic functions, so when I make changes or additions, no platform will gain an advantage over the others (again, reproducibility by anyone is one of the goals here).