ANSYS Lumerical
ANSYS Lumerical, part of the ANSYS Optics suite, is an optics simulation package.
Using ANSYS Lumerical interactively
(Graphical User Interface)
Use your browser to connect to Umbrella On Demand
Using ANSYS Lumerical in SLURM batch jobs
(Command Line Interface)
Test ANSYS Lumerical Shared Memory
Load the modules
[user@umbrella]$ module purge
[user@umbrella]$ module load Lumerical/2024-R1.3
[user@umbrella]$ fdtd-engine -v
Ansys Lumerical 2024 R1.3 FDTD Solver Version 8.31.3766 (Linux 64bit)
Test ANSYS Lumerical Intel MPI
Load the modules
[user@umbrella]$ module purge
[user@umbrella]$ module load intel/2023a
[user@umbrella]$ module load Lumerical/2024-R1.3
[user@umbrella]$ fdtd-engine-impi-lcl -v
Ansys Lumerical 2024 R1.3 FDTD Solver Version 8.31.3766 (Linux 64bit)
Lumerical SLURM sbatch jobscript example using Shared Memory
#!/bin/bash
#SBATCH --job-name=test_lumerical
#SBATCH --output=test_lumerical-%j.log
#SBATCH --partition=tue.default.q
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=2gb
#SBATCH --time=00:05:00
module purge
module load Lumerical/2024-R1.3
fdtd-engine -t ${SLURM_CPUS_PER_TASK} -logall -fullinfo example.lsf
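Assuming the script above is saved as, say, lumerical_shm.sh (any file name works) and example.lsf sits in the same directory, you can submit and follow it like this:
sbatch lumerical_shm.sh                # submit; SLURM prints the job ID
squeue -u $USER                        # check whether the job is pending or running
tail -f test_lumerical-<jobid>.log     # follow the solver output once the job starts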
Lumerical SLURM sbatch jobscript example using Intel MPI
#!/bin/bash
#
#SBATCH --job-name=test_lumerical
#SBATCH --error=test_lumerical-%j.log
#SBATCH --partition=tue.default.q
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=2gb
#SBATCH --time=00:05:00
module purge
module load intel/2023a
module load Lumerical/2024-R1.3
mpirun fdtd-engine-impi-lcl -logall -fullinfo example.lsf
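The script above runs 8 MPI ranks on a single node. As a sketch of how it scales out (assuming Intel MPI's mpirun picks the rank count and host list up from the SLURM allocation, which is its usual behaviour under SLURM, so no -np flag is needed), a two-node variant only changes the resource request and starts 16 ranks in total:
#!/bin/bash
#
#SBATCH --job-name=test_lumerical
#SBATCH --error=test_lumerical-%j.log
#SBATCH --partition=tue.default.q
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=2gb
#SBATCH --time=00:05:00
module purge
module load intel/2023a
module load Lumerical/2024-R1.3
mpirun fdtd-engine-impi-lcl -logall -fullinfo example.lsf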
Method 1
We will make Lumerical believe it is running on a "local computer" (which is in fact the HPC system). The approach has some benefits:
- It is relatively easy to set up.
- It is slightly less prone to issues with e.g. MPI.
The downsides are:
- Parameter sweeps cannot be done in parallel; for this a tighter integration with the scheduler (Slurm) is needed.
- It is probably unsafe to run multiple instances of Lumerical simultaneously with this approach.
- The user must set some maximum number of threads beforehand, and must consistently request the same amount from the scheduler, which is prone to errors.
In the following we will restrict Lumerical to at most 4 threads. Any other number of threads would work as well, but using more threads can lead to longer waiting times in the queue.
Workflow
This is a suggested workflow for Lumerical on the Umbrella cluster:
- Initial setup of Lumerical (first-time use only!)
- Prepare LSF file using the Lumerical GUI on the cluster
- Submit Lumerical job
- Postprocess/view results using the Lumerical GUI on the cluster
Each of these steps is detailed below. It is also possible to run non-interactive Python jobs; this is detailed below as well.
Initial setup
- Log on to the cluster using Open OnDemand, and start an interactive Lumerical session.
- Within the interactive session, click "Solvers" → "New" → "FDTD". A new window opens.
- From the "Simulation" menu, choose "Resource configuration". A new window opens.
- In the "Resource configuration" window, in the FDTD solver tab, delete all but one profile.
- Click the "Edit" button to edit the remaining profile. A new window opens.
- In the "Resource advanced options" window, make the following changes, and then hit OK.
- Job launching preset: Local Computer
- FDTD options:
- extra command line options: (empty)
- checkpoint directory: (empty)
- no default options: unchecked
- create log for all processes: unchecked
- In the "Resource configuration window", make sure the one remaining profile has the following values set. Values can be changed by double clicking the value. -
- active: true
- threads: 4 (or whichever number you decided on)
- capacity: 1
- Optionally, hit "Run tests". After a few seconds it should say "Tests completed successfully".
- In the Design environment tab, make the following changes:
- auto detect: unchecked
- max threads: 4 (or whichever number you decided on)
- Hit "Save".
- Close Lumerical.
Prepare/postprocess with Lumerical GUI
The Lumerical GUI can be run on the cluster through Open OnDemand.
Submitting a Lumerical job
It may be unsafe to run multiple Lumerical jobs simultaneously! This is due to the use of the "local computer" resource configuration profile.
This is an example job script:
#!/usr/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4 # <--- replace with desired number of threads
#SBATCH --partition=tue.default.q # <--- replace with your partition
module load lumerical/2021-R1
xvfb-run fdtd-solutions -nw -run [script.lsf] # <--- replace with your script
The above script uses xvfb-run to make Lumerical believe it is connected to a monitor.
Submitting a Lumerical Python job
It may be unsafe to run multiple Lumerical jobs simultaneously! This is due to the use of the "local computer" resource configuration profile.
This is an example job script:
#!/usr/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4 # <--- replace with desired number of threads
#SBATCH --partition=tue.default.q
module load lumerical/2021-R1
set -e
LUMDIR=$(dirname $(dirname $(which fdtd-solutions)))
#export PATH="$LUMDIR/python/bin:$PATH" # <--- uncomment to use the version of Python bundled with Lumerical
export PYTHONPATH="$LUMDIR/api/python"
# -u to keep stdout/stderr unbuffered
xvfb-run python3 -u myscript.py
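Before submitting, a quick interactive check (a sketch; lumapi.py is the file name of the Python API module shipped with recent Lumerical releases) confirms that the API directory resolved by the script actually exists:
module load lumerical/2021-R1                          # or a newer Lumerical module
LUMDIR=$(dirname $(dirname $(which fdtd-solutions)))
ls "$LUMDIR/api/python"                                # should list lumapi.py, which myscript.py would import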
Also see the following websites:
- Setting Lumerical environment variables: here.
Method 2
To be done: configure Lumerical so it makes use of the cluster's MPI system. This implies a tighter integration with the scheduler, and is therefore less prone to user error.
Method 3
Direct resource integration of the HPC node in the Lumerical GUI. Lumerical handles the file transfer from your PC to the HPC, runs the simulation on the HPC, and downloads the simulated file back to your PC. By default, the HPC node does not use a GUI license, only a runner license.
Workflow
- Initial setup of Lumerical (first-time use only!)
- Prepare Lumerical file on your local PC using the GUI
- Run Lumerical job (which will transfer to HPC, run and transfer back)
- Postprocess/view results of Lumerical file on your local PC using the GUI
Initial setup
- Configure SSH keys to connect to the HPC from your local PC using this method
- Configure Lumerical config settings on the HPC
- Connect to HPC using SSH
- Add a new directory using:
mkdir -p ~/.config/Lumerical/
- Create a new file named License.ini inside the folder:
nano ~/.config/Lumerical/License.ini
- Add the following contents
[license]
domain=2
default=user
ansysserver\host=1055@tue032938.ele.tue.nl
flexserver\host=27011@tue032938.ele.tue.nl
- Save and exit using Ctrl+X and press y to confirm saving (alternatively, the shell sketch after this list creates the same file in one step)
- Configure slurm.py on your local PC (if your Lumerical version is older than 2023 R2.2)
- Locate the slurm.py file on your local Lumerical installation usually at
C:\Program Files\Lumerical\vxxx\scripts\job_schedulers\slurm.py
- Open the file and change the following lines to:
USE_SSH = True
USE_SCP = True
CLUSTER_CWD = ''
if USE_SSH:
    USER_NAME = "username"  # TU/e username
    SSH_LOGIN = f"{USER_NAME}@hpc.tue.nl"
    SSH_KEY = expanduser('~/.ssh/privkeyname')  # Location of your private key on your PC
- Save the file
- Configure job_scheduler_input.json on your local PC (if your Lumerical version is 2023 R2.2 or newer)
- Please update it according to Lumerical's documentation
- Add the HPC as resource in the Lumerical GUI
- Open your Lumerical software tool (like FDTD)
- Press the "Resources" button in the top ribbon
- Add a new resource, select it, and press "Edit"
- Change the job launcher preset to 'Job Scheduler: Slurm'
- In the command field, add
sbatch -N 1
- In the submission script field add
#!/usr/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=16
#SBATCH --partition=elec-phi.gpu.q
#SBATCH --error=slurm-%j.err
#SBATCH --output=slurm-%j.out
module purge
module load intel/2023a
module load lumerical/2024-R1.1
mpirun fdtd-engine-impi-lcl {PROJECT_FILE_PATH}
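As an alternative to editing License.ini by hand (step "Configure Lumerical config settings on the HPC" above), the following sketch creates the directory and file in one go on the HPC; the values are the same as listed above, so adjust them if your license server differs:
mkdir -p ~/.config/Lumerical/
cat > ~/.config/Lumerical/License.ini <<'EOF'
[license]
domain=2
default=user
ansysserver\host=1055@tue032938.ele.tue.nl
flexserver\host=27011@tue032938.ele.tue.nl
EOF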
Prepare job & run on HPC
Now that everything is configured, you can prepare your FDTD file and run the job as usual. The software automatically uploads the .fsp file to the HPC and schedules the job. When the job is finished, the results are automatically downloaded back to your PC.
Debug in case of errors
If the simulation does not run, or stops almost immediately, look at the error and log files on the HPC. They are created automatically alongside the transferred project file. Connect to the HPC over SSH and inspect the log files to debug, as in the sketch below.
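A minimal sketch of such a check (assuming the transferred project and its log files ended up in the directory you land in after login; otherwise cd to the upload directory first):
ssh username@hpc.tue.nl                 # same account and key as configured above
ls -lt slurm-*.out slurm-*.err          # log names as defined in the submission script
tail -n 50 slurm-<jobid>.err            # inspect the error output of the failing job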
Troubleshooting
Failed to start messaging, check licenses...
This is a poorly-worded error message that may indicate Lumerical cannot connect to an X11 server. Lumerical is a GUI application; even when started with -nw (no window) it needs to have an X11 server available. Only the solvers themselves do not require X11.
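If you hit this message in a batch job, wrapping the command in xvfb-run (as in the job scripts above) usually provides the missing X11 server; the -a option makes it pick a free display number automatically:
xvfb-run -a fdtd-solutions -nw -run myscript.lsf   # myscript.lsf: your own Lumerical script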