
Steffen Schotthöfer

I am a Research Scientist at Oak Ridge National Laboratory. My research mission is to push the frontier of efficient high-performance scientific simulation and AI training. I unlock higher efficiency through first-principles surrogate modeling: I aim to reduce compute costs while preserving robustness for safety-critical scientific applications.

Bio: I received my Dr. rer. nat. from the Karlsruhe Institute of Technology (KIT) in 2023, where I was advised by Prof. Dr. Martin Frank. I received my M.Sc. and B.Sc. in Mathematics from TU Kaiserslautern.

GitHub  |  LinkedIn  |  Google Scholar  |  contact.schotthoefer [at] gmail [dot] com


News

  • Dec 2025 — ORNL Early Career Award to lead a 2-year project on automating high-stakes scientific simulation workflows using agentic orchestration
  • Dec 2025 — Distinguished Paper Award, Oak Ridge National Laboratory (Computational Science & Mathematics Division), for Dynamical Low-Rank Compression of Neural Networks with Robustness under Adversarial Attacks
  • Dec 2025 — NeurIPS 2025: oral presentation, two main-track papers; top reviewer award
  • June 2025 — Position paper accepted at the US DOE ASCR Workshop on Inverse Methods for Complex Systems: Uncertainty-aware inverse modeling for federated scientific discovery across DOE facilities
  • April 2025 — ICLR 2025: Main track paper GeoLoRA — Geometric Integration for Parameter Efficient Fine-Tuning

Below are my current research directions and contributions to funded projects.

Continuous Learning of Scientific Foundation Models
DOE Genesis Mission
Within the DOE Genesis Mission, this research develops agentic continuous-learning frameworks for Self-Improving Foundation Models, enabling robust, automated model updating for safety-critical scientific applications and deployment alongside model teams spanning multiple national laboratories.

Agentic Orchestration of Scientific Simulation Workflows
ORNL Early Career Award
This research develops LLM-based agentic methods for automating high-stakes scientific simulation workflows, coordinating portfolios of solvers, surrogate models, and uncertainty quantification tools with minimal human intervention. By reducing expert wall-time in design-space exploration, this work accelerates innovation cycles across national energy and security applications.

Scalable Low-Rank Methods for Efficient Deep Learning
Householder Fellowship
This research develops scalable low-rank methods for compute- and energy-efficient deep learning at scale, focusing on adaptive rank compression during training and fine-tuning. By dynamically allocating rank where it matters most, these approaches reduce memory, compute, and communication costs while preserving accuracy and robustness, enabling efficient training and deployment of large neural networks on modern accelerator hardware.
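The core idea of rank-adaptive compression can be illustrated with a one-shot SVD truncation of a weight matrix. This is a deliberate simplification: the actual methods adapt rank dynamically during training via geometric integrators, and the function name and tolerance below are hypothetical choices for illustration only.

```python
import numpy as np

def truncate_rank(W, tol=1e-2):
    """Compress W to the smallest rank r such that all discarded
    singular values fall below tol * (largest singular value).
    Illustrative sketch only, not the dynamical low-rank algorithm."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    r = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :r], np.diag(s[:r]), Vt[:r, :]

# A 256x256 weight matrix that is exactly rank 8:
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 8)) @ rng.standard_normal((8, 256))

U, S, Vt = truncate_rank(W)
print(U.shape[1])                  # retained rank: 8
print(np.allclose(U @ S @ Vt, W))  # True: factors reproduce W
```

Storing the three factors costs O((m + n) r) memory instead of O(m n) for the dense matrix, which is the source of the memory and communication savings when r is small.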

AI-based Surrogate Models for High-Dimensional PDEs
CHaRMNET Research Center
This research develops scalable, efficient, and trustworthy AI-based surrogate models for high-dimensional partial differential equations, mainly for applications in neutron and radiation transport within the CHaRMNET initiative. CHaRMNET is a DOE Mathematical Multifaceted Integrated Capability Center (MMICC) led by Michigan State University and Los Alamos National Laboratory, bringing together researchers from seven universities and four DOE national laboratories to address the curse of dimensionality in plasma and transport modeling.

2025

Dynamical Low-Rank Compression of Neural Networks with Robustness under Adversarial Attacks
Schotthöfer, Yang, Schnake
NeurIPS 2025 [Oral]

A geometric framework for momentum-based optimizers for low-rank training
Schotthöfer, Klein, Kusch
NeurIPS 2025

GeoLoRA: Geometric integration for parameter efficient fine-tuning
Schotthöfer, Zangrando, Ceruti, Tudisco, Kusch
ICLR 2025

Dynamic Low-Rank Training with Spectral Regularization: Achieving Robustness in Compressed Representations
Schotthöfer, Yang, Schnake
ICML 2025 Workshop on Methods and Opportunities at Small Scale

Structure-preserving neural networks for the regularized entropy-based closure of a linear, kinetic, radiative transport equation
Schotthöfer, Laiu, Frank, Hauck
Journal of Computational Physics

Reference solutions for linear radiation transport: the Hohlraum and Lattice benchmarks
Schotthöfer, Hauck
Journal of Computational and Theoretical Transport

An Augmented Backward-Corrected Projector Splitting Integrator for Dynamical Low-Rank Training
Kusch, Schotthöfer, Walter
arXiv preprint

Compressing Vision Transformers in Geospatial Transfer Learning with Manifold-Constrained Optimization
Snyder, Yang, Schnake, Schotthöfer
NeurIPS 2025 Workshop on Constrained Optimization for Machine Learning

2024

Geometry-aware training of factorized layers in tensor Tucker format
Schotthöfer, Zhou, Albring, Gauger
NeurIPS 2024

Federated dynamical low-rank training with global loss convergence guarantees
Schotthöfer, Laiu
arXiv preprint

Structure-preserving operator learning: Modeling the collision operator of kinetic equations
Lee, Schotthöfer, Xiao, Krumscheid, Frank
arXiv preprint

2023

Conservation properties of the augmented basis update & Galerkin integrator for kinetic problems
Einkemmer, Kusch, Schotthöfer
arXiv preprint

Synergies between Numerical Methods for Kinetic Equations and Neural Networks
Schotthöfer
Dissertation

Predicting continuum breakdown with deep neural networks
Xiao, Schotthöfer, Frank
Journal of Computational Physics

KiT-RT: An extendable framework for radiative transfer and therapy
Kusch, Schotthöfer, Stammer, Wolters, Xiao
ACM Transactions on Mathematical Software

2022

Low-rank lottery tickets: finding efficient low-rank neural networks via matrix differential equations
Schotthöfer, Zangrando, Kusch, Ceruti, Tudisco
NeurIPS 2022

Structure Preserving Neural Networks: A Case Study in the Entropy Closure of the Boltzmann Equation
Schotthöfer, Xiao, Frank, Hauck
ICML 2022 [Spotlight]

2021

A structure-preserving surrogate model for the closure of the moment system of the Boltzmann equation using convex deep neural networks
Schotthöfer, Xiao, Frank, Hauck
AIAA Aviation 2021

2020

Regularization for Adjoint-Based Unsteady Aerodynamic Optimization Using Windowing Techniques
Schotthöfer, Zhou, Albring, Gauger
AIAA Journal [AIAA AVIATION 2020 Best Student Paper Award]

2018

A Numerical Comparison of Consensus-Based Global Optimization to other Particle-based Global Optimization Schemes
Totzeck, Pinnau, Blauth, Schotthöfer
PAMM

Most of my publications have an associated GitHub repository with code to reproduce the results. Below are some highlighted software projects.


KiT-RT
The main focus of the KiT-RT software suite is radiotherapy planning for cancer treatment and research in radiative transfer. KiT-RT is a high-performance, open-source C++ platform that supports easy extension with additional methods and techniques. It is used for foundation model training by Lawrence Livermore National Laboratory and NVIDIA, and for multi-fidelity uncertainty quantification by Sandia National Laboratories and Oak Ridge National Laboratory.
[GitHub] [Docs] [paper]


SU2 – Windowing Regularization
Windowing regularization for robust PDE-constrained optimization of unsteady flows, applied to drag reduction of airplane wings: 30% drag reduction on a NACA0012 airfoil in turbulent flow at high angle of attack. Awarded 1st place in the MDO Student Paper Competition at the AIAA Aviation Forum 2020.
[SU2] [tutorial]

RobustDLRT
A Python toolbox for analyzing the adversarial robustness of neural networks and training robust low-rank compressed models. It provides tools for generating adversarial examples, evaluating adversarial accuracy, and performing compressed transfer learning with Vision Transformers for geospatial applications. The associated work received an oral presentation at NeurIPS 2025.
[GitHub] [paper]

Awards & Fellowships

Fellowships & PI Awards

Alston S. Householder Fellowship, Oak Ridge National Laboratory, 2023
Prestigious DOE-funded postdoctoral fellowship in applied mathematics and scientific computing.

LDRD Early Career Award, Oak Ridge National Laboratory, 2024
Principal Investigator for a two-year project on agentic AI for scientific simulation workflows.

DFG Priority Programme Fellow, SPP 2298 “Theoretical Foundations of Deep Learning”, 2021–2024

Paper Awards

Oral Presentation, NeurIPS 2025 — Oral acceptance rate: 0.36%

Distinguished Paper Award, ORNL Computational Sciences and Mathematics Division, 2025

Best Student Paper Award, AIAA Aviation Forum 2020


Selected Presentations

NeurIPS 2025 Oral: Dynamical Low-Rank Compression of Neural Networks with Robustness under Adversarial Attacks

ASCR Workshop 2025: Uncertainty-aware inverse modeling for federated scientific discovery across DOE facilities

ICML 2022 Spotlight: Structure Preserving Neural Networks: A Case Study in the Entropy Closure of the Boltzmann Equation

AIAA AVIATION 2020 Best Student Paper: Windowing Regularization Techniques for Unsteady Aerodynamic Shape Optimization


Mentorship

• Chinmay Patwardhan, visiting researcher at ORNL [2026], Ph.D. student at KIT

• Thomas Snyder, summer intern at ORNL [2024 & 2025], undergrad at Yale

• Hanna Park, summer intern at ORNL [2025], Ph.D. student at Mississippi State

• Jakob Maisch, Bachelor Thesis at KIT [2023], M.Sc. student at KIT

• Tim Stoll, Bachelor Thesis at KIT [2022]

• Martin Sadric, Bachelor Thesis at KIT [2022]

• Gabriel Moser, Bachelor Thesis at KIT [2021]

• Florian Buk, Master Thesis at KIT [2020]


Service

Organization

• International Workshop on Moment Methods in Kinetic Theory IV

• Minisymposium "Advances in High Dimensional PDE Methods using Sparse Grids and Low-Rank" at SIAM CSE 2025

• Minisymposium "Federated Learning and Data Mining in Distributed Environments" at SIAM SEAS 2025

Reviewing

• Program committees and peer review for NeurIPS (top reviewer), ICML, ICLR, and AAAI; journal reviewing includes the Journal of Computational Physics, SIAM MMS, the Journal of Scientific Computing, and the Journal of Optimization Theory and Applications.