Structural Kernel Search via Bayesian Optimization and Symbolical Optimal Transport
Matthias Bitzer, Mona Meister, Christoph Zimmer · NeurIPS 2022
TL;DR: Efficient search over structured kernel spaces for Gaussian processes using a kernel-kernel defined on symbolic kernel representations via optimal transport, avoiding costly function-space distances.
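As background, structured kernel spaces of this kind are usually grammars of base kernels composed with + and ×, with candidates scored by the GP log marginal likelihood. A minimal NumPy sketch of evaluating one symbolic expression and scoring it (generic background, not the paper's implementation; kernel names and hyperparameters are illustrative):

```python
import numpy as np

# Base kernels on 1-D inputs, each returning an n x m Gram matrix.
def se(x1, x2, ls=1.0):   # squared-exponential (SE)
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ls ** 2)

def per(x1, x2, p=1.0):   # periodic (PER)
    return np.exp(-2.0 * np.sin(np.pi * np.abs(x1[:, None] - x2[None, :]) / p) ** 2)

def lin(x1, x2):          # linear (LIN)
    return x1[:, None] * x2[None, :]

# A structured kernel is a symbolic expression over {+, *} and base kernels,
# e.g. "SE + PER * LIN"; sums and products of PSD kernels remain PSD.
x = np.linspace(0.0, 1.0, 25)
K = se(x, x, ls=0.3) + per(x, x, p=0.5) * lin(x, x)

# Log marginal likelihood of y ~ N(0, K + sigma^2 I): the standard score
# for comparing candidate kernel structures on a dataset.
y = np.sin(6.0 * x)
Kn = K + 1e-2 * np.eye(x.size)
L = np.linalg.cholesky(Kn)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
lml = -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * x.size * np.log(2 * np.pi)
```

The search problem the paper targets is then optimizing over such symbolic expressions, which requires a similarity measure between kernel structures.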
Hierarchical-Hyperplane Kernels for Actively Learning Gaussian Process Models of Nonstationary Systems
Matthias Bitzer, Mona Meister, Christoph Zimmer · AISTATS 2023
TL;DR: A kernel family with learnable input partitions for active learning of nonstationary GPs: trainable via gradients, geometrically flexible, and well suited as a prior in low-data active-learning regimes.
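To illustrate the general idea of input-partitioning kernels, a common construction gates region-specific kernels with a smooth hyperplane split, giving a nonstationary but still positive-semidefinite kernel. A toy NumPy sketch (a generic gated-mixture construction, not the paper's exact kernel family; all parameter values are illustrative):

```python
import numpy as np

def se(X1, X2, ls):
    # Squared-exponential kernel on d-dimensional inputs.
    d = X1[:, None, :] - X2[None, :, :]
    return np.exp(-0.5 * (d ** 2).sum(-1) / ls ** 2)

def hyperplane_gate(X, w, b):
    # Smooth partition of the input space by the hyperplane w·x + b = 0.
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

def gated_kernel(X1, X2, w, b, ls1, ls2):
    g1, g2 = hyperplane_gate(X1, w, b), hyperplane_gate(X2, w, b)
    # Each side of the hyperplane gets its own lengthscale; multiplying a PSD
    # kernel by g(x)g(x') preserves PSD-ness, and sums of PSD kernels are PSD.
    return (np.outer(g1, g2) * se(X1, X2, ls1)
            + np.outer(1.0 - g1, 1.0 - g2) * se(X1, X2, ls2))

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(20, 2))
K = gated_kernel(X, X, w=np.array([1.0, 0.0]), b=0.0, ls1=0.1, ls2=1.0)
```

Because the gate is a smooth function of `w` and `b`, such partitions can be learned jointly with the kernel hyperparameters by gradient-based marginal-likelihood optimization.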
Amortized Inference for Gaussian Process Hyperparameters of Structured Kernels
Matthias Bitzer, Mona Meister, Christoph Zimmer · UAI 2023
TL;DR: Addresses the computational bottleneck of learning kernel parameters by amortizing inference across datasets and kernel structures with a transformer-based network that takes both a dataset and a symbolic kernel description as input.
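The core idea of amortization is to replace per-dataset hyperparameter optimization with a single forward pass of a model trained across many datasets. A deliberately tiny stand-in (a linear readout on summary statistics, not the paper's transformer; all details are illustrative) that "amortizes" lengthscale estimation for 1-D SE-kernel GPs:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(ls, n=30):
    # Sample a 1-D GP draw with squared-exponential kernel of lengthscale ls.
    x = np.linspace(0.0, 1.0, n)
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ls ** 2) + 1e-6 * np.eye(n)
    y = np.linalg.cholesky(K) @ rng.standard_normal(n)
    return x, y

def summary(x, y):
    # Cheap dataset summaries: roughness, scale, and a bias term.
    return np.array([np.abs(np.diff(y)).mean(), y.std(), 1.0])

# "Training" phase: fit a linear map from summaries to log-lengthscale
# over many sampled tasks (stand-in for training an inference network).
F, t = [], []
for _ in range(500):
    ls = rng.uniform(0.05, 0.5)
    x, y = make_dataset(ls)
    F.append(summary(x, y)); t.append(np.log(ls))
w, *_ = np.linalg.lstsq(np.array(F), np.array(t), rcond=None)

# Amortized inference: one forward pass on a new dataset, no optimization loop.
x, y = make_dataset(0.2)
ls_hat = np.exp(summary(x, y) @ w)
```

The paper's contribution, by contrast, handles full structured kernels by conditioning a transformer on both the dataset and the symbolic kernel description.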
Analyzing Closed-Loop Training Techniques for Realistic Traffic Agent Models in Autonomous Highway Driving Simulations
Matthias Bitzer, Reinis Cimurs, Benjamin Coors, Johannes Goth, Sebastian Ziesche, Philipp Geiger, Maximilian Naumann · ICCV 2025 Workshop (WDFM-AD)
TL;DR: Comparative analysis of closed-loop training for realistic traffic agents: open vs. closed-loop training, generative adversarial vs. deterministic supervised methods, impact of RL losses, and training with log-replayed agents in highway driving simulations.