SFB seminar series
Unless otherwise stated, the seminar talks will take place at 12:30 in lecture hall "E1" in the Rogowski building at Schinkelstraße 2.
07.12.22 Arie Koster (RWTH Aachen)
Title: Discrete Optimization - The high art of decision making
Abstract: In this seminar, we will stride through the mathematical discipline of discrete optimization with seven-league boots. We highlight not only theoretical principles such as complexity theory and approximation algorithms, but also the available toolbox for solving discrete optimization problems exactly. Optimization under uncertainty completes the talk.
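To make the "exact toolbox" concrete, here is a minimal branch-and-bound sketch for the classical 0/1 knapsack problem; the problem instance and all names are illustrative and not taken from the talk:

```python
def knapsack_exact(values, weights, capacity):
    """Solve the 0/1 knapsack problem exactly by branch and bound.

    We branch on taking/skipping each item and prune branches whose
    fractional (LP relaxation) bound cannot beat the best value found.
    """
    # Consider items in order of decreasing value density.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    best = 0

    def bound(k, value, cap):
        # Greedy fractional fill of remaining items: a valid upper bound.
        for i in order[k:]:
            if weights[i] <= cap:
                cap -= weights[i]
                value += values[i]
            else:
                return value + values[i] * cap / weights[i]
        return value

    def branch(k, value, cap):
        nonlocal best
        if value > best:
            best = value
        if k == len(order) or bound(k, value, cap) <= best:
            return  # leaf reached, or this subtree cannot improve on best
        i = order[k]
        if weights[i] <= cap:                      # branch: take item i
            branch(k + 1, value + values[i], cap - weights[i])
        branch(k + 1, value, cap)                  # branch: skip item i

    branch(0, 0, capacity)
    return best

# Classic toy instance: the optimum picks the last two items.
opt = knapsack_exact([60, 100, 120], [10, 20, 30], 50)
```

The pruning step is what separates this from brute-force enumeration: subtrees whose relaxation bound is no better than the incumbent are discarded, which is the core idea behind modern exact solvers.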
14.12.22 Markus Bachmayr (RWTH Aachen)
Title: Multilevel representations of random fields
Abstract: We consider multilevel representations of stationary random fields on domains and of isotropic random fields on the sphere, implications for sampling such random fields, and how such representations can be utilized in the context of sparse polynomial expansions of solutions of random PDEs. Based on joint works with Albert Cohen, Ron DeVore, Ana Djurdjevac, Dinh Dung, Giovanni Migliorati, Christoph Schwab, and Igor Voulis.
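As background, sparse polynomial expansions of random PDE solutions are typically built on an affine parametric representation of the random field; the following schematic form is a standard one and not necessarily the exact representation used in the talk:

```latex
% Affine (multilevel) expansion of a random coefficient field a on a domain D:
% the \psi_j are localized, level-organized basis functions (e.g., wavelet-type)
% and the y_j are scalar random parameters.
a(x,\omega) \;=\; \bar{a}(x) \;+\; \sum_{j \ge 1} y_j(\omega)\, \psi_j(x),
\qquad x \in D .
% Multilevel (wavelet-type) choices of \psi_j, as opposed to globally supported
% Karhunen-Loeve eigenfunctions, can improve the sparsity of the resulting
% polynomial expansions of the PDE solution.
```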
25.01.23 Benjamin Berkels (RWTH Aachen)
Title: From variational exit wave reconstruction to deep unrolling
Abstract: We first revisit the so-called exit wave reconstruction problem in the variational setting. Here, exit wave reconstruction means reconstructing the complex-valued electron wave in a transmission electron microscope (TEM) right before it passes the objective lens, i.e., the exit wave, from a series of real-valued TEM images acquired with varying focus. This is a non-linear inverse problem and a variant of the well-known phase retrieval problem. We will show the existence of minimizers, discuss practical gradient-based optimization algorithms, and show results on real data. Moreover, we will show how a regularizer for this non-linear inverse problem can be learned from data in the form of the Total Deep Variation.
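Schematically, such a variational formulation might take the following form; the notation is illustrative and not taken from the talk:

```latex
% Variational exit wave reconstruction: given N focal-series images I_i,
% recover the complex exit wave \Psi by minimizing a data-fidelity term
% plus a regularizer R (e.g., a learned Total Deep Variation).
E[\Psi] \;=\; \sum_{i=1}^{N} \int_{\Omega}
  \Bigl( \bigl| (\mathcal{T}_{f_i}\Psi)(x) \bigr|^{2} - I_i(x) \Bigr)^{2} dx
  \;+\; \lambda\, R[\Psi],
% where \mathcal{T}_{f_i} models the image formation of the microscope at
% focus value f_i; the squared modulus makes the forward map non-linear,
% as in phase retrieval.
```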
In the second part of the talk, we will turn to deep algorithm unrolling, a general strategy that reinterprets iterative algorithms as deep neural networks with a very special structure inherited from the unrolled algorithm. In particular, unrolling allows one to introduce data-driven learning into the unrolled algorithm. To this end, we will discuss the proximal gradient algorithm, which is well suited for unrolling and also applicable to exit wave reconstruction.
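The unrolling idea can be sketched on a simple sparse-recovery problem: each iteration of the proximal gradient method (ISTA) becomes one "layer" of a fixed-depth network. This minimal sketch uses a LASSO-type objective with fixed step sizes; in an actual unrolled network the per-layer parameters would be learned from data, and all names and data here are illustrative:

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||.||_1 (the non-smooth part of LASSO).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_ista(A, y, step_sizes, lam):
    """Proximal gradient (ISTA) unrolled into a fixed number of layers.

    Each layer computes x <- prox(x - t_k * A^T (A x - y)); in deep
    unrolling, the per-layer quantities (step sizes t_k, thresholds,
    or even the matrices) become trainable parameters.
    """
    x = np.zeros(A.shape[1])
    for t in step_sizes:              # network depth = number of iterations
        grad = A.T @ (A @ x - y)      # gradient of the smooth data term
        x = soft_threshold(x - t * grad, t * lam)
    return x

# Tiny synthetic sparse-recovery example.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
y = A @ x_true
t = 1.0 / np.linalg.norm(A, 2) ** 2   # safe constant step size
x_hat = unrolled_ista(A, y, [t] * 200, lam=0.1)
```

The point of the reinterpretation is that the depth is fixed in advance, so the whole map from `y` to `x_hat` is differentiable in the layer parameters and can be trained end to end.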
01.02.23 Umberto Hryniewicz (RWTH Aachen)
Title: Classical Morse theory and deep learning
Abstract: The goal of this talk is to bring ideas from classical Morse theory closer to some problems arising in deep learning. We would like to discuss possible implications that certain statements about gradient flows of proper real-analytic functions might have for the analysis of “linear” deep learning schemes. An example of such a statement is that a “typical” anti-gradient trajectory of a proper real-analytic function that is bounded from below necessarily converges to a local minimum. An analogous statement holds for a generic smooth function.
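The statement about anti-gradient trajectories can be sketched as follows; this is a schematic formulation, not a verbatim quote from the talk:

```latex
% Anti-gradient flow of f : R^n -> R:
\dot{x}(t) \;=\; -\nabla f\bigl(x(t)\bigr), \qquad x(0) = x_0 .
% If f is proper, real-analytic and bounded from below, the Lojasiewicz
% gradient inequality implies that every bounded trajectory converges to
% a single critical point x^*.  For a "typical" initial condition x_0,
% the limit x^* is moreover a local minimum rather than a saddle point;
% an analogous statement holds for a generic smooth f.
```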