Past talks

Fall 2021

December 2nd

Speaker: James MacLaurin, NJIT

Hosted by: Binan Gu, NJIT
Topic: Dynamics of the Spherical Spin Glass
Abstract: This talk will discuss my work to characterize the glassy dynamical phase transition in the spherical p-spin glass. This is a paradigmatic model of high-dimensional stochastic gradient descent in a disordered energy landscape. I use large deviations theory to obtain limiting population density equations describing the dynamics.

November 17th

Speaker: Binan Gu, NJIT

Hosted by: Jim Adriazola, NJIT
Topic: On the Continuum Limit of PageRank
Abstract: I will discuss the paper by Jeff Calder et al. on the continuum limit of PageRank, a direct follow-up to my talk two weeks ago. PageRank is a ranking vector that quantifies the importance of nodes in a graph (or webpages in the world wide web, or subsets thereof). This vector can be interpreted as the dominant eigenvector of the transition matrix of a modified random walk (with restarts), i.e. its stationary distribution. Calder et al. study the continuum limit of the PageRank algorithm as the number of nodes goes to infinity in a suitably rescaled fashion and derive a governing PDE that captures the spatial dependence of the PageRank score. In this talk, we examine the groundwork for this result, point out the background needed for this derivation and many similar problems, and connect it to mean-field limits of stochastic games.
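
To make the stationary-distribution interpretation concrete, here is a minimal sketch of PageRank computed by power iteration on a random walk with restarts. The adjacency matrix, the damping parameter alpha, and the dangling-node convention below are illustrative choices, not taken from the paper.

```python
import numpy as np

def pagerank(A, alpha=0.85, tol=1e-10, max_iter=1000):
    """PageRank as the stationary distribution of a random walk with restarts.

    A     : (n, n) adjacency matrix, A[i, j] = 1 if node i links to node j
    alpha : probability of following a link; with probability 1 - alpha the
            walker restarts at a uniformly random node
    """
    n = A.shape[0]
    deg = A.sum(axis=1, keepdims=True)
    # Row-stochastic transition matrix; dangling nodes restart uniformly.
    P = np.where(deg == 0, 1.0 / n, A / np.maximum(deg, 1.0))
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_next = alpha * (r @ P) + (1.0 - alpha) / n   # step, or restart
        if np.abs(r_next - r).sum() < tol:
            break
        r = r_next
    return r_next

# Tiny example: node 0 links to all others; the others chain back toward 0.
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]], dtype=float)
print(pagerank(A))   # a probability vector ranking the four nodes
```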

November 11th

Speaker: Hamza M. Ruzayqat, Postdoctoral Research Fellow at King Abdullah University of Science and Technology (KAUST)

Hosted by: Jim Adriazola, NJIT
Topic: Particle Filter in High Dimensions
Abstract: One of the most important applications of fluid dynamics models is numerical weather prediction. Modern numerical weather prediction combines sophisticated nonlinear fluid dynamics models with increasingly accurate high-dimensional data. This process is called data assimilation (or filtering), and it is performed every day at all major operational weather centers across the world. Data assimilation is not limited to fluid dynamics; it has many applications in statistics and engineering. Filtering in general is a very challenging task, as analytical solutions are typically not available and many numerical approximation methods can have a cost that scales exponentially with the dimension. In this talk I will focus on the class of algorithms based on sequential Monte Carlo (SMC) methods. I will present a new method (the lagged particle filter) that aims to reduce the cost to O(Nd²), where N is the number of simulated samples in the SMC algorithm and d is the dimension. The bias of our approximation is shown to be uniformly controlled in the dimension and exponentially small in time.
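
As background for the SMC methods mentioned above, the following is a minimal bootstrap particle filter on a toy one-dimensional state-space model. It is not the lagged particle filter from the talk, just the baseline algorithm it builds on; the model coefficients are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D linear state-space model (illustrative only; the talk's setting
# is high-dimensional):  x_t = 0.9 x_{t-1} + noise,  y_t = x_t + noise.
T, N = 50, 500                        # time steps, number of particles
sigma_x, sigma_y = 1.0, 0.5

x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + sigma_x * rng.standard_normal()
y = x_true + sigma_y * rng.standard_normal(T)

# Bootstrap particle filter: propagate, reweight by likelihood, resample.
particles = rng.standard_normal(N)
est = np.zeros(T)
for t in range(T):
    particles = 0.9 * particles + sigma_x * rng.standard_normal(N)
    logw = -0.5 * ((y[t] - particles) / sigma_y) ** 2   # Gaussian log-likelihood
    w = np.exp(logw - logw.max())                       # stabilized weights
    w /= w.sum()
    est[t] = w @ particles                              # filtering mean
    particles = particles[rng.choice(N, size=N, p=w)]   # multinomial resampling

print("RMSE:", np.sqrt(np.mean((est - x_true) ** 2)))
```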

November 4th

Speaker: Binan Gu, NJIT

Hosted by: Axel Turnquist
Topic: PageRank, related centrality measures, and connections to diffusion on large graphs
Abstract: In this talk, I will introduce the well-known PageRank algorithm, discuss the notion of centrality, and finally connect these ideas to graph partitioning schemes involving modified diffusions on very large graphs.

October 28th

Speaker: Lou Kondic, NJIT

Hosted by: Binan Gu
Topic: Topological Data Analysis Applied to Interaction Networks in Particulate Systems
Abstract: Particulate systems are very common in nature and in a variety of technologically relevant applications. Many of these systems are composed of particles that remain in contact for relatively long periods. These contacts form a network whose properties are important for understanding the system as a whole. However, the contact network provides only partial information about the interaction between the particles. To obtain a deeper understanding of a particulate system, the strength of the contacts must also be considered. This naturally leads to the concept of interaction networks appearing at the mesoscale. The properties of these structures are of fundamental importance for revealing the underlying physical causes of many phenomena, ranging from the interaction fields of colloidal systems to earthquakes. This presentation focuses on applications of algebraic topology, in particular persistent homology, to the analysis of such interaction networks. This approach reduces a complicated interaction network to a set of topological quantities that describe its global properties. Furthermore, it allows us to explore not only static but also dynamic properties of interaction networks, so that the time dependence of these networks can be quantified as well.
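
As a rough illustration of how persistent homology digests a weighted interaction network, the sketch below computes 0-dimensional persistence for a filtration by interaction strength using a union-find structure. The construction (sweeping a threshold downward through contact strengths) is standard, but the data and conventions here are invented for illustration; real analyses typically use libraries such as GUDHI or Ripser and also track higher-dimensional features (loops, voids).

```python
import numpy as np

def h0_persistence(n, contacts):
    """0-dimensional persistence for a filtration by interaction strength.

    Sweep a threshold w downward: at level w the network contains all n
    particles and every contact with strength >= w.  All components are
    therefore born at w = +inf, and the informative output is the set of
    death (merge) strengths -- single-linkage clustering in disguise.

    contacts : list of (i, j, strength) tuples
    """
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path compression
            i = parent[i]
        return i

    bars, alive = [], n
    for i, j, w in sorted(contacts, key=lambda c: -c[2]):  # strongest first
        ri, rj = find(i), find(j)
        if ri != rj:
            bars.append((np.inf, w))           # a component dies at strength w
            parent[rj] = ri
            alive -= 1
    bars += [(np.inf, -np.inf)] * alive        # surviving components never die
    return bars

contacts = [(0, 1, 5.0), (0, 2, 4.0), (1, 2, 3.0), (2, 3, 1.0)]
print(h0_persistence(4, contacts))
```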

October 21st

Speaker: Travis Askham, NJIT

Hosted by: Axel Turnquist
Topic: Sparsity Constraints: Applications, Approximations, and Algorithms
Abstract: I'll give an opinionated overview of sparsity constraints as they are applied in the contexts of compressed sensing, matrix completion, matrix compression, and image deblurring. I'll review common approximations of such constraints and a few classes of algorithms for computing approximate solutions.
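
For a concrete instance of these themes, here is a sketch of the classic l1 relaxation of a sparsity constraint, solved with ISTA (iterative soft-thresholding, a proximal gradient method) on a toy compressed sensing problem. The problem sizes and regularization weight are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy compressed sensing problem: recover a k-sparse x from y = A x.
n, m, k = 200, 80, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA: proximal gradient descent on 0.5 * ||Ax - y||^2 + lam * ||x||_1.
lam = 0.01
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(3000):
    z = x - A.T @ (A @ x - y) / L                          # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```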

October 14th

Speaker: Georg Stadler, Courant Institute of Mathematical Sciences, NYU

Hosted by: Binan Gu
Topic: Optimal control of systems governed by PDEs with uncertain parameters
Abstract: I will review two formulations of optimization problems under uncertainty. The uncertainty enters the problem through the governing equation, typically a PDE. I will show various application examples to illustrate the contexts in which these formulations can be useful.

October 7th

Speaker: Connor Robertson, NJIT

Hosted by: Axel Turnquist
Topic: Neural networks for function approximation and data-driven modeling
Abstract: Artificial neural networks have demonstrated an impressive capacity for classification and prediction in many fields. Applications in public-facing fields such as text and image processing have stirred up massive public and academic interest in machine learning generally. This excitement has more recently begun to influence scientific techniques for modeling physical systems. In this talk, I will discuss the use of a variety of neural network architectures for function approximation and data-driven modeling of physical systems. In particular, I will focus on recent efforts to include "interpretable" elements in the network architecture that wholly or partially remove the "black box" from around neural networks. The architectures I will discuss include: convolutional neural networks, autoencoders, recurrent and residual neural networks, physics-informed neural networks, reservoir computers, symbolic networks, and hybrid combinations of differential equations and neural networks. I will both give an overview of these popular methods and highlight their strengths and weaknesses as compared to their traditional counterparts in modeling and simulation.
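
Of the architectures listed, physics-informed neural networks are perhaps the easiest to demonstrate in a few lines. Below is a minimal, self-contained PyTorch sketch that fits a small network to a 1D Poisson problem by penalizing the equation residual at random collocation points; the network size, learning rate, and test equation are all arbitrary illustrative choices, not anything specific from the talk.

```python
import torch

torch.manual_seed(0)

# Fit u(x) on [0, 1] to u''(x) = -pi^2 sin(pi x), u(0) = u(1) = 0,
# whose exact solution is sin(pi x).  The PDE enters via a residual loss
# at random collocation points, with autograd supplying u''.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
bc = torch.tensor([[0.0], [1.0]])                  # boundary points

for step in range(5000):
    x = torch.rand(64, 1, requires_grad=True)      # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = d2u + torch.pi ** 2 * torch.sin(torch.pi * x)
    loss = (residual ** 2).mean() + (net(bc) ** 2).mean()  # PDE + boundary terms
    opt.zero_grad()
    loss.backward()
    opt.step()

x_test = torch.linspace(0, 1, 11).reshape(-1, 1)
err = (net(x_test) - torch.sin(torch.pi * x_test)).abs().max()
print("max error vs exact solution:", err.item())
```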

September 30th

Speaker: Brittany Hamfeldt, NJIT

Hosted by: Axel Turnquist
Topic: Full Waveform Inversion Using the Wasserstein Metric
Abstract: We consider the problem of full waveform inversion, which seeks to use surface measurements to determine the structure of the earth's subsurface. While this inverse problem can easily be formulated as an optimization problem, the resulting objective function is often highly non-convex and difficult to minimize in practice. We discuss the use of the Wasserstein metric for measuring the misfit between seismic signals. This talk will focus on two questions. (1) What properties of the Wasserstein metric make it a good choice of misfit for this particular application? (2) How can we use the adjoint state method to quickly construct the gradients that are needed for efficient optimization?
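
As a small illustration of question (1): in one dimension the Wasserstein distance between suitably normalized signals can be computed directly from quantile functions, and it responds smoothly to translations where the L2 misfit saturates. The sketch below is a generic 1D computation, not the talk's full waveform inversion setup; the normalization convention and test pulses are illustrative assumptions.

```python
import numpy as np

def w2sq_1d(f, g, x):
    """Squared 2-Wasserstein distance between densities on a uniform grid x.

    In 1D, W2^2 = integral of |F^{-1}(t) - G^{-1}(t)|^2 dt, where F and G
    are the CDFs.  Inputs must be nonnegative and are normalized to unit
    mass here.  (Seismic signals can be negative, so FWI applications first
    apply a positivity transform such as squaring or shifting.)
    """
    f = f / f.sum()
    g = g / g.sum()
    F, G = np.cumsum(f), np.cumsum(g)      # discrete CDFs
    t = np.linspace(1e-6, 1 - 1e-6, 4000)
    Finv = np.interp(t, F, x)              # quantile functions
    Ginv = np.interp(t, G, x)
    return np.mean((Finv - Ginv) ** 2)

# Shifted Gaussian pulses: the W2 misfit grows smoothly with the shift,
# while the L2 misfit saturates once the pulses no longer overlap --
# the convexity property that makes W2 attractive for this application.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
ref = np.exp(-x ** 2)
for s in [0.5, 2.0, 5.0]:
    shifted = np.exp(-(x - s) ** 2)
    print(s, w2sq_1d(ref, shifted, x), np.sum((ref - shifted) ** 2) * dx)
```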

September 23rd

Speaker: Axel Turnquist, NJIT

Hosted by: Binan Gu
Topic: Image Sharpening via Sobolev Gradient Flows
Abstract: The most obvious way to blur an image isotropically is by using the heat equation (or Gaussian convolutions). What if we wanted to sharpen the image? It is well known that the backwards heat equation is ill-posed. So what other backwards "heat"-like evolution equations are available? Here we discuss a variational approach that yields a nice PDE for forward and backward blurring/sharpening.
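
Here is a minimal sketch of one such well-posed pair of flows, assuming the H^1 (Sobolev) preconditioned gradient flow u_t = ±(I - lam*Δ)^{-1} Δu on a periodic domain, where the flow can be integrated exactly in Fourier space. The parameters are illustrative, and the talk's precise variational formulation may differ.

```python
import numpy as np

def sobolev_flow(u, t, lam=1.0, sharpen=False):
    """Evolve u_t = ±(I - lam Δ)^{-1} Δ u exactly on a periodic grid.

    The Sobolev preconditioner turns the heat multiplier -|k|^2 into
    -|k|^2 / (1 + lam |k|^2), which is bounded by 1/lam, so the backward
    (sharpening) flow amplifies frequencies by at most exp(t / lam):
    well-posed, unlike the backward heat equation.
    """
    ny, nx = u.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    m = -k2 / (1.0 + lam * k2)                   # bounded Fourier multiplier
    sign = -1.0 if sharpen else 1.0
    return np.real(np.fft.ifft2(np.exp(sign * t * m) * np.fft.fft2(u)))

rng = np.random.default_rng(2)
img = rng.random((64, 64))
blurred = sobolev_flow(img, t=5.0)                        # forward: smoothing
recovered = sobolev_flow(blurred, t=5.0, sharpen=True)    # backward: sharpening
print("round-trip error:", np.abs(recovered - img).max())
```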

September 16th

Speaker: Jim Adriazola, NJIT

Hosted by: Axel Turnquist
Topic: Introduction to Kalman filtering and data assimilation
Abstract: Solving high-dimensional data science problems is becoming an expected skill for practicing applied mathematicians. In this talk, we'll discuss the filtering problem, which asks how to establish a "best estimate" of the true state of a system when we only have access to an incomplete or noisy set of observations. We will discuss how the ensemble Kalman filter (EnKF), first developed in the 1990s, has become a popular algorithm for solving filtering problems common in geoscientific applications, where state dimensions can be on the order of millions.
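
A minimal EnKF sketch on a linear-Gaussian toy model (where the classical Kalman filter is exact, so the ensemble version is easy to sanity-check): the model matrices, noise levels, and the perturbed-observation update used here are standard textbook choices, not specific to any geoscientific system.

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear-Gaussian toy system, so the EnKF has a well-understood target:
#   x_t = M x_{t-1} + model noise,   y_t = H x_t + observation noise.
d, T, Ne = 4, 40, 100                        # state dim, steps, ensemble size
M, H = 0.95 * np.eye(d), np.eye(d)
Q, R = 0.1 * np.eye(d), 0.5 * np.eye(d)

x = np.zeros(d)
ens = rng.standard_normal((Ne, d))           # initial ensemble (rows = members)
errs = []
for t in range(T):
    x = M @ x + rng.multivariate_normal(np.zeros(d), Q)      # true state
    y = H @ x + rng.multivariate_normal(np.zeros(d), R)      # observation
    # Forecast: push every member through the stochastic model.
    ens = ens @ M.T + rng.multivariate_normal(np.zeros(d), Q, size=Ne)
    # Analysis: Kalman update built from the ensemble sample covariance.
    X = ens - ens.mean(axis=0)
    C = X.T @ X / (Ne - 1)
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)             # Kalman gain
    y_pert = y + rng.multivariate_normal(np.zeros(d), R, size=Ne)
    ens = ens + (y_pert - ens @ H.T) @ K.T
    errs.append(np.linalg.norm(ens.mean(axis=0) - x))

print("mean analysis error:", np.mean(errs))
```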

September 9th

Speaker: Binan Gu, NJIT

Hosted by: Axel Turnquist
Topic: Stochastic Temporal Networks
Abstract: Dynamics on temporal networks are ubiquitous in our daily life, ranging from social interactions and trade networks to power grids, cardiovascular systems, and graphical methods in machine learning. At the same time, the underlying structure of the connections can be inherently random, allowing communication only from time to time in a stochastic fashion. In this talk, I will survey the building blocks of stochastic temporal networks through simple examples such as the Poisson random walk on an undirected graph, and discuss the derivation of a generalized Montroll-Weiss formula for continuous-time random walks on stochastically varying temporal graphs.
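
The warm-up example from the abstract, a Poisson random walk on a small undirected graph, can be simulated in a few lines; the graph below is arbitrary, and the genuinely temporal setting of the talk would additionally randomize when each edge is available. Long-run occupation times should be proportional to node degree.

```python
import numpy as np

rng = np.random.default_rng(4)

# Poisson random walk on a small undirected graph: the walker holds for an
# Exp(1) time at each node, then jumps to a uniformly chosen neighbor.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
t_end, t, node = 50000.0, 0.0, 0
occupation = np.zeros(len(adj))
while t < t_end:
    hold = rng.exponential(1.0)              # exponential waiting time
    occupation[node] += hold
    t += hold
    node = rng.choice(adj[node])             # uniform jump along an edge
occupation /= occupation.sum()

degrees = np.array([len(nbrs) for nbrs in adj.values()], dtype=float)
print("empirical occupation:", occupation.round(3))
print("degree-proportional :", (degrees / degrees.sum()).round(3))
```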

September 2nd

Speaker: Axel Turnquist, NJIT

Hosted by: Binan Gu
Topic: Computing the Distance Between Probability Measures: Wasserstein vs. Fisher-Rao

Spring 2021

May 13th

Speaker: Yuexin Liu, NJIT

Hosted by: Binan Gu
Topic: Proximal Policy Gradient Algorithm for Reinforcement Learning

May 6th

Speaker: David Shirokoff, NJIT

Hosted by: Axel Turnquist
Topic: A discussion of open problems related to stochastic gradient descent

April 29th

Speaker: James MacLaurin, NJIT

Hosted by: Binan Gu
Topic: Algorithmic thresholds for principal component analysis of tensors
Abstract: We study the algorithmic thresholds for principal component analysis of Gaussian tensors with a planted rank-one spike, via Langevin dynamics and gradient descent. I consider N stochastic particles moving randomly over the N-sphere. The advection term is a gradient descent: the gradient is biased towards a particular point on the sphere, which we think of as the "North Pole". Another component of the gradient consists of an all-to-all interaction term: each interaction involves p particles and is modulated by a static random weight. This random Hamiltonian is an archetype of a high-dimensional disordered energy landscape. We want to efficiently estimate the location of the North Pole signal, despite the static disorder due to the random Hamiltonian and the white noise perturbing each of the particles. To do this, I determine autonomous limiting equations that predict the relative influence of the North Pole signal, the static disorder, and the white noise.
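
For intuition, here is a sketch of the spiked-matrix (p = 2) analogue: spherical Langevin dynamics climbing the Hamiltonian H(x) = ⟨x, Yx⟩/2, where Y is a rank-one spike plus GOE noise. The general p-spin (tensor) case from the abstract replaces Y with a random p-tensor; the parameters below (N, signal-to-noise ratio, temperature) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Spiked-matrix model: Y = snr * v v^T + GOE noise, with the planted unit
# vector v playing the role of the "North Pole".  Langevin dynamics on the
# sphere |x| = sqrt(N), enforced here by renormalizing after each step.
N, snr, beta, dt, steps = 400, 3.0, 100.0, 0.01, 5000
v = rng.standard_normal(N)
v /= np.linalg.norm(v)
G = rng.standard_normal((N, N))
Y = snr * np.outer(v, v) + (G + G.T) / np.sqrt(2 * N)

x = rng.standard_normal(N)
x *= np.sqrt(N) / np.linalg.norm(x)
for _ in range(steps):
    drift = Y @ x                                   # ascent on H(x) = <x, Yx>/2
    x = x + dt * drift + np.sqrt(2 * dt / beta) * rng.standard_normal(N)
    x *= np.sqrt(N) / np.linalg.norm(x)             # stay on the sphere

# Overlap with the spike: about 1/sqrt(N) for a random guess, order one
# when the signal-to-noise ratio is above the recovery threshold.
print("overlap:", abs(x @ v) / np.linalg.norm(x))
```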

April 22nd

Speaker: Zuofeng Shang, NJIT

Hosted by: Binan Gu
Topic: Functional Data Analysis via Deep Neural Networks
Abstract: In functional data analysis, data are observed as smooth curves, surfaces, or hypersurfaces evaluated at a finite subset of design points. Functional data usually exhibit complex underlying structures, so classic statistical approaches encounter challenges. In this talk, I will describe some recent progress on functional data analysis via deep learning, which has proven superior in handling complex functional data.

April 15th

Speaker: William McCann, NJIT

Hosted by: Binan Gu
Topic: Stochastic Gradient Descent (SGD)

April 8th

Speaker: Binan Gu, NJIT

Hosted by: Axel Turnquist
Topic: Diffusion Approximations and Nonconvex Optimization

March 25th

Speaker: Jim Adriazola, NJIT

Hosted by: Axel Turnquist
Topic: Nesterov’s method for accelerated gradient descent

May 13th

Speaker: Yixuan Sun, NJIT

Hosted by: Binan Gu
Topic: Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions

May 13th

Speaker: Axel Turnquist, NJIT

Hosted by: Binan Gu
Topic: Solving High-Dimensional Parabolic PDEs Using Deep Learning

May 13th

Speaker: Jim Adriazola, NJIT

Hosted by: Axel Turnquist
Topic: Deep Learning for Mean-Field Games II

May 13th

Speaker: Jim Adriazola, NJIT

Hosted by: Binan Gu
Topic: Deep Learning for Mean-Field Games I

May 13th

Speaker: Binan Gu, NJIT

Hosted by: Axel Turnquist
Topic: On the Energy Landscape of Deep Networks

May 13th

Speaker: Axel Turnquist, NJIT

Hosted by: Binan Gu
Topic: Deep Residual Networks: Optimal Control Point of View

Fall 2020

December 11th

Speaker: Yuexin Liu, NJIT

Hosted by: Axel Turnquist
Topic: Reinforcement Learning

December 4th

Speaker: Axel Turnquist, NJIT

Hosted by: Binan Gu
Topic: Learning Frameworks

November 20th

Speaker: Axel Turnquist, NJIT

Hosted by: Binan Gu
Topic: Sampling Theory

November 13th

Speaker: Binan Gu, NJIT

Hosted by: Axel Turnquist
Topic: Graph-Based Models & Model Selection

November 6th

Speaker: Yixuan Sun, NJIT

Hosted by: Binan Gu
Topic: GAN/WGAN

October 30th

Speaker: Axel Turnquist, NJIT

Hosted by: Binan Gu
Topic: Matrix Completion and Sparse Recovery

October 23rd

Speaker: Luan Gan, NJIT

Hosted by: Axel Turnquist
Topic: Bayesian Statistics & Machine Learning

October 16th

Speaker: Axel Turnquist, NJIT

Hosted by: Binan Gu
Topic: Dimension Reduction

October 9th

Speaker: Binan Gu, NJIT

Hosted by: Axel Turnquist
Topic: Graph-Based Learning

October 2nd

Speaker: Axel Turnquist, NJIT

Hosted by: Binan Gu
Topic: Information Geometry and Learning

September 25th

Speaker: Axel Turnquist, NJIT

Hosted by: Binan Gu
Topic: Why does stochastic gradient descent work?
