Upcoming talks — 2025–2026
Abstract
Stochastic reaction networks are widely used to model various biochemical phenomena. To understand their long-term stochastic dynamics, stationary distributions are often computed. One crucial dynamical property guaranteeing the existence of a stationary distribution is positive recurrence. However, it is not easy to provide checkable criteria for positive recurrence of stochastic reaction networks based only on their topological or graphical structure.
Motivated by this need, this talk contributes to the stochastic dynamics of chemical reaction networks (CRNs) with a one-dimensional stoichiometric subspace. I will first present a classification of the state space of the underlying continuous-time Markov chain (CTMC) and explain how this result can be used to discuss the diversity of long-term dynamics of stochastic CRNs.
Moreover, I will present checkable necessary and sufficient network conditions for various dynamical properties: recurrence (positive and null), transience, (non)explosivity, (non)implosivity, as well as the existence of moments of passage times. As a byproduct, any one-dimensional weakly reversible CRN is positive recurrent, confirming the Positive Recurrence Conjecture proposed by Anderson and Kim in 2018 in the one-dimensional case.
Finally, I will highlight results on one-species CRNs regarding stationary distributions, and present parameter regions for consistency and inconsistency between the stochastic and deterministic models of one-species CRNs with respect to the aforementioned dynamical properties.
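As a purely illustrative sketch (not material from the talk), the CTMC underlying a one-species mass-action CRN can be simulated with the Gillespie algorithm. The birth-death network 0 → S, 2S → S below is a made-up example network chosen only to show the mechanics:

```python
import random

def gillespie_one_species(x0, rates, stoich, t_max, seed=0):
    """Simulate the CTMC of a one-species CRN under mass-action kinetics.

    rates[i] is the rate constant of reaction i; stoich[i] = (a_i, v_i)
    means reaction i consumes a_i copies of S as reactant and changes
    the copy number by v_i on firing.
    """
    rng = random.Random(seed)
    x, t = x0, 0.0
    path = [(t, x)]
    while t < t_max:
        # mass-action propensity: k * x * (x - 1) * ... * (x - a + 1)
        props = []
        for k, (a, _) in zip(rates, stoich):
            lam = k
            for j in range(a):
                lam *= max(x - j, 0)
            props.append(lam)
        total = sum(props)
        if total == 0:  # absorbing state: no reaction can fire
            break
        t += rng.expovariate(total)          # exponential waiting time
        r = rng.random() * total             # pick the firing reaction
        acc = 0.0
        for p, (_, v) in zip(props, stoich):
            acc += p
            if r <= acc:
                x += v
                break
        path.append((t, x))
    return path

# Hypothetical example: 0 -> S at rate 1.0, 2S -> S at rate 0.1
path = gillespie_one_species(x0=5, rates=[1.0, 0.1],
                             stoich=[(0, +1), (2, -1)], t_max=50.0)
```

In this sketch, positive recurrence of the chain would manifest as the simulated copy number returning to each state infinitely often over long runs.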
Abstract
This talk discusses the asymptotic variance of sample path averages for inhomogeneous Markov chains that evolve alternately according to two different π-reversible Markov transition kernels P and Q. More specifically, our main result allows us to compare directly the asymptotic variances of two inhomogeneous Markov chains associated with different kernels Pi and Qi, i ∈ {0, 1}, as soon as the kernels of each pair (P0, P1) and (Q0, Q1) can be ordered in the sense of lag-one autocovariance. As an important application, we use this result to compare different data-augmentation-type Metropolis-Hastings algorithms. In particular, we compare some pseudo-marginal algorithms and propose a novel exact algorithm, referred to as the random shake algorithm, which is more efficient, in terms of asymptotic variance, than the Grouped Independence Metropolis-Hastings algorithm and has a computational complexity that does not exceed that of the Monte Carlo within Metropolis algorithm.
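As a toy illustration (not the talk's construction), a chain alternating between two kernels that are both reversible with respect to the same stationary distribution can be simulated, and the asymptotic variance of a sample-path average crudely estimated by batch means. The two-state kernels below are invented for the example:

```python
import random

def alternating_chain(P, Q, n, x0=0, seed=1):
    """Run an inhomogeneous chain X_0, X_1, ... on {0, 1} that applies
    P at even steps and Q at odd steps."""
    rng = random.Random(seed)
    x, out = x0, []
    for i in range(n):
        K = P if i % 2 == 0 else Q
        # K[x][1 - x] is the probability of switching state
        x = 1 - x if rng.random() < K[x][1 - x] else x
        out.append(x)
    return out

def batch_means_variance(xs, n_batches=50):
    """Crude batch-means estimate of the asymptotic variance of the
    sample-path average of xs."""
    b = len(xs) // n_batches
    means = [sum(xs[i * b:(i + 1) * b]) / b for i in range(n_batches)]
    grand = sum(means) / n_batches
    return b * sum((m - grand) ** 2 for m in means) / (n_batches - 1)

# Both kernels are reversible w.r.t. the uniform distribution on {0, 1};
# Q mixes faster (switch prob. 0.5) than P (switch prob. 0.1).
P = [[0.9, 0.1], [0.1, 0.9]]
Q = [[0.5, 0.5], [0.5, 0.5]]
xs = alternating_chain(P, Q, n=100_000)
var_hat = batch_means_variance(xs)
```

Ordering kernels by lag-one autocovariance, as in the talk's main result, would then translate into an ordering of estimates like `var_hat` across competing pairs of kernels.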
Abstract
Mixture models arise when we assume the observation is driven by a discrete number of hidden components. Here we consider inference for mixtures where the observation distributions conditional on hidden components are complicated, and the log-likelihood is computationally expensive and highly non-convex. Given the observation model (with known parameters) and observed data, we aim to cluster the data according to the hidden components, as well as to estimate the parameters describing each component. Standard methods based on maximizing the log-marginal-likelihood with the expectation-maximization (EM) algorithm from random starting parameters perform poorly due to the expensive and non-convex objective function. In this study we attempt to overcome the difficulties in complicated mixture models. We recommend hard-assignment EM for complicated mixtures to save computational burden, and propose two parameter initialization schemes: one extending k-means++ to arbitrary model distributions, the other relying on data pre-clustering in a space of log-likelihood distances that describe relationships among the data. Simulation studies are conducted in a neuroscience and visual attention environment considering three distinct model types with different optimization methods, and the results show that the proposed methods provide consistently better performance in all studies.
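A minimal sketch of the two ingredients named above, hard-assignment EM and k-means++-style seeding, is shown below for a one-dimensional Gaussian mixture with unit variances. This toy setting is an assumption for illustration; the talk concerns far more complicated observation models:

```python
import random

def hard_em_1d(data, k=2, iters=20, seed=0):
    """Hard-assignment EM for a 1-D Gaussian mixture with unit variances:
    each point is assigned to its single most likely component (hard
    E-step), then component means are refit from their assigned points."""
    rng = random.Random(seed)
    # k-means++-style seeding: first centre uniform at random, later
    # centres drawn with probability proportional to the squared
    # distance to the nearest centre chosen so far.
    means = [rng.choice(data)]
    while len(means) < k:
        d2 = [min((x - m) ** 2 for m in means) for x in data]
        r, acc = rng.random() * sum(d2), 0.0
        for x, w in zip(data, d2):
            acc += w
            if r <= acc:
                means.append(x)
                break
    for _ in range(iters):
        # hard E-step: with unit variances, the most likely component
        # is simply the one with the nearest mean
        groups = [[] for _ in range(k)]
        for x in data:
            j = min(range(k), key=lambda j: (x - means[j]) ** 2)
            groups[j].append(x)
        # M-step on the hard assignments
        means = [sum(g) / len(g) if g else means[j]
                 for j, g in enumerate(groups)]
    return sorted(means)

rng = random.Random(42)
data = ([rng.gauss(0, 1) for _ in range(300)] +
        [rng.gauss(6, 1) for _ in range(300)])
means = hard_em_1d(data)
```

The hard E-step avoids evaluating full responsibilities at every iteration, which is the computational saving the abstract refers to when likelihood evaluations are expensive.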
Abstract
The extremogram measures serial tail dependence in a time series. It can be interpreted as a limiting correlogram derived from conditional probabilities. This definition opens the door to classical time series analysis, including the spectral analysis of extreme events. We also discuss estimation of the extremogram by its sample analog. The stationary bootstrap of Politis and Romano (JASA 1994) is a useful technique for constructing confidence bands for the sample extremogram.
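As an illustrative sketch (assumptions: upper-tail exceedances of an empirical quantile threshold, and a made-up AR(1) example series), the sample extremogram can be computed as the empirical conditional exceedance probability at each lag:

```python
import random

def sample_extremogram(xs, lags, q=0.95):
    """Sample extremogram for upper-tail exceedances: for each lag h,
    the empirical estimate of P(X_{t+h} > u | X_t > u), where u is the
    empirical q-quantile of the series."""
    u = sorted(xs)[int(q * len(xs))]
    out = {}
    for h in lags:
        base = sum(1 for t in range(len(xs) - h) if xs[t] > u)
        hits = sum(1 for t in range(len(xs) - h)
                   if xs[t] > u and xs[t + h] > u)
        out[h] = hits / base if base else float("nan")
    return out

# Made-up example: an AR(1) series X_t = 0.7 X_{t-1} + Z_t, Z_t ~ N(0, 1)
rng = random.Random(3)
x, xs = 0.0, []
for _ in range(20_000):
    x = 0.7 * x + rng.gauss(0, 1)
    xs.append(x)
rho = sample_extremogram(xs, lags=[1, 2, 5, 10])
```

For a series with short-range tail dependence like this one, the estimates decay toward the unconditional exceedance probability as the lag grows; confidence bands for these estimates are what the stationary bootstrap provides.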