The afternoon meeting in Bayesian computation, held at the University of Reading at the end of January 2018, showcased presentations from PhD students, postdoctoral researchers and industry. Richard Everitt and Chris Holmes were the keynote speakers at the event.
PhD student Gabriele Abbati, from the University of Oxford, presented an adaptive method for learning the geometry of the parameter space when using optimisation or sampling methods on a target distribution; such methods are at the core of many applications in statistics and computer science. The second PhD student to talk was Jack Baker from Lancaster University, who presented a new gradient estimate for the stochastic gradient Langevin dynamics algorithm and demonstrated its usefulness on large datasets. The resulting algorithm has an O(1) computational cost per iteration, subject to a one-off pre-processing step.
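Stochastic gradient Langevin dynamics itself is simple to state: each iteration takes a Langevin step driven by a minibatch estimate of the log-posterior gradient, plus injected Gaussian noise. The sketch below implements the vanilla algorithm on a toy Gaussian-mean problem; it is not Baker's O(1) control-variate estimator, and the data, step size and batch size are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y_i ~ N(theta, 1), flat prior on theta
# (an illustrative setup, not the speaker's example).
N = 10_000
true_theta = 2.0
y = true_theta + rng.standard_normal(N)

def grad_log_lik(theta, batch):
    # Minibatch estimate of the full-data log-likelihood gradient:
    # the sum over the batch is rescaled by N / batch size.
    return (N / len(batch)) * np.sum(batch - theta)

theta, eps, n_batch = 0.0, 1e-4, 100
samples = []
for t in range(2_000):
    batch = rng.choice(y, size=n_batch, replace=False)
    # SGLD update: half a gradient step plus N(0, eps) injected noise.
    theta += 0.5 * eps * grad_log_lik(theta, batch) \
        + np.sqrt(eps) * rng.standard_normal()
    samples.append(theta)

# Posterior mean estimate after discarding a short burn-in.
print(np.mean(samples[500:]))
```

After burn-in the chain fluctuates around the posterior mean (close to the sample mean of the data); the subsampled gradient makes each iteration's cost independent of N, which is the regime where cheaper gradient estimates such as Baker's pay off.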
There were also presentations from Yunpeng Li and Matt Moores, from Oxford and Warwick, respectively. Dr Li discussed new approaches to incorporating deterministic particle flows into a particle filter; doing this has the potential benefit of constructing proposal distributions that are close to a desired target distribution. Dr Moores talked about an application of sequential Monte Carlo to spectroscopy; one of the motivating ideas for using a sequential method is that the posterior distribution can be updated as more data become available.
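The bootstrap particle filter is the baseline that flow-guided proposals aim to improve on. The sketch below tracks a toy linear-Gaussian state-space model using blind (transition-density) proposals; all model parameters are illustrative, and a method like Dr Li's would replace the blind propagation step with a proposal informed by a deterministic particle flow.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a toy linear-Gaussian state-space model:
#   x_t = 0.9 x_{t-1} + N(0, 1),   y_t = x_t + N(0, 0.5^2)
T = 50
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()
    y[t] = x[t] + 0.5 * rng.standard_normal()

# Bootstrap particle filter: propagate through the transition density,
# reweight by the likelihood of the new observation, then resample.
n_part = 500
particles = rng.standard_normal(n_part)
estimates = []
for t in range(1, T):
    particles = 0.9 * particles + rng.standard_normal(n_part)  # propagate
    logw = -0.5 * ((y[t] - particles) / 0.5) ** 2              # reweight
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates.append(np.sum(w * particles))                    # filtering mean
    particles = rng.choice(particles, size=n_part, p=w)        # resample
```

Because the blind proposal ignores the incoming observation, many particles land in low-likelihood regions; steering particles towards the target with a deterministic flow is precisely what makes better proposal distributions possible.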
A talk from industry was also presented: Stephanie Schapp from DSTL introduced new advances in source term estimation, which is useful for determining the location and nature of hazardous emissions. The presented approach uses the output of a variational method to refine the Monte Carlo algorithm that explores the posterior distribution of interest.
The first keynote speaker was Richard Everitt, based at the University of Reading, who presented a sequential Monte Carlo method for inference in coalescent trees and for Bayesian model comparison. Inference for coalescent trees in population genetics is usually computationally expensive, and as more data arrive (due to the decreasing cost of DNA sequencing) one would otherwise need to restart the inference process with the new observations. As mentioned above, sequential methods provide a flexible framework for easily incorporating new data into the posterior distribution.
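The data-updating idea can be illustrated with a minimal SMC sampler over batches of observations for a static parameter: particles drawn from the prior are reweighted by the likelihood of each new batch, then resampled, so the posterior is refreshed without restarting from scratch. The toy Gaussian-mean model, prior, and jitter step below are illustrative stand-ins; a proper implementation would use an MCMC rejuvenation move, and Everitt's application targets coalescent trees rather than a scalar parameter.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: y_i ~ N(theta, 1) with a N(0, 10^2) prior on theta,
# observed in four batches arriving sequentially.
true_theta = 1.5
batches = [true_theta + rng.standard_normal(50) for _ in range(4)]

n_part = 2_000
theta = 10.0 * rng.standard_normal(n_part)  # particles from the prior
logw = np.zeros(n_part)

for batch in batches:
    # Incorporate the new batch by reweighting each particle
    # by its log-likelihood under the new observations.
    logw += np.sum(-0.5 * (batch[None, :] - theta[:, None]) ** 2, axis=1)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Resample to avoid weight degeneracy, then apply a small jitter
    # (a crude stand-in for a proper MCMC rejuvenation step).
    theta = rng.choice(theta, size=n_part, p=w) \
        + 0.01 * rng.standard_normal(n_part)
    logw = np.zeros(n_part)
```

After each batch the particle cloud approximates the posterior given all data seen so far, which is the property that makes sequential methods attractive when DNA sequence data keep arriving.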
Finally, Chris Holmes from the University of Oxford discussed recent work on updating beliefs and on methods for combining information in a joint model. This is of particular interest when integrating data from multiple sources and differing modalities. A recurring problem is that misspecification of any module can contaminate the others, resulting in poor estimates and predictions. For this reason, it is often preferable to favour restricted approaches over a full joint model.