Programming languages for molecular and genetic devices
Abstract
Computational nucleic acid devices show great potential for enabling a broad range of biotechnology applications, including smart probes for molecular biology research, in vitro assembly of complex compounds, high-precision in vitro disease diagnosis and, ultimately, computational therapeutics inside living cells. This diversity of applications is supported by a range of implementation strategies, including nucleic acid strand displacement, localisation to substrates, and the use of enzymes with polymerase, nickase and exonuclease functionality. However, existing computational design tools are unable to account for these different strategies in a unified manner. This talk presents a programming language that allows a broad range of computational nucleic acid systems to be designed and analysed. We also demonstrate how similar approaches can be incorporated into a programming language for designing genetic devices that are inserted into cells to reprogram their behaviour. The language is used to characterise the genetic components for programming populations of cells that communicate and self-organise into spatial patterns. More generally, we anticipate that languages and software for programming molecular and genetic devices will accelerate the development of future biotechnology applications.
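As a minimal illustration of the first implementation strategy mentioned above, the sketch below simulates a single toehold-mediated strand displacement reaction (input + gate -> output + waste) under mass-action kinetics. The species names, concentrations and rate constant are illustrative assumptions, not output of the language presented in the talk.

```python
# Minimal sketch: toehold-mediated strand displacement modelled as one
# bimolecular mass-action reaction, input + gate -> output + waste.
# Species names, concentrations and the rate constant k are illustrative
# assumptions, not parameters from the talk.
import numpy as np
from scipy.integrate import solve_ivp

k = 1e5                          # displacement rate constant (/M/s), typical order
c0 = [1e-8, 1e-8, 0.0, 0.0]      # [input, gate, output, waste] in M

def rhs(t, c):
    inp, gate, out, waste = c
    v = k * inp * gate           # mass-action rate of the displacement step
    return [-v, -v, v, v]

sol = solve_ivp(rhs, (0.0, 3600.0), c0, dense_output=True)
t = np.linspace(0.0, 3600.0, 5)
print(sol.sol(t)[2])             # output-strand concentration over one hour
```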
“How did that get there?” Modelling tissue age evolution of Barrett’s esophagus
Abstract
There is great interest in the molecular characterisation of intestinal metaplasia, such as Barrett's esophagus (BE), to understand the basic biology of metaplastic development from a tissue of origin. BE is asymptomatic, so it is generally not known how long a patient has lived with this precursor of esophageal adenocarcinoma (EAC) when it is first diagnosed in the clinic. We previously constructed a BE clock model that applies Bayesian inference to patient-specific methylation data to estimate BE onset times, and thus the biological age of BE tissue (Curtius et al. 2016). We find such epigenetic drift to be widely evident in BE tissue (Luebeck et al. 2017), and the corresponding tissue ages show large inter-individual heterogeneity in two patient populations.
From a basic biological mechanism standpoint, it is not fully understood how the Barrett’s tissue first forms in the human esophagus because this process is never observed in vivo, yet such information is critical to inform biomarkers of risk based on temporal features (e.g., growth rates, tissue age) reflecting the evolution toward cancer. We analysed multi-region samples from 17 BE patients to
1) measure the spatial heterogeneity in biological tissue ages, and 2) use these ages to calibrate mathematical models (agent-based and continuum) of the mechanisms by which the segment itself forms. Most importantly, we found that tissue must be regenerated nearer to the stomach, perhaps driven by wound healing in response to reflux exposure, implying a gastric tissue of origin for the lesions observed in BE. Combining bioinformatics and mechanistic modelling allowed us to infer evolutionary processes that cannot be observed clinically, and we believe there is great translational promise in developing such hybrid methods to better understand multiscale cancer data.
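To make the clock idea concrete, here is a toy sketch (an illustration only, not the model of Curtius et al. 2016): methylation drift is assumed to accumulate linearly with tissue age, and a grid posterior over the onset time is computed from a single observed drift level.

```python
# Toy BE clock: methylation drift d is assumed to grow linearly with
# tissue age a (d = beta * a + Gaussian noise). Given one observed drift
# level at the patient's current age, place a uniform prior on the onset
# time and compute a grid posterior. All numbers are illustrative.
import numpy as np

beta, sigma = 0.004, 0.02        # assumed drift rate (/yr) and noise s.d.
age_now, d_obs = 62.0, 0.11      # patient age (yr) and observed drift

onset = np.linspace(18.0, age_now, 1000)    # candidate onset times (yr)
tissue_age = age_now - onset
loglik = -0.5 * ((d_obs - beta * tissue_age) / sigma) ** 2
post = np.exp(loglik - loglik.max())
post /= np.trapz(post, onset)               # normalise the posterior

mean_onset = np.trapz(onset * post, onset)
print(f"posterior mean onset: {mean_onset:.1f} yr "
      f"(tissue age ~ {age_now - mean_onset:.1f} yr)")
```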
References:
Curtius K, Wong C, Hazelton WD, Kaz AM, Chak A, et al. (2016) A Molecular Clock Infers Heterogeneous Tissue Age Among Patients with Barrett's Esophagus. PLoS Comput Biol 12(5): e1004919
Luebeck EG, Curtius K, Hazelton WD, Maden S, Yu M, et al. (2017) Identification of a key role of epigenetic drift in Barrett's esophagus and esophageal adenocarcinoma. Clin Epigenetics 9:113
Untangling heterogeneity in DNA replication with nanopore sequencing
Abstract
Genome replication is a stochastic process whereby each cell exhibits different patterns of origin activation and replication fork movement. Despite this heterogeneity, replication is a remarkably stable process that works quickly and correctly over hundreds of thousands of iterations. Existing methods for measuring replication dynamics largely focus on how a population of cells behaves on average, which precludes the detection of low-probability errors that may have occurred in individual cells. These errors can have a severe impact on genome integrity, yet existing single-molecule methods, such as DNA combing, are too costly, low-throughput, and low-resolution to detect them effectively. We have created a method that uses Oxford Nanopore sequencing to build high-throughput, genome-wide maps of DNA replication dynamics on single molecules. I will discuss the informatics approach that our software uses, our use of mathematical modelling to explain the patterns that we observe, and questions in DNA replication and genome stability that our method is uniquely positioned to answer.
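As a hedged illustration of the kind of single-molecule inference involved (the talk's actual software is not reproduced here), the sketch below calls local replication-fork direction along one read from per-base probabilities of an incorporated nucleotide analogue, using the sign of a smoothed gradient.

```python
# Sketch: infer local fork direction on a single nanopore read from
# per-base analogue-incorporation probabilities. The premise (an
# assumption here, not necessarily the talk's pipeline) is that a fork
# moving through an analogue pulse leaves a directional incorporation
# gradient, so the gradient sign encodes fork direction.
import numpy as np

def fork_direction(probs, window=2000):
    """probs: 1-D array of per-base analogue probabilities on one read.
    Returns +1 / -1 / 0 per position (rightward / leftward / no call)."""
    kernel = np.ones(window) / window
    smooth = np.convolve(probs, kernel, mode="same")  # local average
    grad = np.gradient(smooth)
    calls = np.sign(grad)
    calls[np.abs(grad) < 1e-6] = 0                    # too flat to call
    return calls

# Toy usage: a synthetic read carrying one analogue peak.
x = np.arange(10_000)
probs = 0.6 * np.exp(-0.5 * ((x - 6000) / 800) ** 2)
print(fork_direction(probs)[::2500])
```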
Interfacial dynamics for neurobiological networks: from excitability thresholds to localised spatiotemporal chaos
Applied modelling of the human pulmonary system
Abstract
In this work we will attempt, via virtual models, to interpret how lung structure and body positioning affect the outcomes of Multiple-Breath Washout (MBW) tests.
By extrapolating data from CT images, we will construct a virtual, reduced-dimensional airway/vascular network. Using this network, both airway and blood flow profiles will be calculated. These profiles will then be used to model gas transport within the lungs. The models will allow us to investigate the effects that airway restriction, body position during testing and choice of washout gas have on MBW measures.
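For a concrete flavour of an MBW simulation (a minimal sketch with illustrative parameters, not the CT-derived model described above), consider a two-compartment lung in which each breath of pure oxygen dilutes the resident nitrogen:

```python
# Minimal multi-breath washout sketch: the lung as two well-mixed
# compartments with different specific ventilation. Each 100% O2 breath
# dilutes the resident nitrogen; washout ends when the mixed expired N2
# falls to 1/40th of its start value. Volumes and ventilations are
# illustrative assumptions, not fitted to CT data.
import numpy as np

frc = np.array([1.5, 1.5])      # compartment volumes at FRC (L)
vt = np.array([0.35, 0.15])     # tidal volume reaching each compartment (L)
c = np.array([0.78, 0.78])      # initial N2 fraction in each compartment

breaths = 0
while True:
    breaths += 1
    c = c * frc / (frc + vt)                 # dilution by one O2 breath
    ce = np.sum(c * vt) / np.sum(vt)         # mixed expired N2 fraction
    if ce < 0.78 / 40:
        break

lci = breaths * np.sum(vt) / np.sum(frc)     # lung clearance index proxy
print(f"breaths to washout: {breaths}, LCI ~ {lci:.1f}")
```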
Pareto optimality and complex networks
Abstract
In this talk I will present the nature, properties and features of Pareto optimality in a diverse set of phenomena modeled as complex networks.
I will present a composite design methodology for multi-objective modeling and optimization of complex networks. The method is based on the synergy of different algorithms and computational techniques for the analysis and modeling of natural systems (e.g., metabolic pathways in prokaryotic and eukaryotic cells) and artificial systems (e.g., traffic networks, analog circuits and solar cells).
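As a minimal sketch of the optimisation primitive involved, the code below extracts the Pareto-optimal (non-dominated) set from a population of candidate networks scored on two objectives; the objectives and scores are illustrative assumptions, not those of the paper below.

```python
# Sketch: Pareto front extraction for candidate networks scored on two
# objectives to be minimised (e.g. average path length and wiring cost,
# both illustrative choices).
import numpy as np

def pareto_front(scores):
    """scores: (n, m) array, lower is better in every column.
    Returns a boolean mask of the non-dominated rows."""
    n = scores.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # row j dominates row i if j is <= everywhere and < somewhere
        dominated_by = (np.all(scores <= scores[i], axis=1) &
                        np.any(scores < scores[i], axis=1))
        if dominated_by.any():
            mask[i] = False
    return mask

rng = np.random.default_rng(0)
scores = rng.random((200, 2))    # 200 candidate networks, 2 objectives
front = scores[pareto_front(scores)]
print(f"{len(front)} non-dominated candidates")
```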
Reference:
G. Nicosia et al., "Pareto Optimality in Multilayer Network Growth", Phys. Rev. Lett., 2018
A Reynolds-robust preconditioner for the stationary Navier-Stokes equations in three dimensions
Abstract
When approximating PDEs with the finite element method, large sparse linear systems must be solved. The ideal preconditioner yields convergence that is algorithmically optimal and parameter robust, i.e. the number of Krylov iterations required to solve the linear system to a given accuracy does not grow substantially as the mesh or problem parameters are changed.
Achieving this for the stationary Navier-Stokes equations has proven challenging: LU factorisation is Reynolds-robust but scales poorly with the number of degrees of freedom, while Schur complement approximations such as PCD and LSC degrade as the Reynolds number increases.
Building on the work of Schöberl, Olshanskii and Benzi, in this talk we present the first preconditioner for the Newton linearisation of the stationary Navier-Stokes equations in three dimensions that achieves both optimal complexity and Reynolds-robustness. The scheme combines a novel tailored finite element discretisation, discrete augmented Lagrangian stabilisation, a custom prolongation operator involving local solves on coarse cells, and an additive patchwise relaxation on each level. We present 3D simulations with over one billion degrees of freedom, with robust performance from Reynolds number 10 to 5000.
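For orientation, a standard augmented Lagrangian identity (background only, not the talk's full scheme) indicates why the approach can be Reynolds-robust:

```latex
% Background sketch: augment the Newton linearisation with a grad-div
% term of weight $\gamma$,
\[
\begin{aligned}
\nu(\nabla u, \nabla v) + (w \cdot \nabla u, v)
  + \gamma(\nabla \cdot u, \nabla \cdot v) - (p, \nabla \cdot v) &= (f, v),\\
(\nabla \cdot u, q) &= 0.
\end{aligned}
\]
% As $\gamma$ grows, the pressure Schur complement is increasingly well
% approximated by
\[
S^{-1} \approx -(\nu + \gamma)\, M_p^{-1},
\]
% where $M_p$ is the pressure mass matrix. The difficulty is thereby
% shifted into the augmented momentum block, which motivates the tailored
% multigrid components listed above.
```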
Strategies for Multilevel Monte Carlo for Bayesian Inversion
Abstract
This talk will concern the problem of inference when the posterior measure involves continuous models which require approximation before inference can be performed. Typically one cannot sample from the posterior distribution directly, but can at best only evaluate it up to a normalizing constant. Therefore one must resort to computationally intensive inference algorithms in order to construct estimators. These algorithms are typically of Monte Carlo type, and include for example Markov chain Monte Carlo, importance samplers, and sequential Monte Carlo samplers. The multilevel Monte Carlo (MLMC) method provides a way of optimally balancing discretization and sampling error on a hierarchy of approximation levels, so that the total cost is minimized. Recently this method has been applied to computationally intensive inference. This non-trivial task can be achieved in a variety of ways. This talk will review three primary strategies which have been successfully employed to achieve optimal (or canonical) convergence rates, in other words faster convergence than i.i.d. sampling at the finest discretization level. Some of the specific resulting algorithms, and applications, will also be presented.
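For reference, the standard multilevel identity underlying these methods (background, not a result specific to the talk):

```latex
% With approximations $u_0, \dots, u_L$ of increasing accuracy and cost,
% the telescoping identity
\[
\mathbb{E}[f(u_L)] = \mathbb{E}[f(u_0)]
  + \sum_{\ell=1}^{L} \mathbb{E}\bigl[f(u_\ell) - f(u_{\ell-1})\bigr]
\]
% lets each correction term be estimated with its own sample size
% $N_\ell$. Since the correction variances $V_\ell$ decay with level,
% most samples are taken on cheap coarse levels, e.g.
% $N_\ell \propto \sqrt{V_\ell / C_\ell}$ for per-sample costs $C_\ell$.
% In Bayesian inversion the same idea is applied to posterior
% expectations, e.g. through self-normalised (ratio) estimators coupled
% across levels.
```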
On some applications of excursion theory
Abstract
During the talk I will present a new computational technique based on excursion theory for Markov processes. New results for classical processes such as Bessel processes and reflected Brownian motion will be shown. The most important application presented will be a new insight into the Hartman-Watson (HW) distributions: it turns out that excursion theory enables us to deduce simple connections between HW distributions and the hyperbolic cosine of Brownian motion.
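For background (hedged, as the talk's own derivation is not reproduced here), the Hartman-Watson law is classically characterised via modified Bessel functions:

```latex
% The Hartman-Watson law $\eta_r$ is characterised by the transform
\[
\int_0^\infty e^{-\lambda^2 t/2}\, \eta_r(\mathrm{d}t)
  = \frac{I_\lambda(r)}{I_0(r)}, \qquad \lambda \ge 0,
\]
% a characterisation with no elementary closed form for the density,
% which is why alternative representations, such as those obtained here
% via excursion theory and the hyperbolic cosine of Brownian motion, are
% of interest.
```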