From estimating motion to monitoring complex behaviour in cellular systems
Abstract
Building on advances in computer vision, we now have an array of visual tracking methods that allow reliable estimation of cellular motion, both in high-throughput settings and in more complex biological specimens. In many cases, however, the underlying assumptions of these methods are not well defined, and their violation leads to failures when analysing large-scale experiments.
Using organotypic co-culture systems, we can now mimic more physiologically relevant microenvironments in vitro. The robust analysis of cellular dynamics in such complex biological systems remains an open challenge. I will attempt to outline some of these challenges and present some very preliminary results on analysing more complex cellular behaviours.
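As a concrete illustration of the kind of motion estimation involved, the following Python sketch computes dense optical flow between two consecutive microscopy frames using OpenCV's Farneback method, one widely used approach; the file names are hypothetical and this is not the speaker's specific pipeline.

import cv2
import numpy as np

# Two consecutive greyscale frames of a time-lapse movie (hypothetical files).
prev_frame = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
next_frame = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Dense flow: one (dx, dy) displacement vector per pixel. Positional
# arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Summarise motion as per-pixel speed (pixels per frame). Violations of the
# method's assumptions (brightness constancy, small displacements) typically
# surface here as implausible outliers.
speed = np.linalg.norm(flow, axis=2)
print(f"median speed: {np.median(speed):.2f} px/frame")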
Computational motion models for cancer imaging
Cell cycle regulation by systems-level feedback control
Abstract
In the first part of my presentation, I will briefly summarize a dynamic view of the cell cycle developed in collaboration with Prof John Tyson over the past 25 years. In our view, the decisions a cell must make during DNA synthesis and mitosis are controlled by bistable switches, which provide abrupt and irreversible transitions between successive cell cycle phases. In addition, bistability provides the foundation for 'checkpoints' that can stop cell proliferation if problems arise (e.g., DNA damage by UV irradiation). In the second part of my talk, I will highlight a few representative examples from our ongoing BBSRC Strategic LoLa grant (http://cellcycle.org.uk/), in which we are testing the predictions of our theoretical ideas in human cells in collaboration with four experimental groups.
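To make the notion of bistability concrete, here is a minimal Python sketch of a generic one-variable switch with cooperative positive feedback. The parameters are illustrative only and are not drawn from the Tyson-Novak cell-cycle models; the point is that the same signal can sustain two different stable states, which is what makes the resulting transitions abrupt and irreversible.

import numpy as np
from scipy.integrate import solve_ivp

def switch(t, x, s):
    # s is an external signal (e.g., a cyclin level); production combines a
    # signal-dependent basal rate with cooperative positive feedback.
    basal = 0.02 * s
    feedback = x**4 / (0.5**4 + x**4)
    return basal + feedback - 0.8 * x

# Starting below vs. above the unstable threshold leads to different stable
# steady states for the same signal: the hallmark of a bistable switch.
for x0 in (0.1, 0.9):
    sol = solve_ivp(switch, (0.0, 50.0), [x0], args=(1.0,))
    print(f"x0={x0}: steady state ~ {sol.y[0, -1]:.3f}")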
Technological breakthroughs in the comprehensive survey of cell phenotypes – can the analytical tools catch up?
Abstract
The ability to study the transcriptome, proteome – and other aspects – of many individual cells represents one of the most important technical breakthroughs in biology and medical science of the past few years. These technologies are revolutionising the study of biological systems and human disease, enabling, for example, hypothesis-free identification of rare pathogenic (or protective) cell subsets in chronic diseases, routine monitoring of patient immune phenotypes, and direct discovery of molecular targets in rare cell populations. In parallel, new computational and analytical approaches are being intensively developed to analyse the vast data sets these technologies generate. However, there is still a huge gap between our ability to generate the data, assess their technical soundness, and actually interpret them. The QBIOX network may provide a unique opportunity to complement recent investments in Oxford's technical capabilities in single-cell technologies with the development of revolutionary, visionary ways of interpreting the data, helping Oxford researchers to compete as leaders in this field.
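For readers unfamiliar with such analyses, the following Python sketch shows one standard step for single-cell data: reducing a cells-by-genes expression matrix with PCA and then clustering to look for cell subsets. It is purely illustrative (synthetic counts, generic k-means), not a pipeline from the talk.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-in for a count matrix: 500 cells x 2000 genes.
counts = rng.poisson(lam=2.0, size=(500, 2000)).astype(float)

X = np.log1p(counts)                      # variance-stabilising log transform
pcs = PCA(n_components=20).fit_transform(X)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pcs)
print(np.bincount(labels))                # cells per putative subset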
Please register via https://www.eventbrite.co.uk/e/qbiox-colloquium-trinity-term-2017-ticke…
Computer models in biomedicine: What for?
Abstract
Biomedical research and clinical practice rely on complex, multimodal datasets for the characterisation of human organs in health and disease. In computational biomedicine, we often argue that multiscale computational models are, and will increasingly be, required as tools for data integration, for probing established knowledge of physiological systems, and for predicting the effects of therapies and disease. But what has computational biomedicine delivered so far? This presentation will describe successes, failures and future directions of computational models in cardiac research, from basic to translational science.
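As an illustration of the kind of ODE building block such multiscale models assemble, here is a minimal Python sketch of the classic FitzHugh-Nagumo excitable-cell model, a standard textbook reduction of action-potential dynamics; it is a generic example, not a model from the speaker's work.

import numpy as np
from scipy.integrate import solve_ivp

def fhn(t, y, I_stim):
    v, w = y                        # v: fast membrane potential, w: slow recovery
    dv = v - v**3 / 3.0 - w + I_stim
    dw = 0.08 * (v + 0.7 - 0.8 * w)
    return [dv, dw]

# A constant stimulus of 0.5 drives sustained oscillations (repeated firing).
sol = solve_ivp(fhn, (0.0, 200.0), [-1.0, 1.0], args=(0.5,), dense_output=True)
t = np.linspace(0.0, 200.0, 1000)
v = sol.sol(t)[0]
print(f"membrane potential range: [{v.min():.2f}, {v.max():.2f}]")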
Molecular mechanisms and mathematical models of cell cycle checkpoints
Cost-benefit analysis of data intelligence
Abstract
All data intelligence processes are designed to process a finite amount of data within a given time period. In practice, they all encounter difficulties, such as the lack of adequate techniques for extracting meaningful information from raw data; incomplete, incorrect or noisy data; biases encoded in computer algorithms or held by human analysts; a lack of computational or human resources; urgency in making a decision; and so on. While there is great enthusiasm for developing automated data intelligence processes, it is also known that many such processes may suffer from the phenomenon of data processing inequality, which casts fundamental doubt on their credibility. In this talk, the speaker will discuss the recent development of an information-theoretic measure (by Chen and Golan) for optimizing the cost-benefit ratio of a data intelligence process, and will illustrate its applicability using examples of data analysis and visualization processes, including some in bioinformatics.
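The data processing inequality mentioned above can be illustrated numerically: for a Markov chain X → Y → Z, post-processing cannot increase information, so I(X;Z) ≤ I(X;Y). The following Python sketch demonstrates this with a toy binary channel; it illustrates only the inequality itself, not the Chen-Golan cost-benefit measure.

import numpy as np

def mutual_information(pxy):
    """I(X;Y) in bits, computed from a joint probability table."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

# X uniform on {0,1}; Y = X through a binary symmetric channel (10% flips);
# Z = Y through a second, noisier channel (20% flips).
bsc = lambda eps: np.array([[1 - eps, eps], [eps, 1 - eps]])
px = np.array([0.5, 0.5])
pxy = px[:, None] * bsc(0.10)            # joint P(X, Y)
pxz = pxy @ bsc(0.20)                    # joint P(X, Z) via the Markov chain

print(f"I(X;Y) = {mutual_information(pxy):.3f} bits")
print(f"I(X;Z) = {mutual_information(pxz):.3f} bits  (never larger)")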