Date: Fri, 05 May 2017
Time: 14:00 - 15:00
Location: L3
Speaker: Professor Min Chen
Organisation: Oxford e-Research Centre, University of Oxford

Every data intelligence process is designed to handle a finite amount of data within a given time period. In practice, such processes all encounter difficulties: the lack of adequate techniques for extracting meaningful information from raw data; incomplete, incorrect, or noisy data; biases encoded in computer algorithms or held by human analysts; shortages of computational or human resources; urgency in decision-making; and so on. While there is great enthusiasm for developing automated data intelligence processes, it is also known that many such processes may suffer from the data processing inequality, which casts fundamental doubt on their credibility. In this talk, the speaker will discuss the recent development of an information-theoretic measure (by Chen and Golan) for optimizing the cost-benefit ratio of a data intelligence process, and will illustrate its applicability using examples of data analysis and visualization processes, including some in bioinformatics.
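The data processing inequality referred to in the abstract is a standard result of information theory: in a Markov chain X → Y → Z (i.e. each processing stage sees only the output of the previous one), the mutual information can only decrease, so I(X;Z) ≤ I(X;Y). The sketch below illustrates this numerically for a toy two-stage pipeline; it is an illustration of the inequality only, not of the Chen-Golan cost-benefit measure, and the channel and probabilities are invented for the example.

```python
import numpy as np

def mutual_information(pxy):
    """I(X;Y) in bits, from a joint distribution matrix pxy[x, y]."""
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X, column vector
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y, row vector
    mask = pxy > 0                        # avoid log(0) on zero cells
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

# Toy Markov chain X -> Y -> Z: X is a fair bit, and each processing
# stage is a binary symmetric channel flipping its input with prob. 0.1.
px = np.array([0.5, 0.5])
flip = 0.1
channel = np.array([[1 - flip, flip],
                    [flip, 1 - flip]])   # channel[input, output]

pxy = px[:, None] * channel              # joint p(x, y) = p(x) p(y|x)
# Z depends on X only through Y: p(x, z) = sum_y p(x, y) p(z|y)
pxz = pxy @ channel

ixy = mutual_information(pxy)
ixz = mutual_information(pxz)
print(f"I(X;Y) = {ixy:.4f} bits")
print(f"I(X;Z) = {ixz:.4f} bits")
assert ixz <= ixy + 1e-12  # data processing inequality holds
```

Each extra noisy stage strictly reduces the information the final output carries about the original data, which is why long automated pipelines invite the credibility concern the abstract raises.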

Last updated on 03 Apr 2022.