The mutual information: Detecting and evaluating dependencies between variables
Journal Article
R. Steuer, J. Kurths, C. O. Daub, J. Weise, J. Selbig
Published: 01 October 2002
R. Steuer, J. Kurths, C. O. Daub, J. Weise, J. Selbig, The mutual information: Detecting and evaluating dependencies between variables, Bioinformatics, Volume 18, Issue suppl_2, October 2002, Pages S231–S240, https://doi.org/10.1093/bioinformatics/18.suppl_2.S231
Abstract
Motivation: Clustering co-expressed genes usually requires the definition of 'distance' or 'similarity' between measured datasets, the most common choices being Pearson correlation or Euclidean distance. With the size of available datasets steadily increasing, it has become feasible to consider other, more general, definitions as well. One alternative, based on information theory, is the mutual information, providing a general measure of dependencies between variables. While the use of mutual information in cluster analysis and visualization of large-scale gene expression data has been suggested previously, the earlier studies did not focus on comparing different algorithms to estimate the mutual information from finite data.
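For reference, the mutual information mentioned above is the standard information-theoretic quantity; the definition below is generic and is not one of the specific estimators compared in the paper. For two discrete variables X and Y it reads

I(X;Y) = \sum_{x,y} p(x,y) \, \log \frac{p(x,y)}{p(x)\, p(y)}

and it vanishes exactly when X and Y are statistically independent, which is what makes it a general measure of dependency rather than only of linear correlation.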
Results: Here we describe and review several approaches to estimate the mutual information from finite datasets. Our findings show that the algorithms used so far can be substantially improved upon. In particular, when dealing with small datasets, finite-sample effects and other sources of potentially misleading results have to be taken into account.
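To illustrate the finite-sample issue the abstract points to, the sketch below implements only the simplest plug-in estimator: equal-width binning followed by direct evaluation of the definition above. The function name, bin count, and example data are assumptions for illustration; this is not one of the improved algorithms reviewed in the paper, and this naive estimator is known to be biased upward for small samples.

import numpy as np

def mutual_information_binned(x, y, bins=10):
    # Naive plug-in estimate of I(X;Y) in nats, using an equal-width
    # 2D histogram. Illustrative only: biased for small sample sizes.
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                # joint probabilities p(x, y)
    px = pxy.sum(axis=1, keepdims=True)      # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)      # marginal p(y)
    denom = px * py                          # product of marginals
    nz = pxy > 0                             # skip empty bins (log 0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / denom[nz])))

# Hypothetical usage: two correlated Gaussian variables.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = 0.8 * x + 0.6 * rng.normal(size=5000)
print(mutual_information_binned(x, y, bins=20))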
Contact: [email protected]
© Oxford University Press 2002