Linear Characteristic Graphical Models Resources
Heavy-tailed distributions naturally occur in many real-life problems. Unfortunately, exact inference is typically intractable in closed form for graphical models that involve such heavy-tailed distributions.
In this work, we propose a novel, simple linear graphical model for independent latent random variables, called the linear characteristic model (LCM), defined in the characteristic function domain. Using stable distributions, a heavy-tailed family that generalizes the Cauchy, Lévy, and Gaussian distributions, we show, for the first time, how to compute both exact and approximate inference in such a linear multivariate graphical model. LCMs are not limited to stable distributions; in fact, they are defined for any random variables (discrete, continuous, or a mixture of both).
We demonstrate the applicability of our construction on a realistic problem from the field of computer networks. Another potential application is the iterative decoding of linear channels with non-Gaussian noise.
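The central property that makes inference tractable in the characteristic function (CF) domain is that, for independent stable inputs, a linear combination is again stable and its CF is the product of the scaled input CFs. The following is a minimal sketch (not the released code from the paper) illustrating this closure property with SciPy's `levy_stable`; the stability index `alpha` and mixing weights `a1`, `a2` are illustrative assumptions.

```python
# Sketch: CF-domain view of a linear model Y = a1*X1 + a2*X2 with
# independent symmetric alpha-stable inputs (beta = 0, unit scale, zero shift).
# In the CF domain, combining independent inputs is multiplicative:
#   phi_Y(t) = phi_X1(a1*t) * phi_X2(a2*t)
#            = exp(-(|a1|^alpha + |a2|^alpha) * |t|^alpha).
import numpy as np
from scipy.stats import levy_stable

alpha = 1.5          # stability index (alpha = 1 is Cauchy, alpha = 2 is Gaussian)
a1, a2 = 0.7, -1.3   # linear mixing weights (illustrative)
n = 100_000

x1 = levy_stable.rvs(alpha, 0.0, size=n, random_state=0)
x2 = levy_stable.rvs(alpha, 0.0, size=n, random_state=1)
y = a1 * x1 + a2 * x2

t = np.linspace(-2, 2, 9)
phi_closed_form = np.exp(-(abs(a1) ** alpha + abs(a2) ** alpha) * np.abs(t) ** alpha)
phi_empirical = np.exp(1j * np.outer(t, y)).mean(axis=1)  # empirical CF of Y

print(np.round(phi_closed_form, 3))
print(np.round(phi_empirical.real, 3))  # should approximately match the closed form
```

Note that the same product rule holds even when the density of Y has no closed form, which is why the LCM is stated in the CF domain rather than the density domain.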
Papers
- Inference with multivariate heavy-tails in linear models. D. Bickson and C. Guestrin. In Neural Information Processing Systems (NIPS) 2010, Vancouver, Canada, Dec. 2010. arXiv
Source Code
Acknowledgements
- D. Bickson would like to thank Andrea Pagnani (ISI) for inspiring the direction of this research.
- To John P. Nolan (American University) for sharing parts of his excellent book about stable distributions online.
- To Mark Veillette (Boston University) for sharing his stable distribution code online.
- To Jason K. Johnson (LANL) for assisting in the convergence analysis.
- To Sapan Bhatia and Marc E. Fiuczynski (Princeton University) from the PlanetLab project for providing the PlanetFlow data.
- This research was supported by Army Research Office MURI W911NF0710287.