P. Fierens - Academia.edu

Papers by P. Fierens

Interval-valued probability modeling of Internet traffic variables

2000 IEEE International Symposium on Information Theory (Cat. No.00CH37060), 2000

A methodology to build interval-valued probability models is presented. It is shown that this alternative produces temporally stable models of Internet-generated communications variables.

A frequentist understanding of sets of measures

Journal of Statistical Planning and Inference, 2009

We present a mathematical theory of objective, frequentist chance phenomena that uses as a model a set of probability measures. In this work, sets of measures are not viewed as a statistical compound hypothesis or as a tool for modeling imprecise subjective behavior. Instead we use sets of measures to model stable (although not stationary in the traditional stochastic sense) physical sources of finite time series data that have highly irregular behavior. Such models give a coarse-grained picture of the phenomena, keeping track of the range of the possible probabilities of the events. We present methods to simulate finite data sequences coming from a source modeled by a set of probability measures, and to estimate the model from finite time series data. The estimation of the set of probability measures is based on the analysis of a set of relative frequencies of events taken along subsequences selected by a collection of rules. In particular, we provide a universal methodology for finding a family of subsequence selection rules that can estimate any set of probability measures with high probability.
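The estimation procedure described in this abstract can be illustrated with a minimal sketch: each selection rule inspects only the past samples, picks out a subsequence, and the spread of relative frequencies across rules gives a coarse interval of probabilities. The rule names and the two-bias source below are hypothetical, chosen only to show the mechanics; the paper's actual rule families and guarantees are more refined.

```python
import random

def relative_frequency(data, event, rule):
    """Relative frequency of `event` along the subsequence of `data`
    chosen by `rule`, which may inspect only the past samples."""
    selected = [x for i, x in enumerate(data) if rule(data[:i])]
    if not selected:
        return None
    return sum(event(x) for x in selected) / len(selected)

# Hypothetical unstable source: the bias alternates between blocks,
# so the overall relative frequency never settles on a single value.
random.seed(0)
data = []
for block in range(20):
    p = 0.2 if block % 2 == 0 else 0.8
    data.extend(random.random() < p for _ in range(50))

event = lambda x: x  # the event "outcome is 1"
rules = {
    "all":        lambda past: True,                     # plain frequency
    "even-index": lambda past: len(past) % 2 == 0,       # a place-selection rule
    "after-1":    lambda past: bool(past) and past[-1],  # select after a 1
}
freqs = {name: relative_frequency(data, event, r) for name, r in rules.items()}

# The range of frequencies over the rule family estimates the
# interval of possible probabilities of the event.
interval = (min(freqs.values()), max(freqs.values()))
```

Different rules land on different relative frequencies precisely because the source is not stationary; the resulting interval, not any single number, is the frequentist summary.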

Towards a Frequentist Interpretation of Sets of Measures

We explore an objective, frequentist-related interpretation for a set of measures M such as would determine upper and lower envelopes; M also specifies the classical frequentist concept of a compound hypothesis. However, in contrast to the compound hypothesis case, in which there is a true measure µ_θ0 ∈ M that is assumed either unknown or randomly selected, we do not believe that any single measure is the true description for the random phenomena in question. Rather, it is the whole set M, itself, that is the appropriate imprecise probabilistic description. Envelope models have hitherto been used almost exclusively in subjective settings to model the uncertainty or strength of belief of individuals or groups. Our interest in these imprecise probability representations is as mathematical models for those objective frequentist phenomena of engineering and scientific significance where what is known may be substantial, but relative frequencies, nonetheless, lack (statistical) stability.

Towards a Chaotic Probability Model for Frequentist Probability: The Univariate Case

We adopt the same mathematical model of a set M of probability measures as is central to the theory of coherent imprecise probability. However, we endow this model with an objective, frequentist interpretation in place of a behavioral subjective one. We seek to use M to model stable physical sources of time series data that have highly irregular behavior and not to model states of belief or knowledge that are assuredly imprecise. The approach we present in this paper is to understand a set of measures model M not as a traditional compound hypothesis, in which one of the measures in M is a true description, but rather as one in which none of the individual measures in M provides an adequate description of the potential behavior of the physical source as actualized in the form of a long time series. We provide an instrumental interpretation of random process measures consistent with M and the highly irregular physical phenomena we intend to model by M. This construction provides us ...

Scalable Faceted Ranking in Tagging Systems

Web collaborative tagging systems, which allow users to upload, comment on, and recommend content, are growing. Such systems can be represented as graphs where nodes correspond to users and tagged links to recommendations. In this paper we analyze the problem of computing a ranking of users with respect to a facet described as a set of tags. A straightforward solution is to compute a PageRank-like algorithm on a facet-related graph, but this is not feasible for online computation. We propose an alternative: (i) a ranking for each tag is computed offline on the basis of tag-related subgraphs; (ii) a faceted order is generated online by merging the rankings corresponding to all the tags in the facet. Based on the graph analysis of YouTube and Flickr, we show that step (i) is scalable. We also present efficient algorithms for step (ii), which are evaluated by comparing their results with two gold standards.
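The offline/online split described in this abstract can be sketched as follows. The paper presents efficient merge algorithms for step (ii); the version below is a deliberately naive score-summing merge, with made-up users, tags, and scores, shown only to make the two-step structure concrete.

```python
from collections import defaultdict

def merge_rankings(per_tag_scores, facet):
    """Step (ii), naive version: combine precomputed per-tag scores into
    one faceted ranking by summing each user's scores over the facet's tags."""
    combined = defaultdict(float)
    for tag in facet:
        for user, score in per_tag_scores.get(tag, {}).items():
            combined[user] += score
    # Rank users by combined score, highest first.
    return sorted(combined, key=combined.get, reverse=True)

# Hypothetical output of step (i): one offline score table per tag.
per_tag_scores = {
    "music":  {"alice": 0.6, "bob": 0.3, "carol": 0.1},
    "guitar": {"bob": 0.5, "carol": 0.4, "dave": 0.1},
}

ranking = merge_rankings(per_tag_scores, facet={"music", "guitar"})
# bob leads (0.3 + 0.5 = 0.8), ahead of alice (0.6), carol (0.5), dave (0.1)
```

Because the per-tag tables are computed offline, the online cost is only the merge over the (typically few) tags in the requested facet, which is what makes the approach feasible for online queries.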