Adnan Awad - Profile on Academia.edu
Papers by Adnan Awad
Journal of High Energy Physics, 2003
We show how to systematically construct higher-derivative terms in effective actions in harmonic superspace despite the infinite redundancy in their description due to the infinite number of auxiliary fields. Making an assumption about the absence of certain superspace Chern-Simons-like terms involving vector multiplets, we write all 3- and 4-derivative terms on Higgs, Coulomb, and mixed branches. Among these terms are several with only holomorphic dependence on fields, and at least one satisfies a non-renormalization theorem. These holomorphic terms include a novel 3-derivative term on mixed branches given as an integral over 3/4 of superspace. As an illustration of our method, we search for Wess-Zumino terms in the low energy effective action of N = 2 supersymmetric QCD. We show that such terms occur only on mixed branches. We also present an argument showing that the combination of space-time locality with supersymmetry implies locality in the anticommuting superspace coordinates for unconstrained superfields.
Estimation of multiple correlation coefficient when observations are missing on one of the variables
Communications in Statistics - Theory and Methods, 1987
We propose an alternative to Sylvan's (1969) estimator, based on the modified Wilks statistic due to Rao (1956).
Clusters of Entropy Measures Based on Binomial Distributions
Goal Orientations Among Yarmouk University Students and Their Relationship with Self-Efficacy and Self-Regulated Learning. For the full text, please visit the Al Hussein Bin Talal Library at Yarmouk University or its website.
Estimating the mean of a random parameter in reliability models
The Shannon entropy of generalized gamma and of related distribution
Prediction interval for future sample mean from an exponential distribution with a shifted parameter
Large sample prediction intervals
Optimality of Bayesian Estimators: A Comparative Study Based on Exponential Progressive Type II Censored Data
Informational Criterion of Censored Data Part I: Progressive Censoring Type II With Application to Pareto Distribution
When applying progressive Type-II right censoring in life-testing experiments, the selection of an optimal censoring scheme is an important issue, and the literature offers several optimality criteria. This paper investigates the possibility of using entropy-information measures to design an optimal Type-II progressive censoring scheme, with an illustrative application to a simple form of the Pareto distribution. According to the suggested criterion, a censoring scheme is called optimal if it maximizes an informational efficiency function, defined as the ratio of the information available in the censored sample to the information available in the complete sample. The paper provides mathematical formulas for the efficiency of a progressive Type-II censoring scheme based on sixteen entropy-information measures. It turns out that six of these information measures do not help in picking an optimal scheme, since their values are free of the censoring-scheme vector. To select a sup-entropy measure that leads to an optimal scheme, we designed a Mathematica 7 code that computes the numerical value of each of the ten sup-entropy measures used in this paper. An illustrative numerical example shows that the optimal scheme is a one-step censoring from the left after observing the first failure.
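The scheme-free behaviour reported above has a well-known analogue for Fisher information: under an exponential lifetime model, the information about the rate λ carried by a progressively Type-II censored sample with m observed failures is m/λ², whatever the removal vector R, so the efficiency ratio depends only on m and n. A minimal sketch (the function name is ours, not the paper's):

```python
def fisher_efficiency(n, m, lam=1.0):
    """Fisher-information efficiency of a progressively Type-II censored
    exponential sample.

    For Exp(rate=lam), the log-likelihood of the first m (progressively
    censored) failures out of n units is m*log(lam) - lam*T + const, so the
    information about lam is m / lam**2, independent of the removal vector
    R = (R_1, ..., R_m).  The complete sample carries n / lam**2, hence the
    efficiency ratio is simply m / n.
    """
    info_censored = m / lam**2
    info_complete = n / lam**2
    return info_censored / info_complete
```

Any removal vector with ΣR_i = n − m yields the same ratio, which mirrors how the six measures above turn out to be free of the scheme vector.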
Prediction interval for the difference between two sample means from exponential populations: A Bayesian treatment
Pakistan Journal of Statistics
Finite termination of the generalized SPRT if the model is incorrect
Prediction intervals for the r-th order statistic: A comparative study
A statistical information measure
Sufficient Statistics and Information Measures
On inverse moments of the positive hypergeometric distribution
Power comparisons of simultaneous test procedures for homogeneity in contingency tables
Information measures and some distribution approximations
The Fisher and Kullback–Leibler information measures were calculated for the approximation of a binomial distribution by both the Poisson and the normal distributions, and are applied to the approximation of a Poisson distribution by a normal distribution. This paper introduces the concept of the relative loss in information due to approximating the distribution of a random variable Xn by the distribution of another random variable Yn, and uses this concept to determine the sample size for which the relative loss in information is less than a given level ε.
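The binomial-to-Poisson case can be illustrated numerically. The sketch below (our own, not the paper's code) measures the approximation error by the Kullback–Leibler divergence D(Binomial(n, λ/n) ‖ Poisson(λ)) at a fixed mean λ, and searches for the smallest n driving it below a tolerance ε; the paper's exact definition of relative loss may differ.

```python
import math

def kl_binom_poisson(n, p):
    """D(Binomial(n, p) || Poisson(n*p)), summed over the binomial's support."""
    lam = n * p
    kl = 0.0
    for k in range(n + 1):
        log_pb = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
                  + k * math.log(p) + (n - k) * math.log(1 - p))
        log_pp = -lam + k * math.log(lam) - math.lgamma(k + 1)
        kl += math.exp(log_pb) * (log_pb - log_pp)
    return kl

def smallest_n(lam, eps):
    """Smallest n for which the divergence at fixed mean lam drops below eps."""
    n = max(2, int(lam) + 1)          # ensure p = lam / n < 1
    while kl_binom_poisson(n, lam / n) >= eps:
        n += 1
    return n
```

At a fixed mean the divergence shrinks as n grows (and p = λ/n shrinks), which is the regime in which the Poisson approximation is justified.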
Statistical View of Information Theory
SpringerReference
Information Theory has origins and applications in several fields, such as thermodynamics, communication theory, computer science, economics, biology, mathematics, probability, and statistics. Due to this diversity, there are numerous information measures in the literature. Kullback (1978), Sakamoto et al. (1986), and Pardo (2006) have applied several of these measures to almost all statistical inference problems. According to the Likelihood Principle, all experimental information relevant to a parameter θ is contained in the likelihood function L(θ) of the underlying distribution. Bartlett's information measure is given by −log L(θ). Entropy measures (see Entropy) are expectations of functions of the likelihood. Divergence measures are expectations of functions of likelihood ratios. In addition, Fisher-like information measures are expectations of functions of derivatives of the log-likelihood. DasGupta (2008, Chap. 2) reported several relations among members of these information measures.
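The taxonomy above (Bartlett's measure as −log L, entropy as an expectation of a function of the likelihood, Fisher information as an expectation of a function of the score) can be made concrete for a single Bernoulli(θ) observation. The following sketch is ours, using the standard closed forms:

```python
import math

def bartlett_measure(x, theta):
    """Bartlett's measure -log L(theta) for one Bernoulli observation x."""
    return -(x * math.log(theta) + (1 - x) * math.log(1 - theta))

def bernoulli_entropy(theta):
    """Shannon entropy E[-log f(X; theta)]: the expectation of Bartlett's
    measure over X ~ Bernoulli(theta)."""
    return theta * bartlett_measure(1, theta) + (1 - theta) * bartlett_measure(0, theta)

def bernoulli_fisher(theta):
    """Fisher information E[(d/dtheta log f(X; theta))**2].

    The score is X/theta - (1 - X)/(1 - theta), so the expectation is
    theta * (1/theta)**2 + (1 - theta) * (1/(1 - theta))**2
    = 1 / (theta * (1 - theta)).
    """
    return theta * (1 / theta) ** 2 + (1 - theta) * (1 / (1 - theta)) ** 2
```

Note how each measure is an expectation of a different functional of the same likelihood, which is exactly the organizing idea of the entry.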
A note on characterization based on shannon entropy of record statistics
Statistics, 2001
In this note we compare the Shannon entropy of record statistics with the Shannon entropy of the original data and give an application to the characterization of the generalized Pareto distribution.
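For the standard exponential, the comparison in this note can be made explicit: by memorylessness, the n-th upper record is a sum of n independent Exp(1) variables, i.e. Gamma(n, 1), whose entropy has a closed form. A sketch (our notation, not the note's):

```python
import math

EULER_GAMMA = 0.5772156649015329

def digamma_int(n):
    """psi(n) for a positive integer n: -gamma + sum_{k=1}^{n-1} 1/k."""
    return -EULER_GAMMA + sum(1.0 / k for k in range(1, n))

def record_entropy_exponential(n):
    """Shannon entropy of the n-th upper record from a standard exponential.

    The record is Gamma(n, 1), whose entropy in closed form is
    H = n + log(Gamma(n)) + (1 - n) * psi(n).  For n = 1 this reduces to 1,
    the entropy of Exp(1) itself.
    """
    return n + math.lgamma(n) + (1 - n) * digamma_int(n)
```

Numerically the record entropies grow with n, so for the exponential each successive record carries more Shannon entropy than the original observation.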
Microelectronics Reliability, 1996
The paper gives the origins of AIC and discusses the main properties of this measure when it is applied to continuous and discrete models. It is shown that AIC is not a measure of informativity because it fails to have some expected properties of information measures. Some modifications of AIC are pointed out, together with their advantages over AIC.
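For reference, AIC itself is 2k − 2 log L̂, where k counts the estimated parameters and lower values are preferred. A small sketch (our example, not from the paper) comparing a fixed-mean and an estimated-mean Gaussian model:

```python
import math
import random

def gaussian_loglik(xs, mu):
    """Log-likelihood of N(mu, 1) for data xs."""
    return sum(-0.5 * math.log(2 * math.pi) - 0.5 * (x - mu) ** 2 for x in xs)

def aic(loglik, k):
    """Akaike's criterion 2k - 2*loglik; smaller indicates the preferred model."""
    return 2 * k - 2 * loglik

rng = random.Random(1)
xs = [rng.gauss(0.0, 1.0) for _ in range(200)]     # data truly from N(0, 1)
mu_hat = sum(xs) / len(xs)
aic_fixed = aic(gaussian_loglik(xs, 0.0), k=0)     # mean fixed at 0
aic_free = aic(gaussian_loglik(xs, mu_hat), k=1)   # mean estimated from data
```

The extra parameter buys at most a small likelihood gain here, which the 2k penalty is designed to offset; the paper's point is that this penalized likelihood, whatever its merits for model selection, does not behave like an information measure.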
This thesis derives conditional central limit theorems for martingales and applies them to obtain new results about the asymptotic normality of posterior distributions.
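A martingale central limit theorem of the kind used here can be checked by simulation: with martingale differences ξᵢ = σᵢεᵢ, where σᵢ is predictable (depends only on the past) and εᵢ = ±1 fair coin flips, the sum normalized by the conditional variance Σσᵢ² is asymptotically standard normal. A sketch (our construction, not the thesis's):

```python
import random
import statistics

def normalized_martingale_sum(n_steps, rng):
    """One path of S_n = sum_i sigma_i * eps_i, where sigma_i is predictable
    (it depends only on the previous sign) and eps_i = +/-1 fair coin flips.
    Returns S_n / sqrt(sum_i sigma_i**2); the martingale CLT says this is
    asymptotically standard normal."""
    total, cond_var, prev_eps = 0.0, 0.0, 1
    for _ in range(n_steps):
        sigma = 1.0 if prev_eps > 0 else 2.0   # predictable volatility
        eps = 1 if rng.random() < 0.5 else -1
        total += sigma * eps
        cond_var += sigma * sigma
        prev_eps = eps
    return total / cond_var ** 0.5

rng = random.Random(0)
z = [normalized_martingale_sum(400, rng) for _ in range(2000)]
```

The sample mean of `z` should be near 0 and its variance near 1, as the normal limit predicts, even though the increments are neither independent nor identically distributed.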