Abdul Rahman Othman | Universiti Sains Malaysia

Papers by Abdul Rahman Othman

A comparative study of robust tests for spread: Asymmetric trimming strategies

British Journal of Mathematical and Statistical Psychology, 2008

We examined 633 procedures that can be used to compare the variability of scores across independent groups. All but one of the procedures were modifications of those suggested by Levene and O'Brien. We modified their procedures by substituting robust measures of the typical score and variability rather than relying on classical estimators. The robust measures we utilized were based on either a priori or empirically determined symmetric or asymmetric trimming strategies. The Levene-type and O'Brien-type transformed scores were used with either the ANOVA F test, a robust test, or the Welch (1951) test. Based on four measures of robustness, we recommend a Levene-type transformation based upon empirically determined 20% asymmetric trimmed means, involving a particular adaptive estimator, where the transformed scores are then used with the ANOVA F test.
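
As a sketch of the recommended pipeline (assuming symmetric a priori trimming; the paper's preferred variant uses an empirically determined asymmetric trim with an adaptive estimator, which scipy does not provide):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# three independent groups with unequal spreads (synthetic data)
groups = [rng.normal(0, s, 30) for s in (1.0, 1.5, 2.0)]

# Levene-type transformation: absolute deviations from a robust centre,
# here the 20% symmetric trimmed mean
z = [np.abs(g - stats.trim_mean(g, 0.20)) for g in groups]

# the transformed scores are then fed to the ANOVA F test
F, p = stats.f_oneway(*z)
print(F, p)
```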

Modeling perception of temperature change using the generalized additive model

A power investigation of Alexander Govern test with adaptive trimmed mean as a central tendency measure

Adaptive and automatic trimming in testing the equality of two group case

Unravelling the temporal association between lameness and body condition score in dairy cattle using a multistate modelling approach

Preventive Veterinary Medicine, Jan 3, 2015

Recent studies have reported associations between lameness and body condition score (BCS) in dairy cattle; however, the impact of changes in BCS dynamics on both lameness occurrence and recovery is currently unknown. The aim of this longitudinal study was to investigate the effect of change in BCS on the transitions from the non-lame to lame, and lame to non-lame, states. A total of 731 cows with 6889 observations from 4 UK herds were included in the study. Mobility score (MS) and body condition score (BCS) were recorded every 13-15 days from July 2010 until December 2011. A multilevel multistate discrete-time event history model was built to investigate lameness transitions over time. There were 1042 non-lame episodes and 593 lame episodes, of which 50% (519/1042) of the non-lame episodes transitioned to the lame state and 81% (483/593) of the lame episodes ended with a transition to the non-lame state. Cows with a lower BCS at calving (BCS Group 1 (1.00-1.75) and Group 2 (2.00-2.25)) had a higher probability of transition from non-lame to lame and a lower probability of transition from lame to non-lame compared with cows with BCS 2.50-2.75; that is, they were more likely to become lame and, if lame, less likely to recover. Similarly, cows that suffered a greater decrease in BCS (compared with their BCS at calving) had a higher probability of becoming lame and a lower probability of recovering in the next 15 days. An increase in BCS from calving was associated with the converse effect: a lower probability of moving from the non-lame to the lame state and a higher probability of transition from lame to non-lame. Days in lactation, quarter of calving and parity were associated with both lame and non-lame transitions, and there was evidence of heterogeneity among cows in lameness occurrence and recovery. This study suggests that loss and gain of BCS could influence the risk of becoming lame and the chance of recovery from lameness. Regular monitoring and maintenance of BCS on farms could be a key tool for reducing lameness. Further work is urgently needed in this area to allow a better understanding of the underlying mechanisms behind these relationships.
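
The full multilevel multistate model is beyond a short snippet, but its discrete-time core for a single transition can be sketched as a period-level logistic regression; the data and variable names below are synthetic and hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000  # synthetic cow-period records observed roughly every 15 days
df = pd.DataFrame({
    "bcs_change": rng.normal(0, 0.5, n),  # change in BCS since calving
    "parity": rng.integers(1, 6, n),
})
# simulate the non-lame -> lame transition: losing condition raises the risk
lin = -2.0 - 1.0 * df["bcs_change"] + 0.1 * df["parity"]
df["lame_next"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

# discrete-time event history model for one transition, single level only
model = smf.logit("lame_next ~ bcs_change + parity", data=df).fit(disp=0)
print(model.params)
```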

A Robust Alternative to the t-Test

Modern Applied Science, 2012

The t-test is a classical statistic for testing the equality of two groups. However, it is very sensitive to non-normality as well as variance heterogeneity. To overcome these problems, robust methods such as the Ft and S1 test statistics can be used. This study proposed using robust estimators when comparing the equality of two groups: the trimmed mean as the central tendency measure in the Ft test and the median in the S1 test. The S1 test with MADn gave the most convincing results of all the methods examined, while the Ft test with MADn showed results comparable with the conventional methods. This study demonstrates some improvement in the statistical problem of detecting differences between location parameters. These modified methods may serve as alternatives to other robust statistical methods that are unable to handle the problems of non-normality, variance heterogeneity or unbalanced designs.
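
scipy provides the robust building blocks named here (trimmed mean, median, MADn); the sketch below computes them, not the Ft or S1 statistics themselves, whose exact forms the abstract does not give:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.standard_t(3, 40)        # heavy-tailed sample
y = rng.standard_t(3, 50) + 0.5  # shifted, different size (unbalanced)

# robust location estimates: trimmed mean (Ft variant) and median (S1 variant)
tx, ty = stats.trim_mean(x, 0.15), stats.trim_mean(y, 0.15)
mx, my = np.median(x), np.median(y)

# MADn: median absolute deviation, rescaled to be consistent at the normal
sx = stats.median_abs_deviation(x, scale="normal")
sy = stats.median_abs_deviation(y, scale="normal")

# a crude standardized difference of medians; illustrative only
print(tx - ty, (mx - my) / np.hypot(sx / np.sqrt(x.size), sy / np.sqrt(y.size)))
```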

Chemometric Classification of Herb – Orthosiphon stamineus According to Its Geographical Origin Using Virtual Chemical Sensor Based Upon Fast GC

Sensors, 2003

An analytical method using an Electronic Nose (E-nose) instrument for the analysis of volatile organic compounds from raw Orthosiphon stamineus samples has been developed. This instrument is a new chemical sensor based on fast gas chromatography with a Surface Acoustic Wave (SAW) detector. Chromatographic fingerprints obtained from headspace analysis of O. stamineus samples were used as a guideline for optimum selection of the sensor array. Qualitative analysis was carried out based on the responses of each sensor array in order to distinguish the geographical origin of the cultivated samples. The results showed variation in the volatile chemical compounds of the samples even though they are from the same species; however, similarities in the main components of all five samples were observed. Pattern recognition chemometric approaches such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Cluster Analysis (CA) applied to the instrumental data provided good classification of O. stamineus samples according to their geographical origin.
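
A sketch of the chemometric step with scikit-learn, substituting synthetic responses for the fast-GC/SAW sensor data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
# hypothetical data: 5 geographical origins x 12 samples x 8 sensor responses
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(12, 8)) for i in range(5)])
y = np.repeat(np.arange(5), 12)  # origin labels

scores = PCA(n_components=2).fit_transform(X)  # unsupervised view of structure
lda = LinearDiscriminantAnalysis().fit(X, y)   # supervised classification
print(lda.score(X, y))  # resubstitution accuracy (optimistic without CV)
```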

A SAS program to assess the sensitivity of normality tests on non-normal data

The Phylogenetic Tree of RNA Polymerase Constructed Using MOM Method

2009 International Conference of Soft Computing and Pattern Recognition, 2009

Authors include Nazalan Najimudin (School of Biological Sciences, Universiti Sains Malaysia, 11800 Minden, Penang, Malaysia; nazalan@usm.my) and Zeti Azura Mohamed Hussein (Faculty of Science and Technology, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor, Malaysia; zeti@ukm.my).

The relationship between complexity (taxonomy) and difficulty

Difficulty and complexity are important factors present in every test question, and both affect the reliability of a test. Hence, difficulty and complexity must be considered by educators when preparing test questions. This study examines the relationship between the two. Complexity is defined as the level in Bloom's taxonomy; difficulty is represented by the proportion of students scoring within specific score intervals. A chi-square test of independence between difficulty and complexity was conducted on the results of a continuous assessment in a third-year undergraduate course, Probability Theory. The test showed that difficulty and complexity are related, although the relationship is weak.
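
The analysis is a standard chi-square test of independence; a sketch on a hypothetical difficulty-by-complexity contingency table, with Cramer's V added as one way to quantify how weak the association is:

```python
import numpy as np
from scipy.stats import chi2_contingency

# hypothetical counts: rows = Bloom's taxonomy level, cols = difficulty band
table = np.array([[30, 20, 10],
                  [25, 25, 15],
                  [10, 20, 25]])

chi2, p, dof, expected = chi2_contingency(table)
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))  # effect size
print(p, cramers_v)
```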

Simulation and statistics: Like rhythm and song

Simulation was introduced to solve problems posed in the form of systems. With this technique, two kinds of problems can be overcome: first, a problem that has an analytical solution but where running an experiment to solve it is costly in terms of money or lives; second, a problem that has no analytical solution. In statistical inference the second kind is often encountered. With the advent of high-speed computing, a statistician can now use resampling techniques such as the bootstrap and permutation methods to form a pseudo sampling distribution that leads to the solution of a problem that cannot be solved analytically. This paper discusses how Monte Carlo simulation was, and still is, used to verify analytical solutions in inference. It also discusses resampling techniques as simulation techniques, examines common misunderstandings about the two, and describes successful uses of both.
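
A minimal bootstrap sketch of the idea: resampling with replacement builds a pseudo sampling distribution for a statistic (here the median) whose standard error lacks a convenient closed form:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.exponential(2.0, 50)  # observed sample

# resample with replacement to form a pseudo sampling distribution
boot = np.array([np.median(rng.choice(x, size=x.size, replace=True))
                 for _ in range(5000)])
print(boot.std(ddof=1))                  # bootstrap standard error
print(np.percentile(boot, [2.5, 97.5]))  # percentile 95% interval
```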

Comparative Performance of Pseudo-Median Procedure, Welch’s Test and Mann-Whitney-Wilcoxon at Specific Pairing

Modern Applied Science, 2011

The objective of this study is to investigate the performance of a two-sample pseudo-median based procedure in testing differences between groups. The procedure is a modification of the one-sample Wilcoxon procedure that uses the pseudo-median of the differences between group values as the central measure of location. The test was conducted in a two-group setting with moderate sample sizes drawn from symmetric and asymmetric distributions. Performance was measured in terms of Type I error and power rates obtained via Monte Carlo methods, and compared with alternative parametric and nonparametric procedures, namely Welch's test and the Mann-Whitney-Wilcoxon test. The findings revealed that the pseudo-median procedure is capable of controlling its Type I error close to the nominal level when heterogeneity of variances exists. In terms of robustness, the pseudo-median procedure outperforms the Welch and Mann-Whitney-Wilcoxon tests when distributions are skewed. The pseudo-median procedure is also capable of maintaining high power rates, especially under negative pairing.
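
A sketch of the location estimate behind such a procedure, assuming the pseudo-median of differences is the Hodges-Lehmann-type median of all pairwise between-group differences:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(0.5, 2.0, 25)  # smaller group with the larger variance
y = rng.normal(0.0, 1.0, 35)  # larger group, smaller variance ("negative pairing")

# Hodges-Lehmann-type estimate: the median of all pairwise differences
# between group values -- one plausible reading of "pseudo-median of
# differences", not necessarily the paper's exact statistic
diffs = np.subtract.outer(x, y).ravel()
print(np.median(diffs))
```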

Type I Error Rates of Ft Statistic with Different Trimming Strategies for Two Groups Case

Modern Applied Science, 2011

When the assumptions of normality and homoscedasticity are met, researchers should have no doubts about using classical tests such as the t-test and ANOVA to test the equality of central tendency measures for two or more than two groups, respectively. However, in real life we do not often encounter this ideal situation. A robust method known as the Ft statistic has been identified as an alternative to the above methods for handling the problem of non-normality. Motivated by the good performance of the method, in this study we proposed using the Ft statistic with three different trimming strategies, namely (i) fixed symmetric trimming (10%, 15% and 20%), (ii) fixed asymmetric trimming (10%, 15% and 20%) and (iii) empirically determined trimming, to handle non-normality and heteroscedasticity simultaneously. To test the robustness of the procedures to violations of the assumptions, several variables were manipulated: the type of distribution and the heterogeneity of variances. The Type I error rate for each procedure was then calculated. The study is based on simulated data, with each procedure simulated 5000 times. Based on the Type I error rates, we were able to identify which procedures (Ft with different trimming strategies) are robust and have good control of Type I error. The procedures that should be considered are Ft with MOM-Tn for the normal distribution, 15% fixed trimming for the skewed normal-tailed distribution and MOM-MADn for the skewed leptokurtic distribution, because these produced Type I error rates nearest to the nominal level.
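
A sketch of the Monte Carlo design for estimating a Type I error rate; scipy's trimmed (Yuen-type) Welch t stands in for the Ft statistic, which is not available in standard libraries:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
alpha, reps, rejections = 0.05, 5000, 0
for _ in range(reps):
    x = rng.normal(0, 1, 20)  # null hypothesis true: equal locations,
    y = rng.normal(0, 3, 20)  # but heterogeneous variances
    # 20% trimming via the `trim` argument of ttest_ind (Yuen's test)
    p = stats.ttest_ind(x, y, equal_var=False, trim=0.2).pvalue
    rejections += p < alpha
print(rejections / reps)  # empirical Type I error rate; compare with 0.05
```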

Test of hypothesis in one-way random effects model with unequal error variances

Journal of Statistical Computation and Simulation, 1985

Othman, Abdul Rahman (1983), One-way Random Effects Model with Unequal Error Variances, unpublished M.S. thesis, Department of Mathematics, Southern Illinois University at Carbondale. Rao, P. S. R. S., Kaplan, J. and Cochran, W. G. (1981), Estimators for the one-way ...

New Procedure in Testing Differences between Two Groups

Applied Mathematics & Information Sciences, 2013

The dual role of two scale estimators

Abstract. Two scale estimators, E1 and E2 (Rousseeuw & Croux, 1993), for medians were used in their intended role in a test statistic constructed from the differences of medians between treatment groups (Babu, Padmanabhan & Puri, 1999). They were also ...
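
Rousseeuw and Croux (1993) proposed the Sn and Qn scale estimators; assuming E1 and E2 refer to estimators of this family, a naive Sn implementation looks like this (the exact version uses high/low medians and finite-sample corrections):

```python
import numpy as np

def sn_scale(x):
    """Naive O(n^2) version of the Rousseeuw-Croux (1993) Sn estimator:
    1.1926 * med_i( med_j |x_i - x_j| )."""
    x = np.asarray(x, dtype=float)
    pairwise = np.abs(x[:, None] - x[None, :])
    return 1.1926 * np.median(np.median(pairwise, axis=1))

rng = np.random.default_rng(7)
print(sn_scale(rng.normal(0, 2.0, 200)))  # should be near the true sigma = 2
```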

Monitoring of Milk Quality With Disposable Taste Sensor

Sensors, 2003

A disposable screen-printed multichannel taste sensor, composed of several types of lipid as transducers and a computer as data analyzer, can detect taste in a manner similar to human gustatory sensation. The disposable taste sensor was used to measure the electrical potential resulting from the interaction between lipid membranes and taste substances. In the present study, two types of packaged commercial milk, ultra-high-temperature (UHT) and pasteurized, were tested. The disposable taste sensor was found capable of discriminating reliably between fresh and spoiled milk and of following the deterioration of milk quality during storage at room temperature, based on a pattern recognition principle, namely Principal Component Analysis (PCA). This research could provide a new monitoring method, ideal for simple and cheap decentralized testing of milk quality, which may be of great use in the dairy industry.

Assessing Normality: Applications in Multi-Group Designs

Warr and Erich (2013) compared a procedure frequently recommended in textbooks, the interquartile range divided by the sample standard deviation, against the Shapiro-Wilk test for assessing normality of data. They found the Shapiro-Wilk test to be far superior to the deficient interquartile range statistic. We look further into the issue of assessing non-normality by investigating the Anderson-Darling goodness-of-fit statistic for its sensitivity in detecting non-normal data in a multi-group problem, where Type I error and power issues can be explored from perspectives not considered by Warr and Erich. In particular, we examined the sensitivity of this test for 23 non-normal distributions consisting of g-and-h distributions, contaminated mixed-normal distributions and multinomial distributions. In addition, we used a sequentially-rejective Bonferroni procedure to limit the overall rate of Type I errors across the multiple groups assessed for normality, and defined the power of the procedure according to whether there was at least one rejection among the three group tests, whether all three non-normal groups of data were detected, and the average of the per-group power values. Our results indicate that the Anderson-Darling test was generally effective in detecting varied types of non-normal data.
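
A sketch of the multi-group screening described here, using the Anderson-Darling normality test from statsmodels and a Holm (sequentially-rejective Bonferroni) adjustment:

```python
import numpy as np
from statsmodels.stats.diagnostic import normal_ad
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(8)
groups = [rng.normal(size=50),         # normal
          rng.exponential(size=50),    # skewed
          rng.standard_t(3, size=50)]  # heavy-tailed

# Anderson-Darling normality test per group, then a sequentially-rejective
# Bonferroni (Holm) adjustment to control the overall Type I error rate
pvals = [normal_ad(g)[1] for g in groups]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print(list(zip(np.round(pvals, 4), reject)))
```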
