Sample Size Research Papers - Academia.edu

Purpose -The research study involved an exploration of the thoughts and perspectives of Generation X aerospace engineers regarding strategies, processes, and methods to enhance the transfer of knowledge from baby boomers to Generation X aerospace engineers.

Results: Twenty-five (58%) control patients died at home compared with 124 (67%) patients allocated to hospital at home. This difference was not significant; intention to treat analysis did not show that hospital at home increased the number of deaths at home. Seventy-three patients ...

The main aim of this paper is to provide some practical guidance to researchers on how statistical power analysis can be used to estimate sample size in empirical design. The paper describes the key assumptions underlying statistical power analysis and illustrates through several examples how to determine the appropriate sample size. The examples use hypotheses often tested in sport sciences and verified with popular statistical tests including the independent-samples t-test, one-way and two-way analysis of variance (ANOVA), correlation analysis, and regression analysis. Commonly used statistical packages allow researchers to determine the appropriate sample size for the hypothesis-testing situations listed above.
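
A minimal sketch of this kind of a priori sample-size calculation, using the power routines in the Python package statsmodels; the effect sizes, alpha and power below are illustrative and not taken from the paper:

```python
# A priori sample-size estimation for a t-test and a one-way ANOVA
# (illustrative effect sizes; not values from the paper).
from statsmodels.stats.power import TTestIndPower, FTestAnovaPower

alpha, power = 0.05, 0.80

# Independent-samples t-test: medium effect (Cohen's d = 0.5)
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=alpha,
                                          power=power, alternative='two-sided')

# One-way ANOVA with 3 groups: medium effect (Cohen's f = 0.25)
n_total = FTestAnovaPower().solve_power(effect_size=0.25, alpha=alpha,
                                        power=power, k_groups=3)

print(f"t-test: ~{n_per_group:.0f} participants per group")
print(f"one-way ANOVA: ~{n_total:.0f} participants in total")
```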

Radiocarbon dating is the most widely used dating technique in the world. Recent advances in Accelerator Mass Spectrometry (AMS) and sample preparation techniques have reduced the sample-size requirements by a factor of 1000 and decreased the measurement time from weeks to minutes. Today, it is estimated that more than 90 percent of all measurements made on accelerator mass spectrometers are for radiocarbon age dates. The production of 14C in the atmosphere varies through time due to changes in the Earth's geomagnetic field intensity and in its concentration, which is regulated by the carbon cycle. As a result of these two variables, a radiocarbon age is not equivalent to a calendar age. Four decades of joint research by the dendrochronology and radiocarbon communities have produced a radiocarbon calibration data set of remarkable precision and accuracy extending from the present to approximately 12,000 calendar years before present. This paper presents high precision paired 230Th/234U/238U and 14C age determinations on pristine coral samples that enable us to extend the radiocarbon calibration curve from 12,000 to 50,000 years before present. We developed a statistical model to properly estimate sample age conversion from radiocarbon years to calendar years, taking full account of combined errors in input ages and calibration uncertainties. Our radiocarbon calibration program is publicly accessible at: http://www.radiocarbon.LDEO.columbia.edu/ along with full documentation of the samples, data, and our statistical calibration model.
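
The conversion from radiocarbon years to calendar years can be illustrated with a hedged sketch: given a calibration curve and a measured 14C age, combine the measurement and curve errors in quadrature and normalize the resulting likelihood over calendar age. The curve values below are placeholders, not the authors' published data set or their statistical model:

```python
# Hedged sketch of radiocarbon-to-calendar-age conversion against a calibration
# curve, combining measurement and curve uncertainties in quadrature.
# The curve arrays are placeholders, not the authors' data.
import numpy as np
from scipy.stats import norm

# Hypothetical calibration curve: calendar age (cal BP) -> 14C age and 1-sigma error
cal_age   = np.arange(10_000, 14_001, 5)              # calendar years BP
c14_curve = cal_age - 800 + 50 * np.sin(cal_age / 300)  # placeholder 14C ages
c14_sigma = np.full_like(cal_age, 30.0, dtype=float)

def calibrate(c14_age, c14_err):
    """Return a normalized probability distribution over calendar age."""
    sigma = np.sqrt(c14_err ** 2 + c14_sigma ** 2)     # combined error
    like = norm.pdf(c14_age, loc=c14_curve, scale=sigma)
    return like / np.trapz(like, cal_age)

post = calibrate(c14_age=11_250, c14_err=40)
mean_cal = np.trapz(cal_age * post, cal_age)
print(f"posterior mean calendar age ~ {mean_cal:.0f} cal BP")
```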

In this pilot study, we point out potential differences between calcaneal trabecular microarchitecture in humans and nonhuman large apes, such as increased degree of anisotropy, reduced bone volume fraction, and very stereotypical orientation of the trabeculae. Even though sample size does not permit us to investigate the issue statistically, the observed differences between humans and other hominoids warrant further in-depth investigation. We also show that some ...

The human fatty acid amide hydrolase (FAAH) missense mutation c.385 C→A, which results in conversion of a conserved proline residue to threonine (P129T), has been associated with street drug use and problem drug abuse. Although a link between the FAAH P129T variant and human drug abuse has been reported, the extent of risk and specific types of substance addiction vulnerability remain to be determined. Here, we investigated the relationship of the FAAH P129T variant to a number of linked single nucleotide polymorphisms to establish a haplotyping system, calculate the estimated age and origin of the FAAH 385 C→A mutation and evaluate its association with clinically significant drug addiction in a case control study. The results showed a significant over-representation of the FAAH P129T homozygotes in 249 subjects with documented multiple different drug addictions compared to drug free individuals of the same ethnic backgrounds (P = 0.05) using logistic regression analysis controlling for ethnicity. To increase the logistic regression analysis power by increasing the sample size, the data from our previous study (Sipe et al. in Proc Natl Acad Sci USA 99:8394-8399, 2002) were pooled with the present cohort which increased the significance to P = 0.00003. Investigation of the FAAH chromosomal backgrounds of the P129T variant in both multiple different drug addicted and control subjects revealed a common ancestral haplotype, marked population differences in haplotype genetic diversity and an estimated P129T mutation age of 114,425-177,525 years. Collectively, these results show that the P129T mutation is the only common mutation in the FAAH gene and is significantly associated with addictive traits. Moreover, this mutation appears to have arisen early in human evolution and this study validates the previous link between the FAAH P129T variant and vulnerability to addiction of multiple different drugs.

Transmission assessment surveys (TAS) for lymphatic filariasis have been proposed as a platform to assess the impact of mass drug administration (MDA) on soil-transmitted helminths (STHs). This study used computer simulation and field data from pre- and post-MDA settings across Kenya to evaluate the performance and cost-effectiveness of the TAS design for STH assessment compared with alternative survey designs. Variations in the TAS design and different sample sizes and diagnostic methods were also evaluated. The district-level TAS design correctly classified more districts compared with standard STH designs in pre-MDA settings. Aggregating districts into larger evaluation areas in a TAS design decreased performance, whereas age group sampled and sample size had minimal impact. The low diagnostic sensitivity of Kato-Katz and mini-FLOTAC methods was found to increase misclassification. We recommend using a district-level TAS among children 8-10 years of age to assess STH but suggest ...

The Boston Naming Test (BNT) is one of the most commonly used tests of confrontation naming. The length of the test, particularly when administered to impaired patients, has prompted the derivation of several abbreviated forms. Short forms of the BNT have typically been equated in terms of difficulty, but not empirically derived for discriminating between normals and anomic patients. Furthermore, most reports to date have been limited in sample size and generalizability. The present study examined BNT data from a total of 1,044 subjects, including 719 normals and 325 patients with Alzheimer's disease (AD). Scores were calculated for the entire 60-item version as well as for eight previously reported short forms. The scores were examined for the effects of age, education, and gender, as well as for the ability of each form to discriminate between AD patients and normals. There was a significant effect of age, education, and gender on all previously published forms, and the short forms varied in their ability to discriminate between patients and controls. A stepwise discriminant analysis was conducted to empirically derive a new, gender-neutral short form with discriminability comparable to the full 60-item test. Norms from this sample on the empirically derived short form are reported.

We performed a quantitative review of associations between the higher order personality traits in the Big Three and Big Five models (i.e., neuroticism, extraversion, disinhibition, conscientiousness, agreeableness, and openness) and specific depressive, anxiety, and substance use disorders (SUD) in adults. This approach resulted in 66 meta-analyses. The review included 175 studies published from 1980 to 2007, which yielded 851 effect sizes. For a given analysis, the number of studies ranged from three to 63 (total sample size ranged from 1,076 to 75,229). All diagnostic groups were high on neuroticism (mean Cohen's d = 1.65) and low on conscientiousness (mean d = −1.01). Many disorders also showed low extraversion, with the largest effect sizes for dysthymic disorder (d = −1.47) and social phobia (d = −1.31). Disinhibition was linked to only a few conditions, including SUD (d = 0.72). Finally, agreeableness and openness were largely unrelated to the analyzed diagnoses. Two conditions showed particularly distinct profiles: SUD, which was less related to neuroticism but more elevated on disinhibition and disagreeableness, and specific phobia, which displayed weaker links to all traits. Moderator analyses indicated that epidemiologic samples produced smaller effects than patient samples and that Eysenck's inventories showed weaker associations than NEO scales. In sum, we found that common mental disorders are strongly linked to personality and have similar trait profiles. Neuroticism was the strongest correlate across the board, but several other traits showed substantial effects independent of neuroticism. Greater attention to these constructs can significantly benefit psychopathology research and clinical practice.

In this paper, we used simulations to investigate the effect of sample size, number of indicators, factor loadings, and factor correlations on frequencies of the acceptance/rejection of models (true and misspecified) when selected goodness-of-fit indices were compared with prespecified cutoff values. We found the percent of true models accepted when a goodness-of-fit index was compared with a prespecified cutoff value was affected by the interaction of the sample size and the total number of indicators. In addition, for the Tucker-Lewis index (TLI) and the relative noncentrality index (RNI), model acceptance percentages were affected by the interaction of sample size and size of factor loadings. For misspecified models, model acceptance percentages were affected by the interaction of the number of indicators and the degree of model misspecification. This suggests that researchers should use caution in using cutoff values for evaluating model fit. However, the study suggests that researchers who prefer to use prespecified cutoff values should use TLI, RNI, NNCP, and root mean square error of approximation (RMSEA) to assess model fit. The use of GFI should be discouraged.
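
For reference, the fit indices named above have standard closed-form expressions in terms of the model and baseline (null-model) chi-square statistics; the small helper below applies prespecified cutoffs of the kind examined in the paper (input values are illustrative):

```python
# Standard closed-form fit indices from model and baseline chi-square statistics,
# with a simple prespecified-cutoff decision. Inputs are illustrative only.
import math

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1)
    rni = ((chi2_b - df_b) - (chi2_m - df_m)) / (chi2_b - df_b)
    rmsea = math.sqrt(max(chi2_m - df_m, 0) / (df_m * (n - 1)))
    return {"TLI": tli, "RNI": rni, "RMSEA": rmsea}

idx = fit_indices(chi2_m=85.3, df_m=48, chi2_b=920.4, df_b=66, n=300)
accepted = idx["TLI"] >= 0.95 and idx["RNI"] >= 0.95 and idx["RMSEA"] <= 0.06
print(idx, "accepted" if accepted else "rejected")
```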

Obtaining accurate system models for verification is a hard and time consuming process, which is seen by industry as a hindrance to adopting otherwise powerful model-driven development techniques and tools. In this paper we pursue an alternative approach where an accurate high-level model can be automatically constructed from observations of a given black-box embedded system. We adapt algorithms for learning finite probabilistic automata from observed system behaviors. We prove that in the limit of large sample sizes the learned model will be an accurate representation of the data-generating system. In particular, in the large sample limit, the learned model and the original system will define the same probabilities for linear temporal logic (LTL) properties. Thus, we can perform PLTL model-checking on the learned model to infer properties of the system. We perform experiments learning models from system observations at different levels of abstraction. The experimental results show the learned models provide very good approximations for relevant properties of the original system.
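
As a rough illustration of the learning idea (not the paper's automaton-learning algorithm), one can estimate a Markov chain from observed traces by frequency counting and then query a property on the learned model; the state names and traces below are hypothetical:

```python
# Frequency-count estimation of a Markov chain from traces, then a simple
# reachability query on the learned model. A stand-in illustration, not the
# probabilistic-automaton learning algorithm adapted in the paper.
import numpy as np

def learn_chain(traces, states):
    """Estimate transition probabilities by frequency counting."""
    idx = {s: i for i, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            counts[idx[a], idx[b]] += 1
    probs = np.full_like(counts, 1.0 / len(states))      # uniform rows for unseen states
    rows = counts.sum(axis=1)
    probs[rows > 0] = counts[rows > 0] / rows[rows > 0, None]
    return probs

states = ["idle", "busy", "error"]
traces = [
    ["idle", "busy", "idle", "busy", "error"],
    ["idle", "idle", "busy", "idle"],
    ["idle", "busy", "busy", "idle"],
]
P = learn_chain(traces, states)

# Probability of reaching "error" within 5 steps from "idle":
# make "error" absorbing, then propagate the initial distribution.
P_abs = P.copy()
P_abs[2] = [0.0, 0.0, 1.0]
dist = np.array([1.0, 0.0, 0.0])
for _ in range(5):
    dist = dist @ P_abs
print(f"P(reach error within 5 steps) ~ {dist[2]:.3f}")
```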

Background and purpose: Studies on the comorbidity of migraine and epilepsy have shown conflicting results. We wanted to explore the epidemiological association between migraine and seizure disorders in a population-based material where case ascertainment was enhanced by individual specialist assessments. Methods: Information concerning migraine and seizure disorders was collected from 1793 participants in an interview-based survey in a circumscribed community. Mixed headache, with features both of migraine without aura and tension-type headache, was excluded from further analyses because of its ambiguous character (n = 137). Thus, data from 1656 participants were included in the study. Results: The number of subjects with epilepsy was small, and a statistically significant association between migraine and the diagnosis of epilepsy was not found. There was a tendency to more active epilepsy in subjects with migraine (1.0%, 5/524), particularly for migraine with aura (1.8%, 3/168), compared with subjects without migraine (0.5%, 6/1132). Migraine was present in five of 11 subjects with active epilepsy (45%) and in four of 28 (14%) with epilepsy in remission (P = 0.09). Conclusions: An overall association between migraine and seizure disorders could not be demonstrated, but there was a tendency to more migraine in individuals with active epilepsy.

Comparing species richness among assemblages using sample units: why not use extrapolation methods to standardize different sample sizes?-Oikos 101: 398-410. Comparisons of species richness among assemblages using different sample sizes may produce erroneous conclusions due to the strong positive relationship between richness and sample size. A current way of handling the problem is to standardize sample sizes to the size of the smallest sample in the study. A major criticism about this approach is the loss of information contained in the larger samples. A potential way of solving the problem is to apply extrapolation techniques to smaller samples, and produce an estimated species richness expected to occur if sample size were increased to the same size of the largest sample. We evaluated the reliability of 11 potential extrapolation methods over a range of different data sets and magnitudes of extrapolation. The basic approach adopted in the evaluation process was a comparison between the observed richness in a sample and the estimated richness produced by estimators using a sub-sample of the same sample. The Log-Series estimator was the most robust for the range of data sets and sub-sample sizes used, followed closely by Negative Binomial, SO-J1, Logarithmic, Stout and Vandermeer, and Weibull estimators. When applied to a set of independently replicated samples from a species-rich assemblage, 95% confidence intervals of estimates produced by the six best evaluated methods were comparable to those of observed richness in the samples. Performance of estimators tended to be better for species-rich data sets rather than for those which contained few species. Good estimates were found when extrapolating up to 1.8-2.0 times the size of the sample. We suggest that the use of the best evaluated methods within the range of indicated conditions provides a safe solution to the problem of losing information when standardizing different sample sizes to the size of the smallest sample.
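
The extrapolation idea can be sketched with Fisher's log-series expectation S(N) = α ln(1 + N/α): fit α from the smaller sample, then predict richness at the larger sample size. This is only a sketch of the general approach, not the paper's full comparison of eleven estimators, and the numbers are illustrative:

```python
# Sketch of richness extrapolation using Fisher's log-series expectation
# S(N) = alpha * ln(1 + N / alpha). Illustrative values only.
import numpy as np
from scipy.optimize import brentq

def fit_alpha(s_obs, n_obs):
    """Solve s_obs = alpha * ln(1 + n_obs / alpha) for Fisher's alpha."""
    f = lambda a: a * np.log(1 + n_obs / a) - s_obs
    return brentq(f, 1e-6, 1e6)

def extrapolate_richness(s_obs, n_obs, n_target):
    alpha = fit_alpha(s_obs, n_obs)
    return alpha * np.log(1 + n_target / alpha)

# 120 species observed in 2,000 individuals; predict richness at 4,000 individuals
print(round(extrapolate_richness(s_obs=120, n_obs=2000, n_target=4000), 1))
```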

The internal validity of an epidemiological study can be affected by random error and systematic error. Random error reflects a problem of precision in assessing a given exposure-disease relationship and can be reduced by increasing the sample size. On the other hand, systematic error or bias reflects a problem of validity of the study and arises because of any error resulting from methods used by the investigator when recruiting individuals for the study, from factors affecting the study participation (selection bias) or from systematic distortions when collecting information about exposures and outcomes (information bias). Another important factor which may affect the internal validity of a clinical study is confounding. In this article, we focus on two categories of bias: selection bias and information bias. Confounding will be described in a future article of this series.

Purpose-This paper seeks to examine the factors and barriers that contribute to successful knowledge sharing among the university teaching staff. Design/methodology/approach-Based on an extensive review of literature, measures of knowledge sharing are identified. These include such factors as nature of knowledge, working culture, staff attitudes, motivation to share and opportunities to share. A model is developed for the study and hypotheses are formulated. Primary data were collected through a survey from a sample of teaching staff from both public and private universities in Malaysia. Findings-Based on empirical research, the study shows some contrasting findings. As for the sample drawn from teaching staff belonging to public universities, there is a significant relationship between knowledge sharing and the independent factors mentioned earlier. Results from the sample from staff teaching in private universities do not show such relationships. Research limitations/implications-The sample size itself and the generalisation of results to teaching staff from higher education institutions in Malaysia constitute a major limitation. Practical implications-The findings of the study provide useful insights to management of higher education institutions in providing facilities to enhance knowledge sharing among teaching staff. Originality/value-The study makes a valuable contribution, given that there is a dearth of empirical studies of this nature focusing on the South East Asian region.

The purpose of this review was to critically analyse existing tools to measure perinatal mental health risk and report on the psychometric properties of the various approaches using defined criteria. An initial literature search revealed 379 papers, from which 21 papers relating to ten instruments were included in the final review. A further four papers were identified from experts (one excluded) in the field. The psychometric properties of six multidimensional tools and/or criteria were assessed.

Effect of the dialysis membrane on mortality of chronic hemodialysis patients. Mortality of prevalent chronic hemodialysis patients remains high. The potential effect of the dialysis membrane on this mortality has not been previously investigated in a large population of chronic hemodialysis patients. Using data from the United States Renal Data System (USRDS), we analyzed a random sample of 6,536 patients receiving hemodialysis on December 31, 1990. The study design was a historical prospective study. By limiting the study to patients dialyzed for at least one year with bicarbonate dialysate, in whom the dose of dialysis could be calculated, and in whom dialysis membrane and co-existing morbidities were defined, the sample size was reduced to 2,410 patients. A Cox proportional hazards model was used to estimate relative mortality risk. The types of dialysis membranes used were broadly classified into three categories: unsubstituted cellulose, modified cellulose (generally cellulose membranes that have been modified by substitutions of some or most of their hydroxyl moieties) and synthetic membranes that are not cellulose-based. The results of the study suggest that after adjusting for the dose of dialysis and the presence of co-morbid factors, the relative risk of mortality of patients dialyzed with modified cellulose or synthetic membranes was at least 25% less than that of patients treated with unsubstituted cellulose membranes (P < 0.001). To account for the possibility that these differences were due to regional practice patterns, we further stratified the data for nine different regions. There was still a 20% difference in relative risk of mortality between membrane groups with the mortality statistically significantly less in patients treated with synthetic membranes (P < 0.045) compared to patients dialyzed with unsubstituted cellulose membranes. The results of this study suggest that the dialysis membrane plays an important role in the outcome of chronic hemodialysis patients. However, more definitive studies are needed before a cause and effect relationship can be proven.

Post-tonsillectomy swallowing pain is a common and distressing side effect after tonsillectomy and thus of great clinical interest. Up until now, there is no randomized controlled patient- and observer-blinded study evaluating the efficacy of acupuncture against swallowing pain after tonsillectomy. We therefore compared the potency of specific verum acupuncture points related to a Chinese medical diagnosis in reducing postoperative swallowing pain with non-specific control points on the body as well as a non-acupuncture group who received standard medication only. The standardized pain therapy after tonsillectomy was orally administered nonsteroidal anti-inflammatory drugs (NSAID) (diclofenac 3 × 50 mg oral). The patients (n = 123) treated with NSAID were asked about their acute pain after taking a sip of water between the first and fifth postoperative day. Participants' pain was assessed using a visual analog scale (VAS) [zero (0) for no pain up to ten (10) for the acute reported outset pain] before and 20 min, 1, 2 and 3 h after acupuncture treatment or standard pain medication, respectively. The functional assessment of diagnosis and treatment point-combination occurred by means of the "Heidelberg Model" of Traditional Chinese Medicine (TCM). Verum acupuncture led to significant additional pain relief. In comparison to acupuncture, patients also reported an average of 3 h of adequate pain relief after taking the NSAID. This trial strongly supports a specific acupuncture scheme for the treatment of postoperative swallowing pain after tonsillectomy. It may particularly serve as an alternative pain treatment in case of NSAID intolerances.

Aims/hypothesis There has been much focus on the potential role of mitochondria in the aetiology of type 2 diabetes and the metabolic syndrome, and many case–control mitochondrial association studies have been undertaken for these conditions. We tested for a potential association between common mitochondrial variants and a number of quantitative traits related to type 2 diabetes in a large sample of >2,000 healthy Australian adolescent twins and their siblings, many of whom were measured on more than one occasion. Methods To the best of our knowledge, this is the first mitochondrial association study of quantitative traits undertaken using family data. The maternal inheritance pattern of mitochondria means established association methodologies are unsuitable for analysis of mitochondrial data in families. We present a methodology, implemented in the freely available program Sib-Pair for performing such an analysis. Results Despite our study having the power to detect variants with modest effects on these phenotypes, only one significant association was found after correction for multiple testing in any of four age groups. This was for mt14365 with triacylglycerol levels (unadjusted p = 0.0006). This association was not replicated in other age groups. Conclusions/interpretation We find little evidence in our sample to suggest that common European mitochondrial variants contribute to variation in quantitative phenotypes related to diabetes. Only one variant showed a significant association in our sample, and this association will need to be replicated in a larger cohort. Such replication studies or future meta-analyses may reveal more subtle effects that could not be detected here because of limitations of sample size.

In this paper the authors show that the largest eigenvalue of the sample covariance matrix tends to a limit under certain conditions when both the number of variables and the sample size tend to infinity. The above result is proved under the mild restriction that the fourth moment of the elements of the sample sums of squares and cross products (SP) matrix exists.
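
A quick numerical illustration of this kind of limit: for i.i.d. entries with unit variance (and finite fourth moment), the largest eigenvalue of the sample covariance matrix is known to approach (1 + √(p/n))² as p and n grow proportionally. The simulation below is a hedged sketch of that behaviour, not the paper's proof:

```python
# Simulation of the limiting largest eigenvalue of a sample covariance matrix
# for i.i.d. unit-variance entries, compared with the (1 + sqrt(p/n))^2 limit.
import numpy as np

rng = np.random.default_rng(0)
for n in (200, 1000, 5000):
    p = n // 2                                   # keep p/n fixed at 0.5
    X = rng.standard_normal((n, p))
    S = X.T @ X / n                              # sample covariance (mean zero)
    lam_max = np.linalg.eigvalsh(S)[-1]          # eigenvalues in ascending order
    limit = (1 + np.sqrt(p / n)) ** 2
    print(f"n={n:5d}  largest eigenvalue={lam_max:.3f}  limit={limit:.3f}")
```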

The needle biopsy technique described by Bergström is the most commonly used technique to obtain samples to assess muscle metabolism. Sampling of muscle, particularly the vastus lateralis, has become an essential tool in biomedical and clinical research. Optimal sample size is critical for availability of tissue for processing. To evaluate the effectiveness of a novel technique to obtain adequate sample size using wall suction applied to needle muscle biopsy, we collected samples from subjects in on-going clinical studies for gene expression. Muscle biopsy samples of the vastus lateralis using 6 mm Bergström needles under local anesthesia were obtained from 55 subjects who had volunteered to participate in this research project. The vastus lateralis was biopsied according to the methods described by Bergström with a 6 mm biopsy needle. Wall suction was applied to the inner bore of the biopsy needle after the needle was inserted into the muscle. The mean biopsy sample obtained with the 6 mm needle was 233 mg (n = 55). The wall suction (200 mm Hg) applied to the needle pulled the surrounding tissue into the central bore of the needle. The quality of the samples was adequate for all biochemical assays. The biopsy technique did not result in any complications due to infection or bleeding. Using a novel technique of connecting a 6 mm Bergström biopsy needle to wall suction, we have obtained 200 to 300 mg muscle biopsy specimens uniformly, with ease and minimal discomfort. An increase in sample size allows for a wider variety of biochemical and histopathological analysis.

We used a dataset of 164 titles comprising 146 primary publications, 16 congress proceedings and 2 unpublished studies on grass endophytes published or conducted between 1982 and 2004. We compiled the reference database from narrative reviews of the topic, keyword searches in the Web of Science, reference sections of published papers and information gathered by networking with our colleagues over the past 15 years.

Background: Low colo-rectal anastomoses have a relevant risk of leakage. The protective stomas (ileostomy or colostomy) have always been utilized to reduce the complications due to anastomotic leakage. The stoma not only causes relevant morbidity but also needs a second operation to be closed, with an added risk of complications. Purpose: For this reason we planned and carried out a temporary percutaneous ileostomy by a jejunal probe introduced in the distal ileum, which can be removed without a surgical procedure and with negligible complications. Methods: The ALPPI trial is a randomized controlled, open, parallel, equivalence multicenter study. Patients undergoing elective laparoscopic or laparotomic surgery for rectal cancer with extraperitoneal anastomosis will be randomly allocated to undergo either lateral ileostomy or percutaneous ileostomy by exclusion probe. Results: The primary endpoint is the protection of the extraperitoneal colo-rectal anastomosis in terms of incidence of symptomatic and asymptomatic anastomotic leakages. The secondary endpoints are the evaluation of complications due to the placement and the removal of the exclusion probe for percutaneous ileostomy. Conclusions: The ALPPI trial is designed to provide the surgical community with an evidence based new technique in the protection of low colo-rectal anastomosis, alternative to the conventional stomas.

This paper establishes new criteria for deriving the optimal sample size when a hypothesis test between many binomial populations is performed. The problem is addressed from the Bayesian point of view assuming that the prior parameters are dependent through a Dirichlet distribution. Initially, an upper bound is set for the posterior risk and then we choose as 'optimum' the combined sample size for which the likelihood of the data does not satisfy this bound. The combined sample size is divided equally among the binomial populations. A series of numerical examples in the case of three populations for which the proportions are equal to either a fixed or to a random value further illustrates the suggested methodology.

Systematic methods for improving response rates in surveys have been an area of interest for researchers for some time following the articulation of the Total Design Method (Dillman 1978), which included applying aspects of social influence and social exchange theory to encourage responses. One part of the literature focused on the idea of offering a tangible but token incentive to respondents at the time of the request for data. Subsequent research and experience has shown that appropriate incentives do improve respondent cooperation in terms of obtaining better response rates (e.g. Singer and Wilmot 1997; Yammarino, Skinner and Childers 1991). The focus of most work has been on using monetary or token material incentives for surveys of individuals, so little is known about the problem of how to motivate businesses and the individuals in them to respond. The situation of National Statistical Organisations (NSOs) conducting surveys of businesses is also different to that of many other s...

Single frequency estimation is a long-studied problem with application domains including radar, sonar, telecommunications, astronomy and medicine. One method of estimation, called phase unwrapping, attempts to estimate the frequency by performing linear regression on the phase of the received signal. This procedure is complicated by the fact that the received phase is 'wrapped' modulo 2π and therefore must be 'unwrapped' before the regression can be performed. In this paper, we propose an estimator that performs phase unwrapping in the least squares sense. The estimator is shown to be strongly consistent and its asymptotic distribution is derived. We then show that the problem of computing the least squares phase unwrapping is related to a problem in algorithmic number theory known as the nearest lattice point problem. We derive a polynomial time algorithm that computes the least squares estimator. The results of various simulations are described for different values of sample size and SNR.
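
A naive version of the phase-unwrapping estimator can be sketched with numpy's heuristic unwrap followed by linear regression on the phase; the paper's contribution is to solve the unwrapping in the least-squares sense via a nearest-lattice-point computation, which this sketch does not implement:

```python
# Naive phase-unwrapping frequency estimator: unwrap the received phase with
# numpy's heuristic and regress it on time. Works at moderate SNR; not the
# least-squares (lattice-based) unwrapping proposed in the paper.
import numpy as np

rng = np.random.default_rng(1)
fs, n, f_true, snr_db = 1000.0, 256, 123.4, 20.0

t = np.arange(n) / fs
noise_sigma = 10 ** (-snr_db / 20) / np.sqrt(2)
x = (np.exp(2j * np.pi * f_true * t)
     + noise_sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))

phase = np.unwrap(np.angle(x))
slope, _ = np.polyfit(t, phase, 1)           # phase ~ 2*pi*f*t + const
print(f"estimated frequency = {slope / (2 * np.pi):.2f} Hz (true {f_true} Hz)")
```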

Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the power of the test of the treatment effect correctly. Such power computations may require some programming and special routines of statistical software. Alternatively, one can use the typical power tables to compute power in nested designs. This paper provides simple formulae that define expected effect sizes and sample sizes needed to compute power in nested designs using the typical power tables. Simple examples are presented to demonstrate the usefulness of the formulae.
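
One such formula is the design-effect adjustment for a two-arm cluster-randomized design; the sketch below uses the familiar normal approximation with design effect 1 + (n − 1)ρ, with illustrative parameter values rather than ones from the paper:

```python
# Normal-approximation power for a two-arm cluster-randomized design via the
# design effect 1 + (n - 1)*rho. Parameter values are illustrative.
from scipy.stats import norm

def nested_power(delta, n_clusters_per_arm, cluster_size, rho, alpha=0.05):
    deff = 1 + (cluster_size - 1) * rho                   # design effect
    se = (2 * deff / (n_clusters_per_arm * cluster_size)) ** 0.5
    z_alpha = norm.ppf(1 - alpha / 2)
    return norm.cdf(delta / se - z_alpha)

# Standardized effect 0.30, 20 clusters per arm of 25 students, ICC = 0.10
print(f"power ~ {nested_power(0.30, 20, 25, 0.10):.2f}")
```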

Objectives: Functional (psychogenic or somatoform) symptoms are common in neurology clinics. Cognitive-behavioral therapy (CBT) can be an effective treatment, but there are major obstacles to its provision in practice. We tested the hypothesis that adding CBT-based guided self-help (GSH) to the usual care (UC) received by patients improves outcomes. Methods: We conducted a randomized trial in 2 neurology services in the United Kingdom. Outpatients with functional symptoms (rated by the neurologist as "not at all" or only "somewhat" explained by organic disease) were randomly allocated to UC or UC plus GSH. GSH comprised a self-help manual and 4 half-hour guidance sessions. The primary outcome was self-rated health on a 5-point clinical global improvement scale (CGI) at 3 months. Secondary outcomes were measured at 3 and 6 months. Results: In this trial, 127 participants were enrolled, and primary outcome data were collected for 125. Participants allocated to GSH reported greater improvement on the primary outcome (adjusted common odds ratio on the CGI 2.36 [95% confidence interval 1.17-4.74; p = 0.016]). The absolute difference in proportion "better" or "much better" was 13% (number needed to treat was 8). At 6 months the treatment effect was no longer statistically significant on the CGI but was apparent in symptom improvement and in physical functioning. Conclusions: CBT-based GSH is feasible to implement and efficacious. Further evaluation is indicated. Classification of evidence: This study provides Class III evidence that CBT-based GSH therapy improves self-reported general health, as measured by the CGI, in patients with functional neurologic symptoms. Neurology® 2011;77:564-572 GLOSSARY: CBT = cognitive-behavioral therapy; CGI = clinical global improvement scale; CPS = change in presenting symptoms scale; GSH = guided self-help; NNT = number needed to treat; OR = odds ratio; SF-12 = Medical Outcomes Short Form 12-Item Scale; UC = usual care. Many somatic symptoms such as pain, weakness, and dizziness are unexplained by organic disease. 1 Such symptoms are referred to as "functional," "psychogenic," "medically unexplained," or "somatoform," although all these terms are problematic. 2 These symptoms account for one-third of attendance at medical clinics 3,4 with neurology having one of the highest rates. 4,5 The outcome after medical consultation is poor. 6,7 As with many conditions at the interface of neurology and psychiatry, integrated approaches to patient management have been neglected. We know that intensive cognitive-behavioral therapy (CBT) can reduce the symptoms, distress, and disability of patients with functional symptoms. 8 However, there are major obstacles to its delivery in practice because patients often regard psychological treatment as inappropriate and referral to mental health services as unacceptable and CBT therapists may not be available in all communities. These obstacles could potentially be overcome: CBT could be adapted to directly address the patients' somatic concerns, 9 it could be delivered in the neurology clinic, and it could be provided in a self-help form (bibliotherapy). CBT-based self-help is effective for other conditions, such as de

This paper describes a novel structure for a hardwired fast Fourier transform (FFT) signal processor that promises to permit digital spectrum analysis to achieve throughput rates consistent with extremely wide-band radars. The technique is based on the use of serial storage for data and intermediate results and multiple arithmetic units each of which carries out a sparse Fourier transform. Details of the system are described for data sample sizes that are binary multiples, but the technique is applicable to any composite number. Index Terms-Cascade Fourier transform, digital signal processor, Doppler radar, fast Fourier transform, radar-sonar signal processor, radix-two fast Fourier transform, real-time signal processor.

Fibreoptic intubation remains a key technique for the management of difficult intubation. We randomly compared the second generation single-use Ambu® aScope™ 2 videoscope with a standard re-usable flexible intubating fibrescope in 50 tracheal intubations in patients with a difficult airway simulated by a semirigid collar. All patients' tracheas were intubated successfully with the aScope 2 or the re-usable fibrescope. The median (IQR [range]) time to intubate was significantly longer with the aScope 2 (70 (55-97 [41-226]) s vs 50 (40-59 [27-175]) s, p = 0.0003) due to an increased time to see the carina. Quality of vision was significantly lower with the aScope 2 (excellent 24 (48%) vs 49 (98%), p = 0.0001; good 22 (44%) vs 1 (2%), p = 0.0001; poor 4 (8%) vs 0, p = 0.12) but with no difference in the subjective ease to intubate (easy score of 31 (62%) vs 38 (76%), p = 0.19; intermediate 12 (24%) vs 7 (14%), p = 0.31; difficult 7 (14%) vs 5 (5%), p = 0.76). The longer times to intubate and the poorer scores for quality of vision do not support the use of the single-use aScope 2 videoscope as an alternative to the re-usable fibrescope.

A method was validated for the multi-residue analysis of 82 pesticides in grapes at ≤25 ng/g level. Berry samples (10 g) mixed with sodium sulphate (10 g) were extracted with ethyl acetate (10 mL); cleaned by dispersive solid phase extraction and the results were obtained by liquid chromatography–tandem mass spectrometry. Reduction in sample size and proportion of ethyl acetate for extraction did not affect accuracy or precision of analysis when compared to the reported methods and was also statistically similar to the QuEChERS technique. The method was rugged (HorRat < 0.5) with <20% measurement uncertainties. Limit of quantification was <10 ng/g with recoveries 70–120% for most pesticides. The method offers cheaper and safer alternative to typical multi-residue analysis methods for grape.

It is a well-known part of statistical knowledge that first order asymptotically efficient procedures can be misleading for moderate sample sizes. Usually this is demonstrated for some popular special cases including numerical comparisons. Typically the situation is worse if nuisance parameters are present. In this paper we give second order asymptotically efficient tests, confidence regions, and estimators for the nonlinear regression model which are based on the least-squares estimator and the residual sum of squares.

Keywords: Logistic mixed model; Penalized Quasi Likelihood; Unit level model. Work supported by the project PRIN 2007 "Efficient use of auxiliary information at the design and at the estimation stage of complex surveys: methodological aspects and applications for producing official statistics".

There has been growing interest, when comparing an experimental treatment with an active control with respect to a binary outcome, in allowing the non-inferiority margin to depend on the unknown success rate in the control group. It does not seem universally recognized, however, that the statistical test should appropriately adjust for the uncertainty surrounding the non-inferiority margin. In this paper, we inspect a naive procedure that treats an "observed margin" as if it were fixed a priori, and explain why it might not be valid. We then derive a class of tests based on the delta method, including the Wald test and the score test, for a smooth margin. An alternative derivation is given for the asymptotic distribution of the likelihood ratio statistic, again for a smooth margin. We discuss the asymptotic behavior of these tests when applied to a piecewise smooth margin. A simple condition on the margin function is given which allows the likelihood ratio test to carry over to a piecewise smooth margin using the same critical value as for a smooth margin. Simulation experiments are conducted, under a smooth margin and a piecewise linear margin, to evaluate the finite-sample performance of the asymptotic tests studied.
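
The delta-method Wald test described above can be sketched as follows for a smooth margin function g of the control rate; the margin g(p) = 0.9p and the counts are purely illustrative, and this is not the authors' full treatment of the score and likelihood ratio tests:

```python
# Hedged sketch of a delta-method Wald test for non-inferiority with a margin
# that is a smooth function g of the control success rate. The g'(p_C) term in
# the variance is the adjustment a "fixed observed margin" analysis would omit.
from scipy.stats import norm

def wald_noninferiority(x_e, n_e, x_c, n_c, g, g_prime, alpha=0.025):
    p_e, p_c = x_e / n_e, x_c / n_c
    var = p_e * (1 - p_e) / n_e + g_prime(p_c) ** 2 * p_c * (1 - p_c) / n_c
    z = (p_e - g(p_c)) / var ** 0.5
    return z, z > norm.ppf(1 - alpha)

g = lambda p: 0.9 * p            # illustrative margin: retain 90% of the control rate
g_prime = lambda p: 0.9
z, noninferior = wald_noninferiority(x_e=168, n_e=200, x_c=175, n_c=200,
                                     g=g, g_prime=g_prime)
print(f"z = {z:.2f}, non-inferiority {'concluded' if noninferior else 'not shown'}")
```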

There are now many reports of imaging experiments with small cohorts of typical participants that precede large-scale, often multicentre studies of psychiatric and neurological disorders. Data from these calibration experiments are sufficient to make estimates of statistical power and predictions of sample size and minimum observable effect sizes. In this technical note, we suggest how previously reported voxel-based power calculations can support decision making in the design, execution and ...

STAR*D is a multisite, prospective, randomized, multistep clinical trial of outpatients with nonpsychotic major depressive disorder. The study compares various treatment options for those who do not attain a satisfactory response with citalopram, a selective serotonin reuptake inhibitor antidepressant. The study enrolls 4000 adults (ages 18-75) from both primary and specialty care practices who have not had either a prior inadequate response or clear-cut intolerance to a robust trial of protocol treatments during the current major depressive episode. After receiving citalopram (level 1), participants without sufficient symptomatic benefit are ...

Design/methodology/approach – The paper used a mail-survey of companies listed in the Directory of the Federation of Malaysian Manufacturers (FMM), year 2003. The FMM Directory provides a database of over 2,000 manufacturing firms of various sizes producing a broad range of ...

In this paper, we consider constructing reliable confidence intervals for regression parameters using robust M-estimation allowing for the possibility of time series correlation among the errors. The change of variance function is used to approximate the theoretical coverage ...

Background: A major unresolved safety concern for malaria case management is the use of artemisinin combination therapies (ACTs) in the first trimester of pregnancy. There is a need for human data to inform policy makers and treatment guidelines on the safety of artemisinin combination therapies (ACT) when used during early pregnancy. Methods: The overall goal of this paper is to describe the methods and implementation of a study aimed at developing surveillance systems for identifying exposures to antimalarials during early pregnancy and for monitoring pregnancy outcomes using health and demographic surveillance platforms. This was a multi-center prospective observational cohort study involving women at health and demographic surveillance sites in three countries in Africa: Burkina Faso, Kenya and Mozambique (ClinicalTrials.gov Identifier: NCT01232530). The study was designed to identify pregnant women with artemisinin exposure in the first trimester and compare them to: 1) pregnant women without malaria, 2) pregnant women treated for malaria, but exposed to other antimalarials, and 3) pregnant women with malaria and treated with artemisinins in the 2nd or 3rd trimesters from the same settings. Pregnant women were recruited through community-based surveys and attendance at health facilities, including antenatal care clinics, and followed until delivery. Data from the three sites will be pooled for analysis at the end of the study. Results are forthcoming. Discussion: Despite a few limitations, the methods described here are relevant to the development of sustainable pharmacovigilance systems for drugs used by pregnant women in the tropics using health and demographic surveillance sites to prospectively ascertain drug safety in early pregnancy.

It is increasingly common to be faced with longitudinal or multi-level data sets that have a large number of predictors and/or a large sample size. Current methods of fitting and inference for mixed effects models tend to perform poorly in such settings. When there are many variables, it is appealing to allow uncertainty in subset selection and to obtain a sparse characterization of the data. Bayesian methods are available to address these goals using Markov chain Monte Carlo (MCMC), but MCMC is very computationally expensive and can be infeasible in large p and/or large n problems. As a fast approximate Bayes solution, we recommend a novel approximation to the posterior relying on variational methods. Variational methods are used to approximate the posterior of the parameters in a decomposition of the variance components, with priors chosen to obtain a sparse solution that allows selection of random effects. The method is evaluated through a simulation study, and applied to an epidemiological application.

Corporations of all sizes form teams in order to tackle quality problems. Often, these teams are given a quantifiable goal such as "reduce defects by 40% within the next two months." In order to assess whether or not the goal has been met, data must be gathered. In this article, we develop methods for determining the sample sizes necessary for detecting relative quality improvements, with specified probability, in finite and infinite populations. We provide formulas for the calculations and use the Solver function in Excel to implement normal approximations to the solutions. In addition, we provide methods, based on the hypergeometric distribution, for finding the exact error rates (α and β) for given sample sizes. A real-life example is discussed and modified to illustrate cases with finite and infinite populations.
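
A hedged sketch of this calculation, using the usual normal approximation for comparing two proportions plus a finite-population correction, in place of the article's Excel Solver implementation (the defect rate and population size are illustrative):

```python
# Normal-approximation sample size for detecting a relative reduction in the
# defect rate (e.g. "reduce defects by 40%"), with an optional finite-population
# correction. Illustrative values; not the article's exact hypergeometric method.
import math

from scipy.stats import norm

def n_for_relative_improvement(p0, relative_reduction, alpha=0.05, beta=0.10,
                               population=None):
    """Per-period sample size to detect a drop from p0 to p0*(1 - relative_reduction)."""
    p1 = p0 * (1 - relative_reduction)
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(1 - beta)   # one-sided test
    p_bar = (p0 + p1) / 2
    n = ((z_a * math.sqrt(2 * p_bar * (1 - p_bar)) +
          z_b * math.sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2) / (p0 - p1) ** 2
    if population is not None:                           # finite-population correction
        n = n / (1 + n / population)
    return math.ceil(n)

print(n_for_relative_improvement(p0=0.10, relative_reduction=0.40))                   # infinite population
print(n_for_relative_improvement(p0=0.10, relative_reduction=0.40, population=2000))  # finite population
```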