Bootstrap (Statistics) Research Papers - Academia.edu

Purpose – The purpose of this paper is to investigate, through the lens of the gift-giving theory, volunteers’ motivations for intending to stay with organizations.
Design/methodology/approach – Data were collected from 379 volunteers from 30 charitable organizations operating in Italy’s socio-healthcare service sector. Bootstrapped mediation analysis was used to test the hypothesized relationships.
Findings – Volunteers’ reciprocal attitudes and gift-giving intentions partially mediated the relationship between motives and intentions to stay.
Practical implications – Policy makers of charitable organizations are advised to be more responsive to behavioral signals revealing volunteers’ motivations, attitudes, and intentions. Managers should appropriately align organizational responsiveness with volunteers’ commitment through gift-giving exchange systems.
Originality/value – The findings reveal that reciprocity and gift giving are significant organizational variables greatly influencing volunteers’ intentions to stay with organizations. Signaling theory is used to explain how volunteers’ attitudes are linked with organizational responsiveness. Furthermore, this study is the first to use an Italian setting to consider motives, reciprocity, and gift giving as they relate to intentions to stay.
Keywords Motivation, Reciprocity, Bootstrapped multiple mediation, Organizational behaviour, Non-profit organizations, Gift giving
Paper type Research paper
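
For readers unfamiliar with the technique, here is a minimal R sketch of a bootstrapped mediation test. The variable names (motive, reciprocity, stay) and the simulated data are placeholders, not the paper's survey data.

    # Indirect effect = a*b, where a: motive -> reciprocity, b: reciprocity -> stay.
    set.seed(1)
    n <- 379
    motive      <- rnorm(n)                                  # placeholder data
    reciprocity <- 0.5 * motive + rnorm(n)
    stay        <- 0.4 * reciprocity + 0.2 * motive + rnorm(n)
    d <- data.frame(motive, reciprocity, stay)

    indirect <- function(df) {
      a <- coef(lm(reciprocity ~ motive, data = df))["motive"]
      b <- coef(lm(stay ~ reciprocity + motive, data = df))["reciprocity"]
      unname(a * b)
    }

    boot_est <- replicate(2000, indirect(d[sample(nrow(d), replace = TRUE), ]))
    quantile(boot_est, c(0.025, 0.975))   # percentile CI for the indirect effect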

The thesis deals with the spatial structure and chronological development of the early Lengyel Culture (4900-4700 cal BC) settlement at the site of Svodín - Busahegy in the Nové Zámky district, SW Slovakia. The approach is based on the settlement area theory and, more generally, on the theory of artefacts. The evidence used consists of finds from 1756 features, of which 679 belong to the Lengyel Culture and 184 are burials. The work includes a comprehensive electronic catalogue containing all relevant find documentation, localization plans of the trenches and features, and plans of archaeological structures belonging to different phases of the Lengyel Culture settlement. The methodological chapters describe a process for direct digitization of ceramic fragments by laser scanning of profiles and automated reconstruction of ceramic shapes. The elaboration of the chronology is based on the observed stratigraphic relations and typological analysis of the finds. Morphometric analysis and multivariate statistical methods were applied for the computerized typological classification of 1992 profiles of ceramic vessels. Find units were assigned to chronological phases based on a stochastic model which integrates stratigraphic and typo-chronological data. A set of solutions was computed using a hybrid of particle swarm optimisation and a genetic algorithm, analysed by Markov chain Monte Carlo simulation, and interpreted in terms of Bayesian statistics. Based on the spatial distribution of archaeological structures, three residential, three funerary and one rondel component have been identified in the settlement area. The final solution of the chronological model divides the settlement into seven phases: three pre-rondel phases, two phases in which the smaller and then the larger rondel were built, and two post-rondel phases. A spatial differentiation of the settlement area based on social aspects was observed, as well as a possible association of the localization of the rondels with previous residential structures. In addition to burials in the vicinity of houses for the entire duration of the settlement, we also observe a separate burial area in the pre-rondel period. The typological composition of the pottery shows a strong influence from the Tisza culture area, especially in the pre-rondel period. The usability of seriation and multivariate statistical methods for the chronological ordering of find units was critically assessed based on the analysis of the typo-chronological development of grave inventories.

"Novel computational and statistical prediction methods such as the support vector machine are becoming increasingly popular in remote-sensing applications and need to be compared to more traditional approaches like maximum-likelihood... more

"Novel computational and statistical prediction methods such
as the support vector machine are becoming increasingly popular
in remote-sensing applications and need to be compared
to more traditional approaches like maximum-likelihood classification.
However, the accuracy assessment of such predictive
models in a spatial context needs to account for the
presence of spatial autocorrelation in geospatial data by using
spatial cross-validation and bootstrap strategies instead of
their now more widely used non-spatial equivalent. These
spatial resampling-based estimation procedures were therefore
implemented in a new package ‘sperrorest’ for the opensource
statistical data analysis software R."
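
A base-R illustration of the idea behind spatial cross-validation: folds are formed by k-means clustering of the coordinates so that test sets are spatially separated. The 'sperrorest' package automates this; the simulated data and the code below are a hedged stand-in, not the package's API.

    set.seed(1)
    n <- 500
    d <- data.frame(x = runif(n), y = runif(n))
    d$class <- factor(ifelse(d$x + d$y + rnorm(n, sd = 0.3) > 1, "A", "B"))

    folds <- kmeans(d[, c("x", "y")], centers = 5)$cluster   # spatial partitions
    acc <- sapply(1:5, function(k) {
      fit  <- glm(class ~ x + y, data = d[folds != k, ], family = binomial)
      pred <- ifelse(predict(fit, d[folds == k, ], type = "response") > 0.5, "B", "A")
      mean(pred == d$class[folds == k])
    })
    mean(acc)   # spatially cross-validated accuracy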

For a long time, one of my dreams was to describe the nature of uncertainty axiomatically, and it looks like I've finally done it in my co∼eventum mechanics! Now it remains for me to explain the co∼eventum mechanics to everyone in the most approachable way. This is what I'm trying to do in this work. The co∼eventum mechanics is another name for the co∼event theory, i.e., for the theory of experience and chance which I axiomatized in 2016 [1, 2]. In my opinion, this name best reflects the co∼event-based idea of the new dual theory of uncertainty, which combines probability theory, as a theory of chance, with its dual half, believability theory, as a theory of experience. In addition, I like that this new name indicates a direct connection between the co∼event theory and quantum mechanics, which is intended for the physical explanation and description of the conflict between quantum observers and quantum observations [4]. Since my theory of uncertainty satisfies the Kolmogorov axioms of probability theory, to explain the co∼eventum mechanics I will use a way analogous to the already tested one that explains probability theory as a theory of chance describing the results of a random experiment. The simplest example of a random experiment in probability theory is "tossing a coin". Therefore, I decided to use this simplest random experiment itself, as well as its two analogies, "flipping a coin" and "spinning a coin", to explain the co∼eventum mechanics, which describes the results of a combined experienced random experiment. I would like to resort to the "coin-based" analogy that is usual for probability theory in order to explain, first of all to myself, the logic of the co∼eventum mechanics as a logic of experience and chance. Of course, this analogy may seem strange, if not crazy. But I could not come up with a better way than tying the explanations of the logic of the co∼eventum mechanics to the coin-based explanations commonly used in probability theory, which clarify, through a simple visual "coin-based" model, what occurs as a result of a combined experienced random experiment in which the experience of an observer faces the chance of an observation. I hope this analogy can be useful not only to me in understanding the co∼eventum mechanics.

You yourself, or what is the same, your experience, is such a "coin" that, while you aren't questioned, rotates all the time in "free flight". And only when you answer the question does the "coin" fall on one of its sides, "Yes" or "No", with the believability that your experience tells you.

Controlled experiments with chess engines may be used to evaluate the effectiveness of alternative continuation lines of chess openings. The proposed methodology is demonstrated by evaluating a selection of continuations by White after the Sicilian Defense Najdorf Variation has been played by Black. The results suggest that the nine continuations tested represent a wide range of effectiveness that is mostly consistent with expert opinion.
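
For illustration only (not the paper's protocol): a bootstrap confidence interval for a continuation's expected score, computed in R from a made-up ten-game engine match.

    # 1 = win, 0.5 = draw, 0 = loss; hypothetical results for one continuation
    set.seed(1)
    results <- c(1, 1, 0.5, 0.5, 0.5, 0, 1, 0.5, 0, 1)
    boot_means <- replicate(5000, mean(sample(results, replace = TRUE)))
    c(score = mean(results), quantile(boot_means, c(0.025, 0.975)))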

The sampling variance or relative variance of a survey estimator can be related to its expected value by a mathematical relationship called a generalized variance function (GVF). In the paper, the results of precision estimation using generalized variance functions for various income variables from the Polish Household Budget Survey for counties (NUTS4) are presented. An attempt was made to compare this method with other precision-estimation methods. The starting point was the estimation of Balanced Repeated Replication (BRR) variances, or bootstrap variances where BRR was not applicable. To evaluate the GVF model, a hyperbolic function was used. The computation was done using WesVAR and SPSS software, as well as special procedures prepared for the R environment. The consistency of the estimates for counties was also assessed by means of small area models.
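
A sketch of what fitting a hyperbolic GVF can look like in R; the numbers are invented stand-ins for the survey's direct estimates.

    # Hyperbolic GVF: relvariance(t) = a + b / t
    est    <- c(120, 250, 400, 800, 1500)          # estimated totals
    relvar <- c(0.090, 0.041, 0.027, 0.014, 0.008) # direct relative variances
    fit <- lm(relvar ~ I(1 / est))
    coef(fit)                            # a = intercept, b = slope
    predict(fit, data.frame(est = 600))  # smoothed relvariance for a new domain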

In the presentation, the authors present empirical results for Gini coefficient estimation for regions on the basis of the Polish Household Budget Survey. The direct estimation was done using the Ineq package for the R environment. The precision of the direct estimation was calculated using the bootstrap technique. Small area models for the Gini coefficient were also presented. To obtain the model-based estimates of the Gini ratio for regions, the EB and EBLUP techniques were applied. For EBLUP estimation, the SAE package for the R environment was used.
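
A minimal sketch of direct Gini estimation with a bootstrap standard error, assuming the ineq package named in the abstract is installed; the incomes are simulated placeholders.

    library(ineq)
    set.seed(1)
    income <- rlnorm(1000, meanlog = 9, sdlog = 0.8)   # placeholder incomes
    g_hat  <- Gini(income)                             # direct estimate
    g_boot <- replicate(1000, Gini(sample(income, replace = TRUE)))
    c(gini = g_hat, se = sd(g_boot))                   # bootstrap SE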

The author presents a hierarchical Bayesian method for estimating the values of different income variables on the basis of household budget surveys and the POLTAX tax register. Calculations were made for the case where the a priori evaluation of the hyperparameters used to construct the conditional probability in the model is approximately known. The author compares the efficiency of the estimates with those obtained using other hierarchical small-area estimation methods, including EBLUP-type estimators. The two approaches yielded comparable precision for the estimated parameters.

Population. In research methodology, the word population is widely used to denote the group or collection of objects that are the target of a study. A research population is therefore the entire universe of research objects, which may be people, animals, plants, air, phenomena, values, events, attitudes, and so on, such that these objects can serve as sources of research data. Populations are grouped by how their data sources are determined: a. A finite population, i.e., one whose data sources have quantitatively well-defined boundaries. b. An infinite population, i.e., one whose data sources cannot be quantitatively bounded. By the complexity of the population's objects, populations can be divided into: a) A homogeneous population, in which all individuals belonging to the population have relatively similar characteristics. b) A heterogeneous population, in which the individuals have relatively distinct characteristics that differentiate one member of the population from another. The stages of statistical decision making can be stated as: sampling, estimation of population parameters, and testing of population parameters. Tests of a population's central tendency can be classified into two groups: a. When the assumption of a normal distribution is met, the test concerns the population mean, and the test statistic is t when the population variance is unknown and z when it is known. b. When the normality assumption is not met, the test concerns the median, and the test statistic is the sign test or the Wilcoxon test.
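
A tiny R illustration of the two cases listed above, using skewed placeholder data.

    set.seed(1)
    x <- rexp(40, rate = 1/50)    # skewed placeholder data
    t.test(x, mu = 45)            # case a: assumes approximate normality
    wilcox.test(x, mu = 45)       # case b: distribution-free test of the median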

The Bootstrap Method takes a practical, data-driven approach to illustrate applications of resampling methods. It provides a powerful approach to statistical data analysis, with more general applicability than standard parametric methods. The bootstrap yields accurate measures of the uncertainty of an estimator by approximating the estimator's sampling distribution. In the case of independent observations, it is implemented by creating a number of new samples from the observed data, each drawn at random with replacement. This article begins with a description of bootstrap methods and shows how they are related to other resampling methods, while surveying a wide variety of applications. It deals with the generation of bootstrap replicates by computational methods, which enable the calculation of the variance, the mean, the quantiles, and the coefficient of variation; the evaluation of standard errors; and BCa confidence intervals. It also examines the application of these methods to a wide range of problems, the underlying assumptions and hypotheses, and the attendant evaluation problems. The bootstrap method relies on statistical software such as R, S-Plus, or SPSS to obtain the many values computed across resamples. In this article we have used the statistical software R, which is free, is based on the S language, and provides implementations of a wide range of statistical functions. The paper presents experimental results for a population whose coefficient of variation is studied using the bootstrap method, emphasizing the programming-language implementation, and compares the conclusions with those drawn using standard statistical methods.
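
A minimal sketch of the workflow the article describes, here using R's boot package; the sample and the statistic (the coefficient of variation) are placeholders.

    library(boot)
    set.seed(1)
    x   <- rnorm(50, mean = 10, sd = 3)              # placeholder sample
    cv  <- function(d, i) sd(d[i]) / mean(d[i])      # coefficient of variation
    out <- boot(x, statistic = cv, R = 2000)         # bootstrap replicates
    boot.ci(out, type = "bca")                       # BCa interval for the CV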

There are many ways to bootstrap data for multiple comparisons procedures. Methods described here include (i) the bootstrap (parametric and nonparametric) as a generalization of classical normal-based MaxT methods, (ii) the bootstrap as an approximation to exact permutation methods, (iii) the bootstrap as a generator of realistic null data sets, and (iv) the bootstrap as a generator of realistic non-null data sets. Resampling of MinP versus MaxT is discussed, and the use of the bootstrap for closed testing is also presented. Applications to biopharmaceutical statistics are given.
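
A rough R sketch of the bootstrap MaxT idea with simulated data: each observed statistic is compared with the bootstrap distribution of the maximum statistic under the null. This illustrates the general technique, not the chapter's code.

    set.seed(1)
    X <- matrix(rnorm(50 * 5), 50, 5); X[, 1] <- X[, 1] + 0.8   # one true effect
    tstat <- function(v) mean(v) / (sd(v) / sqrt(length(v)))
    t_obs <- apply(X, 2, tstat)
    Xc <- scale(X, center = TRUE, scale = FALSE)   # center columns: enforce the null
    maxT <- replicate(2000, {
      i <- sample(nrow(Xc), replace = TRUE)
      max(abs(apply(Xc[i, ], 2, tstat)))
    })
    p_adj <- sapply(abs(t_obs), function(t) mean(maxT >= t))    # FWER-adjusted
    round(p_adj, 3)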

In this article we propose to analyse a new automatic polishing process, Ecoclean, which has been developed mainly for the preparation of dental prostheses. By varying the experimental conditions (material studied, treatment duration, presence or absence of acid, presence or absence of needles), we obtain different surface states. For each sample, the surface state is characterized using a tactile roughness meter and the computation of about a hundred roughness parameters. Using the MESRUG software, which we developed in our laboratory, we then identify the roughness parameters that are most relevant for characterizing the surface state as a function of the experimental conditions. Automatic polishing is compared with manual polishing. This method can be used to define the experimental conditions that produce a surface state meeting given roughness criteria.

State-space models (SSM) with Markov switching offer a powerful framework for detecting multiple regimes in time series, analyzing mutual dependence and dynamics within regimes, and assessing transitions between regimes. These models, however, present considerable computational challenges due to the exponential number of possible regime sequences to account for. In addition, the high dimensionality of time series can hinder likelihood-based inference. To address these challenges, novel statistical methods for Markov-switching SSMs are proposed using maximum likelihood estimation, Expectation-Maximization (EM), and the parametric bootstrap. Solutions are developed for initializing the EM algorithm, accelerating convergence, and conducting inference. These methods, which are ideally suited to massive spatio-temporal data such as brain signals, are evaluated in simulations, and applications to EEG studies of epilepsy and of motor imagery are presented.
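
The paper's Markov-switching SSM machinery is beyond a short snippet, but the parametric-bootstrap step can be sketched in R on a plain AR(1) stand-in: simulate from the fitted model, re-estimate, and summarize the replicates.

    set.seed(1)
    y   <- arima.sim(list(ar = 0.6), n = 300)        # stand-in observed series
    fit <- arima(y, order = c(1, 0, 0))
    phi_boot <- replicate(500, {
      y_star <- arima.sim(list(ar = coef(fit)["ar1"]), n = length(y),
                          sd = sqrt(fit$sigma2))      # simulate from fitted model
      coef(arima(y_star, order = c(1, 0, 0)))["ar1"]  # re-estimate
    })
    quantile(phi_boot, c(0.025, 0.975))   # bootstrap CI for the AR coefficient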

Research on forecasting is extensive and includes many studies that have tested alternative methods in order to determine which ones are most effective. We review this evidence in order to provide guidelines for forecasting for marketing. The coverage includes intentions, Delphi, role playing, conjoint analysis, judgmental bootstrapping, analogies, extrapolation, rule-based forecasting, expert systems, and econometric methods. We discuss research about which methods are most appropriate to forecast market size, actions of decision makers, market share, sales, and financial outcomes. In general, there is a need for statistical methods that incorporate the manager's domain knowledge. This includes rule-based forecasting, expert systems, and econometric methods. We describe how to choose a forecasting method and provide guidelines for the effective use of forecasts including such procedures as scenarios.

A quantitative method is described for comparing chess openings. Test openings and baseline openings are run through chess engines under controlled conditions and compared to evaluate the effectiveness of the test openings. The results are intuitively appealing, and in some cases they agree with expert opinion. The specific contribution of this work is the development of an objective measure that may be used for the evaluation and refutation of chess openings, a process that had previously been left to thought experiments and subjective conjecture, and thereby to a wide variety of opinion and a great deal of debate.

This 10-hour class is intended to give students the basics for solving statistical problems empirically. Talk 1 serves as an introduction to the statistical software R and presents how to calculate basic measures such as the mean, variance, correlation, and Gini index. Talk 2 shows how the central limit theorem and the law of large numbers work empirically. Talk 3 presents point estimation, confidence intervals, and hypothesis tests for the most important parameters. Talk 4 introduces the linear regression model, and Talk 5 the bootstrap; Talk 5 also presents a simple example of a Markov chain.
All the talks are supported by scripts in the R language.
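
In the spirit of Talk 2, a short empirical demonstration of the central limit theorem in R.

    set.seed(1)
    means <- replicate(5000, mean(runif(30)))   # sample means of a uniform variable
    hist(means, breaks = 40, main = "Sampling distribution of the mean")
    c(mean = mean(means), sd = sd(means), theory_sd = sqrt(1/12) / sqrt(30))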

This paper reviews the empirical research on forecasting in marketing. In addition, it presents results from some small-scale surveys. We offer a framework for discussing forecasts in the area of marketing and then review the literature in light of that framework. Particular emphasis is given to a pragmatic interpretation of the literature and findings. Suggestions are made on what research is needed.

The period after the initial development of Neolithic society in Central Europe, known as the Post-LBK era, is marked by an influx of new cultural stimuli from the south and the emergence of formalization in monumental architecture, resulting in cultural diversification while maintaining significant common traits across different regions. An important part of understanding this process of change is understanding the development of social complexity during the transition. This study addresses the question by examining variations in burial rite coinciding with the age or sex of the deceased or the spatial distribution of 106 graves from the Lengyel Culture settlement in Svodín, dated around 4800 cal BC. The concept of exceptionality, rather than richness, of burials is introduced. It is based on the composition and spatial distribution of inventories within graves and, contrary to the traditional deductive approach, does not depend on a prior selection of attributes of prestige. Principa...

To estimate model parameters in multiple regression models, the resampling methods of bootstrap and jackknife are used. Resampling methods serve as an alternative estimation approach to ordinary least squares (OLS), especially when the assumptions about the error term in regression analysis are not met. The data used in the study are taken from 25 advertisements on the Sahibinden.com website, and the price of Beetle-brand cars is taken as the dependent variable of the multiple linear regression model. The aim is to explain the price variable with the help of the variables fuel, body type, seller, sunroof, windshield, upholstery, age, and engine size. Since categorical variables are involved, dummy variables must be used. First, the model parameters are estimated using OLS and the significance of the parameters is tested; then the model parameters, the significance of the estimated parameters, the coefficient of determination, the standard error of the model, and 90% confidence intervals are estimated using the bootstrap and jackknife resampling methods, and the results of the three methods are compared. The extent to which the parameter estimates for the explanatory variables generalize to the population is also reviewed with the help of the jackknife resampling method. The seller and upholstery independent variables were seen to have a considerable effect on the dependent variable price at the .10 significance level (p < .10), and the jackknife confirmed that these results generalize.
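
A sketch of the case-resampling bootstrap for regression coefficients in R, with made-up data standing in for the car-advertisement dataset.

    set.seed(1)
    n <- 25
    d <- data.frame(age = sample(5:40, n, TRUE), engine = runif(n, 1.2, 2.0))
    d$price <- 30000 - 400 * d$age + 5000 * d$engine + rnorm(n, sd = 1500)
    coef_boot <- replicate(2000, coef(lm(price ~ age + engine,
                                         data = d[sample(n, replace = TRUE), ])))
    apply(coef_boot, 1, quantile, c(0.05, 0.95))   # 90% percentile intervals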

The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is widely used by researchers, its statistical properties are largely unexplored. First, we developed statistical theory which allowed us to produce confidence intervals for Rosenthal's fail-safe number, by discerning whether the number of studies analysed in a meta-analysis is fixed or random. Each case produces different variance estimators. For a given number of studies and a given distribution, we provide five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by simulation under different distributional assumptions. The variance estimator based on the half-normal distribution has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator.
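
For concreteness, a sketch of a nonparametric-bootstrap interval for Rosenthal's fail-safe number, N_fs = (sum of z_i)^2 / z_alpha^2 - k, using invented study z-values.

    set.seed(1)
    z <- c(2.1, 1.7, 2.8, 0.9, 2.4, 1.5, 3.0, 1.1)   # hypothetical k = 8 studies
    nfs <- function(z, za = qnorm(0.95)) sum(z)^2 / za^2 - length(z)
    nfs(z)                                            # point estimate
    boot_nfs <- replicate(5000, nfs(sample(z, replace = TRUE)))
    quantile(boot_nfs, c(0.025, 0.975))               # bootstrap interval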

During skill execution, performers have been shown to attend to different aspects of movement, the external effects of one's action, or to other environmental information. A variety of psychological mechanisms have been proposed to account for the differential outcomes when adopting each attentional strategy. However, there is limited information about the extent to which different attentional foci change the workload demands of task performance. To examine this, the current study administered the NASA Task Load Index following a simulated shooting dual-task. Participants performed the primary shooting task alone (control), and also with a secondary task that directed attention towards an aspect of skill execution (skill-focused) and an unrelated environmental stimulus (extraneous focus). Primary and secondary task performances were significantly greater in the extraneous focus compared to the skill-focused dual-task. Also, workload was significantly lower during the extraneous focus compared to the skill-focused dual-task condition. Further analyses revealed that workload significantly mediated the effects of skill level on performance during the skill-focused and extraneous focus dual-tasks and various subscales of workload (e.g., temporal demand) contributed unique amounts of variance to this relationship. A discussion of the relationship between attention, workload and its subcomponents, skill level, and performance is presented.

The last three decades have seen an increase in tests aimed at measuring an individual's vocabulary level or size. The target words used in these tests are typically sampled from word frequency lists, which are in turn based on language corpora. Conventionally, test developers sample items from frequency bands of 1000 words; different tests employ different sampling ratios. Some have as few as 5 or 10 items representing the underlying population of words, whereas other tests feature a larger number of items, such as 24, 30, or 40. However, the sampling-size choices are very rarely supported by clear empirical evidence. Here, using a bootstrapping approach, we illustrate the effect that a sample-size increase has on confidence intervals of individual learners' vocabulary knowledge estimates, and on the inferences that can safely be made from test scores. We draw on a unique dataset consisting of adult L1 Japanese test takers' performance on two English vocabulary test formats, each featuring 1000 words. Our analysis shows that there are few purposes and settings where as few as 5 to 10 sampled items from a 1000-word frequency band (1K) are sufficient. The use of 30 or more items per 1000-word frequency band, and tests consisting of fewer bands, is recommended.
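
A self-contained R sketch of the item-sampling experiment: resample m items from a simulated 1K band and watch the confidence interval for the proportion known narrow as m grows. The learner's true knowledge rate is a placeholder.

    set.seed(1)
    band <- rbinom(1000, 1, 0.7)    # placeholder: learner knows ~70% of the band
    ci_width <- sapply(c(5, 10, 30, 100), function(m) {
      p <- replicate(2000, mean(sample(band, m, replace = TRUE)))
      diff(quantile(p, c(0.025, 0.975)))
    })
    setNames(round(ci_width, 2), c(5, 10, 30, 100))   # wider CI for small samples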

Research on modeling is becoming popular nowadays; several kinds of analysis are used in research for modeling, one of which is applied multiple linear regression (MLR). To obtain bootstrap, robust, and fuzzy multiple linear regressions, researchers should be aware of the correct method of statistical analysis in order to get better, improved results. The main idea of bootstrapping is to approximate the entire sampling distribution of an estimator, which is achieved by resampling from the original sample. In this paper, we emphasize combined modeling using bootstrap, robust, and fuzzy regression methodology. An algorithm for the combined method is given in the SAS language. We also provide a worked example of the methods discussed, using SAS software, and discuss the visualized output of the analysis in detail.

A model is a statement of reality or its approximation. Most phenomena in the social sciences are extremely complex. With a model, we simplify reality and focus on a manageable number of factors. It is impossible to completely understand why consumers default or to identify all the factors influencing customers' default behavior. The bank manager sets up a statistical model that relates customers' default behavior to only two important factors: income and education. There are surely thousands of other variables that may influence customers' default behavior. This article describes the fundamentals of statistical model building. We begin our discussion with the managerial justification for building a statistical model. Then we discuss three important statistical issues that are of prime importance to database marketers: model/variable selection, treatment of missing data, and evaluation of the model using the bootstrap method.
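
A sketch of the bootstrap model-evaluation step in R, with hypothetical variables (income, education, default) and simulated data: refit on bootstrap samples, score on the original data, and average.

    set.seed(1)
    n <- 500
    d <- data.frame(income = rnorm(n), education = rnorm(n))
    d$default <- rbinom(n, 1, plogis(-1 - 0.8 * d$income - 0.4 * d$education))
    acc <- replicate(500, {
      fit <- glm(default ~ income + education,
                 data = d[sample(n, replace = TRUE), ], family = binomial)
      mean((predict(fit, d, type = "response") > 0.5) == d$default)
    })
    mean(acc)   # bootstrap-averaged classification accuracy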

In global markets a new type of firm is emerging. This new kind of firm has been termed the second mover, as it combines strategies of innovation and imitation. The purpose of this paper is to identify which external factors of the network system positively influence the business performance of a second-mover firm by applying a PLS path modeling analysis to the Chinese and Middle Eastern markets.

Vocabulary’s relationship to reading proficiency is frequently cited as a justification for the assessment of L2 written receptive vocabulary knowledge. However, to date, there has been relatively little research regarding which modalities of vocabulary knowledge have the strongest correlations to reading proficiency, and observed differences have often been statistically non-significant. The present research employs a bootstrapping approach to reach a clearer understanding of the relationships of various modalities of vocabulary knowledge to reading proficiency. Test-takers (N = 103) answered 1000 vocabulary test items spanning the third 1000 most frequent English words in the New General Service List corpus (Browne, Culligan, & Phillips, 2013). Items were answered under four modalities: Yes/No checklists, form recall, meaning recall, and meaning recognition. These pools of test items were then sampled with replacement to create 1000 simulated tests ranging in length from five to 200 items, and the results were correlated to the Test of English for International Communication (TOEIC®) Reading scores. For all examined test lengths, meaning-recall vocabulary tests had the highest average correlations to reading proficiency, followed by form-recall vocabulary tests. The results indicated that tests of vocabulary recall are stronger predictors of reading proficiency than tests of vocabulary recognition, despite the theoretically closer relationship of vocabulary recognition to reading.
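
A sketch of the simulated-test procedure in R with entirely synthetic data (the ability scores, item responses, and "TOEIC-like" reading scores are all placeholders): sample items with replacement to form shorter tests, score them, and correlate the scores with the external reading measure.

    set.seed(1)
    n_tt <- 103
    ability <- rnorm(n_tt)
    reading <- 0.7 * ability + rnorm(n_tt, sd = 0.7)           # reading proxy
    items   <- sapply(1:1000, function(j) rbinom(n_tt, 1, plogis(ability)))
    cors <- replicate(1000, {
      test <- items[, sample(1000, 30, replace = TRUE)]        # 30-item test
      cor(rowMeans(test), reading)
    })
    c(mean_r = mean(cors), quantile(cors, c(0.025, 0.975)))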

Zainuddin, N.H., Lola, M.S., Djauhari, M.A., Yusof, F., Ramlee, M.N.A., Deraman, A., Ibrahim, Y., Abdullah, M.T. Abstract: Hybrid models such as the Artificial Neural Network-Autoregressive Integrated Moving Average (ANN–ARIMA) model are...

In many real-world binary classification tasks (e.g., detection of certain objects in images), the available dataset is imbalanced, i.e., it has far fewer representatives of one class (the minor class) than of the other. Generally, accurate prediction of the minor class is crucial, but it is hard to achieve since there is not much information about the minor class. One approach to deal with this problem is to resample the dataset beforehand, i.e., add new elements to the dataset or remove existing ones. Resampling can be done in various ways, which raises the problem of choosing the most appropriate one. In this paper we experimentally investigate the impact of resampling on classification accuracy, compare resampling methods, and highlight key points and difficulties of resampling.
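
An R sketch of the two simplest resampling fixes, random oversampling of the minor class and random undersampling of the major class, on simulated data; the paper compares a wider range of methods.

    set.seed(1)
    n <- 1000
    d <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
    d$y <- factor(ifelse(runif(n) < 0.05, "minor", "major"))   # ~5% minor class

    minor <- d[d$y == "minor", ]; major <- d[d$y == "major", ]
    over  <- rbind(major, minor[sample(nrow(minor), nrow(major), replace = TRUE), ])
    under <- rbind(minor, major[sample(nrow(major), nrow(minor)), ])
    table(over$y); table(under$y)    # both resampled datasets are now balanced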