Bootstrap (Statistics) Research Papers - Academia.edu

Purpose – The purpose of this paper is to investigate, through the lens of the gift-giving theory, volunteers’ motivations for intending to stay with organizations.
Design/methodology/approach – Data were collected from 379 volunteers from 30 charitable organizations operating in Italy’s socio-healthcare service sector. Bootstrapped mediation analysis was used to test the hypothesized relationships.
Findings – Volunteers’ reciprocal attitudes and gift-giving intentions partially mediated the relationship between motives and intentions to stay.
Practical implications – Policy makers of charitable organizations are advised to be more responsive to behavioral signals revealing volunteers’ motivations, attitudes, and intentions. Managers should appropriately align organizational responsiveness with volunteers’ commitment through gift-giving exchange systems.
Originality/value – The findings reveal that reciprocity and gift giving are significant organizational variables greatly influencing volunteers’ intentions to stay with organizations. Signaling theory is used to explain how volunteers’ attitudes are linked with organizational responsiveness. Furthermore, this study is the first to use an Italian setting to consider motives, reciprocity, and gift giving as they relate to intentions to stay.
Keywords Motivation, Reciprocity, Bootstrapped multiple mediation, Organizational behaviour, Non-profit organizations, Gift giving
Paper type Research paper
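
A minimal sketch of the bootstrapped-mediation idea in R, with simulated data and hypothetical variable names (motive, gift, stay) standing in for the study's constructs; the indirect effect a*b is resampled and given a percentile confidence interval:

## Bootstrapped mediation, sketched: motive -> gift (mediator) -> stay.
set.seed(1)
n      <- 379                                   # sample size from the abstract
motive <- rnorm(n)
gift   <- 0.5 * motive + rnorm(n)               # mediator equation (a path)
stay   <- 0.4 * gift + 0.2 * motive + rnorm(n)  # outcome equation (b, c' paths)
dat    <- data.frame(motive, gift, stay)

indirect <- function(d) {
  a <- coef(lm(gift ~ motive, data = d))["motive"]
  b <- coef(lm(stay ~ gift + motive, data = d))["gift"]
  unname(a * b)                                 # indirect (mediated) effect
}

boots <- replicate(2000, indirect(dat[sample(n, replace = TRUE), ]))
quantile(boots, c(0.025, 0.975))                # percentile CI for a*b

If the interval excludes zero, mediation is supported; SEM packages such as lavaan automate the same resampling.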

The thesis deals with the spatial structure and chronological development of the early Lengyel Culture (4900-4700 cal BC) settlement at the site of Svodín - Busahegy in the Nové Zámky district, SW Slovakia. The approach is based on the settlement area theory and, more generally, on the theory of artefacts. The evidence used consists of finds from 1756 features, of which 679 belong to the Lengyel Culture and 184 are burials. The work includes a comprehensive electronic catalogue containing all relevant find documentation, localization plans of the trenches and features, and plans of the archaeological structures belonging to different phases of the Lengyel Culture settlement. The methodical chapters describe a process for the direct digitization of ceramic fragments by laser scanning of profiles and the automated reconstruction of ceramic shapes. The elaboration of the chronology is based on the observed stratigraphic relations and a typological analysis of the finds. Morphometric analysis and multivariate statistical methods were applied for the computerized typological classification of 1992 profiles of ceramic vessels. Find units were assigned to chronological phases based on a stochastic model which integrates stratigraphic and typo-chronological data. A set of solutions was computed using a hybrid of particle swarm optimisation and a genetic algorithm, analysed by Markov chain Monte Carlo simulation, and interpreted in terms of Bayesian statistics. Based on the spatial distribution of archaeological structures, three residential, three funerary, and one rondel component have been identified in the settlement area. The final solution of the chronological model divides the settlement into seven phases: three pre-rondel phases, two phases in which, subsequently, the smaller and the larger rondels were built, and two post-rondel phases. A spatial differentiation of the settlement area based on social aspects was observed, as well as a possible association of the localization of the rondels with previous residential structures. In addition to burials in the vicinity of houses for the entire duration of the settlement, we also observe a separate burial area in the pre-rondel period. The typological composition of the pottery shows a strong influence from the area of the Tisza culture, especially in the pre-rondel period. The usability of seriation and multivariate statistical methods for the chronological ordering of find units was critically assessed based on an analysis of the typo-chronological development of grave inventories.

"Novel computational and statistical prediction methods such as the support vector machine are becoming increasingly popular in remote-sensing applications and need to be compared to more traditional approaches like maximum-likelihood... more

"Novel computational and statistical prediction methods such
as the support vector machine are becoming increasingly popular
in remote-sensing applications and need to be compared
to more traditional approaches like maximum-likelihood classification.
However, the accuracy assessment of such predictive
models in a spatial context needs to account for the
presence of spatial autocorrelation in geospatial data by using
spatial cross-validation and bootstrap strategies instead of
their now more widely used non-spatial equivalent. These
spatial resampling-based estimation procedures were therefore
implemented in a new package ‘sperrorest’ for the opensource
statistical data analysis software R."
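
To make the contrast concrete, here is a base-R sketch of spatially blocked cross-validation, the general idea behind the package; it uses k-means clusters of the coordinates as folds and simulated data, and deliberately does not mimic the sperrorest API:

## Spatial (k-means block) cross-validation versus random folds, the idea only.
set.seed(1)
n   <- 500
xy  <- matrix(runif(2 * n), ncol = 2)              # point coordinates
d   <- data.frame(x1 = xy[, 1], x2 = xy[, 2])
d$y <- d$x1 + 0.5 * sin(10 * d$x2) + rnorm(n, sd = 0.2)

fold <- kmeans(xy, centers = 5)$cluster            # spatially compact folds
rmse <- sapply(1:5, function(k) {
  fit  <- lm(y ~ x1 + x2, data = d[fold != k, ])   # train off-block
  pred <- predict(fit, d[fold == k, ])             # predict the held-out block
  sqrt(mean((d$y[fold == k] - pred)^2))
})
mean(rmse)  # spatial CV error; random folds would typically look more optimistic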

For a long time, one of my dreams was to describe the nature of uncertainty axiomatically, and it looks like I have finally done it in my co∼eventum mechanics! Now it remains for me to explain the co∼eventum mechanics to everyone in the most approachable way. This is what I am trying to do in this work. The co∼eventum mechanics is another name for the co∼event theory, i.e., for the theory of experience and chance which I axiomatized in 2016 [1, 2]. In my opinion, this name best reflects the co∼event-based idea of the new dual theory of uncertainty, which combines probability theory, as a theory of chance, with its dual half, believability theory, as a theory of experience. In addition, I like that this new name indicates a direct connection between the co∼event theory and quantum mechanics, which is intended for the physical explanation and description of the conflict between quantum observers and quantum observations [4]. Since my theory of uncertainty satisfies the Kolmogorov axioms of probability theory, to explain the co∼eventum mechanics I will follow a way analogous to the already tested one, which explains probability theory as a theory of chance describing the results of a random experiment. The simplest example of a random experiment in probability theory is "tossing a coin". Therefore, I decided to use this simplest random experiment itself, as well as two analogies of it, "flipping a coin" and "spinning a coin", to explain the co∼eventum mechanics, which describes the results of a combined experienced random experiment. I would like to resort to the "coin-based" analogy usual for probability theory to explain (first of all for myself) the logic of the co∼eventum mechanics as a logic of experience and chance. Of course, this analogy may seem strange if not crazy. But I did not come up with a better way of tying the explanations of the logic of the co∼eventum mechanics to the coin-based explanations commonly used in probability theory: a simple visual "coin-based" model that clarifies what occurs as a result of a combined experienced random experiment, in which the experience of an observer faces the chance of an observation. I hope this analogy can be useful not only for me in understanding the co∼eventum mechanics.

You yourself, or what is the same, your experience, is such a ``coin'' that, while you are not being questioned, rotates all the time in ``free flight''. And only when you answer the question does the ``coin'' fall on one of its sides, ``Yes'' or ``No'', with the believability that your experience tells you.

Controlled experiments with chess engines may be used to evaluate the effectiveness of alternative continuation lines of chess openings. The proposed methodology is demonstrated by evaluating a selection of continuations by White after the Sicilian Defense Najdorf Variation has been played by Black. The results suggest that the nine continuations tested represent a wide range of effectiveness that is mostly consistent with expert opinion.
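
As a hedged illustration of how such engine experiments can be summarized, the mean score of one continuation (1 = win, 0.5 = draw, 0 = loss) can be given a bootstrap confidence interval in R; the game results below are invented:

## Bootstrap CI for a continuation's expected score over 100 engine games.
set.seed(1)
scores <- sample(c(1, 0.5, 0), 100, replace = TRUE,
                 prob = c(0.35, 0.40, 0.25))       # made-up game outcomes
boots  <- replicate(5000, mean(sample(scores, replace = TRUE)))
quantile(boots, c(0.025, 0.975))                   # 95% CI on expected score

Non-overlapping intervals for two continuations would support a real difference in effectiveness.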

The sampling variance or relative variance of a survey estimator can be related to its expected value by a mathematical relationship called a generalized variance function (GVF). In the paper, the results of precision estimation using generalized variance functions for various income variables from the Polish Household Budget Survey for counties (NUTS 4) are presented. An attempt was made to compare this method with other precision-estimation methods. The starting point was the estimation of Balanced Repeated Replication (BRR) variances, or bootstrap variances where BRR was not applicable. To evaluate the GVF model, a hyperbolic function was used. The computation was done using WesVar and SPSS software, as well as special procedures prepared for the R environment. The consistency of the estimates for counties was also assessed by means of small area models.
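
A minimal R sketch of the GVF step, assuming the hyperbolic form relvar = a + b/estimate and made-up estimates with their replication-based relative variances:

## Fit a hyperbolic generalized variance function to (estimate, relvar) pairs.
est    <- c(1200, 2500, 4000, 8000, 15000, 30000)   # invented income estimates
relvar <- c(0.020, 0.011, 0.0075, 0.0042, 0.0026, 0.0015)
gvf    <- lm(relvar ~ I(1 / est))                   # relvar = a + b/est
coef(gvf)
predict(gvf, data.frame(est = 10000))               # smoothed relative variance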

In the presentation, the authors give some empirical results for Gini coefficient estimation at the regional level on the basis of the Polish Household Budget Survey. The direct estimation was done using the ineq package for the R environment. The precision of the direct estimates was calculated using the bootstrap technique. Small area models for the Gini coefficient are also presented. To obtain model-based estimates of the Gini ratio for regions, the EB and EBLUP techniques were applied; for EBLUP estimation, the sae package for the R environment was used.
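
The direct-estimation step can be sketched in a few lines of R on simulated incomes, using a hand-rolled sample Gini (the ineq package provides an equivalent Gini() function) and a bootstrap standard error:

## Direct Gini estimate with bootstrap precision, on synthetic incomes.
set.seed(1)
income <- rlnorm(1000, meanlog = 9, sdlog = 0.7)

gini <- function(x) {                 # standard sample Gini formula
  x <- sort(x); n <- length(x)
  2 * sum(x * seq_len(n)) / (n * sum(x)) - (n + 1) / n
}

boots <- replicate(1000, gini(sample(income, replace = TRUE)))
c(gini = gini(income), se = sd(boots))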

The author presents a hierarchical Bayesian estimation method for estimating various income variables on the basis of household budget surveys and the POLTAX tax register. Calculations were made for the case where the a priori values of the hyperparameters used to construct the conditional probability in the model are approximately known. The author compares the efficiency of the resulting estimates with other hierarchical small area estimation methods, including EBLUP-type estimators. The two techniques gave comparable precision for the estimated parameters.

Population: In research methodology, the word "population" is widely used to denote the group or set of objects that are the target of a study. A research population is therefore the whole (universe) of the objects of research, which may be people, animals, plants, air, phenomena, values, events, attitudes, and so on, so that these objects can serve as sources of research data. By how the data source is determined, populations are grouped as: (a) a finite population, i.e., one whose data source has quantitatively clear boundaries; (b) an infinite population, i.e., one whose data source has boundaries that cannot be determined quantitatively. By the complexity of the objects, populations can be distinguished as: (a) a homogeneous population, in which all individual members have relatively identical properties; (b) a heterogeneous population, in which the individual members have distinct individual properties that differentiate one member from another. The stages of statistical decision-making can be stated as: sampling, estimation of population parameters, and testing of population parameters. Tests of a population's measure of central tendency can be classified into two groups: (a) when the assumption of a normal distribution is met, the test concerns the population mean, and the test statistic is t when the population variance is unknown and z when it is known; (b) when the assumption of normality is not met, the test concerns the median, and the test statistic is the sign test or the Wilcoxon test.
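
The two testing situations in the last paragraph map directly onto base R; a small illustration with simulated samples:

## Center-of-population tests: t-test under normality, Wilcoxon otherwise.
set.seed(1)
x <- rnorm(30, mean = 52, sd = 8)   # roughly normal sample
t.test(x, mu = 50)                  # H0: population mean equals 50

y <- rexp(30, rate = 1 / 50)        # skewed sample, normality doubtful
wilcox.test(y, mu = 50)             # H0: population median equals 50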

The bootstrap method is examined through a practical, data-driven approach that illustrates its applications alongside other resampling methods. These methods provide a powerful approach to statistical data analysis, as they have more general applicability than standard parametric methods. The bootstrap provides accurate measurements for the evaluation of an estimator: it allows the estimator's sampling distribution to be approximated. In the case of independent observations, it can be implemented by creating a number of new samples from the observed data, each drawn randomly with replacement. This article begins with a description of bootstrap methods and shows how they are related to other resampling methods, while examining a wide variety of applications. The article deals with the generation of bootstrap replicates by computational methods, which enable the calculation of the variance, the mean, quantiles, and the coefficient of variation; the evaluation of the standard error; and the BCa confidence interval. It also examines the application of these methods to a wide range of problems, the assumptions and hypotheses that arise, and the associated evaluation problems. The bootstrap method uses statistical software such as R, S-Plus, or SPSS to compute many values across different samples. In this article we have used the statistical software R, which is free, is based on the S language, and provides implementations of a wide range of statistical functions. The paper presents experimental results for a population studied with respect to the coefficient of variation using the bootstrap method, emphasizing the implementation in the programming language, and compares the conclusions with those drawn using standard statistical methods.
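
A short R illustration of the quantities the article discusses, using the boot package on simulated data: a bootstrap of the coefficient of variation with its standard error and with percentile and BCa intervals:

## Bootstrap SE and BCa interval for the coefficient of variation.
library(boot)
set.seed(1)
x  <- rgamma(50, shape = 2, rate = 0.1)         # synthetic measurements
cv <- function(d, i) sd(d[i]) / mean(d[i])      # statistic(data, indices)
b  <- boot(x, cv, R = 2000)
sd(b$t)                                         # bootstrap standard error
boot.ci(b, type = c("perc", "bca"))             # percentile and BCa intervals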

There are many ways to bootstrap data for multiple comparisons procedures. Methods described here include (i) the bootstrap (parametric and nonparametric) as a generalization of classical normal-based MaxT methods, (ii) the bootstrap as an approximation to exact permutation methods, (iii) the bootstrap as a generator of realistic null data sets, and (iv) the bootstrap as a generator of realistic non-null data sets. Resampling of MinP versus MaxT is discussed, and the use of the bootstrap for closed testing is also presented. Applications to biopharmaceutical statistics are given.
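
As a rough sketch of variant (i), a nonparametric bootstrap MaxT adjustment for several one-sample mean tests fits in a few lines of R; centering the columns approximates the complete null hypothesis, and the data are simulated:

## Bootstrap MaxT: FWER-adjusted p-values for m simultaneous mean tests.
set.seed(1)
n <- 40; m <- 10
X <- matrix(rnorm(n * m), n, m)
X[, 1] <- X[, 1] + 0.8                          # one genuine effect

tstat <- function(M) apply(M, 2, function(v) mean(v) / (sd(v) / sqrt(length(v))))
t_obs <- tstat(X)
Xc    <- scale(X, center = TRUE, scale = FALSE) # center columns: enforce the null

maxT  <- replicate(2000, max(abs(tstat(Xc[sample(n, replace = TRUE), ]))))
p_adj <- sapply(abs(t_obs), function(t) mean(maxT >= t))
round(p_adj, 3)                                 # small only for the first test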

State-space models (SSMs) with Markov switching offer a powerful framework for detecting multiple regimes in time series, analyzing mutual dependence and dynamics within regimes, and assessing transitions between regimes. These models, however, present considerable computational challenges due to the exponential number of possible regime sequences to account for. In addition, the high dimensionality of time series can hinder likelihood-based inference. To address these challenges, novel statistical methods for Markov-switching SSMs are proposed, using maximum likelihood estimation, Expectation-Maximization (EM), and the parametric bootstrap. Solutions are developed for initializing the EM algorithm, accelerating convergence, and conducting inference. These methods, which are ideally suited to massive spatio-temporal data such as brain signals, are evaluated in simulations, and applications to EEG studies of epilepsy and motor imagery are presented.
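
A full Markov-switching SSM is beyond a short example, but the parametric-bootstrap ingredient can be sketched in R on a plain AR(1): refit the model on series simulated from the fitted parameters and read the spread of the refitted estimates:

## Parametric bootstrap of an AR(1) coefficient (stand-in for the SSM case).
set.seed(1)
y   <- arima.sim(list(ar = 0.6), n = 300)
fit <- arima(y, order = c(1, 0, 0))
phi <- coef(fit)["ar1"]

boots <- replicate(500, {
  ystar <- arima.sim(list(ar = phi), n = 300, sd = sqrt(fit$sigma2))
  coef(arima(ystar, order = c(1, 0, 0)))["ar1"]
})
c(estimate = unname(phi), boot_se = sd(boots))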

The book explores the phenomenon of wooden structures in Great Moravian graves at burial sites located near important centres, such as Mikulčice and Pohansko, and in peripheral areas (North Moravia). The objective is to describe and reconstruct grave arrangements and to present a theoretical model of the social and economic relationships of individuals buried in different types of graves, the economic or religious significance of grave arrangements, and the value of this phenomenon in terms of chronology. In addition to the basic categorisation and quantification of the studied phenomenon in relation to other burial rite attributes (using Dell Statistica and IBM SPSS on the analytical level and ArcGIS on the geoinformational level), the work applies new research methods focused on the mathematical testing of theoretical models, in particular modelling using structural equations (IBM SPSS AMOS). Given the nature of the processed data, the modelling process uses non-parametric tests of model quality (bootstrap simulations). The objective is to develop stable theoretical models which can be used to create a meaningful narrative interpretation, and also to discuss the possibilities and limitations of multidimensional reductions of the analytical space, such as factor analysis or principal component analysis, which are used in archaeology very frequently but not always properly. The work concludes with a narrative model of the structure, function (in socio-economic relations), frequency of use, and chronology of wooden structures in Great Moravian graves.

Purpose:
The theory that self-esteem is substantially constructed through social interactions implies that having a stutter could have a negative impact on self-esteem. Specifically, self-esteem during adolescence, a period of life characterized by increased self-consciousness, could be at risk. In addition to studying mean differences between stuttering and non-stuttering adolescents, this article concentrates on the influence of stuttering severity on domain-specific and general self-esteem. Subsequently, we investigate whether the covert processes of negative communication attitudes, experienced stigma, non-disclosure of stuttering, and (mal)adaptive perfectionism mediate the relationship between stuttering severity and self-esteem.

Methods:
Our sample comprised 55 stuttering and 76 non-stuttering adolescents. They were asked to fill in a battery of questionnaires, consisting of: Subjective Screening of Stuttering, Self-Perception Profile for Adolescents, Erickson S-24, Multidimensional Perfectionism Scale, and the Stigmatization and Disclosure in Adolescents Who Stutter Scale.

Results:
SEM (structural equation modeling) analyses showed that stuttering severity negatively influences adolescents' evaluations of social acceptance, school competence, the competence to experience a close friendship, and global self-esteem. Maladaptive perfectionism and especially negative communication attitudes fully mediate the negative influence of stuttering severity on self-esteem. Group comparison showed that the mediation model applies to both stuttering and non-stuttering adolescents.

Conclusion:
We acknowledge the impact of having a stutter on those domains of the self in which social interactions and communication matter most. We then accentuate that negative attitudes about communication situations and excessive worries about saying things in ways they perceive as wrong are important processes to consider with regard to the self-esteem of adolescents who stutter. Moreover, we provide evidence that these covert processes also need to be addressed when helping adolescents who are insecure about their fluency in general.

Research on forecasting is extensive and includes many studies that have tested alternative methods in order to determine which ones are most effective. We review this evidence in order to provide guidelines for forecasting for marketing. The coverage includes intentions, Delphi, role playing, conjoint analysis, judgmental bootstrapping, analogies, extrapolation, rule-based forecasting, expert systems, and econometric methods. We discuss research about which methods are most appropriate to forecast market size, actions of decision makers, market share, sales, and financial outcomes. In general, there is a need for statistical methods that incorporate the manager's domain knowledge. This includes rule-based forecasting, expert systems, and econometric methods. We describe how to choose a forecasting method and provide guidelines for the effective use of forecasts including such procedures as scenarios.

This 10-hour class is intended to give students the basics for solving statistical problems empirically. Talk 1 serves as an introduction to the statistical software R and presents how to calculate basic measures such as the mean, variance, correlation, and Gini index. Talk 2 shows how the central limit theorem and the law of large numbers work empirically. Talk 3 presents point estimates, confidence intervals, and hypothesis tests for the most important parameters. Talk 4 introduces the linear regression model, and Talk 5 the bootstrap world. Talk 5 also presents a simple example of a Markov chain.
All the talks are supported by scripts in the R language.
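
In the spirit of Talk 2, a possible demo script (not necessarily the one used in the class):

## Empirical CLT: means of skewed samples look normal for moderate n.
set.seed(1)
means <- replicate(5000, mean(rexp(40, rate = 1)))  # exponential population
hist(means, breaks = 40, main = "Sample means, n = 40")
c(mean = mean(means), sd = sd(means), theory_sd = 1 / sqrt(40))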

This paper reviews the empirical research on forecasting in marketing. In addition, it presents results from some small scale surveys. We offer a framework for discussing forecasts in the area of marketing, and then review the literature in light of that framework. Particular emphasis is given to a pragmatic interpretation of the literature and findings. Suggestions are made on what research is needed.

The period after the initial development of Neolithic society in Central Europe, known as the Post-LBK era, is marked by an influx of new cultural stimuli from the South and the emergence of formalization in monumental architecture, resulting in cultural diversification while maintaining significant common traits across different regions. An important part of understanding this change is understanding the development of social complexity during the transition. This study addresses the question by examining variations in burial rite coinciding with the age or sex of the deceased or with the spatial distribution of 106 graves from the Lengyel Culture settlement in Svodín, dated around 4800 cal BC. The concept of exceptionality, rather than richness, of burials is introduced. It is based on the composition and spatial distribution of inventories within graves and, contrary to the traditional deductive approach, does not depend on a prior selection of attributes of prestige. Principa...

In order to estimate model parameters in multiple regression models, the resampling methods of bootstrap and jackknife are used. Resampling methods serve as an alternative estimation method to ordinary least squares (OLS), especially when the assumptions about the error term in regression analysis are not met. The data used in the study were taken from 25 advertisements on the Sahibinden.com website, and the price of the Beetle car model is taken as the dependent variable for the multiple linear regression models. The aim is to explain the price variable with the help of the variables fuel, body type, seller, sunroof, windshield, upholstery, age, and engine size. Since categorical variables are involved, dummy variables must be used. First, the model parameters are estimated using OLS and their significance is tested; then the model parameters, the significance of the estimated parameters, the coefficient of determination, the standard error of the model, and 90% confidence intervals are estimated using the bootstrap and jackknife resampling methods, and the results of the three methods are compared. The extent to which the parameter estimates for the explanatory variables generalize to the population is also reviewed with the help of the jackknife resampling method. The seller and upholstery independent variables were found to have a considerable effect on the dependent price variable at a significance level of .10 (p < .10), and the jackknife confirmed this generalization.
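
A condensed R sketch of the two resampling schemes on a toy version of the car-price regression (one predictor and simulated data in place of the eight advertised variables):

## Bootstrap (case resampling) and jackknife SEs for regression coefficients.
set.seed(1)
n   <- 25
age <- sample(5:30, n, replace = TRUE)
d   <- data.frame(age, price = 60 - 1.4 * age + rnorm(n, sd = 4))

cf    <- function(dd) coef(lm(price ~ age, data = dd))
boots <- t(replicate(2000, cf(d[sample(n, replace = TRUE), ])))
jack  <- t(sapply(1:n, function(i) cf(d[-i, ])))      # leave-one-out fits

rbind(ols     = cf(d),
      boot_se = apply(boots, 2, sd),
      jack_se = sqrt((n - 1) / n *
                     colSums(sweep(jack, 2, colMeans(jack))^2)))

A 90% bootstrap interval for each coefficient is then apply(boots, 2, quantile, c(0.05, 0.95)).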

The design of a poverty measure involves the selection of a set of parameters and poverty figures. In most cases the measures are estimated from sample surveys. This raises the question of how conclusive particular poverty comparisons are, subject both to the set of selected parameters (or variations within a plausible range) and to the sample data. This chapter shows how to apply dominance and rank-robustness tests to assess comparisons as poverty cutoffs and other parameters change. It presents the ingredients of statistical inference, including standard errors, confidence intervals, and hypothesis tests. And it discusses how robustness and statistical inference tools can be used together to assert concrete policy conclusions. An appendix presents methods for computing standard errors, including bootstrapped standard errors.

Citation: Alkire, S., Foster, J. E., Seth, S., Santos, M. E., Roche, J. M., and Ballon, P. (2015). Multidimensional Poverty Measurement and Analysis, Oxford: Oxford University Press, ch. 8.
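
A minimal R sketch of the appendix's bootstrapped standard errors for the simplest ingredient, the headcount ratio; the income vector and poverty line are invented:

## Bootstrap SE and percentile CI for a headcount poverty ratio H = P(income < z).
set.seed(1)
income <- rlnorm(2000, meanlog = 7, sdlog = 0.9)
z      <- 800                                   # assumed poverty cutoff
H      <- function(x) mean(x < z)

boots <- replicate(1000, H(sample(income, replace = TRUE)))
c(H = H(income), se = sd(boots), quantile(boots, c(0.025, 0.975)))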

During skill execution, performers have been shown to attend to different aspects of movement, the external effects of one's action, or to other environmental information. A variety of psychological mechanisms have been proposed to account for the differential outcomes when adopting each attentional strategy. However, there is limited information about the extent to which different attentional foci change the workload demands of task performance. To examine this, the current study administered the NASA-Task Load Index following a simulated shooting dual-task. Participants performed the primary shooting task alone (control), and also with a secondary task that directed attention towards an aspect of skill execution (skill-focused) and an unrelated environmental stimulus (extraneous focus). Primary and secondary task performances were significantly greater in the extraneous focus compared to the skill-focused dual-task. Also, workload was significantly lower during the extraneous focus compared to the skill-focused dual-task condition. Further analyses revealed that workload significantly mediated the effects of skill level on performance during the skill-focused and extraneous focus dual-tasks and various subscales of workload (i.e. temporal demand) contributed unique amounts of variance to this relationship. A discussion of the relationship between attention, workload and its subcomponents, skill level, and performance is presented.

The last three decades have seen an increase in tests aimed at measuring an individual's vocabulary level or size. The target words used in these tests are typically sampled from word frequency lists, which are in turn based on language corpora. Conventionally, test developers sample items from frequency bands of 1000 words; different tests employ different sampling ratios. Some have as few as 5 or 10 items representing the underlying population of words, whereas other tests feature a larger number of items, such as 24, 30, or 40. However, these sampling-size choices are very rarely supported by clear empirical evidence. Here, using a bootstrapping approach, we illustrate the effect that a sample-size increase has on the confidence intervals of individual learners' vocabulary knowledge estimates, and on the inferences that can safely be made from test scores. We draw on a unique dataset consisting of adult L1 Japanese test takers' performance on two English vocabulary test formats, each featuring 1000 words. Our analysis shows that there are few purposes and settings where as few as 5 to 10 sampled items from a 1000-word frequency band (1K) are sufficient. The use of 30 or more items per 1000-word frequency band, and tests consisting of fewer bands, is recommended.
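
The effect the study quantifies can be previewed with a small R simulation (a synthetic learner who knows 60% of a 1K band; the real analysis resamples actual test responses):

## Width of the bootstrap CI on a learner's hit rate versus items sampled per band.
set.seed(1)
knows <- rbinom(1000, 1, 0.6)            # 1 = word known, for one 1K band

ci_width <- function(k) {
  items <- sample(knows, k)              # a k-item test from the band
  boots <- replicate(2000, mean(sample(items, replace = TRUE)))
  diff(quantile(boots, c(0.025, 0.975)))
}
sapply(c(5, 10, 30, 100), ci_width)      # width shrinks roughly like 1/sqrt(k)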

The logic of uncertainty is neither the logic of experience alone nor the logic of chance alone. It is the logic of experience and chance. Experience and chance are two inseparable poles. These are two dual reflections of one essence, which is called a co∼event. The theory of experience and chance is the theory of co∼events. To study co∼events, it is not enough to study the experience and to study the chance. For this, it is necessary to study experience and chance as a single whole, a co∼event. In other words, it is necessary to study their interaction within a co∼event. The new co∼event axiomatics and the theory of co∼events that follows from it were created precisely for these purposes. In this work, I am going to demonstrate the effectiveness of the new theory of co∼events in studying the logic of uncertainty. I will do this by the example of a co∼event splitting of the logic of the Bayesian scheme, which has a long history of fierce debates between Bayesians and frequentists. I hope the logic of the theory of experience and chance will make its modest contribution to these old dual debates.

Keywords: Eventology, event, probability, probability theory, Kolmogorov’s axiomatics, experience, chance, cause, consequence, co∼event, set of co∼events, bra-event, set of bra-events, ket-event, set of ket-events, believability, certainty, believability theory, certainty theory, theory of co∼events, theory of experience and chance, co∼event dualism, co∼event axiomatics, logic of uncertainty, logic of experience and chance, logic of cause and consequence, logic of the past and the future, Bayesian scheme.

Research on modeling is becoming popular nowadays; several kinds of analysis are used for modeling, and one of them is applied multiple linear regression (MLR). To obtain bootstrap, robust, and fuzzy multiple linear regressions, a researcher should be aware of the correct method of statistical analysis in order to obtain an improved result. The main idea of bootstrapping is to approximate the entire sampling distribution of some estimator, which is achieved by resampling from the original sample. In this paper, we emphasize combined modeling using the bootstrap, robust, and fuzzy regression methodologies. An algorithm for the combined method is given in the SAS language. We also provide a technical example of the application of the discussed method using the SAS software. The output of the analysis is visualized and discussed in detail.

A model is a statement of reality or its approximation. Most phenomena in the social sciences are extremely complex. With a model we simplify reality and focus on a manageable number of factors. It is impossible to completely understand why consumers default and to identify all the factors influencing a customer's default behavior. The bank manager sets up a statistical model that relates a customer's default behavior to only two important factors, income and education. There are surely thousands of other variables that may influence default behavior. This article describes the fundamentals of statistical model building. We begin our discussion with the managerial justification for building a statistical model. Then we discuss three statistical issues of prime importance to database marketers: model/variable selection, treatment of missing data, and evaluation of the model using the bootstrap method.
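
For the bootstrap evaluation step, one standard recipe is optimism correction: refit the model on resamples and subtract the average gap between resample accuracy and original-sample accuracy. A hedged R sketch with the text's two hypothetical predictors:

## Optimism-corrected accuracy of a default-risk logistic model via the bootstrap.
set.seed(1)
n <- 500
d <- data.frame(income = rnorm(n), education = rnorm(n))
d$default <- rbinom(n, 1, plogis(-1 - 0.8 * d$income - 0.5 * d$education))

acc <- function(fit, dd)
  mean((predict(fit, dd, type = "response") > 0.5) == dd$default)

app <- acc(glm(default ~ income + education, binomial, data = d), d)
opt <- mean(replicate(200, {
  b   <- d[sample(n, replace = TRUE), ]
  fit <- glm(default ~ income + education, binomial, data = b)
  acc(fit, b) - acc(fit, d)              # apparent minus out-of-sample gap
}))
c(apparent = app, corrected = app - opt)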

Vocabulary’s relationship to reading proficiency is frequently cited as a justification for the assessment of L2 written receptive vocabulary knowledge. However, to date, there has been relatively little research regarding which modalities of vocabulary knowledge have the strongest correlations to reading proficiency, and observed differences have often been statistically non-significant. The present research employs a bootstrapping approach to reach a clearer understanding of relationships between various modalities of vocabulary knowledge to reading proficiency. Test-takers (N = 103) answered 1000 vocabulary test items spanning the third 1000 most frequent English words in the New General Service List corpus (Browne, Culligan, & Phillips, 2013). Items were answered under four modalities: Yes/No checklists, form recall, meaning recall, and meaning recognition. These pools of test items were then sampled with replacement to create 1000 simulated tests ranging in length from five to 200 items and the results were correlated to the Test of English for International Communication (TOEIC®) Reading scores. For all examined test lengths, meaning-recall vocabulary tests had the highest average correlations to reading proficiency, followed by form-recall vocabulary tests. The results indicated that tests of vocabulary recall are stronger predictors of reading proficiency than tests of vocabulary recognition, despite the theoretically closer relationship of vocabulary recognition to reading.
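
The simulation design can be sketched in R, with synthetic item responses and a synthetic proficiency score standing in for the real test data and TOEIC scores:

## Average correlation of simulated k-item tests with a proficiency measure.
set.seed(1)
N       <- 103
ability <- rnorm(N)
pool    <- matrix(rbinom(N * 1000, 1, plogis(ability)), nrow = N)  # item responses

avg_cor <- function(k, B = 500) {
  mean(replicate(B, {
    test <- pool[, sample(1000, k, replace = TRUE)]   # resample items into a test
    cor(rowMeans(test), ability)
  }))
}
sapply(c(5, 30, 100, 200), avg_cor)       # correlation climbs with test length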

Zainuddin, N.H., Lola, M.S., Djauhari, M.A., Yusof, F., Ramlee, M.N.A., Deraman, A., Ibrahim, Y., Abdullah, M.T.

Abstract
Hybrid models such as the Artificial Neural Network–Autoregressive Integrated Moving Average (ANN–ARIMA) model are widely used in forecasting. However, inaccuracies and inefficiency remain in evidence. To yield an ANN–ARIMA with a higher degree of accuracy, efficiency, and precision, the bootstrap and the double bootstrap methods are commonly used as alternative methods, through the reconstruction of the ANN–ARIMA standard error. Unfortunately, these methods have not been applied in time series-based forecasting models. The aims of this study are twofold. The first is to propose hybridizations of the bootstrap model and of the double bootstrap model, called Bootstrap Artificial Neural Network–Autoregressive Integrated Moving Average (B-ANN–ARIMA) and Double Bootstrap Artificial Neural Network–Autoregressive Integrated Moving Average (DB-ANN–ARIMA), respectively. The second is to investigate the performance of these proposed models by comparing them with ARIMA, ANN, and ANN–ARIMA. Our investigation is based on three well-known real datasets: Wolf's sunspot data, Canadian lynx data, and Malaysian ringgit/United States dollar exchange rate data. Statistical analysis of SSE, MSE, RMSE, MAE, MAPE, and VAF is then conducted to verify that the proposed models are better than the previous ARIMA, ANN, and ANN–ARIMA models. The empirical results show that, compared with the ARIMA, ANN, and ANN–ARIMA models, the proposed models generate smaller values of SSE, MSE, RMSE, MAE, MAPE, and VAF for both training and testing datasets; their forecast values are closer to the actual values. We therefore conclude that the proposed models can be used to generate forecasts with a higher degree of accuracy, efficiency, and precision for time series.
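
The bootstrap ingredient of these hybrids, isolated and simplified: a residual bootstrap around a plain ARIMA forecast in R (the paper wraps the analogous resampling around the ANN–ARIMA hybrid):

## Residual-bootstrap standard error for a one-step ARIMA forecast.
set.seed(1)
y   <- as.numeric(arima.sim(list(ar = 0.7), n = 200)) + 10
fit <- arima(y, order = c(1, 0, 0))
res <- residuals(fit) - mean(residuals(fit))   # centered residuals

fc_boot <- replicate(300, {
  e     <- sample(res, length(y), replace = TRUE)
  ystar <- filter(e, coef(fit)["ar1"], method = "recursive") + coef(fit)["intercept"]
  predict(arima(as.numeric(ystar), order = c(1, 0, 0)), n.ahead = 1)$pred
})
c(forecast = predict(fit, n.ahead = 1)$pred, boot_se = sd(fc_boot))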

Eventology of multivariate statistics
Eventology and mathematical eventology
Philosophical eventology and philosophy of probability
Practical eventology
Eventology of safety
Eventological economics and psychology
Mathematics in the humanities, socio-economic and natural sciences
Financial and actuarial mathematics
Multivariate statistical analysis
Multivariate complex analysis
Decision-making under risk and uncertainty
Risk measurement and risk models
Theory of fuzzy events and generalized theory of uncertainty
System analysis and events management

EEC'2016 ~ workshop on axiomatizing experience and chance, and Hilbert's sixth problem

With topics ranging from quantum physics, probability, and believability to economics, sociology, and psychology, the workshop is intended for an interdisciplinary discussion of mathematical theories of experience and chance. Topics of discussion include results, thoughts, and ideas on the axiomatization of the eventological theory of experience and chance in the framework of the solution of Hilbert's sixth problem.
Eventology of experience and chance
Believability theory and statistics of experience
Probability theory and statistics of chance
Axiomatizing experience and chance

In many real-world binary classification tasks (e.g., detecting certain objects in images), the available dataset is imbalanced: it has far fewer representatives of one class (the minority class) than of the other. Generally, accurate prediction of the minority class is crucial, but it is hard to achieve since there is not much information about that class. One approach to dealing with this problem is to resample the dataset beforehand, i.e., to add new elements to the dataset or remove existing ones. Resampling can be done in various ways, which raises the problem of choosing the most appropriate one. In this paper we experimentally investigate the impact of resampling on classification accuracy, compare resampling methods, and highlight the key points and difficulties of resampling.
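
The two simplest resampling options usually compared in such studies, random oversampling of the minority class and random undersampling of the majority class, fit in a few lines of R (simulated data):

## Balance a binary dataset by oversampling the minority or undersampling the majority.
set.seed(1)
n <- 1000
x <- rnorm(n)
y <- rbinom(n, 1, plogis(-3 + 2 * x))       # minority class is roughly 10% of cases
d <- data.frame(x, y)
mino <- which(d$y == 1); majo <- which(d$y == 0)

over  <- rbind(d, d[sample(mino, length(majo) - length(mino), replace = TRUE), ])
under <- d[c(mino, sample(majo, length(mino))), ]

sapply(list(raw = d, over = over, under = under),
       function(dd) mean(dd$y))             # class balance before and after

A classifier can then be trained on over or under and compared on an untouched test set, which is essentially the experiment such papers run.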