Introduction to Social Statistics: The Logic of Statistical Reasoning by Thomas Dietz, Linda Kalof

Introduction to Social Statistics: The Logic of Statistical Reasoning

1999

Table of contents

1. Linear models: some historical perspectives
2. Basic elements of linear algebra
3. Basic concepts in matrix algebra
4. The multivariate normal distribution
5. Quadratic forms in normal variables
6. Full rank linear models
7. Less-than-full-rank linear models
8. Balanced linear models
9. The adequacy of Satterthwaite's approximation
10. Unbalanced fixed-effects models
11. Unbalanced random and mixed models
12. Additional topics in linear models
13. Generalized linear models

Readership: All readers interested in regression presented with a mix of theory and practice. The material on which this book is based has been taught in a couple of courses at the University of Florida for about 20 years and the author's skills and experience in doing this are superbly represented in this fine text. The presentation itself leans more toward the theoretical aspects, but there are numerous exercises that reinforce both the theoretical and the practical aspects of regression. (However, no solutions are provided.) "Chapters 11 and 12 can be particularly helpful to graduate students looking for dissertation topics." (Preface) This is an excellent, reliable, and comprehensive text.

Introduction to Statistics for Political and Social Scientists

Course Abstract by: Bernhard Kittel. This outline is for a course I attended. The course provided a concise introduction to the fundamental ideas of descriptive and inferential statistics, with the intention of helping participants understand and apply the logic of quantitative analysis in political and social science. It covered the basic concepts of statistics, probability, and test theory. Methods of bivariate analysis were discussed for nominal, ordinal, and interval-scaled variables, and the basic ideas of linear regression analysis were introduced. The course also introduced students to the statistical system ‘R’ as part of the practical application of the material covered. For more information, please refer to the attached PDF document from the ECPR and the lecturer, which contains the course syllabus information provided on the course website. No copyright infringement intended.
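The abstract mentions bivariate analyses for nominal, ordinal, and interval-scaled variables alongside simple linear regression. As a minimal illustrative sketch (not taken from the course materials, which used R; the data here are synthetic), the same ideas can be expressed in Python with scipy:

```python
# Illustrative sketch (not from the course): basic bivariate analyses for
# nominal, ordinal, and interval-scaled variables, using synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Nominal vs. nominal: chi-square test of independence on a 2x3 contingency table.
table = np.array([[20, 35, 25],
                  [30, 25, 15]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)

# Ordinal association: Spearman rank correlation on 5-point ratings.
x_ord = rng.integers(1, 6, size=100)
y_ord = np.clip(x_ord + rng.integers(-1, 2, size=100), 1, 5)
rho, p_rho = stats.spearmanr(x_ord, y_ord)

# Interval-scaled variables: Pearson correlation and simple linear regression.
x = rng.normal(size=100)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=100)
r, p_r = stats.pearsonr(x, y)
slope, intercept, r_value, p_value, stderr = stats.linregress(x, y)

print(f"chi-square: {chi2:.2f} (p={p_chi2:.3f})")
print(f"Spearman rho: {rho:.2f}, Pearson r: {r:.2f}")
print(f"OLS fit: y = {intercept:.2f} + {slope:.2f} x")
```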

Summer Seminar: Philosophy of Statistics Lecture Notes 10: Linear Regression Model: a brief introduction

2019

The Linear Regression (LR) model is arguably the most widely used statistical model in empirical modeling across many disciplines. It provides the exemplar for all regression models as well as several other statistical models referred to as ‘regression-like’ models, some of which will be discussed briefly in this chapter. The primary objective is to discuss the LR model and its associated statistical inference procedures. Special attention is paid to the model assumptions and how they relate to the sampling distributions of the statistics of interest. The main lesson of this chapter is that when any of the probabilistic assumptions of the LR model are invalid for data z0 := {(x_t, y_t), t = 1, 2, ..., n}, inferences based on it will be unreliable. The unreliability of inference will often stem from inconsistent estimators and sizeable discrepancies between actual and nominal error probabilities induced by statistical misspecification.
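As a minimal sketch of the kind of assumption-checking the notes emphasize (this is not code from the lecture notes; the simulated data and the particular diagnostics are illustrative choices), one can fit an LR model and probe the error assumptions whose failure makes nominal error probabilities misleading:

```python
# Minimal sketch: fit a linear regression and probe the probabilistic
# assumptions whose failure makes inference unreliable.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import durbin_watson, jarque_bera

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=1.0, size=n)  # simulated data satisfying the assumptions

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()
resid = fit.resid

# Normality of the errors (Jarque-Bera), constant variance (Breusch-Pagan),
# and no autocorrelation (Durbin-Watson): clear rejections signal
# misspecification, under which nominal error probabilities can be misleading.
jb_stat, jb_p, _, _ = jarque_bera(resid)
bp_stat, bp_p, _, _ = het_breuschpagan(resid, X)
dw = durbin_watson(resid)

print(fit.summary())
print(f"Jarque-Bera p={jb_p:.3f}, Breusch-Pagan p={bp_p:.3f}, Durbin-Watson={dw:.2f}")
```

If the simulated errors are replaced by heteroskedastic or autocorrelated ones, these diagnostics flag the misspecification that would otherwise silently distort the reported standard errors and p-values.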

Regression

The book is a comprehensive manual for the applied researcher who wants to perform data analysis using linear and nonlinear regression and multilevel models. It introduces and demonstrates a wide variety of models, at the same time instructing the reader in how to fit these models using freely available software packages. The book illustrates the concepts by working through scores of real data examples that have arisen in the authors' own applied research, with programming code provided for each one. Topics covered include causal inference, including regression, poststratification, matching, regression discontinuity, and instrumental variables, as well as multilevel logistic regression and missing-data imputation. Practical tips regarding building, fitting, and understanding models are provided throughout.
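As a hedged illustration of the multilevel models such a manual demonstrates (this is not the book's own code or data; it is a minimal varying-intercept sketch on simulated grouped data), a Python version using statsmodels might look like:

```python
# Minimal sketch: a varying-intercept multilevel model fit to simulated
# grouped data, of the kind such manuals demonstrate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_groups, n_per = 30, 20
group = np.repeat(np.arange(n_groups), n_per)
group_effect = rng.normal(scale=0.8, size=n_groups)[group]  # random intercepts
x = rng.normal(size=n_groups * n_per)
y = 1.0 + 0.6 * x + group_effect + rng.normal(scale=1.0, size=n_groups * n_per)

df = pd.DataFrame({"y": y, "x": x, "group": group})

# Random intercept for each group, fixed slope for x.
model = smf.mixedlm("y ~ x", df, groups=df["group"])
result = model.fit()
print(result.summary())
```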

Modern Statistics for the Social and Behavioral Sciences: A Practical Introduction.

This book is a two-semester, graduate-level introduction to statistics. Classic methods are covered, including simple explanations of when and why they can be unsatisfactory. This is followed by current results on how modern robust methods deal with known problems. Many illustrations indicate how to apply both classic and modern robust methods using R. This second edition contains major changes and additions. All of the chapters have been updated; for some there are only minor changes, but for others there are major changes. Many new R functions have been added that reflect recent advances. Briefly, they deal with new methods for comparing groups and studying associations. For example, it is now possible to compare a collection of quantiles in a manner that controls the probability of one or more Type I errors, even when there are tied values. This new approach can provide a deeper understanding of how groups compare, in contrast to methods that focus on a single measure of location. Some new multiple comparison procedures are covered, as well as some new techniques related to testing global hypotheses. New methods for comparing regression lines are covered, and several new and improved methods for dealing with ANCOVA are described. There is even a new method for determining which independent variables are most important in a regression model. There are many ways of estimating which variables are most important; what is new here is the ability to determine the strength of the empirical evidence that the most important variables are correctly identified.
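As a minimal Python sketch of one of the robust ideas described (the book itself works through R functions; the data, trimming proportion, and bootstrap scheme here are illustrative assumptions), a percentile-bootstrap comparison of 20% trimmed means is far less sensitive to the outliers that can distort a comparison of ordinary means:

```python
# Minimal sketch: percentile-bootstrap comparison of 20% trimmed means
# for two groups, a classic robust alternative to comparing ordinary means.
import numpy as np
from scipy.stats import trim_mean

rng = np.random.default_rng(3)
g1 = rng.normal(loc=0.0, scale=1.0, size=40)
g2 = np.concatenate([rng.normal(loc=0.5, scale=1.0, size=36),
                     rng.normal(loc=6.0, scale=1.0, size=4)])  # a few outliers

n_boot = 4000
diffs = np.empty(n_boot)
for b in range(n_boot):
    s1 = rng.choice(g1, size=g1.size, replace=True)
    s2 = rng.choice(g2, size=g2.size, replace=True)
    diffs[b] = trim_mean(s1, 0.2) - trim_mean(s2, 0.2)

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"difference in 20% trimmed means: {trim_mean(g1, 0.2) - trim_mean(g2, 0.2):.2f}")
print(f"95% bootstrap CI: [{lo:.2f}, {hi:.2f}]")
```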

Analysis of multivariate social science data (2nd ed.)

2008

When four of the leading researchers in the field of quantitative social sciences team up to write a book together, you can expect nothing less than a brilliant work. That is what the first edition of “Analysis of Multivariate Social Science Data” from 2002 was, and that’s what the current second edition is. This new edition contains additional chapters on regression analysis, confirmatory factor analysis including structural equation models, and multilevel models. The strength of this book lies in the right mixture of simple mathematical expressions, comprehensive non-mathematical descriptions of various multivariate approaches, numerous interesting real-life data examples (almost half of each chapter is dedicated to examples), and, last but not least, detailed interpretation of the results. As in other Bartholomew books, well-known methods or certain parts of them are, in many cases, presented from a slightly different angle. This makes this book also interesting for experience...

REGRESSION ANALYSIS AND RELEVANCE TO RESEARCH IN SOCIAL SCIENCES

Academic Journal of Accounting and Business Management, 2021

The study seeks to review regression analysis and its relevance to research in the social sciences. It relied on a review of the various regression analyses used in the social sciences and of the significance of regression analysis as a tool for analyzing data sets. The study adopted a systematic exploratory research design, reviewing related articles, journals, and other prior studies relating to regression analysis and its relevance in the social sciences. After a careful systematic and contextual review, the study revealed that regression analysis is significant in providing the coefficient of determination and regression coefficients, which explain the effect of the independent (explanatory) variable on the explained variable, otherwise known as the regressed variable, and give an idea of the predictive value of the regression analysis. Regression analysis provides a practical and powerful tool for statistical analysis that can enhance investment decisions, business projections in manufacturing and production, stock price movement, sales and revenue estimations, and future predictions generally. This review contributes originality through a clear and comprehensive account of the relevance of regression analysis in the social sciences. The study recommends that researchers adopt the required pragmatic and methodological steps when using regression analysis, and that unethical torturing of data be avoided, as this can lead to false results and wrong statistical predictions.
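As a hedged illustration of the quantities the review emphasizes (this is not code or data from the study; the "advertising spend vs. sales" example is an invented stand-in for the business-projection settings mentioned), the coefficient of determination and regression coefficients of a simple OLS fit can be obtained and then used for prediction as follows:

```python
# Minimal sketch: an OLS fit whose coefficient of determination (R^2) and
# coefficients are the quantities the review discusses, applied to synthetic
# "advertising spend vs. sales" data for prediction.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
spend = rng.uniform(10, 100, size=60)                        # explanatory variable
sales = 5.0 + 0.8 * spend + rng.normal(scale=8.0, size=60)   # explained variable

X = sm.add_constant(spend)
fit = sm.OLS(sales, X).fit()

print(f"intercept={fit.params[0]:.2f}, slope={fit.params[1]:.2f}, R^2={fit.rsquared:.2f}")

# Prediction for new levels of spend.
new_spend = np.array([40.0, 75.0])
print(fit.predict(sm.add_constant(new_spend)))
```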

Inferences from regression analysis: are they valid? 1

The focus of this paper is regression analysis. Regression analysis forms the core of a family of techniques including path analysis, structural equation modelling, hierarchical linear modelling, and others. It is perhaps the most-used quantitative method in the social sciences, especially in economics and sociology, but it has made inroads even in fields like anthropology and history. It forms the principal basis for determining the impact of social policies and, as such, has enormous influence on almost all public policy decisions. This paper raises fundamental questions about the utility of regression analysis for causal inference. I argue that the conditions necessary for regression analysis to yield valid causal inferences are so far from ever being met or approximated that such inferences are never valid. This dismal conclusion follows clearly from examining these conditions in the context of three widely studied examples of applied regression analysis: earnings functions, education production functions, and aggregate production functions. Since my field of specialization is the economics of education, I approach each of these examples from that perspective. Nonetheless, I argue that my conclusions are not particular to looking at the impact of education or to these three examples, but that the underlying problems exhibited therein generally hold when making causal inferences from regression analyses about other variables and on other topics.

Overall argument

In some fields, regression analysis is used as an ad hoc empirical exercise for moving beyond simple correlations. Researchers are often interested in the impact of a particular independent variable on a particular dependent variable and use regression analysis as a way of controlling for a few covariates. Despite being common, in many fields such empirical fishing expeditions are frowned upon, because the result of particular interest (the coefficient on the key independent variable under examination) will depend on which covariates are selected as "controls". To the contrary, most fields nowadays teach that one has to be serious about causal modeling in order to use regression analysis for causal inference. Causal models require certain conditions to hold for regression coefficients to be accurate and unbiased estimates of causal impact. While these conditions are often expressed as properties of regression residuals, they may also be expressed as three necessary conditions for the proper specification of a causal model examining a particular dependent variable (or set of dependent variables), the first of which is illustrated in the simulation sketch below:

• All relevant variables are included in the model;

1 I would like to thank Jim Cobbe and an anonymous reviewer for comments on a draft of this paper. I wish to give a special thanks to Sande Milton for his insights and long-term collaboration with me on this topic.
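To make the first condition concrete (this simulation is my own minimal sketch, not the paper's analysis; the variable names and effect sizes are arbitrary assumptions), omitting a relevant variable that is correlated with the regressor of interest biases the estimated coefficient away from the causal effect:

```python
# Minimal sketch: what happens when a relevant variable correlated with the
# regressor of interest is omitted from the model (omitted-variable bias).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 5000
w = rng.normal(size=n)                              # relevant variable (e.g., unobserved ability)
x = 0.7 * w + rng.normal(size=n)                    # regressor of interest, correlated with w
y = 1.0 + 0.5 * x + 2.0 * w + rng.normal(size=n)    # true causal effect of x is 0.5

full = sm.OLS(y, sm.add_constant(np.column_stack([x, w]))).fit()
omitted = sm.OLS(y, sm.add_constant(x)).fit()

# The full model recovers the causal coefficient; the misspecified model does not.
print(f"coefficient on x, w included: {full.params[1]:.2f}")    # roughly 0.50
print(f"coefficient on x, w omitted:  {omitted.params[1]:.2f}")  # biased upward, roughly 1.4
```

The omitted-variable run attributes part of w's effect to x, which is exactly the kind of bias the paper argues is pervasive in earnings functions, education production functions, and aggregate production functions.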