Cover image for Applied univariate, bivariate, and multivariate statistics
Title:
Applied univariate, bivariate, and multivariate statistics
Author:
Denis, Daniel J., 1974-
ISBN:
9781118632239

9781118632314

9781118632338

9781119583004
Physical Description:
1 online resource
Contents:
Intro -- Title Page -- Copyright Page -- Contents -- Preface -- Importance of History -- Hypothesis-Testing and Decision-Making: Type I Versus Type II Errors -- Toward Multilevel Modeling -- Readability of the Book -- Coverage -- Emphasizing Logic Rather Than Arithmetic -- Missing Data -- Review Exercises and Data Analysis -- A Word to Instructors -- Some Conventions on Notation Used in This Book -- Final Thoughts -- Acknowledgments -- About the Companion Website -- 1: Preliminary Considerations -- 1.1 The Philosophical Bases of Knowledge: Rationalistic Versus Empiricist Pursuits -- 1.2 What Is a "Model"? -- 1.3 Social Sciences Versus Hard Sciences -- 1.4 Is Complexity a Good Depiction of Reality? Are Multivariate Methods Useful? -- 1.5 Causality -- 1.6 The Nature of Mathematics: Mathematics As a Representation of Concepts -- 1.7 As a Social Scientist, How Much Mathematics Do You Need to Know? -- 1.8 Statistics and Relativity -- 1.9 Experimental Versus Statistical Control -- 1.10 Statistical Versus Physical Effects -- 1.11 Understanding What "Applied Statistics" Means -- Review Exercises -- 2: Mathematics and Probability Theory -- 2.1 Set Theory -- 2.1.1 Operations on Sets -- 2.1.2 Denoting Unions and Intersections of Many Sets -- 2.1.3 Complement of a Set -- 2.2 Cartesian Product A × B -- 2.3 Sets of Numbers -- 2.4 Set Theory Into Practice: Samples, Populations, and Probability -- 2.5 Probability -- 2.5.1 The Mathematical Theory of Probability -- 2.5.2 Events -- 2.5.3 The Axioms of Probability: And Some of Their Offspring -- 2.5.4 Conditional Probability -- 2.5.5 Mutually Exclusive versus Independent Events -- 2.5.6 More on Mutual Exclusiveness -- 2.6 Interpretations of Probability: Frequentist Versus Subjective -- 2.6.1 Law of Large Numbers -- 2.6.2 Problem with the Law of Large Numbers -- 2.6.3 The Subjective Interpretation of Probability.
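
The set-theory and probability topics listed above (Sections 2.1-2.6) translate directly into short R commands, and R is the software used for demonstrations throughout the book. The sketch below uses invented sets and probabilities, not examples from the text:

  # Set operations (Section 2.1), with hypothetical sets A and B
  A <- c(1, 2, 3, 4)
  B <- c(3, 4, 5, 6)
  union(A, B)       # 1 2 3 4 5 6
  intersect(A, B)   # 3 4
  setdiff(A, B)     # 1 2 (elements of A not in B)

  # Conditional probability P(A|B) = P(A and B) / P(B), with made-up values
  p_A_and_B <- 0.12
  p_B <- 0.40
  p_A_and_B / p_B   # 0.3

  # Law of large numbers (Section 2.6.1): the sample mean of many fair coin
  # flips settles near the true probability of 0.5
  set.seed(1)
  mean(rbinom(100000, size = 1, prob = 0.5))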

2.7 Bayes' Theorem: Inverting Conditional Probabilities -- 2.7.1 Decomposing Bayes' Theorem -- 2.7.2 A Medical Example - Probability of HIV: The Logic of Bayesian Revision -- 2.7.3 Recap of Bayes' Theorem: The Idea of Revising Probability Estimates and Incorporating New Data -- 2.7.4 The Consideration of Base Rates and Other Information: Why Priors Are Important -- 2.7.5 Conditional Probabilities and Temporal Ordering -- 2.8 Statistical Inference -- 2.8.1 Shouldn't the Stakes Matter? -- 2.9 Essential Mathematics: Precalculus, Calculus, and Algebra -- 2.9.1 Polynomials -- 2.9.2 Functions -- 2.9.3 What Is a Mathematical Function? -- 2.9.4 Spotting Functions Graphically: The Vertical-Line Test -- 2.9.5 Limits -- 2.9.6 Why Limits? How Are Limits Useful? -- 2.9.7 Asymptotes -- 2.9.8 Continuity -- 2.9.9 Why Does Continuity Matter? Leaping from Rationalism to Empiricism -- 2.9.10 Differential and Integral Calculus -- 2.9.11 The Derivative as a Limit -- 2.9.12 Derivative of a Linear Function -- 2.9.13 Using Derivatives: Finding Minima and Maxima of Functions -- 2.9.14 The Integral -- 2.9.15 Calculus in R -- 2.9.16 Vectors and Matrices -- 2.9.17 Why Vectors and Matrices? -- 2.9.18 Solving Systems of Linear Equations -- 2.10 Chapter Summary and Highlights -- Review Exercises -- 3: Introductory Statistics -- 3.1 Densities and Distributions -- 3.1.1 Plotting Normal Distributions -- 3.1.2 Binomial Distributions -- 3.1.3 Normal Approximation -- 3.1.4 Joint Probability Densities: Bivariate and Multivariate Distributions -- 3.2 Chi-Square Distributions and Goodness-of-Fit Test -- 3.2.1 Power for Chi-Square Test of Independence -- 3.3 Sensitivity and Specificity -- 3.4 Scales of Measurement: Nominal, Ordinal, Interval, and Ratio -- 3.4.1 Nominal Scale -- 3.4.2 Ordinal Scale -- 3.4.3 Interval Scale -- 3.4.4 Ratio Scale.
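
Sections 2.7 and 2.9.10-2.9.18 above cover Bayes' theorem, basic calculus, and systems of linear equations, all of which base R can compute directly. The book's own worked examples are not reproduced here; the numbers below are hypothetical:

  # Bayes' theorem (Section 2.7): posterior P(D|+) from an assumed prior and test accuracy
  prior <- 0.01   # assumed base rate
  sens  <- 0.95   # P(+ | D)
  spec  <- 0.98   # P(- | not D)
  p_pos <- sens * prior + (1 - spec) * (1 - prior)
  sens * prior / p_pos                              # roughly 0.32: the prior is revised, not settled

  # Calculus in R (Section 2.9.15): symbolic derivative and numerical integral
  D(expression(x^2 + 3 * x), "x")                   # 2 * x + 3
  integrate(function(x) x^2, lower = 0, upper = 1)  # about 0.3333

  # Solving a linear system Ax = b (Section 2.9.18), hypothetical coefficients
  A <- matrix(c(2, 1, 1, 3), nrow = 2, byrow = TRUE)
  b <- c(5, 10)
  solve(A, b)                                       # x = 1, y = 3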

3.5 Mathematical Variables Versus Random Variables -- 3.6 Moments and Expectations -- 3.6.1 Sample and Population Mean Vectors -- 3.7 Estimation and Estimators -- 3.8 Variance -- 3.9 Degrees of Freedom -- 3.10 Skewness and Kurtosis -- 3.11 Sampling Distributions -- 3.11.1 Sampling Distribution of the Mean -- 3.12 Central Limit Theorem -- 3.13 Confidence Intervals -- 3.14 Bootstrap and Resampling Techniques -- 3.15 Likelihood Ratio Tests and Penalized Log-Likelihood Statistics -- 3.16 Akaike's Information Criteria -- 3.17 Covariance and Correlation -- 3.17.1 Covariance and Correlation Matrices -- 3.18 Other Correlation Coefficients -- 3.19 Student's t Distribution -- 3.19.1 t-Tests for One Sample -- 3.19.2 t-Tests for Two Samples -- 3.19.3 Two-Sample t-Tests in R -- 3.20 Statistical Power -- 3.20.1 Visualizing Power -- 3.20.2 Power Estimation Using R and GPower -- 3.20.3 Estimating Sample Size and Power for Independent Samples t-Test -- 3.21 Paired Samples t-Test: Statistical Test for Matched Pairs (Elementary Blocking) Designs -- 3.22 Blocking With Several Conditions -- 3.23 Composite Variables: Linear Combinations -- 3.24 Models in Matrix Form -- 3.25 Graphical Approaches -- 3.25.1 Box-and-Whisker Plots -- 3.26 What Makes a p-Value Small? A Critical Overview and Simple Demonstration of Null Hypothesis Significance Testing -- 3.26.1 Null Hypothesis Significance Testing: A History of Criticism -- 3.26.2 The Makeup of a p-Value: A Brief Recap and Summary -- 3.26.3 The Issue of Standardized Testing: Are Students in Your School Achieving More Than the National Average? -- 3.26.4 Other Test Statistics -- 3.26.5 The Solution -- 3.26.6 Statistical Distance: Cohen's d -- 3.26.7 What Does Cohen's d Actually Tell Us? -- 3.26.8 Why and Where the Significance Test Still Makes Sense -- 3.27 Chapter Summary and Highlights -- Review Exercises.
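
Sections 3.19.3, 3.20.3, and 3.26.6 above refer to two-sample t-tests, power and sample-size estimation, and Cohen's d in R. A minimal sketch on simulated (hypothetical) data using only base R follows; power.t.test() is a base-R stand-in for the GPower calculations the book also demonstrates:

  # Independent-samples t-test (Section 3.19.3) on simulated data
  set.seed(1)
  group1 <- rnorm(30, mean = 100, sd = 15)
  group2 <- rnorm(30, mean = 108, sd = 15)
  t.test(group1, group2, var.equal = TRUE)

  # Cohen's d (Section 3.26.6): mean difference over the pooled standard deviation
  sp <- sqrt((29 * var(group1) + 29 * var(group2)) / 58)
  (mean(group2) - mean(group1)) / sp

  # Sample size per group for 80% power to detect this difference (Section 3.20.3)
  power.t.test(delta = 8, sd = 15, sig.level = 0.05, power = 0.80)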

4: Analysis of Variance: Fixed Effects Models -- 4.1 What Is Analysis of Variance? Fixed Versus Random Effects -- 4.1.1 Small Sample Example: Achievement as a Function of Teacher -- 4.1.2 Is Achievement a Function of Teacher? -- 4.2 How Analysis of Variance Works: A Big Picture Overview -- 4.2.1 Is the Observed Difference Likely? ANOVA as a Comparison (Ratio) of Variances -- 4.3 Logic and Theory of ANOVA: A Deeper Look -- 4.3.1 Independent Samples t-tests versus Analysis of Variance -- 4.3.2 The ANOVA Model: Explaining Variation -- 4.3.3 Breaking Down a Deviation -- 4.3.4 Naming the Deviations -- 4.3.5 The Sums of Squares of ANOVA -- 4.4 From Sums of Squares to Unbiased Variance Estimators: Dividing By Degrees of Freedom -- 4.5 Expected Mean Squares for One-Way Fixed Effects Model: Deriving the F-Ratio -- 4.5.1 Expected Mean Squares Between -- 4.5.2 Expected Mean Squares Within -- 4.6 The Null Hypothesis in ANOVA -- 4.7 Fixed Effects ANOVA: Model Assumptions -- 4.8 A Word on Experimental Design and Randomization -- 4.9 A Preview of the Concept of Nesting -- 4.10 Balanced Versus Unbalanced Data in ANOVA Models -- 4.11 Measures of Association and Effect Size in ANOVA: Measures of Variance Explained -- 4.11.1 Eta-Squared -- 4.11.2 Omega-Squared -- 4.12 The F-Test and the Independent Samples t-Test -- 4.13 Contrasts and Post-Hocs -- 4.13.1 Independence of Contrasts -- 4.13.2 Independent Samples t-Test as a Linear Contrast -- 4.14 Post-Hoc Tests -- 4.14.1 Newman-Keuls and Tukey HSD -- 4.14.2 Tukey HSD -- 4.14.3 Scheffé Test -- 4.14.4 Other Post-Hoc Tests -- 4.14.5 Contrast versus Post-Hoc? Which Should I Be Doing? -- 4.15 Sample Size and Power for ANOVA: Estimation With R and GPower -- 4.15.1 Power for ANOVA in R and GPower -- 4.15.2 Computing f -- 4.16 Fixed Effects One-Way Analysis of Variance in R: Mathematics Achievement As a Function of Teacher.

4.16.1 Evaluating Assumptions -- 4.16.2 Post-Hoc Tests on Teacher -- 4.17 Analysis of Variance Via R's lm -- 4.18 Kruskal-Wallis Test in R -- 4.19 ANOVA in SPSS: Achievement As a Function of Teacher -- 4.20 Chapter Summary and Highlights -- Review Exercises -- 5: Factorial Analysis of Variance: Modeling Interactions -- 5.1 What Is Factorial Analysis of Variance? -- 5.2 Theory of Factorial ANOVA: A Deeper Look -- 5.2.1 Deriving the Model for Two-Way Factorial ANOVA -- 5.2.2 Cell Effects -- 5.2.3 Interaction Effects -- 5.2.4 Cell Effects versus Interaction Effects -- 5.2.5 A Model for the Two-Way Fixed Effects ANOVA -- 5.3 Comparing One-Way ANOVA to Two-Way ANOVA: Cell Effects in Factorial ANOVA Versus Sample Effects in One-Way ANOVA -- 5.4 Partitioning the Sums of Squares for Factorial ANOVA: The Case of Two Factors -- 5.4.1 SS Total: A Measure of Total Variation -- 5.4.2 Model Assumptions: Two-Way Factorial Model -- 5.4.3 Expected Mean Squares for Factorial Design -- 5.4.4 Recap of Expected Mean Squares -- 5.5 Interpreting Main Effects in the Presence of Interactions -- 5.6 Effect Size Measures -- 5.7 Three-Way, Four-Way, and Higher-Order Models -- 5.8 Simple Main Effects -- 5.9 Nested Designs -- 5.9.1 Varieties of Nesting: Nesting of Levels versus Subjects -- 5.10 Achievement As a Function of Teacher and Textbook: Example of Factorial ANOVA in R -- 5.10.1 Comparing Models through AIC -- 5.10.2 Visualizing Main Effects and Interaction Effects Simultaneously -- 5.10.3 Simple Main Effects for Achievement Data: Breaking Down Interaction Effects -- 5.11 Interaction Contrasts -- 5.12 Chapter Summary and Highlights -- Review Exercises -- 6: Introduction to Random Effects and Mixed Models -- 6.1 What Is Random Effects Analysis of Variance? -- 6.2 Theory of Random Effects Models -- 6.3 Estimation in Random Effects Models.
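
Sections 4.16-4.19 and 5.10 above fit one-way, nonparametric, and factorial ANOVA models in R. The sketch below uses a small invented data set; the variable names echo the chapters' teacher/achievement example, but the numbers are made up:

  # One-way fixed effects ANOVA (Section 4.16) on an invented data set
  dat <- data.frame(
    achievement = c(70, 72, 68, 80, 83, 79, 90, 88, 92),
    teacher     = factor(rep(c("t1", "t2", "t3"), each = 3))
  )
  fit <- aov(achievement ~ teacher, data = dat)
  summary(fit)                                     # omnibus F-test for the teacher effect
  TukeyHSD(fit)                                    # post-hoc pairwise comparisons (Section 4.16.2)

  anova(lm(achievement ~ teacher, data = dat))     # same model via lm (Section 4.17)
  kruskal.test(achievement ~ teacher, data = dat)  # nonparametric alternative (Section 4.18)

  # A two-way factorial with interaction (Section 5.10) would extend the formula to
  # aov(achievement ~ teacher * textbook, data = ...), given a second (hypothetical) factor.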
Summary:
A clear and efficient balance between theory and application of statistical modeling techniques in the social and behavioral sciences.

Written as a general and accessible introduction, Applied Univariate, Bivariate, and Multivariate Statistics provides an overview of statistical modeling techniques used in fields in the social and behavioral sciences. Blending statistical theory and methodology, the book surveys both the technical and theoretical aspects of good data analysis. Featuring applied resources at various levels, the book includes statistical techniques such as t-tests and correlation as well as more advanced procedures such as MANOVA, factor analysis, and structural equation modeling. To promote a more in-depth interpretation of statistical techniques across the sciences, the book surveys some of the technical arguments underlying formulas and equations. Applied Univariate, Bivariate, and Multivariate Statistics also features:
- Demonstrations of statistical techniques using software packages such as R and SPSS
- Examples of hypothetical and real data with subsequent statistical analyses
- Historical and philosophical insights into many of the techniques used in modern social science
- A companion website that includes further instructional details, additional data sets, solutions to selected exercises, and multiple programming options

An ideal textbook for courses in statistics and methodology at the upper-undergraduate and graduate levels in psychology, political science, biology, sociology, education, economics, communications, law, and survey research, Applied Univariate, Bivariate, and Multivariate Statistics is also a useful reference for practitioners and researchers in their field of application.

DANIEL J. DENIS, PhD, is Associate Professor of Quantitative Psychology at the University of Montana, where he teaches courses in univariate and multivariate statistics. He has published a number of articles in peer-reviewed journals and has served as consultant to researchers and practitioners in a variety of fields.
Notes:
John Wiley and Sons
Holds:
Copies:

On Shelf:

Library    Material Type    Accession Number    Call Number    Status/Due Date    Material Hold
           E-Book           593076-1001         QA279

On Order