Statistics for linguistics : a step-by-step guide for novices / David Eddington
Saved in:

Main author: | Eddington, David |
---|---|
Format: | Electronic eBook |
Language: | English |
Published: | Cambridge : Cambridge Scholars Publishing, 2015. |
Subjects: | Linguistics -- Statistical methods |
Online access: | Full text |
Summary: | Linguists with no background in statistics will find this book to be an accessible introduction to statistics. Concepts are explained in non-technical terms, and mathematical formulas are kept to a minimum. The book incorporates SPSS, which is a statistics package that incorporates a point and click interface rather than complex line-commands. Step-by-step instructions are provided for some of the most widely used statistics in linguistics. At the same time, the concepts behind each procedure are also explained. Traditional analyses such as ANOVA and t-tests are included in the book, but lingu. |
Description: | 1 online resource |
Bibliography: | Includes bibliographical references (pages 165-167) and index. |
ISBN: | 9781443887762; 1443887765 |
Internal format (MARC)
LEADER | 00000cam a22000003i 4500 | ||
---|---|---|---|
001 | ZDB-4-EBA-ocn935272211 | ||
003 | OCoLC | ||
005 | 20241004212047.0 | ||
006 | m o d | ||
007 | cr |n||||||||| | ||
008 | 160119s2015 enk ob 001 0 eng d | ||
040 | |a YDXCP |b eng |e pn |c YDXCP |d IDEBK |d N$T |d OCLCF |d EBLCP |d OCLCQ |d CCO |d MERUC |d OCLCQ |d ZCU |d U3W |d ICG |d OCLCQ |d DKC |d OCLCQ |d UKAHL |d OCLCQ |d OCLCO |d OCLCQ |d OCLCO |d OCLCL |d NUI |d SXB | ||
019 | |a 951223659 | ||
020 | |a 9781443887762 |q (electronic bk.) | ||
020 | |a 1443887765 |q (electronic bk.) | ||
020 | |z 9781443876384 | ||
020 | |z 1443876380 | ||
035 | |a (OCoLC)935272211 |z (OCoLC)951223659 | ||
037 | |a 888189 |b MIL | ||
050 | 4 | |a P138.5 | |
072 | 7 | |a LAN |x 009010 |2 bisacsh | |
082 | 7 | |a 410.1/51 |2 23 | |
049 | |a MAIN | ||
100 | 1 | |a Eddington, David, |e author. | |
245 | 1 | 0 | |a Statistics for linguistics : |b a step-by-step guide for novices / |c David Eddington. |
264 | 1 | |a Cambridge : |b Cambridge Scholars Publishing, |c 2015. | |
300 | |a 1 online resource | ||
336 | |a text |b txt |2 rdacontent | ||
337 | |a computer |b c |2 rdamedia | ||
338 | |a online resource |b cr |2 rdacarrier | ||
520 | |a Linguists with no background in statistics will find this book to be an accessible introduction to statistics. Concepts are explained in non-technical terms, and mathematical formulas are kept to a minimum. The book incorporates SPSS, which is a statistics package that incorporates a point and click interface rather than complex line-commands. Step-by-step instructions are provided for some of the most widely used statistics in linguistics. At the same time, the concepts behind each procedure are also explained. Traditional analyses such as ANOVA and t-tests are included in the book, but lingu. | ||
504 | |a Includes bibliographical references (pages 165-167) and index. | ||
505 | 0 | |a Intro -- Contents -- Acknowledgements -- Introduction -- Chapter One -- 1.1 Opening an Existing File in SPSS Software -- 1.2 The Statistics Viewer and Data Editor Windows -- 1.3 Specifying Information about the Data in the Window -- 1.4 Saving a File in SPSS -- 1.5 Sorting the Data -- 1.6 Adding, Deleting, Copying, Pasting, and Cutting Rows and Colu -- Chapter Two -- 2.1 Types of Data -- 2.1.1 Categorical data -- 2.1.2 Ordinal data -- 2.1.3 Continuous data -- 2.2 Variables -- 2.2.1 Independent and dependent variables -- 2.2.2 Control variables -- 2.2.3 Confounding variables -- 2.3 Descriptive Statistics -- 2.3.1 Central tendency: Mean -- 2.3.2 Central tendency: Median -- 2.3.3 Central tendency: Mode -- 2.3.4 Dispersion -- 2.4 Using SPSS to Calculate Descriptive Statistics -- 2.5 Visualizing the Data -- 2.5.1 Histogram -- 2.5.2 Boxplot -- 2.6 Normal Distribution -- 2.6.1 Standard deviation and variance -- 2.6.2 Skew -- 2.6.3 Q-Q plot -- 2.6.4 Kurtosis -- 2.6.5 Tests of normal distribution -- 2.7 Reporting Descriptive Statistics -- 2.8 Inferential Statistics -- 2.8.1 Null hypothesis testing -- 2.8.2 Statistical significance -- 2.8.3 Limitations of p values and null hypothesis testing -- 2.8.4 Confidence intervals -- 2.8.5 Type I and Type II errors -- 2.9 Using SPSS to Make Boxplots of Confidence Intervals -- 2.10 Hands-On Exercises for Descriptive and Inferential Statistics -- 2.10.1 Descriptive statistics of pretest anxiety levels -- 2.10.2 Variables and hypotheses -- 2.10.2.1 English comparatives -- 2.10.2.2 Morpheme recognition and semantic similarity -- 2.10.2.3 Sentence length in first language acquisition -- 2.10.2.4 Classroom intervention and communication skills -- 2.10.2.5 English syllabification -- Chapter Three -- 3.1 Using SPSS to Generate Scatter Plots -- 3.2 Pearson Correlation Coefficient. | |
505 | 8 | |a 3.3 Using SPSS to Calculate Pearson Correlation -- 3.4 Statistical Significance of a Correlation -- 3.5 One-Tailed or Two? -- 3.6 Reporting the Results of a Correlation -- 3.7 Variance and r2 -- 3.8 Correlation Doesn't Mean Causation -- 3.9 Assumptions of Correlation -- 3.9.1 Continuous data -- 3.9.2 Linear relationship -- 3.9.2.1 Using SPSS to generate graphs for visualizing linearity -- 3.9.2.2 What to do if the data are not linear -- 3.9.3 Normally distributed data -- 3.9.3.1 Using SPSS to generate measures of normal distribution -- 3.9.4 Independence of observations -- 3.9.5 Homoscedasticity -- 3.10 Parametric versus Nonparametric Statistics -- 3.10.1 Disadvantages of nonparametric statistics -- 3.10.2 Advantages of nonparametric statistics -- 3.11 Data Transformation -- 3.11.1 Using SPSS to transform data -- 3.12 Recipe for a Correlation -- 3.13 Hands-On Exercises for Correlation -- 3.13.1 Corpus frequency -- 3.13.2 Sonority -- Chapter Four -- 4.1 Goodness of Fit Chi-Square -- 4.1.1 Standardized residuals and effect size in a goodness of fit chi-square -- 4.1.2 Reporting the results of a goodness of fit chi-square -- 4.1.3 Using SPSS to calculate a goodness of fit chi-square -- 4.2 Chi-Square Test of Independence -- 4.2.1 Effect size and standardized residuals in a chi-square test of independence -- 4.2.2 Reporting the results of a chi-square test of independence -- 4.2.3 Using SPSS to calculate a chi-square test of independence -- 4.3 Assumptions of Chi-Square -- 4.4 Recipe for a Chi-Square -- 4.5 Hands-On Exercises for Chi-Square -- 4.5.1 Judeo-Spanish sibilant voicing -- 4.5.2 /r/ to /R/ in Canadian French -- Chapter Five -- 5.1 Comparing Groups with an Independent T-Test -- 5.1.1 Calculating effect size -- 5.1.2 How to report the results of an independent t-test -- 5.1.3 Using SPSS to perform an independent t-test. | |
505 | 8 | |a 5.1.4 The assumptions of an independent t-test -- 5.1.5 Performing multiple t-tests -- 5.2 Using SPSS to Perform a Mann-Whitney Test -- 5.3 Paired (or Dependent) T-Tests -- 5.3.1 Calculating effect size for a paired t-test -- 5.3.2 Using SPSS to perform a paired t-test -- 5.3.3 How to report the results of a paired t-test -- 5.3.4 Assumptions of a paired t-test -- 5.4 Using SPSS to Perform a Wilcoxon Signed-Rank Test -- 5.4.1 How to report the results of a Wilcoxon signed-rank test -- 5.5 Bootstrapping a T-Test -- 5.6 Recipe for an Independent T-Test -- 5.7 Recipe for a Paired T-Test -- 5.8 Hands-On Activities for T-Tests -- 5.8.1 Word order comprehension by English speakers learning Spanish -- 5.8.2 L2 contact hours during study abroad -- Chapter Six -- 6.1 One-Way ANOVA -- 6.1.1 The results of a one-way ANOVA -- 6.1.1.1 Post hoc analysis -- 6.1.1.2 Effect size with partial eta2 -- 6.1.1.3 Reporting the results of a one-way ANOVA -- 6.1.2 Residuals -- 6.1.3 Assumptions of one-way ANOVA -- 6.1.4 Using SPSS to perform a one-way ANOVA -- 6.1.5 Using an ANOVA or t-test in studies that compare two groups -- 6.2 Welch's ANOVA -- 6.2.1 Reporting the results of a Welch's ANOVA -- 6.2.2 Using SPSS to perform a Welch's one-way ANOVA -- 6.2.3 Using SPSS to perform a Kruskal-Wallis H test -- 6.3 Factorial ANOVA -- 6.3.1 Interactions -- 6.3.2 Reporting the results of a factorial ANOVA -- 6.3.3 Post hoc analysis of a factorial ANOVA -- 6.3.4 Using SPSS to perform a factorial ANOVA -- 6.3.5 Assumptions of factorial ANOVA -- 6.3.6 Using SPSS to perform a nonparametric analysis in place of a factorial ANOVA -- 6.4 Repeated Measures ANOVA -- 6.5 Bootstrapping in ANOVA -- 6.5.1 Using SPSS to perform a one-way ANOVA with bootstrapping -- 6.5.2 Using SPSS to perform a factorial ANOVA with bootstrapping -- 6.6 Recipe for a One-Way ANOVA. | |
505 | 8 | |a 6.7 Recipe for a Factorial ANOVA -- 6.8 Hands-On Exercises for One-Way ANOVA -- 6.8.1 Does language experience affect how well English speakers learning Spanish understand Verb + Subject sentences? -- 6.8.2 Test anxiety in ESL learners -- 6.9 Hands-On Exercise for Factorial ANOVA -- 6.9.1 Vowel fronting in California English -- Chapter Seven -- 7.1 Simple Regression -- 7.1.1 Using SPSS to perform a simple regression -- 7.2 Multiple Linear Regression -- 7.2.1 Running the initial analysis -- 7.2.2 Using SPSS to perform a multiple linear regression -- 7.2.3 Interpreting the outcome of the initial multiple linear regression analysis -- 7.2.4 Standardized coefficients -- 7.2.5 Collinearity -- 7.2.6 Using categorical variables in a multiple regression: Dummy coding -- 7.2.7 Centering variables to make the intercept more interpretable -- 7.2.7.1 Using SPSS to center a variable -- 7.2.8 Assumptions of multiple regression -- 7.2.8.1 Independence, number, and types of variables -- 7.2.8.2 Normal distribution of the residuals -- 7.2.8.3 Homoscedasticity of the residuals -- 7.2.8.4 Linearity of the data -- 7.2.9 Addressing violations of statistical assumptions -- 7.2.9.1 Deleting and winsorizing outliers -- 7.2.9.2 Identifying outliers -- 7.2.9.3 Using SPSS to identify outliers -- 7.2.10 Reporting the results of a multiple linear regression -- 7.2.11 Types of multiple linear regression -- 7.2.11.1 Simultaneous regression -- 7.2.11.2 Stepwise or stepping up/down regression -- 7.2.11.3 Hierarchical regression -- 7.2.12 Using SPSS to perform a hierarchical multiple linear regression -- 7.2.13 Finding the most parsimonious regression model -- 7.2.14 Contrast coding and coding interactions -- 7.2.15 Multiple regression with several categorical variables -- 7.2.16 Using SPSS to carry out a bootstrapped multiple regression -- 7.2.17 Recipe for a multiple regression. | |
505 | 8 | |a 7.2.18 Hands-on exercises for multiple linear regression -- 7.2.18.1 Reaction time -- 7.2.18.2 Mental calculation -- Chapter Eight -- 8.1 Fixed Variables and Random Factors -- 8.2 Random Intercept -- 8.3 Random Slope -- 8.4 Covariance Structures and the G Matrix in a Random Effects Model -- 8.5 Repeated Effect -- 8.5.1 Covariance structures and the R matrix in a repeated effects model -- 8.5.2 The variance and covariance of the residuals in a model with a repeated effect -- 8.5.3 More on covariance structures of the residuals in a model with a repeated effect -- 8.5.4 Testing the fit of different covariance structures with a likelihood ratio t -- 8.5.5 Using SPSS to run a marginal model -- 8.6 Simple Example of a Mixed-Effects Model -- 8.7 A Closer Look at the R and G Matrices -- 8.8 Using SPSS to Run a Mixed-Effects Model with a Random Slope and a Repeated Effect -- 8.9 Example of a Mixed-Effects Model with Random Effects for Subject and Item -- 8.9.1 Running a mixed-effects analysis with random effects for subject and test item -- 8.9.2 Using SPSS to carry out a mixed-effects analysis -- 8.9.3 Introduction to the Syntax Editor -- 8.9.4 Results of the Winter and Bergen study -- 8.9.5 Reporting the results of a mixed-effects model -- 8.10 Testing the Assumptions of a Mixed-Effects Model -- 8.10.1 Using SPSS to test the assumptions of mixed-effects models -- 8.11 More about Using the Syntax Editor -- 8.12 Recipe for a Mixed-Effects Model -- 8.13 Hands-On Exercises for Mixed-Effects Models -- 8.13.1 Grammaticality judgments -- 8.13.2 Formality in pitch -- Chapter Nine -- 9.1. Binomial Logistic Regression -- 9.1.1 Results of the binary mixed-effects logistic regression -- 9.1.1.1 Basic model information -- 9.1.1.2 Calculating the accuracy rate -- 9.1.1.3 Significance of the random effects in the model -- 9.1.1.4 Result for the fixed effects. | |
650 | 0 | |a Linguistics |x Statistical methods. |0 http://id.loc.gov/authorities/subjects/sh85077229 | |
650 | 6 | |a Linguistique |x Méthodes statistiques. | |
650 | 7 | |a LANGUAGE ARTS & DISCIPLINES |x Linguistics |x Historical & Comparative. |2 bisacsh | |
650 | 7 | |a Linguistics |x Statistical methods |2 fast | |
655 | 4 | |a Electronic book. | |
758 | |i has work: |a Statistics for linguists (Text) |1 https://id.oclc.org/worldcat/entity/E39PCFJcXMYwHfyWXCp7qHXJ9P |4 https://id.oclc.org/worldcat/ontology/hasWork | ||
776 | 0 | 8 | |i Print version: |z 9781443876384 |z 1443876380 |w (OCoLC)911594782 |
856 | 4 | 0 | |l FWS01 |p ZDB-4-EBA |q FWS_PDA_EBA |u https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=1155158 |3 Volltext |
938 | |a Askews and Holts Library Services |b ASKH |n AH29985534 | ||
938 | |a EBL - Ebook Library |b EBLB |n EBL4535079 | ||
938 | |a EBSCOhost |b EBSC |n 1155158 | ||
938 | |a ProQuest MyiLibrary Digital eBook Collection |b IDEB |n cis33543179 | ||
938 | |a YBP Library Services |b YANK |n 12810293 | ||
994 | |a 92 |b GEBAY | ||
912 | |a ZDB-4-EBA | ||
049 | |a DE-863 |
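The MARC record above can also be processed programmatically. The sketch below is not part of the catalog record; it is a minimal illustration, using the third-party pymarc library, of reading a MARCXML export of this record, assuming the export has been saved locally under the hypothetical filename record.xml.

```python
# Minimal sketch: reading a MARCXML export of this record with pymarc.
# Assumptions: pymarc is installed (pip install pymarc) and the MARCXML
# export of the record has been saved as "record.xml" (hypothetical name).
from pymarc import parse_xml_to_array

records = parse_xml_to_array("record.xml")  # list of pymarc.Record objects
record = records[0]

# Field 245 $a/$b/$c: title, subtitle, statement of responsibility
title_field = record.get_fields("245")[0]
print("Title:", " ".join(title_field.get_subfields("a", "b", "c")))

# Field 020 $a: ISBNs of the electronic version
for field in record.get_fields("020"):
    for isbn in field.get_subfields("a"):
        print("ISBN:", isbn)

# Field 650 $a/$x: subject headings with subdivisions
for field in record.get_fields("650"):
    print("Subject:", " -- ".join(field.get_subfields("a", "x")))
```

Run against this record, that would print the title from field 245, the two ISBNs tagged as (electronic bk.) in the 020 fields, and the subject headings from the 650 fields.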
Record in the search index
DE-BY-FWS_katkey | ZDB-4-EBA-ocn935272211 |
---|---|
_version_ | 1816882336717864960 |
adam_text | |
any_adam_object | |
author | Eddington, David |
author_facet | Eddington, David |
author_role | aut |
author_sort | Eddington, David |
author_variant | d e de |
building | Verbundindex |
bvnumber | localFWS |
callnumber-first | P - Language and Literature |
callnumber-label | P138 |
callnumber-raw | P138.5 |
callnumber-search | P138.5 |
callnumber-sort | P 3138.5 |
callnumber-subject | P - Philology and Linguistics |
collection | ZDB-4-EBA |
ctrlnum | (OCoLC)935272211 |
dewey-full | 410.1/51 |
dewey-hundreds | 400 - Language |
dewey-ones | 410 - Linguistics |
dewey-raw | 410.1/51 |
dewey-search | 410.1/51 |
dewey-sort | 3410.1 251 |
dewey-tens | 410 - Linguistics |
discipline | Sprachwissenschaft |
format | Electronic eBook |
genre | Electronic book. |
genre_facet | Electronic book. |
id | ZDB-4-EBA-ocn935272211 |
illustrated | Not Illustrated |
indexdate | 2024-11-27T13:27:00Z |
institution | BVB |
isbn | 9781443887762 1443887765 |
language | English |
oclc_num | 935272211 |
open_access_boolean | |
owner | MAIN DE-863 DE-BY-FWS |
owner_facet | MAIN DE-863 DE-BY-FWS |
physical | 1 online resource |
psigel | ZDB-4-EBA |
publishDate | 2015 |
publishDateSearch | 2015 |
publishDateSort | 2015 |
publisher | Cambridge Scholars Publishing, |
record_format | marc |
subject_GND | http://id.loc.gov/authorities/subjects/sh85077229 |
title | Statistics for linguistics : a step-by-step guide for novices / |
title_auth | Statistics for linguistics : a step-by-step guide for novices / |
title_exact_search | Statistics for linguistics : a step-by-step guide for novices / |
title_full | Statistics for linguistics : a step-by-step guide for novices / David Eddington. |
title_fullStr | Statistics for linguistics : a step-by-step guide for novices / David Eddington. |
title_full_unstemmed | Statistics for linguistics : a step-by-step guide for novices / David Eddington. |
title_short | Statistics for linguistics : |
title_sort | statistics for linguistics a step by step guide for novices |
title_sub | a step-by-step guide for novices / |
topic | Linguistics Statistical methods. http://id.loc.gov/authorities/subjects/sh85077229 Linguistique Méthodes statistiques. LANGUAGE ARTS & DISCIPLINES Linguistics Historical & Comparative. bisacsh Linguistics Statistical methods fast |
topic_facet | Linguistics Statistical methods. Linguistique Méthodes statistiques. LANGUAGE ARTS & DISCIPLINES Linguistics Historical & Comparative. Linguistics Statistical methods Electronic book. |
url | https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=1155158 |
work_keys_str_mv | AT eddingtondavid statisticsforlinguisticsastepbystepguidefornovices |