Supervised machine learning for text analysis in R
"Text data is important for many domains, from healthcare to marketing to the digital humanities, but specialized approaches are necessary to create features for machine learning from language. Supervised Machine Learning for Text Analysis in R explains how to preprocess text data for modeling,...
Saved in:

| Main authors: | Hvitfeldt, Emil; Silge, Julia |
|---|---|
| Format: | Book |
| Language: | English |
| Published: | Boca Raton; London; New York: CRC Press, Taylor & Francis Group, 2022 |
| Edition: | First edition |
| Series: | Data science series; A Chapman & Hall book |
| Subjects: | Maschinelles Lernen; Textanalyse; R (Programm) |
| Online access: | Table of contents |
| Summary: | "Text data is important for many domains, from healthcare to marketing to the digital humanities, but specialized approaches are necessary to create features for machine learning from language. Supervised Machine Learning for Text Analysis in R explains how to preprocess text data for modeling, train models, and evaluate model performance using tools from the tidyverse and tidymodels ecosystem. Models like these can be used to make predictions for new observations, to understand what natural language features or characteristics contribute to differences in the output, and more. If you are already familiar with the basics of predictive modeling, use the comprehensive, detailed examples in this book to extend your skills to the domain of natural language processing"-- |
| Description: | Bibliography: pages 369-378 |
| Physical description: | xix, 381 pages, diagrams |
| ISBN: | 9780367554187; 9780367554194 |
Internal format
MARC
LEADER  00000nam a2200000 c 4500
001     BV047654752
003     DE-604
005     20220909
007     t
008     211227s2022 |||| |||| 00||| eng d
020 __  |a 9780367554187 |c hardback |9 978-0-367-55418-7
020 __  |a 9780367554194 |c paperback |9 978-0-367-55419-4
035 __  |a (OCoLC)1281992085
035 __  |a (DE-599)KXP1763808211
040 __  |a DE-604 |b ger |e rda
041 0_  |a eng
049 __  |a DE-29T |a DE-739 |a DE-19 |a DE-703 |a DE-188 |a DE-634
050 _0  |a P98.5.S83
082 0_  |a 006.3/5
084 __  |a ST 300 |0 (DE-625)143650: |2 rvk
084 __  |a ST 306 |0 (DE-625)143654: |2 rvk
084 __  |a ST 250 |0 (DE-625)143626: |2 rvk
084 __  |a QH 234 |0 (DE-625)141549: |2 rvk
084 __  |a MR 2800 |0 (DE-625)123496: |2 rvk
100 1_  |a Hvitfeldt, Emil |e Verfasser |0 (DE-588)1246211653 |4 aut
245 10  |a Supervised machine learning for text analysis in R |c Emil Hvitfeldt, Julia Silge
250 __  |a First edition
264 _1  |a Boca Raton ; London ; New York |b CRC Press, Taylor & Francis Group |c 2022
300 __  |a xix, 381 Seiten |b Diagramme
336 __  |b txt |2 rdacontent
337 __  |b n |2 rdamedia
338 __  |b nc |2 rdacarrier
490 0_  |a Data science series
490 0_  |a A Chapman & Hall book
500 __  |a Literaturverzeichnis: Seite 369-378
520 3_  |a "Text data is important for many domains, from healthcare to marketing to the digital humanities, but specialized approaches are necessary to create features for machine learning from language. Supervised Machine Learning for Text Analysis in R explains how to preprocess text data for modeling, train models, and evaluate model performance using tools from the tidyverse and tidymodels ecosystem. Models like these can be used to make predictions for new observations, to understand what natural language features or characteristics contribute to differences in the output, and more. If you are already familiar with the basics of predictive modeling, use the comprehensive, detailed examples in this book to extend your skills to the domain of natural language processing"--
650 07  |a R |g Programm |0 (DE-588)4705956-4 |2 gnd |9 rswk-swf
650 07  |a Textanalyse |0 (DE-588)4194196-2 |2 gnd |9 rswk-swf
650 07  |a Maschinelles Lernen |0 (DE-588)4193754-5 |2 gnd |9 rswk-swf
653 _0  |a Computational linguistics / Statistical methods
653 _0  |a Natural language processing (Computer science)
653 _0  |a Supervised learning (Machine learning)
653 _0  |a Predictive analytics
653 _0  |a Regression analysis
653 _0  |a Discriminant analysis
653 _0  |a R (Computer program language)
689 00  |a Maschinelles Lernen |0 (DE-588)4193754-5 |D s
689 01  |a Textanalyse |0 (DE-588)4194196-2 |D s
689 02  |a R |g Programm |0 (DE-588)4705956-4 |D s
689 0_  |5 DE-604
700 1_  |a Silge, Julia |e Verfasser |0 (DE-588)1136429166 |4 aut
776 08  |i Erscheint auch als |n Online-Ausgabe |z 9781003093459
856 42  |m Digitalisierung UB Passau - ADAM Catalogue Enrichment |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=033038727&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis
999 __  |a oai:aleph.bib-bvb.de:BVB01-033038727
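
For readers who want to work with a record like the one above programmatically, the following is a minimal sketch that reads its MARCXML serialization (the same serialization stored in the fullrecord field of the search index below) using only the Python standard library. The file name `record.xml` and the `subfields()` helper are illustrative assumptions, not part of this catalog entry.

```python
# Minimal sketch: pull the title, ISBNs, and table-of-contents URL out of
# a MARCXML record using only the Python standard library.
# Assumption: the record shown above has been saved locally as "record.xml".
import xml.etree.ElementTree as ET

NS = {"marc": "http://www.loc.gov/MARC21/slim"}

def subfields(record, tag, code):
    """Yield the text of every subfield `code` in datafields with tag `tag`."""
    for field in record.findall(f"marc:datafield[@tag='{tag}']", NS):
        for sub in field.findall(f"marc:subfield[@code='{code}']", NS):
            yield sub.text

root = ET.parse("record.xml").getroot()
# BVB exports wrap a single <record> in a <collection> element.
record = root.find("marc:record", NS)
if record is None:
    record = root  # fall back if <record> itself is the document root

print(next(subfields(record, "245", "a"), None))  # title proper (245 |a)
print(list(subfields(record, "020", "a")))        # ISBNs (020 |a): hardback, paperback
print(next(subfields(record, "856", "u"), None))  # scanned table of contents (856 |u)
```

A dedicated MARC library such as pymarc offers the same access with less ceremony; the standard-library version is shown here because it keeps the tag/indicator/subfield structure of the record explicit.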
Record in the search index
_version_ | 1804183121460985856 |
---|---|
adam_text |

Contents

Preface xiii

I Natural Language Features 1

1 Language and modeling 3
  1.1 Linguistics for text analysis 3
  1.2 A glimpse into one area: morphology 5
  1.3 Different languages 6
  1.4 Other ways text can vary 7
  1.5 Summary 8
    1.5.1 In this chapter, you learned: 8

2 Tokenization 9
  2.1 What is a token? 9
  2.2 Types of tokens 13
    2.2.1 Character tokens 16
    2.2.2 Word tokens 18
    2.2.3 Tokenizing by n-grams 19
    2.2.4 Lines, sentence, and paragraph tokens 22
  2.3 Where does tokenization break down? 25
  2.4 Building your own tokenizer 26
    2.4.1 Tokenize to characters, only keeping letters 27
    2.4.2 Allow for hyphenated words 29
    2.4.3 Wrapping it in a function 32
  2.5 Tokenization for non-Latin alphabets 33
  2.6 Tokenization benchmark 34
  2.7 Summary 35
    2.7.1 In this chapter, you learned: 35

3 Stop words 37
  3.1 Using premade stop word lists 38
    3.1.1 Stop word removal in R 41
  3.2 Creating your own stop words list 43
  3.3 All stop word lists are context-specific 48
  3.4 What happens when you remove stop words 49
  3.5 Stop words in languages other than English 50
  3.6 Summary 52
    3.6.1 In this chapter, you learned: 52

4 Stemming 53
  4.1 How to stem text in R 54
  4.2 Should you use stemming at all? 58
  4.3 Understand a stemming algorithm 61
  4.4 Handling punctuation when stemming 63
  4.5 Compare some stemming options 65
  4.6 Lemmatization and stemming 68
  4.7 Stemming and stop words 70
  4.8 Summary 71
    4.8.1 In this chapter, you learned: 72

5 Word Embeddings 73
  5.1 Motivating embeddings for sparse, high-dimensional data 73
  5.2 Understand word embeddings by finding them yourself 77
  5.3 Exploring CFPB word embeddings 81
  5.4 Use pre-trained word embeddings 88
  5.5 Fairness and word embeddings 93
  5.6 Using word embeddings in the real world 95
  5.7 Summary 96
    5.7.1 In this chapter, you learned: 97

II Machine Learning Methods 99

Overview 101

6 Regression 105
  6.1 A first regression model 106
    6.1.1 Building our first regression model 107
    6.1.2 Evaluation 112
  6.2 Compare to the null model 117
  6.3 Compare to a random forest model 119
  6.4 Case study: removing stop words 122
  6.5 Case study: varying n-grams 126
  6.6 Case study: lemmatization 129
  6.7 Case study: feature hashing 133
    6.7.1 Text normalization 137
  6.8 What evaluation metrics are appropriate? 139
  6.9 The full game: regression 142
    6.9.1 Preprocess the data 142
    6.9.2 Specify the model 143
    6.9.3 Tune the model 144
    6.9.4 Evaluate the modeling 146
  6.10 Summary 153
    6.10.1 In this chapter, you learned: 153

7 Classification 155
  7.1 A first classification model 156
    7.1.1 Building our first classification model 158
    7.1.2 Evaluation 161
  7.2 Compare to the null model 166
  7.3 Compare to a lasso classification model 167
  7.4 Tuning lasso hyperparameters 170
  7.5 Case study: sparse encoding 179
  7.6 Two-class or multiclass? 183
  7.7 Case study: including non-text data 191
  7.8 Case study: data censoring 195
  7.9 Case study: custom features 201
    7.9.1 Detect credit cards 202
    7.9.2 Calculate percentage censoring 204
    7.9.3 Detect monetary amounts 205
  7.10 What evaluation metrics are appropriate? 206
  7.11 The full game: classification 208
    7.11.1 Feature selection 209
    7.11.2 Specify the model 210
    7.11.3 Evaluate the modeling 212
  7.12 Summary 220
    7.12.1 In this chapter, you learned: 221

III Deep Learning Methods 223

Overview 225

8 Dense neural networks 231
  8.1 Kickstarter data 232
  8.2 A first deep learning model 237
    8.2.1 Preprocessing for deep learning 237
    8.2.2 One-hot sequence embedding of text 240
    8.2.3 Simple flattened dense network 244
    8.2.4 Evaluation 248
  8.3 Using bag-of-words features 253
  8.4 Using pre-trained word embeddings 257
  8.5 Cross-validation for deep learning models 263
  8.6 Compare and evaluate DNN models 267
  8.7 Limitations of deep learning 271
  8.8 Summary 272
    8.8.1 In this chapter, you learned: 272

9 Long short-term memory (LSTM) networks 273
  9.1 A first LSTM model 273
    9.1.1 Building an LSTM 275
    9.1.2 Evaluation 279
  9.2 Compare to a recurrent neural network 283
  9.3 Case study: bidirectional LSTM 286
  9.4 Case study: stacking LSTM layers 288
  9.5 Case study: padding 289
  9.6 Case study: training a regression model 292
  9.7 Case study: vocabulary size 295
  9.8 The full game: LSTM 297
    9.8.1 Preprocess the data 297
    9.8.2 Specify the model 298
  9.9 Summary 301
    9.9.1 In this chapter, you learned: 302

10 Convolutional neural networks 303
  10.1 What are CNNs? 303
    10.1.1 Kernel 304
    10.1.2 Kernel size 304
  10.2 A first CNN model 305
  10.3 Case study: adding more layers 309
  10.4 Case study: byte pair encoding 317
  10.5 Case study: explainability with LIME 324
  10.6 Case study: hyperparameter search 330
  10.7 Cross-validation for evaluation 334
  10.8 The full game: CNN 337
    10.8.1 Preprocess the data 337
    10.8.2 Specify the model 338
  10.9 Summary 341
    10.9.1 In this chapter, you learned: 342

IV Conclusion 343

Text models in the real world 345

Appendix 347

A Regular expressions 347
  A.1 Literal characters 347
    A.1.1 Meta characters 349
  A.2 Full stop, the wildcard 349
  A.3 Character classes 350
    A.3.1 Shorthand character classes 352
  A.4 Quantifiers 353
  A.5 Anchors 355
  A.6 Additional resources 355

B Data 357
  B.1 Hans Christian Andersen fairy tales 357
  B.2 Opinions of the Supreme Court of the United States 358
  B.3 Consumer Financial Protection Bureau (CFPB) complaints 359
  B.4 Kickstarter campaign blurbs 359

C Baseline linear classifier 361
  C.1 Read in the data 361
  C.2 Split into test/train and create resampling folds 362
  C.3 Recipe for data preprocessing 363
  C.4 Lasso regularized classification model 363
  C.5 A model workflow 364
  C.6 Tune the workflow 366

References 369

Index 379
|
any_adam_object | 1 |
any_adam_object_boolean | 1 |
author | Hvitfeldt, Emil Silge, Julia |
author_GND | (DE-588)1246211653 (DE-588)1136429166 |
author_facet | Hvitfeldt, Emil Silge, Julia |
author_role | aut aut |
author_sort | Hvitfeldt, Emil |
author_variant | e h eh j s js |
building | Verbundindex |
bvnumber | BV047654752 |
callnumber-first | P - Language and Literature |
callnumber-label | P98 |
callnumber-raw | P98.5.S83 |
callnumber-search | P98.5.S83 |
callnumber-sort | P 298.5 S83 |
callnumber-subject | P - Philology and Linguistics |
classification_rvk | ST 300 ST 306 ST 250 QH 234 MR 2800 |
ctrlnum | (OCoLC)1281992085 (DE-599)KXP1763808211 |
dewey-full | 006.3/5 |
dewey-hundreds | 000 - Computer science, information, general works |
dewey-ones | 006 - Special computer methods |
dewey-raw | 006.3/5 |
dewey-search | 006.3/5 |
dewey-sort | 16.3 15 |
dewey-tens | 000 - Computer science, information, general works |
discipline | Informatik Soziologie Wirtschaftswissenschaften |
discipline_str_mv | Informatik Soziologie Wirtschaftswissenschaften |
edition | First edition |
format | Book |
fullrecord | <?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>03264nam a2200613 c 4500</leader><controlfield tag="001">BV047654752</controlfield><controlfield tag="003">DE-604</controlfield><controlfield tag="005">20220909 </controlfield><controlfield tag="007">t</controlfield><controlfield tag="008">211227s2022 |||| |||| 00||| eng d</controlfield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">9780367554187</subfield><subfield code="c">hardback</subfield><subfield code="9">978-0-367-55418-7</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">9780367554194</subfield><subfield code="c">paperback</subfield><subfield code="9">978-0-367-55419-4</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(OCoLC)1281992085</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)KXP1763808211</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-604</subfield><subfield code="b">ger</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1="0" ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="049" ind1=" " ind2=" "><subfield code="a">DE-29T</subfield><subfield code="a">DE-739</subfield><subfield code="a">DE-19</subfield><subfield code="a">DE-703</subfield><subfield code="a">DE-188</subfield><subfield code="a">DE-634</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">P98.5.S83</subfield></datafield><datafield tag="082" ind1="0" ind2=" "><subfield code="a">006.3/5</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">ST 300</subfield><subfield code="0">(DE-625)143650:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">ST 306</subfield><subfield code="0">(DE-625)143654:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">ST 250</subfield><subfield code="0">(DE-625)143626:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">QH 234</subfield><subfield code="0">(DE-625)141549:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">MR 2800</subfield><subfield code="0">(DE-625)123496:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Hvitfeldt, Emil</subfield><subfield code="e">Verfasser</subfield><subfield code="0">(DE-588)1246211653</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Supervised machine learning for text analysis in R</subfield><subfield code="c">Emil Hvitfeldt, Julia Silge</subfield></datafield><datafield tag="250" ind1=" " ind2=" "><subfield code="a">First edition</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">Boca Raton ; London ; New York</subfield><subfield code="b">CRC Press, Taylor & Francis Group</subfield><subfield code="c">2022</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">xix, 381 Seiten</subfield><subfield code="b">Diagramme</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="b">n</subfield><subfield 
code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="490" ind1="0" ind2=" "><subfield code="a">Data science series</subfield></datafield><datafield tag="490" ind1="0" ind2=" "><subfield code="a">A Chapman & Hall book</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">Literaturverzeichnis: Seite 369-378</subfield></datafield><datafield tag="520" ind1="3" ind2=" "><subfield code="a">"Text data is important for many domains, from healthcare to marketing to the digital humanities, but specialized approaches are necessary to create features for machine learning from language. Supervised Machine Learning for Text Analysis in R explains how to preprocess text data for modeling, train models, and evaluate model performance using tools from the tidyverse and tidymodels ecosystem. Models like these can be used to make predictions for new observations, to understand what natural language features or characteristics contribute to differences in the output, and more. If you are already familiar with the basics of predictive modeling, use the comprehensive, detailed examples in this book to extend your skills to the domain of natural language processing"--</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">R</subfield><subfield code="g">Programm</subfield><subfield code="0">(DE-588)4705956-4</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Textanalyse</subfield><subfield code="0">(DE-588)4194196-2</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Maschinelles Lernen</subfield><subfield code="0">(DE-588)4193754-5</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Computational linguistics / Statistical methods</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Natural language processing (Computer science)</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Supervised learning (Machine learning)</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Predictive analytics</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Regression analysis</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Discriminant analysis</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">R (Computer program language)</subfield></datafield><datafield tag="689" ind1="0" ind2="0"><subfield code="a">Maschinelles Lernen</subfield><subfield code="0">(DE-588)4193754-5</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="1"><subfield code="a">Textanalyse</subfield><subfield code="0">(DE-588)4194196-2</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="2"><subfield code="a">R</subfield><subfield code="g">Programm</subfield><subfield code="0">(DE-588)4705956-4</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2=" "><subfield code="5">DE-604</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Silge, Julia</subfield><subfield code="e">Verfasser</subfield><subfield 
code="0">(DE-588)1136429166</subfield><subfield code="4">aut</subfield></datafield><datafield tag="776" ind1="0" ind2="8"><subfield code="i">Erscheint auch als</subfield><subfield code="n">Online-Ausgabe</subfield><subfield code="z">9781003093459</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="m">Digitalisierung UB Passau - ADAM Catalogue Enrichment</subfield><subfield code="q">application/pdf</subfield><subfield code="u">http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=033038727&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA</subfield><subfield code="3">Inhaltsverzeichnis</subfield></datafield><datafield tag="999" ind1=" " ind2=" "><subfield code="a">oai:aleph.bib-bvb.de:BVB01-033038727</subfield></datafield></record></collection> |
id | DE-604.BV047654752 |
illustrated | Not Illustrated |
index_date | 2024-07-03T18:50:49Z |
indexdate | 2024-07-10T09:18:24Z |
institution | BVB |
isbn | 9780367554187 9780367554194 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-033038727 |
oclc_num | 1281992085 |
open_access_boolean | |
owner | DE-29T DE-739 DE-19 DE-BY-UBM DE-703 DE-188 DE-634 |
owner_facet | DE-29T DE-739 DE-19 DE-BY-UBM DE-703 DE-188 DE-634 |
physical | xix, 381 Seiten Diagramme |
publishDate | 2022 |
publishDateSearch | 2022 |
publishDateSort | 2022 |
publisher | CRC Press, Taylor & Francis Group |
record_format | marc |
series2 | Data science series A Chapman & Hall book |
spelling | Hvitfeldt, Emil Verfasser (DE-588)1246211653 aut Supervised machine learning for text analysis in R Emil Hvitfeldt, Julia Silge First edition Boca Raton ; London ; New York CRC Press, Taylor & Francis Group 2022 xix, 381 Seiten Diagramme txt rdacontent n rdamedia nc rdacarrier Data science series A Chapman & Hall book Literaturverzeichnis: Seite 369-378 "Text data is important for many domains, from healthcare to marketing to the digital humanities, but specialized approaches are necessary to create features for machine learning from language. Supervised Machine Learning for Text Analysis in R explains how to preprocess text data for modeling, train models, and evaluate model performance using tools from the tidyverse and tidymodels ecosystem. Models like these can be used to make predictions for new observations, to understand what natural language features or characteristics contribute to differences in the output, and more. If you are already familiar with the basics of predictive modeling, use the comprehensive, detailed examples in this book to extend your skills to the domain of natural language processing"-- R Programm (DE-588)4705956-4 gnd rswk-swf Textanalyse (DE-588)4194196-2 gnd rswk-swf Maschinelles Lernen (DE-588)4193754-5 gnd rswk-swf Computational linguistics / Statistical methods Natural language processing (Computer science) Supervised learning (Machine learning) Predictive analytics Regression analysis Discriminant analysis R (Computer program language) Maschinelles Lernen (DE-588)4193754-5 s Textanalyse (DE-588)4194196-2 s R Programm (DE-588)4705956-4 s DE-604 Silge, Julia Verfasser (DE-588)1136429166 aut Erscheint auch als Online-Ausgabe 9781003093459 Digitalisierung UB Passau - ADAM Catalogue Enrichment application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=033038727&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA Inhaltsverzeichnis |
spellingShingle | Hvitfeldt, Emil Silge, Julia Supervised machine learning for text analysis in R R Programm (DE-588)4705956-4 gnd Textanalyse (DE-588)4194196-2 gnd Maschinelles Lernen (DE-588)4193754-5 gnd |
subject_GND | (DE-588)4705956-4 (DE-588)4194196-2 (DE-588)4193754-5 |
title | Supervised machine learning for text analysis in R |
title_auth | Supervised machine learning for text analysis in R |
title_exact_search | Supervised machine learning for text analysis in R |
title_exact_search_txtP | Supervised machine learning for text analysis in R |
title_full | Supervised machine learning for text analysis in R Emil Hvitfeldt, Julia Silge |
title_fullStr | Supervised machine learning for text analysis in R Emil Hvitfeldt, Julia Silge |
title_full_unstemmed | Supervised machine learning for text analysis in R Emil Hvitfeldt, Julia Silge |
title_short | Supervised machine learning for text analysis in R |
title_sort | supervised machine learning for text analysis in r |
topic | R Programm (DE-588)4705956-4 gnd Textanalyse (DE-588)4194196-2 gnd Maschinelles Lernen (DE-588)4193754-5 gnd |
topic_facet | R Programm Textanalyse Maschinelles Lernen |
url | http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=033038727&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |
work_keys_str_mv | AT hvitfeldtemil supervisedmachinelearningfortextanalysisinr AT silgejulia supervisedmachinelearningfortextanalysisinr |