Automated essay scoring
Authors: Klebanov, Beata Beigman; Madnani, Nitin
Format: Book
Language: English
Published: [San Rafael] : Morgan & Claypool Publishers, 2022
Series: Synthesis lectures on human language technologies ; 52
Subjects: Testauswertung; Natürliche Sprache; Künstliche Intelligenz
Online access: Table of contents
Physical description: xx, 294 pages : illustrations, diagrams ; 23 cm
ISBN: 9781636392240 (hardcover); 9781636392226 (paperback)
Internal format (MARC; blank indicators shown as "_")
LEADER  00000nam a2200000 cb4500
001     BV048520628
003     DE-604
005     20230102
007     t
008     221019s2022 xxua||| |||| 00||| eng d
020 __  |a 9781636392240 |c hc |9 978-1-63639-224-0
020 __  |a 9781636392226 |c pbk |9 978-1-63639-222-6
035 __  |a (OCoLC)1293053070
035 __  |a (DE-599)KXP1782416714
040 __  |a DE-604 |b ger |e rda
041 0_  |a eng
044 __  |a xxu |c XD-US
049 __  |a DE-739
084 __  |a ST 306 |0 (DE-625)143654: |2 rvk
100 1_  |a Klebanov, Beata Beigman |e Verfasser |0 (DE-588)110547772X |4 aut
245 10  |a Automated essay scoring |c Beata Beigman Klebanov, Nitin Madnani, Educational Testing Service
263 __  |a 202111
264 _1  |a [San Rafael] |b Morgan & Claypool Publishers |c 2022
300 __  |a xx, 294 Seiten |b Illustrationen, Diagramme |c 23 cm
336 __  |b txt |2 rdacontent
337 __  |b n |2 rdamedia
338 __  |b nc |2 rdacarrier
490 1_  |a Synthesis lectures on human language technologies |v 52
650 07  |a Testauswertung |0 (DE-588)4124305-5 |2 gnd |9 rswk-swf
650 07  |a Natürliche Sprache |0 (DE-588)4041354-8 |2 gnd |9 rswk-swf
650 07  |a Künstliche Intelligenz |0 (DE-588)4033447-8 |2 gnd |9 rswk-swf
653 _0  |a Grading and marking (Students) / Data processing
653 _0  |a Educational tests and measurements / Data processing
653 _0  |a Natural language processing (Computer science)
689 00  |a Natürliche Sprache |0 (DE-588)4041354-8 |D s
689 01  |a Künstliche Intelligenz |0 (DE-588)4033447-8 |D s
689 02  |a Testauswertung |0 (DE-588)4124305-5 |D s
689 0_  |5 DE-604
700 1_  |a Madnani, Nitin |d ca. 20./21. Jh. |e Verfasser |0 (DE-588)1277032939 |4 aut
776 0_  |z 9781636392233 |c PDF
830 _0  |a Synthesis lectures on human language technologies |v 52 |w (DE-604)BV035447238 |9 52
856 42  |m Digitalisierung UB Passau - ADAM Catalogue Enrichment |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=033897523&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis
999 __  |a oai:aleph.bib-bvb.de:BVB01-033897523
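For readers who want to work with the record above programmatically, the following is a minimal sketch, not part of the catalogue itself, that reads a MARCXML export of this record using only the Python standard library. The filename "BV048520628.xml" is a hypothetical placeholder for wherever such an export is saved; the namespace http://www.loc.gov/MARC21/slim is the standard MARC 21 XML namespace.

```python
# Minimal sketch: print the fields of a MARCXML export of this record.
# Assumption: the record has been exported as MARCXML and saved locally as
# "BV048520628.xml" (hypothetical placeholder filename).
import xml.etree.ElementTree as ET

MARC_NS = {"marc": "http://www.loc.gov/MARC21/slim"}  # standard MARC 21 XML namespace


def print_marc_record(path: str) -> None:
    """Print the leader, control fields, and data fields of a MARCXML record."""
    root = ET.parse(path).getroot()
    # The export may be a bare <record> or a <collection> wrapping one record.
    record = root if root.tag.endswith("record") else root.find(".//marc:record", MARC_NS)
    if record is None:
        raise ValueError(f"No MARC record found in {path}")

    leader = record.find("marc:leader", MARC_NS)
    if leader is not None:
        print(f"LEADER  {leader.text}")

    for cf in record.findall("marc:controlfield", MARC_NS):
        print(f"{cf.get('tag')}     {cf.text}")

    for df in record.findall("marc:datafield", MARC_NS):
        indicators = (df.get("ind1", " ") + df.get("ind2", " ")).replace(" ", "_")
        subfields = " ".join(
            f"|{sf.get('code')} {sf.text}" for sf in df.findall("marc:subfield", MARC_NS)
        )
        print(f"{df.get('tag')} {indicators}  {subfields}")


if __name__ == "__main__":
    print_marc_record("BV048520628.xml")
```

Run against an export of this record, the output mirrors the listing above, with "_" standing in for a blank indicator.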
Table of contents (digitised by UB Passau, ADAM catalogue enrichment)

Preface

PART I   Should We Do It? Can We Do It?

1  Introduction
   1.1  The Case for Automated Scoring of Essays
        1.1.1  Argument from Need and Positive Consequence
        1.1.2  Argument from Feasibility I: Computers are Smart
        1.1.3  Argument from Feasibility II: Define the Goal
        1.1.4  Argument from Quality and Utility: High-Quality, Low-Cost, Large-Scale Scoring
   1.2  Challenges to Automated Scoring of Essays
        1.2.1  Anticipated Objection #1: Originality
        1.2.2  Anticipated Objection #2: Content
        1.2.3  Anticipated Objection #3: Gaming
        1.2.4  Anticipated Objection #4: Feedback
   1.3  Summary

PART II  Getting Hands-On

2  Building an Automated Essay Scoring System
   2.1  Introduction
   2.2  Setting Up
        2.2.1  Data
        2.2.2  Model and Features
        2.2.3  Evaluation Metrics
        2.2.4  Software
   2.3  Building the System
        2.3.1  Experiment 0: Use All Features
        2.3.2  Experiment 1: Feature Fairness
        2.3.3  Experiment 2: Feature Collinearity
        2.3.4  Experiment 3: Additional Evidence for Feature Contributions
        2.3.5  Experiment 4: Feature Transformations
        2.3.6  Experiment 5: Negative Feature Contributions
        2.3.7  Experiment 6: Test on Held-out Data
        2.3.8  Experiment 7: Cross-Task Evaluation
        2.3.9  Experiment 8: Task-Specific Models
        2.3.10 Experiments 9a and 9b: More Reliable Human Ratings
        2.3.11 Experiment 10: A More Sophisticated Learner
   2.4  Conclusions

3  From Lessons to Guidelines
   3.1  Introduction
   3.2  Perspectives on Automated Scoring
   3.3  Case Studies
        3.3.1  Adding Automated Scoring to an Existing Assessment
        3.3.2  Creating a New Assessment that Includes Automated Scoring
        3.3.3  Including Automated Scoring in a Classroom Setting
   3.4  Summary
   3.5  Looking Ahead

PART III A Deep Dive: Models, Features, Architecture, and Evaluation

4  Models
   4.1  Introduction
   4.2  Linear Regression
   4.3  Latent Semantic Analysis
   4.4  Other Non-Neural Models
   4.5  Neural Networks
        4.5.1  Deep Learning
        4.5.2  Interlude: Word Embeddings
        4.5.3  Deep Learning for Automated Essay Scoring
        4.5.4  Discussion

5  Generic Features
   5.1  Introduction
   5.2  Discourse-Level Features
        5.2.1  Essay Organization
        5.2.2  Essay Development
        5.2.3  Coherence
   5.3  Selection of Content: Vocabulary and Topicality
        5.3.1  Vocabulary
        5.3.2  Topicality
   5.4  Conventions
        5.4.1  Early Approaches
        5.4.2  Feature-Driven Supervised Learning
        5.4.3  Neural Approaches to Detection of Grammatical Errors
        5.4.4  Grammaticality on a Scale

6  Genre- and Task-Specific Features
   6.1  What's in an Essay?
   6.2  Persuasive/Argumentative Writing
        6.2.1  Persuasion vs. Argumentation
        6.2.2  Features Based on Use of Evaluative Language
        6.2.3  Features Based on Use of Figurative Language
        6.2.4  Features Based on Argument Structure
        6.2.5  Features Based on Argument Content
        6.2.6  Discussion: Between Content and Structure
   6.3  Narrative Writing/Convey Experience
        6.3.1  Scoring Narrative Essays
        6.3.2  Scoring Transcripts of Oral Narratives
   6.4  Expository Writing Based on Sources
        6.4.1  Scoring Human Summaries: An Overview
        6.4.2  Approaches that Use Source Document as the Sole Reference
        6.4.3  Approaches that Use Source Document and Expert/Peer Summaries as References
        6.4.4  Approaches that Use Additional Expert-Provided Materials as References
        6.4.5  Approaches that Use Transformed Text and/or Expert/Peer Summaries as Reference
   6.5  Reflective Writing
   6.6  Other Tasks/Genres
   6.7  Summary

7  Automated Scoring Systems: From Prototype to Production
   7.1  Introduction
   7.2  Criteria
   7.3  Example Architecture
        7.3.1  Prelude: Apache Storm
        7.3.2  Architecture Details
        7.3.3  Evaluation
        7.3.4  Illustrating the Architecture
   7.4  Conclusions

8  Evaluating for Real-World Use
   8.1  Introduction
   8.2  Validity
   8.3  Fairness
   8.4  Fairness for Essay Scoring
   8.5  RSMTool
   8.6  Postscript: Connections to FATML

PART IV  Further Afield: Feedback, Content, Speech, and Gaming

9  Automated Feedback
   9.1  What is Feedback?
   9.2  Feedback Systems
   9.3  Evaluation of Feedback Systems

10 Automated Scoring of Content
   10.1 Introduction
   10.2 Approaches
   10.3 Response-Based Scoring
        10.3.1 Features
        10.3.2 Model
   10.4 Emerging Trend: Deep Neural Networks
   10.5 Summary

11 Automated Scoring of Speech
   11.1 Introduction
   11.2 Automated Speech Recognition for Speech Scoring
   11.3 Features for Assessing Spontaneous Speech
        11.3.1 Delivery: Pronunciation, Fluency
        11.3.2 Language Use: Vocabulary, Grammar
        11.3.3 Topic Development: Content, Discourse
   11.4 Scoring Models

12 Fooling the System: Gaming Strategies
   12.1 Introduction
   12.2 Shell Language
   12.3 Artificially Generated Essays
   12.4 Off-Topic Responses
   12.5 Plagiarism
   12.6 Other Related Work
   12.7 Summary

PART V   Summary and Discussion

13 Looking Back, Looking Ahead
   13.1 Report Card: Where are We Now?
        13.1.1 Accomplishments
        13.1.2 Needs Improvement
   13.2 Going off the Page
        13.2.1 Assessing Writing in Multiple Languages
        13.2.2 Standardized Testing
        13.2.3 Increased Attention to Fairness
        13.2.4 Pervasiveness of Technology
   13.3 Discussion
        13.3.1 Support Consequential Decision Making
        13.3.2 Create a Better Written Product
        13.3.3 Help the User Learn to Write Better
        13.3.4 Relationships Between Types of Use
   13.4 Conclusion

Definitions-in-Context
Index
References
Authors' Biographies