Measuring the user experience: collecting, analyzing, and presenting UX metrics
Main authors: Albert, Bill; Tullis, Tom
Format: Book
Language: English
Published: Amsterdam : Morgan Kaufmann, Elsevier, [2023]
Edition: Third edition
Subjects: Benutzerfreundlichkeit; Mensch-Maschine-Kommunikation; Datenerhebung; Mensch-Maschine-Schnittstelle; Bewertung
Online access: Table of contents
Notes: Previous edition: 2013; includes bibliographical references and index
Physical description: xxiv, 352 pages; illustrations, diagrams; 24 cm
ISBN: 9780128180808; 0128180803
Internal format: MARC
LEADER | 00000nam a22000008c 4500 | ||
---|---|---|---|
001 | BV048233503 | ||
003 | DE-604 | ||
005 | 20231025 | ||
007 | t | ||
008 | 220518s2023 ne a||| |||| 00||| eng d | ||
020 | |a 9780128180808 |c pbk. : £ 42.99, ca. EUR 54.50 (DE) |9 978-0-12-818080-8 | ||
020 | |a 0128180803 |c pbk. |9 0-12-818080-3 | ||
035 | |a (OCoLC)1331792718 | ||
035 | |a (DE-599)KXP1795827602 | ||
040 | |a DE-604 |b ger |e rda | ||
041 | 0 | |a eng | |
044 | |a ne |c XA-NL | ||
049 | |a DE-860 |a DE-898 |a DE-1050 |a DE-739 |a DE-1102 |a DE-355 |a DE-473 | ||
050 | 0 | |a QA76.9.U83 | |
082 | 0 | |a 005.437 | |
082 | 0 | |a 303.48/34 | |
084 | |a ST 278 |0 (DE-625)143644: |2 rvk | ||
084 | |a ST 252 |0 (DE-625)143627: |2 rvk | ||
084 | |a CW 4000 |0 (DE-625)19177: |2 rvk | ||
084 | |a AP 18450 |0 (DE-625)7053: |2 rvk | ||
084 | |a TEC 660f |2 stub | ||
084 | |a TEC 980f |2 stub | ||
100 | 1 | |a Albert, Bill |e Verfasser |0 (DE-588)1029801681 |4 aut | |
245 | 1 | 0 | |a Measuring the user experience |b collecting, analyzing, and presenting UX metrics |c William (Bill) Albert, Thomas S. (Tom) Tullis |
250 | |a Third edition | ||
264 | 1 | |a Amsterdam |b Morgan Kaufmann, Elsevier |c [2023] | |
264 | 4 | |c © 2023 | |
300 | |a xxiv, 352 Seiten |b Illustrationen, Diagramme |c 24 cm | ||
336 | |b txt |2 rdacontent | ||
336 | |b sti |2 rdacontent | ||
337 | |b n |2 rdamedia | ||
338 | |b nc |2 rdacarrier | ||
500 | |a Previous edition: 2013 | ||
500 | |a Includes bibliographical references and index | ||
650 | 4 | |a User interfaces (Computer systems) / Measurement | |
650 | 4 | |a User interfaces (Computer systems) / Evaluation | |
650 | 4 | |a Technology assessment | |
650 | 0 | 7 | |a Benutzerfreundlichkeit |0 (DE-588)4005541-3 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Mensch-Maschine-Kommunikation |0 (DE-588)4125909-9 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Datenerhebung |0 (DE-588)4155272-6 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Mensch-Maschine-Schnittstelle |0 (DE-588)4720440-0 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Bewertung |0 (DE-588)4006340-9 |2 gnd |9 rswk-swf |
689 | 0 | 0 | |a Mensch-Maschine-Schnittstelle |0 (DE-588)4720440-0 |D s |
689 | 0 | 1 | |a Benutzerfreundlichkeit |0 (DE-588)4005541-3 |D s |
689 | 0 | 2 | |a Bewertung |0 (DE-588)4006340-9 |D s |
689 | 0 | 3 | |a Datenerhebung |0 (DE-588)4155272-6 |D s |
689 | 0 | |5 DE-604 | |
689 | 1 | 0 | |a Mensch-Maschine-Kommunikation |0 (DE-588)4125909-9 |D s |
689 | 1 | 1 | |a Benutzerfreundlichkeit |0 (DE-588)4005541-3 |D s |
689 | 1 | 2 | |a Bewertung |0 (DE-588)4006340-9 |D s |
689 | 1 | 3 | |a Datenerhebung |0 (DE-588)4155272-6 |D s |
689 | 1 | |5 DE-604 | |
700 | 1 | |a Tullis, Tom |d 1952- |e Verfasser |0 (DE-588)1029801010 |4 aut | |
856 | 4 | 2 | |m Digitalisierung UB Passau - ADAM Catalogue Enrichment |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=033614169&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis |
999 | |a oai:aleph.bib-bvb.de:BVB01-033614169 |
Record in the search index
_version_ | 1804184017931599872 |
---|---|
adam_text |
PREFACE
ACKNOWLEDGMENTS
A SPECIAL NOTE FROM CHERYL TULLIS SIROIS
BIOGRAPHIES

CHAPTER 1 Introduction
1.1 What Is User Experience?
1.2 What Are User Experience Metrics?
1.3 The Value of UX Metrics
1.4 Metrics for Everyone
1.5 New Technologies in UX Metrics
1.6 Ten Myths About UX Metrics
  Myth 1: Metrics Take Too Much Time to Collect
  Myth 2: UX Metrics Cost Too Much Money
  Myth 3: UX Metrics Are Not Useful When Focusing on Small Improvements
  Myth 4: UX Metrics Don’t Help Us Understand Causes
  Myth 5: UX Metrics Are Too Noisy
  Myth 6: You Can Just Trust Your Gut
  Myth 7: Metrics Don’t Apply to New Products
  Myth 8: No Metrics Exist for the Type of Issues We Are Dealing With
  Myth 9: Metrics Are Not Understood or Appreciated by Management
  Myth 10: It’s Difficult to Collect Reliable Data With a Small Sample Size

CHAPTER 2 Background
2.1 Independent and Dependent Variables
2.2 Types of Data
  2.2.1 Nominal Data
  2.2.2 Ordinal Data
  2.2.3 Interval Data
  2.2.4 Ratio Data
2.3 Descriptive Statistics
  2.3.1 Measures of Central Tendency
  2.3.2 Measures of Variability
  2.3.3 Confidence Intervals
  2.3.4 Displaying Confidence Intervals as Error Bars
2.4 Comparing Means
  2.4.1 Independent Samples
  2.4.2 Paired Samples
  2.4.3 Comparing More Than Two Samples
2.5 Relationships Between Variables
  2.5.1 Correlations
2.6 Nonparametric Tests
  2.6.1 The Chi-Square Test
2.7 Presenting Your Data Graphically
  2.7.1 Column or Bar Graphs
  2.7.2 Line Graphs
  2.7.3 Scatterplots
  2.7.4 Pie or Donut Charts
  2.7.5 Stacked Bar Graphs
2.8 Summary

CHAPTER 3 Planning
3.1 Study Goals
  3.1.1 Formative User Research
  3.1.2 Summative User Research
3.2 UX Goals
  3.2.1 User Performance
  3.2.2 User Preferences
  3.2.3 User Emotions
3.3 Business Goals
3.4 Choosing the Right UX Metrics
  3.4.1 Completing an eCommerce Transaction
  3.4.2 Comparing Products
  3.4.3 Evaluating Frequent Use of the Same Product
  3.4.4 Evaluating Navigation and/or Information Architecture
  3.4.5 Increasing Awareness
  3.4.6 Problem Discovery
  3.4.7 Maximizing Usability for a Critical Product
  3.4.8 Creating an Overall Positive User Experience
  3.4.9 Evaluating the Impact of Subtle Changes
  3.4.10 Comparing Alternative Designs
3.5 User Research Methods and Tools
  3.5.1 Traditional (Moderated) Usability Tests
  3.5.2 Unmoderated Usability Tests
  3.5.3 Online Surveys
  3.5.4 Information Architecture Tools
  3.5.5 Click and Mouse Tools
3.6 Other Study Details
  3.6.1 Budgets and Timelines
  3.6.2 Participants
  3.6.3 Data Collection
  3.6.4 Data Cleanup
3.7 Summary

CHAPTER 4 Performance Metrics
4.1 Task Success
  4.1.1 Binary Success
  4.1.2 Levels of Success
  4.1.3 Issues in Measuring Success
4.2 Time-on-Task
  4.2.1 Importance of Measuring Time-on-Task
  4.2.2 How to Collect and Measure Time-on-Task
  4.2.3 Analyzing and Presenting Time-on-Task Data
  4.2.4 Issues to Consider When Using Time Data
4.3 Errors
  4.3.1 When to Measure Errors
  4.3.2 What Constitutes an Error?
  4.3.3 Collecting and Measuring Errors
  4.3.4 Analyzing and Presenting Errors
  4.3.5 Issues to Consider When Using Error Metrics
4.4 Other Efficiency Metrics
  4.4.1 Collecting and Measuring Efficiency
  4.4.2 Analyzing and Presenting Efficiency Data
  4.4.3 Efficiency as a Combination of Task Success and Time
4.5 Learnability
  4.5.1 Collecting and Measuring Learnability Data
  4.5.2 Analyzing and Presenting Learnability Data
  4.5.3 Issues to Consider When Measuring Learnability
4.6 Summary

CHAPTER 5 Self-Reported Metrics
5.1 Importance of Self-Reported Data
5.2 Rating Scales
  5.2.1 Likert Scales
  5.2.2 Semantic Differential Scales
  5.2.3 When to Collect Self-Reported Data
  5.2.4 How to Collect Ratings
  5.2.5 Biases in Collecting Self-Reported Data
  5.2.6 General Guidelines for Rating Scales
  5.2.7 Analyzing Rating-Scale Data
5.3 Post-Task Ratings
  5.3.1 Ease of Use
  5.3.2 After-Scenario Questionnaire
  5.3.3 Expectation Measure
  5.3.4 A Comparison of Post-Task Self-Reported Metrics
5.4 Overall User Experience Ratings
  5.4.1 System Usability Scale
  5.4.2 Computer System Usability Questionnaire
  5.4.3 Product Reaction Cards
  5.4.4 User Experience Questionnaire
  5.4.5 AttrakDiff
  5.4.6 Net Promoter Score
  5.4.7 Additional Tools for Measuring Self-Reported User Experience
  5.4.8 A Comparison of Selected Overall Self-Reported Metrics
5.5 Using SUS to Compare Designs
5.6 Online Services
  5.6.1 Website Analysis and Measurement Inventory
  5.6.2 American Customer Satisfaction Index
  5.6.3 OpinionLab
  5.6.4 Issues With Live-Site Surveys
5.7 Other Types of Self-Reported Metrics
  5.7.1 Assessing Attribute Priorities
  5.7.2 Assessing Specific Attributes
  5.7.3 Assessing Specific Elements
  5.7.4 Open-Ended Questions
  5.7.5 Awareness and Comprehension
  5.7.6 Awareness and Usefulness Gaps
5.8 Summary

CHAPTER 6 Issues-Based Metrics
6.1 What Is a Usability Issue?
  6.1.1 Real Issues Versus False Issues
6.2 How to Identify an Issue
  6.2.1 Using Think-Aloud From One-on-One Studies
  6.2.2 Using Verbatim Comments From Automated Studies
  6.2.3 Using Web Analytics
  6.2.4 Using Eye-Tracking
6.3 Severity Ratings
  6.3.1 Severity Ratings Based on the User Experience
  6.3.2 Severity Ratings Based on a Combination of Factors
  6.3.3 Using a Severity Rating System
  6.3.4 Some Caveats About Rating Systems
6.4 Analyzing and Reporting Metrics for Usability Issues
  6.4.1 Frequency of Unique Issues
  6.4.2 Frequency of Issues per Participant
  6.4.3 Percentage of Participants
  6.4.4 Issues by Category
  6.4.5 Issues by Task
6.5 Consistency in Identifying Usability Issues
6.6 Bias in Identifying Usability Issues
6.7 Number of Participants
  6.7.1 Five Participants Is Enough
  6.7.2 Five Participants Is Not Enough
  6.7.3 What to Do?
  6.7.4 Our Recommendation
6.8 Summary

CHAPTER 7 Eye Tracking
7.1 How Eye Tracking Works
7.2 Mobile Eye Tracking
  7.2.1 Measuring Glanceability
  7.2.2 Understanding Mobile Users in Context
  7.2.3 Mobile Eye Tracking Technology
  7.2.4 Glasses
  7.2.5 Device Stand
  7.2.6 Software-Based Eye Tracking
7.3 Visualizing Eye Tracking Data
7.4 Areas of Interest
7.5 Common Eye Tracking Metrics
  7.5.1 Dwell Time
  7.5.2 Number of Fixations
  7.5.3 Fixation Duration
  7.5.4 Sequence
  7.5.5 Time to First Fixation
  7.5.6 Revisits
  7.5.7 Hit Ratio
7.6 Tips for Analyzing Eye Tracking Data
7.7 Pupillary Response
7.8 Summary

CHAPTER 8 Measuring Emotion
8.1 Defining the Emotional User Experience
8.2 Methods to Measure Emotions
  8.2.1 Five Challenges in Measuring Emotions
8.3 Measuring Emotions Through Verbal Expressions
8.4 Self-Report
8.5 Facial Expression Analysis
8.6 Galvanic Skin Response
8.7 Case Study: The Value of Biometrics
8.8 Summary

CHAPTER 9 Combined and Comparative Metrics
9.1 Single UX Scores
  9.1.1 Combining Metrics Based on Target Goals
  9.1.2 Combining Metrics Based on Percentages
  9.1.3 Combining Metrics Based on Z-Scores
  9.1.4 Using SUM: Single Usability Metric
9.2 UX Scorecards and Frameworks
  9.2.1 UX Scorecards
  9.2.2 UX Frameworks
9.3 Comparison to Goals and Expert Performance
  9.3.1 Comparison to Goals
  9.3.2 Comparison to Expert Performance
9.4 Summary

CHAPTER 10 Special Topics
10.1 Web Analytics
  10.1.1 Basic Web Analytics
  10.1.2 Click-Through Rates
  10.1.3 Drop-off Rates
  10.1.4 A/B Tests
10.2 Card-Sorting Data
  10.2.1 Analyses of Open Card-Sort Data
  10.2.2 Analyses of Closed Card-Sort Data
10.3 Tree Testing
10.4 First Click Testing
10.5 Accessibility Metrics
10.6 Return-on-Investment Metrics
10.7 Summary

CHAPTER 11 Case Studies
11.1 Thinking Fast and Slow in the Netflix TV User Interface
  11.1.1 Background
  11.1.2 Methods
  11.1.3 Results
  11.1.4 Discussion
  11.1.5 Impact
11.2 Participate/Compete/Win (PCW) Framework: Evaluating Products and Features in the Marketplace
  11.2.1 Introduction
  11.2.2 Outlining Objective Criteria
  11.2.3 Feature Analysis
  11.2.4 “PCW” (Summative) Usability Testing
11.3 Enterprise UX Case Study: Uncovering the “UX Revenue Chain”
  11.3.1 Introduction
  11.3.2 Metric Identification and Selection
  11.3.3 Methods
  11.3.4 Analysis
  11.3.5 Results
  11.3.6 Conclusion
11.4 Competitive UX Benchmarking of Four Healthcare Websites
  11.4.1 Methodology
  11.4.2 Results
  11.4.3 Summary and Recommendations
  11.4.4 Acknowledgment and Contributions
  11.4.5 Biography
11.5 Closing the SNAP Gap
  11.5.1 Field Research
  11.5.2 Weekly Reviews
  11.5.3 Application Questions
  11.5.4 Surveys
  11.5.5 Testing Prototypes
  11.5.6 Success Metric
  11.5.7 Organizations
  11.5.8 Biography

CHAPTER 12 Ten Keys to Success
12.1 Make the Data Come Alive
12.2 Don’t Wait to Be Asked to Measure
12.3 Measurement Is Less Expensive Than You Think
12.4 Plan Early
12.5 Benchmark Your Products
12.6 Explore Your Data
12.7 Speak the Language of Business
12.8 Show Your Confidence
12.9 Don’t Misuse Metrics
12.10 Simplify Your Presentation

REFERENCES
INDEX
|
any_adam_object | 1 |
any_adam_object_boolean | 1 |
author | Albert, Bill Tullis, Tom 1952- |
author_GND | (DE-588)1029801681 (DE-588)1029801010 |
author_facet | Albert, Bill Tullis, Tom 1952- |
author_role | aut aut |
author_sort | Albert, Bill |
author_variant | b a ba t t tt |
building | Verbundindex |
bvnumber | BV048233503 |
callnumber-first | Q - Science |
callnumber-label | QA76 |
callnumber-raw | QA76.9.U83 |
callnumber-search | QA76.9.U83 |
callnumber-sort | QA 276.9 U83 |
callnumber-subject | QA - Mathematics |
classification_rvk | ST 278 ST 252 CW 4000 AP 18450 |
classification_tum | TEC 660f TEC 980f |
ctrlnum | (OCoLC)1331792718 (DE-599)KXP1795827602 |
dewey-full | 005.437 303.48/34 |
dewey-hundreds | 000 - Computer science, information, general works 300 - Social sciences |
dewey-ones | 005 - Computer programming, programs, data, security 303 - Social processes |
dewey-raw | 005.437 303.48/34 |
dewey-search | 005.437 303.48/34 |
dewey-sort | 15.437 |
dewey-tens | 000 - Computer science, information, general works 300 - Social sciences |
discipline | Allgemeines Technik Informatik Soziologie Psychologie Arbeitswissenschaften |
discipline_str_mv | Allgemeines Technik Informatik Soziologie Psychologie Arbeitswissenschaften |
edition | Third edition |
format | Book |
fullrecord | <?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>02888nam a22006858c 4500</leader><controlfield tag="001">BV048233503</controlfield><controlfield tag="003">DE-604</controlfield><controlfield tag="005">20231025 </controlfield><controlfield tag="007">t</controlfield><controlfield tag="008">220518s2023 ne a||| |||| 00||| eng d</controlfield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">9780128180808</subfield><subfield code="c">pbk. : £ 42.99, ca. EUR 54.50 (DE)</subfield><subfield code="9">978-0-12-818080-8</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">0128180803</subfield><subfield code="c">pbk.</subfield><subfield code="9">0-12-818080-3</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(OCoLC)1331792718</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)KXP1795827602</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-604</subfield><subfield code="b">ger</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1="0" ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="044" ind1=" " ind2=" "><subfield code="a">ne</subfield><subfield code="c">XA-NL</subfield></datafield><datafield tag="049" ind1=" " ind2=" "><subfield code="a">DE-860</subfield><subfield code="a">DE-898</subfield><subfield code="a">DE-1050</subfield><subfield code="a">DE-739</subfield><subfield code="a">DE-1102</subfield><subfield code="a">DE-355</subfield><subfield code="a">DE-473</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">QA76.9.U83</subfield></datafield><datafield tag="082" ind1="0" ind2=" "><subfield code="a">005.437</subfield></datafield><datafield tag="082" ind1="0" ind2=" "><subfield code="a">303.48/34</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">ST 278</subfield><subfield 
code="0">(DE-625)143644:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">ST 252</subfield><subfield code="0">(DE-625)143627:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">CW 4000</subfield><subfield code="0">(DE-625)19177:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">AP 18450</subfield><subfield code="0">(DE-625)7053:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">TEC 660f</subfield><subfield code="2">stub</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">TEC 980f</subfield><subfield code="2">stub</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Albert, Bill</subfield><subfield code="e">Verfasser</subfield><subfield code="0">(DE-588)1029801681</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Measuring the user experience</subfield><subfield code="b">collecting, analyzing, and presenting UX metrics</subfield><subfield code="c">William (Bill) Albert, Thomas S. 
(Tom) Tullis</subfield></datafield><datafield tag="250" ind1=" " ind2=" "><subfield code="a">Third edition</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">Amsterdam</subfield><subfield code="b">Morgan Kaufmann, Elsevier</subfield><subfield code="c">[2023]</subfield></datafield><datafield tag="264" ind1=" " ind2="4"><subfield code="c">© 2023</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">xxiv, 352 Seiten</subfield><subfield code="b">Illustrationen, Diagramme</subfield><subfield code="c">24 cm</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="b">sti</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">Previous edition: 2013</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">Includes bibliographical references and index</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">User interfaces (Computer systems) / Measurement</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">User interfaces (Computer systems) / Evaluation</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Technology assessment</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Benutzerfreundlichkeit</subfield><subfield code="0">(DE-588)4005541-3</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Mensch-Maschine-Kommunikation</subfield><subfield 
code="0">(DE-588)4125909-9</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Datenerhebung</subfield><subfield code="0">(DE-588)4155272-6</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Mensch-Maschine-Schnittstelle</subfield><subfield code="0">(DE-588)4720440-0</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Bewertung</subfield><subfield code="0">(DE-588)4006340-9</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="689" ind1="0" ind2="0"><subfield code="a">Mensch-Maschine-Schnittstelle</subfield><subfield code="0">(DE-588)4720440-0</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="1"><subfield code="a">Benutzerfreundlichkeit</subfield><subfield code="0">(DE-588)4005541-3</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="2"><subfield code="a">Bewertung</subfield><subfield code="0">(DE-588)4006340-9</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="3"><subfield code="a">Datenerhebung</subfield><subfield code="0">(DE-588)4155272-6</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2=" "><subfield code="5">DE-604</subfield></datafield><datafield tag="689" ind1="1" ind2="0"><subfield code="a">Mensch-Maschine-Kommunikation</subfield><subfield code="0">(DE-588)4125909-9</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="1" ind2="1"><subfield code="a">Benutzerfreundlichkeit</subfield><subfield code="0">(DE-588)4005541-3</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="1" ind2="2"><subfield 
code="a">Bewertung</subfield><subfield code="0">(DE-588)4006340-9</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="1" ind2="3"><subfield code="a">Datenerhebung</subfield><subfield code="0">(DE-588)4155272-6</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="1" ind2=" "><subfield code="5">DE-604</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Tullis, Tom</subfield><subfield code="d">1952-</subfield><subfield code="e">Verfasser</subfield><subfield code="0">(DE-588)1029801010</subfield><subfield code="4">aut</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="m">Digitalisierung UB Passau - ADAM Catalogue Enrichment</subfield><subfield code="q">application/pdf</subfield><subfield code="u">http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=033614169&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA</subfield><subfield code="3">Inhaltsverzeichnis</subfield></datafield><datafield tag="999" ind1=" " ind2=" "><subfield code="a">oai:aleph.bib-bvb.de:BVB01-033614169</subfield></datafield></record></collection> |
id | DE-604.BV048233503 |
illustrated | Illustrated |
index_date | 2024-07-03T19:51:47Z |
indexdate | 2024-07-10T09:32:39Z |
institution | BVB |
isbn | 9780128180808 0128180803 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-033614169 |
oclc_num | 1331792718 |
open_access_boolean | |
owner | DE-860 DE-898 DE-BY-UBR DE-1050 DE-739 DE-1102 DE-355 DE-BY-UBR DE-473 DE-BY-UBG |
owner_facet | DE-860 DE-898 DE-BY-UBR DE-1050 DE-739 DE-1102 DE-355 DE-BY-UBR DE-473 DE-BY-UBG |
physical | xxiv, 352 Seiten Illustrationen, Diagramme 24 cm |
publishDate | 2023 |
publishDateSearch | 2023 |
publishDateSort | 2023 |
publisher | Morgan Kaufmann, Elsevier |
record_format | marc |
spelling | Albert, Bill Verfasser (DE-588)1029801681 aut Measuring the user experience collecting, analyzing, and presenting UX metrics William (Bill) Albert, Thomas S. (Tom) Tullis Third edition Amsterdam Morgan Kaufmann, Elsevier [2023] © 2023 xxiv, 352 Seiten Illustrationen, Diagramme 24 cm txt rdacontent sti rdacontent n rdamedia nc rdacarrier Previous edition: 2013 Includes bibliographical references and index User interfaces (Computer systems) / Measurement User interfaces (Computer systems) / Evaluation Technology assessment Benutzerfreundlichkeit (DE-588)4005541-3 gnd rswk-swf Mensch-Maschine-Kommunikation (DE-588)4125909-9 gnd rswk-swf Datenerhebung (DE-588)4155272-6 gnd rswk-swf Mensch-Maschine-Schnittstelle (DE-588)4720440-0 gnd rswk-swf Bewertung (DE-588)4006340-9 gnd rswk-swf Mensch-Maschine-Schnittstelle (DE-588)4720440-0 s Benutzerfreundlichkeit (DE-588)4005541-3 s Bewertung (DE-588)4006340-9 s Datenerhebung (DE-588)4155272-6 s DE-604 Mensch-Maschine-Kommunikation (DE-588)4125909-9 s Tullis, Tom 1952- Verfasser (DE-588)1029801010 aut Digitalisierung UB Passau - ADAM Catalogue Enrichment application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=033614169&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA Inhaltsverzeichnis |
spellingShingle | Albert, Bill Tullis, Tom 1952- Measuring the user experience collecting, analyzing, and presenting UX metrics User interfaces (Computer systems) / Measurement User interfaces (Computer systems) / Evaluation Technology assessment Benutzerfreundlichkeit (DE-588)4005541-3 gnd Mensch-Maschine-Kommunikation (DE-588)4125909-9 gnd Datenerhebung (DE-588)4155272-6 gnd Mensch-Maschine-Schnittstelle (DE-588)4720440-0 gnd Bewertung (DE-588)4006340-9 gnd |
subject_GND | (DE-588)4005541-3 (DE-588)4125909-9 (DE-588)4155272-6 (DE-588)4720440-0 (DE-588)4006340-9 |
title | Measuring the user experience collecting, analyzing, and presenting UX metrics |
title_auth | Measuring the user experience collecting, analyzing, and presenting UX metrics |
title_exact_search | Measuring the user experience collecting, analyzing, and presenting UX metrics |
title_exact_search_txtP | Measuring the user experience collecting, analyzing, and presenting UX metrics |
title_full | Measuring the user experience collecting, analyzing, and presenting UX metrics William (Bill) Albert, Thomas S. (Tom) Tullis |
title_fullStr | Measuring the user experience collecting, analyzing, and presenting UX metrics William (Bill) Albert, Thomas S. (Tom) Tullis |
title_full_unstemmed | Measuring the user experience collecting, analyzing, and presenting UX metrics William (Bill) Albert, Thomas S. (Tom) Tullis |
title_short | Measuring the user experience |
title_sort | measuring the user experience collecting analyzing and presenting ux metrics |
title_sub | collecting, analyzing, and presenting UX metrics |
topic | User interfaces (Computer systems) / Measurement User interfaces (Computer systems) / Evaluation Technology assessment Benutzerfreundlichkeit (DE-588)4005541-3 gnd Mensch-Maschine-Kommunikation (DE-588)4125909-9 gnd Datenerhebung (DE-588)4155272-6 gnd Mensch-Maschine-Schnittstelle (DE-588)4720440-0 gnd Bewertung (DE-588)4006340-9 gnd |
topic_facet | User interfaces (Computer systems) / Measurement User interfaces (Computer systems) / Evaluation Technology assessment Benutzerfreundlichkeit Mensch-Maschine-Kommunikation Datenerhebung Mensch-Maschine-Schnittstelle Bewertung |
url | http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=033614169&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |
work_keys_str_mv | AT albertbill measuringtheuserexperiencecollectinganalyzingandpresentinguxmetrics AT tullistom measuringtheuserexperiencecollectinganalyzingandpresentinguxmetrics |