Measuring the user experience: collecting, analyzing, and presenting usability metrics
Saved in:
Main authors: | Tullis, Tom 1952-; Albert, Bill |
---|---|
Format: | Book |
Language: | English |
Published: | Amsterdam [u.a.]: Elsevier [u.a.], 2008 |
Series: | The Morgan Kaufmann series in interactive technologies |
Subjects: | Mensch-Maschine-Schnittstelle; Benutzerfreundlichkeit; Bewertung; Datenerhebung; Mensch-Maschine-Kommunikation |
Online access: | Table of contents |
Description: | XVII, 317 pages; illustrations, diagrams |
ISBN: | 9780123735584 |
Internal format
MARC
LEADER | 00000nam a2200000 c 4500 | ||
---|---|---|---|
001 | BV023308291 | ||
003 | DE-604 | ||
005 | 20130605 | ||
007 | t | ||
008 | 080521s2008 ne ad|| |||| 00||| eng d | ||
010 | |a 2007043050 | ||
020 | |a 9780123735584 |c alk. paper |9 978-0-12-373558-4 | ||
035 | |a (OCoLC)176861317 | ||
035 | |a (DE-599)BVBBV023308291 | ||
040 | |a DE-604 |b ger |e rakwb | ||
041 | 0 | |a eng | |
044 | |a ne |c NL | ||
049 | |a DE-355 |a DE-473 |a DE-824 |a DE-634 |a DE-945 |a DE-11 |a DE-2070s |a DE-83 |a DE-1049 |a DE-91 | ||
050 | 0 | |a QA76.9.U83 | |
082 | 0 | |a 303.48/34 | |
084 | |a AP 18450 |0 (DE-625)7053: |2 rvk | ||
084 | |a CW 4000 |0 (DE-625)19177: |2 rvk | ||
084 | |a ST 252 |0 (DE-625)143627: |2 rvk | ||
084 | |a ST 278 |0 (DE-625)143644: |2 rvk | ||
084 | |a TEC 980f |2 stub | ||
084 | |a TEC 660f |2 stub | ||
100 | 1 | |a Tullis, Tom |d 1952- |e Verfasser |0 (DE-588)1029801010 |4 aut | |
245 | 1 | 0 | |a Measuring the user experience |b collecting, analyzing, and presenting usability metrics |c Tom Tullis ; Bill Albert |
264 | 1 | |a Amsterdam [u.a.] |b Elsevier [u.a.] |c 2008 | |
300 | |a XVII, 317 S. |b Ill., graph. Darst. | ||
336 | |b txt |2 rdacontent | ||
337 | |b n |2 rdamedia | ||
338 | |b nc |2 rdacarrier | ||
490 | 0 | |a The Morgan Kaufmann series in interactive technologies | |
650 | 4 | |a User interfaces (Computer systems) | |
650 | 4 | |a User interfaces (Computer systems) |x Measurement | |
650 | 4 | |a Measurement | |
650 | 4 | |a Technology assessment | |
650 | 0 | 7 | |a Mensch-Maschine-Kommunikation |0 (DE-588)4125909-9 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Datenerhebung |0 (DE-588)4155272-6 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Bewertung |0 (DE-588)4006340-9 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Benutzerfreundlichkeit |0 (DE-588)4005541-3 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Mensch-Maschine-Schnittstelle |0 (DE-588)4720440-0 |2 gnd |9 rswk-swf |
689 | 0 | 0 | |a Mensch-Maschine-Schnittstelle |0 (DE-588)4720440-0 |D s |
689 | 0 | 1 | |a Benutzerfreundlichkeit |0 (DE-588)4005541-3 |D s |
689 | 0 | 2 | |a Bewertung |0 (DE-588)4006340-9 |D s |
689 | 0 | 3 | |a Datenerhebung |0 (DE-588)4155272-6 |D s |
689 | 0 | |5 DE-604 | |
689 | 1 | 0 | |a Mensch-Maschine-Kommunikation |0 (DE-588)4125909-9 |D s |
689 | 1 | 1 | |a Benutzerfreundlichkeit |0 (DE-588)4005541-3 |D s |
689 | 1 | 2 | |a Bewertung |0 (DE-588)4006340-9 |D s |
689 | 1 | 3 | |a Datenerhebung |0 (DE-588)4155272-6 |D s |
689 | 1 | |8 1\p |5 DE-604 | |
700 | 1 | |a Albert, Bill |e Verfasser |0 (DE-588)1029801681 |4 aut | |
856 | 4 | 2 | |m Digitalisierung UB Regensburg |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=016492615&sequence=000002&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis |
999 | |a oai:aleph.bib-bvb.de:BVB01-016492615 | ||
883 | 1 | |8 1\p |a cgwrk |d 20201028 |q DE-101 |u https://d-nb.info/provenance/plan#cgwrk |
Record in the search index
_version_ | 1804137640875786240 |
---|---|
adam_text | Contents

Preface ..... xv
Acknowledgments ..... xvii

CHAPTER 1 Introduction ..... 1
1.1 Organization of This Book ..... 2
1.2 What Is Usability? ..... 4
1.3 Why Does Usability Matter? ..... 5
1.4 What Are Usability Metrics? ..... 7
1.5 The Value of Usability Metrics ..... 8
1.6 Ten Common Myths about Usability Metrics ..... 10

CHAPTER 2 Background ..... 15
2.1 Designing a Usability Study ..... 15
2.1.1 Selecting Participants ..... 16
2.1.2 Sample Size ..... 17
2.1.3 Within-Subjects or Between-Subjects Study ..... 18
2.1.4 Counterbalancing ..... 19
2.1.5 Independent and Dependent Variables ..... 20
2.2 Types of Data ..... 20
2.2.1 Nominal Data ..... 20
2.2.2 Ordinal Data ..... 21
2.2.3 Interval Data ..... 22
2.2.4 Ratio Data ..... 23
2.3 Metrics and Data ..... 23
2.4 Descriptive Statistics ..... 24
2.4.1 Measures of Central Tendency ..... 25
2.4.2 Measures of Variability ..... 26
2.4.3 Confidence Intervals ..... 27
2.5 Comparing Means ..... 28
2.5.1 Independent Samples ..... 28
2.5.2 Paired Samples ..... 29
2.5.3 Comparing More Than Two Samples ..... 30
2.6 Relationships between Variables ..... 31
2.6.1 Correlations ..... 32
2.7 Nonparametric Tests ..... 33
2.7.1 The Chi-Square Test ..... 33
2.8 Presenting Your Data Graphically ..... 35
2.8.1 Column or Bar Graphs ..... 36
2.8.2 Line Graphs ..... 38
2.8.3 Scatterplots ..... 40
2.8.4 Pie Charts ..... 42
2.8.5 Stacked Bar Graphs ..... 42
2.9 Summary ..... 44

CHAPTER 3 Planning a Usability Study ..... 45
3.1 Study Goals ..... 45
3.1.1 Formative Usability ..... 45
3.1.2 Summative Usability ..... 46
3.2 User Goals ..... 47
3.2.1 Performance ..... 47
3.2.2 Satisfaction ..... 47
3.3 Choosing the Right Metrics: Ten Types of Usability Studies ..... 48
3.3.1 Completing a Transaction ..... 48
3.3.2 Comparing Products ..... 50
3.3.3 Evaluating Frequent Use of the Same Product ..... 50
3.3.4 Evaluating Navigation and/or Information Architecture ..... 51
3.3.5 Increasing Awareness ..... 52
3.3.6 Problem Discovery ..... 52
3.3.7 Maximizing Usability for a Critical Product ..... 53
3.3.8 Creating an Overall Positive User Experience ..... 54
3.3.9 Evaluating the Impact of Subtle Changes ..... 54
3.3.10 Comparing Alternative Designs ..... 55
3.4 Other Study Details ..... 55
3.4.1 Budgets and Timelines ..... 55
3.4.2 Evaluation Methods ..... 57
3.4.3 Participants ..... 58
3.4.4 Data Collection ..... 59
3.4.5 Data Cleanup ..... 60
3.5 Summary ..... 61

CHAPTER 4 Performance Metrics ..... 63
4.1 Task Success ..... 64
4.1.1 Collecting Any Type of Success Metric ..... 65
4.1.2 Binary Success ..... 66
4.1.3 Levels of Success ..... 69
4.1.4 Issues in Measuring Success ..... 73
4.2 Time-on-Task ..... 74
4.2.1 Importance of Measuring Time-on-Task ..... 74
4.2.2 How to Collect and Measure Time-on-Task ..... 74
4.2.3 Analyzing and Presenting Time-on-Task Data ..... 77
4.2.4 Issues to Consider When Using Time Data ..... 79
4.3 Errors ..... 81
4.3.1 When to Measure Errors ..... 81
4.3.2 What Constitutes an Error? ..... 82
4.3.3 Collecting and Measuring Errors ..... 83
4.3.4 Analyzing and Presenting Errors ..... 84
4.3.5 Issues to Consider When Using Error Metrics ..... 86
4.4 Efficiency ..... 87
4.4.1 Collecting and Measuring Efficiency ..... 87
4.4.2 Analyzing and Presenting Efficiency Data ..... 88
4.4.3 Efficiency as a Combination of Task Success and Time ..... 90
4.5 Learnability ..... 92
4.5.1 Collecting and Measuring Learnability Data ..... 93
4.5.2 Analyzing and Presenting Learnability Data ..... 94
4.5.3 Issues to Consider When Measuring Learnability ..... 96
4.6 Summary ..... 97

CHAPTER 5 Issues-Based Metrics ..... 99
5.1 Identifying Usability Issues ..... 99
5.2 What Is a Usability Issue? ..... 100
5.2.1 Real Issues versus False Issues ..... 101
5.3 How to Identify an Issue ..... 102
5.3.1 In-Person Studies ..... 103
5.3.2 Automated Studies ..... 103
5.3.3 When Issues Begin and End ..... 103
5.3.4 Granularity ..... 104
5.3.5 Multiple Observers ..... 104
5.4 Severity Ratings ..... 105
5.4.1 Severity Ratings Based on the User Experience ..... 105
5.4.2 Severity Ratings Based on a Combination of Factors ..... 106
5.4.3 Using a Severity Rating System ..... 107
5.4.4 Some Caveats about Severity Ratings ..... 108
5.5 Analyzing and Reporting Metrics for Usability Issues ..... 108
5.5.1 Frequency of Unique Issues ..... 109
5.5.2 Frequency of Issues per Participant ..... 111
5.5.3 Frequency of Participants ..... 111
5.5.4 Issues by Category ..... 112
5.5.5 Issues by Task ..... 113
5.5.6 Reporting Positive Issues ..... 114
5.6 Consistency in Identifying Usability Issues ..... 114
5.7 Bias in Identifying Usability Issues ..... 116
5.8 Number of Participants ..... 117
5.8.1 Five Participants Is Enough ..... 118
5.8.2 Five Participants Is Not Enough ..... 119
5.8.3 Our Recommendation ..... 119
5.9 Summary ..... 121

CHAPTER 6 Self-Reported Metrics ..... 123
6.1 Importance of Self-Reported Data ..... 123
6.2 Collecting Self-Reported Data ..... 124
6.2.1 Likert Scales ..... 124
6.2.2 Semantic Differential Scales ..... 125
6.2.3 When to Collect Self-Reported Data ..... 125
6.2.4 How to Collect Self-Reported Data ..... 126
6.2.5 Biases in Collecting Self-Reported Data ..... 126
6.2.6 General Guidelines for Rating Scales ..... 127
6.2.7 Analyzing Self-Reported Data ..... 127
6.3 Post-Task Ratings ..... 128
6.3.1 Ease of Use ..... 128
6.3.2 After-Scenario Questionnaire ..... 129
6.3.3 Expectation Measure ..... 129
6.3.4 Usability Magnitude Estimation ..... 132
6.3.5 Comparison of Post-Task Self-Reported Metrics ..... 133
6.4 Post-Session Ratings ..... 135
6.4.1 Aggregating Individual Task Ratings ..... 137
6.4.2 System Usability Scale ..... 138
6.4.3 Computer System Usability Questionnaire ..... 139
6.4.4 Questionnaire for User Interface Satisfaction ..... 139
6.4.5 Usefulness, Satisfaction, and Ease of Use Questionnaire ..... 142
6.4.6 Product Reaction Cards ..... 142
6.4.7 Comparison of Post-Session Self-Reported Metrics ..... 144
6.5 Using SUS to Compare Designs ..... 147
6.5.1 Comparison of "Senior-Friendly" Websites ..... 147
6.5.2 Comparison of Windows ME and Windows XP ..... 147
6.5.3 Comparison of Paper Ballots ..... 148
6.6 Online Services ..... 150
6.6.1 Website Analysis and Measurement Inventory ..... 150
6.6.2 American Customer Satisfaction Index ..... 151
6.6.3 OpinionLab ..... 153
6.6.4 Issues with Live-Site Surveys ..... 157
6.7 Other Types of Self-Reported Metrics ..... 158
6.7.1 Assessing Specific Attributes ..... 158
6.7.2 Assessing Specific Elements ..... 161
6.7.3 Open-Ended Questions ..... 162
6.7.4 Awareness and Comprehension ..... 163
6.7.5 Awareness and Usefulness Gaps ..... 165
6.8 Summary ..... 166

CHAPTER 7 Behavioral and Physiological Metrics ..... 167
7.1 Observing and Coding Overt Behaviors ..... 167
7.1.1 Verbal Behaviors ..... 168
7.1.2 Nonverbal Behaviors ..... 169
7.2 Behaviors Requiring Equipment to Capture ..... 171
7.2.1 Facial Expressions ..... 171
7.2.2 Eye-Tracking ..... 175
7.2.3 Pupillary Response ..... 180
7.2.4 Skin Conductance and Heart Rate ..... 183
7.2.5 Other Measures ..... 186
7.3 Summary ..... 188

CHAPTER 8 Combined and Comparative Metrics ..... 191
8.1 Single Usability Scores ..... 191
8.1.1 Combining Metrics Based on Target Goals ..... 192
8.1.2 Combining Metrics Based on Percentages ..... 193
8.1.3 Combining Metrics Based on z-Scores ..... 198
8.1.4 Using SUM: Single Usability Metric ..... 202
8.2 Usability Scorecards ..... 203
8.3 Comparison to Goals and Expert Performance ..... 206
8.3.1 Comparison to Goals ..... 206
8.3.2 Comparison to Expert Performance ..... 208
8.4 Summary ..... 210

CHAPTER 9 Special Topics ..... 211
9.1 Live Website Data ..... 211
9.1.1 Server Logs ..... 211
9.1.2 Click-Through Rates ..... 213
9.1.3 Drop-Off Rates ..... 215
9.1.4 A/B Studies ..... 216
9.2 Card-Sorting Data ..... 217
9.2.1 Analyses of Open Card-Sort Data ..... 218
9.2.2 Analyses of Closed Card-Sort Data ..... 225
9.3 Accessibility Data ..... 227
9.4 Return-on-Investment Data ..... 231
9.5 Six Sigma ..... 234
9.6 Summary ..... 236

CHAPTER 10 Case Studies ..... 237
10.1 Redesigning a Website Cheaply and Quickly (Hoa Loranger) ..... 237
10.1.1 Phase 1: Testing Competitor Websites ..... 237
10.1.2 Phase 2: Testing Three Different Design Concepts ..... 239
10.1.3 Phase 3: Testing a Single Design ..... 243
10.1.4 Conclusion ..... 244
10.1.5 Biography ..... 244
10.2 Usability Evaluation of a Speech Recognition IVR (James R. Lewis) ..... 244
10.2.1 Method ..... 244
10.2.2 Results: Task-Level Measurements ..... 245
10.2.3 PSSUQ ..... 246
10.2.4 Participant Comments ..... 246
10.2.5 Usability Problems ..... 247
10.2.6 Adequacy of Sample Size ..... 247
10.2.7 Recommendations Based on Participant Behaviors and Comments ..... 250
10.2.8 Discussion ..... 251
10.2.9 Biography ..... 251
10.2.10 References ..... 252
10.3 Redesign of the CDC.gov Website (Robert Bailey, Cari Wolfson, and Janice Nail) ..... 252
10.3.1 Usability Testing Levels ..... 253
10.3.2 Baseline Test ..... 253
10.3.3 Task Scenarios ..... 254
10.3.4 Qualitative Findings ..... 255
10.3.5 Wireframing and FirstClick Testing ..... 256
10.3.6 Final Prototype Testing (Prelaunch Test) ..... 258
10.3.7 Conclusions ..... 261
10.3.8 Biographies ..... 262
10.3.9 References ..... 262
10.4 Usability Benchmarking: Mobile Music and Video (Scott Weiss and Chris Whitby) ..... 263
10.4.1 Project Goals and Methods ..... 263
10.4.2 Qualitative and Quantitative Data ..... 263
10.4.3 Research Domain ..... 263
10.4.4 Comparative Analysis ..... 264
10.4.5 Study Operations: Number of Respondents ..... 264
10.4.6 Respondent Recruiting ..... 265
10.4.7 Data Collection ..... 265
10.4.8 Time to Complete ..... 266
10.4.9 Success or Failure ..... 266
10.4.10 Number of Attempts ..... 266
10.4.11 Perception Metrics ..... 266
10.4.12 Qualitative Findings ..... 267
10.4.13 Quantitative Findings ..... 267
10.4.14 Summary Findings and SUM Metrics ..... 267
10.4.15 Data Manipulation and Visualization ..... 267
10.4.16 Discussion ..... 269
10.4.17 Benchmark Changes and Future Work ..... 270
10.4.18 Biographies ..... 270
10.4.19 References ..... 270
10.5 Measuring the Effects of Drug Label Design and Similarity on Pharmacists' Performance (Agnieszka Bojko) ..... 271
10.5.1 Participants ..... 272
10.5.2 Apparatus ..... 272
10.5.3 Stimuli ..... 272
10.5.4 Procedure ..... 275
10.5.5 Analysis ..... 276
10.5.6 Results and Discussion ..... 277
10.5.7 Biography ..... 279
10.5.8 References ..... 279
10.6 Making Metrics Matter (Todd Zazelenchuk) ..... 280
10.6.1 OneStart: Indiana University's Enterprise Portal Project ..... 280
10.6.2 Designing and Conducting the Study ..... 281
10.6.3 Analyzing and Interpreting the Results ..... 282
10.6.4 Sharing the Findings and Recommendations ..... 283
10.6.5 Reflecting on the Impact ..... 286
10.6.6 Conclusion ..... 287
10.6.7 Acknowledgment ..... 287
10.6.8 Biography ..... 287
10.6.9 References ..... 287

CHAPTER 11 Moving Forward ..... 289
11.1 Sell Usability and the Power of Metrics ..... 289
11.2 Start Small and Work Your Way Up ..... 290
11.3 Make Sure You Have the Time and Money ..... 291
11.4 Plan Early and Often ..... 292
11.5 Benchmark Your Products ..... 293
11.6 Explore Your Data ..... 294
11.7 Speak the Language of Business ..... 295
11.8 Show Your Confidence ..... 295
11.9 Don't Misuse Metrics ..... 296
11.10 Simplify Your Presentation ..... 297

References ..... 299
Index ..... 307 |
any_adam_object | 1 |
any_adam_object_boolean | 1 |
author | Tullis, Tom 1952- Albert, Bill |
author_GND | (DE-588)1029801010 (DE-588)1029801681 |
author_facet | Tullis, Tom 1952- Albert, Bill |
author_role | aut aut |
author_sort | Tullis, Tom 1952- |
author_variant | t t tt b a ba |
building | Verbundindex |
bvnumber | BV023308291 |
callnumber-first | Q - Science |
callnumber-label | QA76 |
callnumber-raw | QA76.9.U83 |
callnumber-search | QA76.9.U83 |
callnumber-sort | QA 276.9 U83 |
callnumber-subject | QA - Mathematics |
classification_rvk | AP 18450 CW 4000 ST 252 ST 278 |
classification_tum | TEC 980f TEC 660f |
ctrlnum | (OCoLC)176861317 (DE-599)BVBBV023308291 |
dewey-full | 303.48/34 |
dewey-hundreds | 300 - Social sciences |
dewey-ones | 303 - Social processes |
dewey-raw | 303.48/34 |
dewey-search | 303.48/34 |
dewey-sort | 3303.48 234 |
dewey-tens | 300 - Social sciences |
discipline | Allgemeines Technik Informatik Soziologie Psychologie Arbeitswissenschaften |
discipline_str_mv | Allgemeines Technik Informatik Soziologie Psychologie Arbeitswissenschaften |
format | Book |
id | DE-604.BV023308291 |
illustrated | Illustrated |
index_date | 2024-07-02T20:49:20Z |
indexdate | 2024-07-09T21:15:31Z |
institution | BVB |
isbn | 9780123735584 |
language | English |
lccn | 2007043050 |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-016492615 |
oclc_num | 176861317 |
open_access_boolean | |
owner | DE-355 DE-BY-UBR DE-473 DE-BY-UBG DE-824 DE-634 DE-945 DE-11 DE-2070s DE-83 DE-1049 DE-91 DE-BY-TUM |
owner_facet | DE-355 DE-BY-UBR DE-473 DE-BY-UBG DE-824 DE-634 DE-945 DE-11 DE-2070s DE-83 DE-1049 DE-91 DE-BY-TUM |
physical | XVII, 317 S. Ill., graph. Darst. |
publishDate | 2008 |
publishDateSearch | 2008 |
publishDateSort | 2008 |
publisher | Elsevier [u.a.] |
record_format | marc |
series2 | The Morgan Kaufmann series in interactive technologies |
spelling | Tullis, Tom 1952- Verfasser (DE-588)1029801010 aut Measuring the user experience collecting, analyzing, and presenting usability metrics Tom Tullis ; Bill Albert Amsterdam [u.a.] Elsevier [u.a.] 2008 XVII, 317 S. Ill., graph. Darst. txt rdacontent n rdamedia nc rdacarrier The Morgan Kaufmann series in interactive technologies User interfaces (Computer systems) User interfaces (Computer systems) Measurement Measurement Technology assessment Mensch-Maschine-Kommunikation (DE-588)4125909-9 gnd rswk-swf Datenerhebung (DE-588)4155272-6 gnd rswk-swf Bewertung (DE-588)4006340-9 gnd rswk-swf Benutzerfreundlichkeit (DE-588)4005541-3 gnd rswk-swf Mensch-Maschine-Schnittstelle (DE-588)4720440-0 gnd rswk-swf Mensch-Maschine-Schnittstelle (DE-588)4720440-0 s Benutzerfreundlichkeit (DE-588)4005541-3 s Bewertung (DE-588)4006340-9 s Datenerhebung (DE-588)4155272-6 s DE-604 Mensch-Maschine-Kommunikation (DE-588)4125909-9 s 1\p DE-604 Albert, Bill Verfasser (DE-588)1029801681 aut Digitalisierung UB Regensburg application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=016492615&sequence=000002&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA Inhaltsverzeichnis 1\p cgwrk 20201028 DE-101 https://d-nb.info/provenance/plan#cgwrk |
spellingShingle | Tullis, Tom 1952- Albert, Bill Measuring the user experience collecting, analyzing, and presenting usability metrics User interfaces (Computer systems) User interfaces (Computer systems) Measurement Measurement Technology assessment Mensch-Maschine-Kommunikation (DE-588)4125909-9 gnd Datenerhebung (DE-588)4155272-6 gnd Bewertung (DE-588)4006340-9 gnd Benutzerfreundlichkeit (DE-588)4005541-3 gnd Mensch-Maschine-Schnittstelle (DE-588)4720440-0 gnd |
subject_GND | (DE-588)4125909-9 (DE-588)4155272-6 (DE-588)4006340-9 (DE-588)4005541-3 (DE-588)4720440-0 |
title | Measuring the user experience collecting, analyzing, and presenting usability metrics |
title_auth | Measuring the user experience collecting, analyzing, and presenting usability metrics |
title_exact_search | Measuring the user experience collecting, analyzing, and presenting usability metrics |
title_exact_search_txtP | Measuring the user experience collecting, analyzing, and presenting usability metrics |
title_full | Measuring the user experience collecting, analyzing, and presenting usability metrics Tom Tullis ; Bill Albert |
title_fullStr | Measuring the user experience collecting, analyzing, and presenting usability metrics Tom Tullis ; Bill Albert |
title_full_unstemmed | Measuring the user experience collecting, analyzing, and presenting usability metrics Tom Tullis ; Bill Albert |
title_short | Measuring the user experience |
title_sort | measuring the user experience collecting analyzing and presenting usability metrics |
title_sub | collecting, analyzing, and presenting usability metrics |
topic | User interfaces (Computer systems) User interfaces (Computer systems) Measurement Measurement Technology assessment Mensch-Maschine-Kommunikation (DE-588)4125909-9 gnd Datenerhebung (DE-588)4155272-6 gnd Bewertung (DE-588)4006340-9 gnd Benutzerfreundlichkeit (DE-588)4005541-3 gnd Mensch-Maschine-Schnittstelle (DE-588)4720440-0 gnd |
topic_facet | User interfaces (Computer systems) User interfaces (Computer systems) Measurement Measurement Technology assessment Mensch-Maschine-Kommunikation Datenerhebung Bewertung Benutzerfreundlichkeit Mensch-Maschine-Schnittstelle |
url | http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=016492615&sequence=000002&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |
work_keys_str_mv | AT tullistom measuringtheuserexperiencecollectinganalyzingandpresentingusabilitymetrics AT albertbill measuringtheuserexperiencecollectinganalyzingandpresentingusabilitymetrics |