Benchmarking the user experience: a practical guide to benchmarking websites, software, and product experiences
Saved in:
Main author: | Sauro, Jeff |
---|---|
Format: | Book |
Language: | English |
Published: | Denver, CO : MeasuringU Press, [2018] |
Subjects: | Benchmark ; Statistik ; Benutzerfreundlichkeit |
Online access: | Table of contents ; Blurb |
Description: | xiii, 320 pages ; illustrations, diagrams ; 23 cm |
ISBN: | 9780692149096 ; 0692149090 |
Internal format
MARC
LEADER | 00000nam a2200000 c 4500 | ||
---|---|---|---|
001 | BV045218828 | ||
003 | DE-604 | ||
005 | 20190204 | ||
007 | t | ||
008 | 181004s2018 a||| b||| 00||| eng d | ||
020 | |a 9780692149096 |9 978-0-692-14909-6 | ||
020 | |a 0692149090 |9 0-692-14909-0 | ||
035 | |a (OCoLC)1066052464 | ||
035 | |a (DE-599)BVBBV045218828 | ||
040 | |a DE-604 |b ger |e rda | ||
041 | 0 | |a eng | |
049 | |a DE-355 |a DE-473 |a DE-898 |a DE-20 | ||
084 | |a ST 280 |0 (DE-625)143645: |2 rvk | ||
084 | |a ST 278 |0 (DE-625)143644: |2 rvk | ||
100 | 1 | |a Sauro, Jeff |e Verfasser |0 (DE-588)1022650173 |4 aut | |
245 | 1 | 0 | |a Benchmarking the user experience |b a practical guide to benchmarking websites, software, and product experiences |c Jeff Sauro |
264 | 1 | |a Denver, CO |b MeasuringU Press |c [2018] | |
300 | |a xiii, 320 Seiten |b Illustrationen, Diagramme |c 23 cm | ||
336 | |b txt |2 rdacontent | ||
337 | |b n |2 rdamedia | ||
338 | |b nc |2 rdacarrier | ||
650 | 0 | 7 | |a Benchmark |0 (DE-588)4144457-7 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Statistik |0 (DE-588)4056995-0 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Benutzerfreundlichkeit |0 (DE-588)4005541-3 |2 gnd |9 rswk-swf |
653 | 0 | |a User interfaces (Computer systems) / Testing / Statistical methods | |
689 | 0 | 0 | |a Benutzerfreundlichkeit |0 (DE-588)4005541-3 |D s |
689 | 0 | 1 | |a Statistik |0 (DE-588)4056995-0 |D s |
689 | 0 | 2 | |a Benchmark |0 (DE-588)4144457-7 |D s |
689 | 0 | |5 DE-604 | |
856 | 4 | 2 | |m Digitalisierung UB Regensburg - ADAM Catalogue Enrichment |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=030607476&sequence=000003&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis |
856 | 4 | 2 | |m Digitalisierung UB Regensburg - ADAM Catalogue Enrichment |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=030607476&sequence=000004&line_number=0002&func_code=DB_RECORDS&service_type=MEDIA |3 Klappentext |
999 | |a oai:aleph.bib-bvb.de:BVB01-030607476 |
Record in the search index
_version_ | 1804178936104484864 |
---|---|
adam_text |
DEDICATION................................................. XI
ACKNOWLEDGEMENTS............................................XI
INTRODUCTION............................................. XII
CHAPTER 1: An Introduction to User Experience Benchmarking... 1
WHAT IS USER EXPERIENCE?.....................................1
WHAT IS BENCHMARKING AND WHY DO IT?..........................2
WHAT CAN YOU BENCHMARK?......................................4
TWO TYPES OF BENCHMARKING STUDIES............................4
Pros and Cons of Retrospective vs. Task-Based Benchmarks...5
DIFFERENT MODES OF UX BENCHMARKING...........................7
Benchmark Metrics ......................................... 10
Study-based metrics.................................... 10
Task-level metrics....................................... 10
CHAPTER SUMMARY AND TAKEAWAYS................................11
CHAPTER 2: Planning and Defining Your Study.................12
DEFINING STUDY GOALS....................................... 12
PICKING THE METHOD ........................................ 13
STAND-ALONE OR COMPARATIVE STUDY .......................... 14
Within vs. Between Subjects vs. Mixed .....................15
Sample size and power.................................. 16
Carryover effects......................................16
Impact on attitudes.................................... 17
Comparative judgment................................... 17
Preference............................................. 18
Participant study duration............................. 18
Summary of within vs. between subjects—
and an alternative solution—mixed...................... 18
CHAPTER SUMMARY AND TAKEAWAYS...............................20
CHAPTER 3: Working Through Study Details.............................21
DEFINE THE INTERFACE(S).............................................21
Platforms......................................................... 21
Interface Access...................................................22
Platform and Interface Access Examples.............................25
IDENTIFY THE TASKS..................................................25
Determining the Tasks..............................................26
Finding Top Tasks..................................................27
List the tasks..................................................27
Have representative participants select the tasks...............28
Randomizing and Limiting...........................................28
Separate the top tasks from the trivial tasks...................29
DEFINE PARTICIPANTS ................................................30
Defining Subgroups..................................................31
CHAPTER SUMMARY AND TAKEAWAYS.......................................32
CHAPTER 4: Planning and Logistics...................................34
COMMERCIAL UNMODERATED RESEARCH
PLATFORMS...........................................................34
Do-It-Yourself or Survey-Only Based Platforms..................... 37
Remote Moderated Software ........................................37
Anticipating Complications........................................38
Authentication problems........................................38
Privacy concerns...............................................39
Making purchases...............................................39
Benchmarking with credit cards and personal data ..............40
DO-IT-YOURSELF VS. OUTSOURCING YOUR BENCHMARK......................44
HOW LONG DO BENCHMARK STUDIES TAKE?................................46
Typical Phases and Durations......................................46
Planning and study design......................................46
Participant recruitment and data collection....................47
Analysis.......................................................47
Presentation and road shows....................................47
BUDGETING FOR YOUR BENCHMARK.......................................49
Unmoderated Technology Costs.........................................50
Moderated Technology Costs...........................................51
Participant and Recruiting Costs.....................................51
Unmoderated recruitment costs .....................................52
Moderated recruitment costs........................................53
Professional Services Costs..........................................53
CHAPTER SUMMARY AND TAKEAWAYS..........................................54
CHAPTER 5: Benchmark Metrics...........................................56
STUDY METRICS..........................................................56
The System Usability Scale (SUS).....................................56
Standardized User Experience Percentile
Rank-Questionnaire (SUPR-Q) .........................................58
Why I recommend the SUPR-Q over the SUS for website benchmarks.....61
Net Promoter Score (NPS).............................................63
Satisfaction.........................................................64
General satisfaction...............................................65
Attribute and product satisfaction.................................66
Brand Attitudes......................................................67
Brand Lift ..........................................................68
Usability Metric for User Experience (UMUX)-LITE.....................68
Standardized User Experience Percentile
Rank-Questionnaire for mobile apps (SUPR-Qm) ........................69
TASK METRICS...........................................................71
Task Completion Rate.................................................71
Task Time............................................................71
Task Ease............................................................72
Single Usability Metric (SUM)........................................73
ADDITIONAL TASK LEVEL METRICS..........................................73
Confidence ..........................................................73
Disasters............................................................74
Open-Ended........................................................... 74
Task Specific Metrics................................................74
EXAMPLES OF APPLYING THE METRICS.......................................75
CHAPTER SUMMARY AND TAKEAWAYS..........................................75
CHAPTER 6: Sample Size Planning........................................77
Sample Size for Stand-Alone Studies...............................78
Sample Size for Comparison Studies (Within and Between)...........81
Sample Size for Comparing Against an Industry Benchmark...........85
CHAPTER SUMMARY AND TAKEAWAY........................................90
ANSWERS.............................................................91
CHAPTER 7: Writing the Study Script.................................93
CONTENTS OF THE STUDY SCRIPT........................................93
UNMODERATED STUDY SCRIPTS...........................................94
Study Information, Goals and Research Questions...................94
Welcome Message and Orientation...................................94
Technical details..............................................96
PRE-STUDY QUESTIONS.................................................97
Demographic and qualification questions...........................97
Screening questions............................................98
MODERATED BENCHMARK STUDY SCRIPTS...................................104
Study Information, Goals, and Research Questions..................105
Welcome Message and Orientation................................... 105
Pre-Study Questions...............................................107
Task Scenarios....................................................107
POST-TASK METRICS...................................................108
Post-Study Questions..............................................109
Open-Ended Closing Questions......................................110
WRITING TASK SCENARIOS..............................................110
Determining Task Success in an Unmoderated Study ...............113
Question participants..........................................114
Note when participants reach the URLs that indicate success....115
Observe session recordings.....................................115
View screenshots...............................................115
Post-Task Questions .............................................116
Preference (if applicable).....................................116
Detecting cheaters again: Pick this response...................117
Brand attitude (post)..........................................118
Final comments.................................................118
CHAPTER SUMMARY AND TAKEAWAY.......................................118
CHAPTER 8: Preparing for Data Collection..........................119
UNMODERATED STUDY DATA COLLECTION:
PROGRAMMING THE PLATFORM...........................................119
Should You Make Responses Mandatory?............................ 131
Cons of mandatory responses....................................131
Pros of mandatory responses....................................132
Additional thoughts for compensated respondents...............133
MODERATED STUDY DATA COLLECTION .................................. 133
Moderated In-Person Benchmark Studies...........................134
Task administration............................................134
Metric recording...............................................135
Session recording............................................ 137
Moderated Remote Benchmark Studies..............................138
Task administration............................................138
Metric recording...............................................138
Session recording.............................................138
Moderated Remote for Mobile Device Benchmark Studies............139
CHAPTER SUMMARY AND TAKEAWAYS..................................... 142
CHAPTER 9: Participant Recruitment and Data Collection............ 143
RECRUITMENT FOR UNMODERATED STUDIES............................... 144
Use Online Panel Sources ...................................... 144
Access Existing Customers and Prospects.........................149
Direct email..................................................149
Some guidance when paying participants.........................151
RECRUITMENT FOR MODERATED STUDIES................................. 155
Use Professional Recruiting Firms...............................155
Develop Advertising.............................................155
Create Your Own Panel........................................... 155
Use Customer Lists .............................................156
Manage No Show Rates ...........................................156
DATA COLLECTION: PRETESTING AND SOFT LAUNCHING
TO PREPARE FOR THE FULL LAUNCH.................................... 157
Prepare for Moderated Benchmark Data Collection................. 157
Prepare for Unmoderated Benchmark Data Collection...............159
What happens when an unmoderated study doesn’t fill?..........161
How long is the typical online study?.........................163
ASSISTING IN MODERATED BENCHMARKS ...........................164
EFFECTS ON TASK TIME WHEN PARTICIPANTS
THINK ALOUD.................................................. 167
CHAPTER SUMMARY AND TAKEAWAYS................................168
CHAPTER 10: Analyzing and Displaying Your Data...............169
CLEANING DATA IN UNMODERATED
BENCHMARK STUDIES............................................169
Most Obvious and Easiest Detection Methods................170
Less Obvious and More Difficult Detection Methods.........171
Applying These Methods to Clean Your Data.................173
DEALING WITH MISSING OR INCOMPLETE DATA...................... 174
ANALYZING DATA AND PREPARING
FOR PRESENTATION............................................. 177
Platforms for Analyzing Data..............................177
Data Analysis.............................................178
Provide the participant data............................178
Analyze task data.......................................180
Summarize the data to a Single Usability Metric.........202
Additional task metrics.................................208
Study Level Metrics.......................................210
SUPR-Q..................................................210
Net Promoter Score (NPS)................................213
System Usability Scale (SUS) ...........................217
UMUX-LITE...............................................220
CHAPTER SUMMARY AND TAKEAWAYS..............................221
CHAPTER 11: Advanced Analysis............................. 222
ANALYZING PREFERENCE DATA..................................222
UNDERSTANDING BRAND LIFT AND DRAG..........................223
EXAMINING VARIABLES—CROSS TABBING..........................225
Collapsing Variables.................................... 226
CONTROLLING FOR PRIOR EXPERIENCE...........................228
About the Weighted T-Test................................230
Using the Weighted T-Test................................230
UNDERSTANDING THE “WHY” BEHIND THE METRICS.................233
Verbatim Analysis........................................233
Log Files and Click Streams..............................234
Visualizations of Click Behavior.....................................235
Heat maps.........................................................235
Click maps........................................................236
Tree maps.........................................................238
Linear tree maps..................................................239
Videos and Session Recordings........................................239
CONDUCTING STATISTICAL TESTS AND INTERPRETING P-VALUES................244
Between-Subjects Comparisons.........................................245
The two-sample t-test.............................................245
The N-1 two-proportion test.......................................246
Within-Subjects Comparisons..........................................247
Paired t-test.....................................................247
McNemar Exact test................................................248
What Does Statistically Significant Mean?............................249
Additional Statistical Analyses......................................252
Regression analysis...............................................252
ANOVA.............................................................254
Logistic regression...............................................255
How Confident Do You Need to Be?.....................................255
CHAPTER SUMMARY AND TAKEAWAY...........................................257
CHAPTER 12: Reporting Your Results ....................................259
ANATOMY OF A BENCHMARK REPORT..........................................259
Title Slide and Name.................................................259
Executive Summary....................................................260
Study and Methodology Overview.......................................261
Participant Summary..................................................262
Tasks...............................................................264
Task Metrics.........................................................265
Study Metrics........................................................268
Appendix demographics ...............................................269
UX Issues and Insights...............................................270
Verbatim comments.................................................272
RECOMMENDATIONS FOR BENCHMARK REPORTS
AND PRESENTATIONS......................................................273
How to Display Statistical Significance..............................275
Confidence interval error bars....................................275
Shaded graphs.........................................276
Asterisks.............................................277
Notes.................................................278
Connecting lines and hybrids..........................278
CHAPTER SUMMARY AND TAKEAWAYS............................279
CHAPTER 13: The End of the Book—Almost...................280
GOOD RESOURCES...........................................280
GOOD READS...............................................281
APPENDIX A:
A Checklist for Planning a UX Benchmark Study............284
APPENDIX B:
10 Best Practices for Competitive UX Benchmarking........289
APPENDIX C:
5 Common Mistakes Made in UX Benchmark Studies...........294
APPENDIX D:
Example Project Booking Form for Hotel Comparison Study .297
APPENDIX E:
Example Study Script for an Unmoderated Study ...........299
INDEX....................................................307
Benchmarking the User Experience is a practical book about how
to measure the user experience of websites, software, mobile
apps, products, or just about anything people use. This book is
for UX researchers, designers, product owners, or anyone that
has a vested interest in improving the experience of websites
and products.
In this book, Jeff uses practical examples to illustrate what
benchmarking is and how to use this technique to measure users’
experiences. Actual studies conducted by MeasuringU are provided
to give you constructive information that will help you formulate
and understand your benchmarking studies, ultimately helping you
understand your audience and customers.
Jeff Sauro, PhD, is the founding principal of MeasuringU, a user
experience research firm based in Denver, CO. For over twenty
years he’s been conducting UX research, including benchmarking
studies for clients such as Google, Facebook, eBay, Walmart,
Autodesk, Lenovo and PayPal. Prior to founding his firm, he
worked for Oracle, PeopleSoft, Intuit and General Electric.
Jeff has published over 25 peer-reviewed research articles and
five other books, including Customer Analytics for Dummies and
Quantifying the User Experience, 2nd ed. Jeff received his Ph.D.
in Research Methods and Statistics from the University of Denver,
Master's in Learning, Design and Technology from Stanford
University, and B.S. in Information Management Technology and
B.S. in Television, Radio and Film from Syracuse University. He
lives with his wife and three children in Denver, CO and can be
found online at measuringu.com and on Twitter @MeasuringU.
|
any_adam_object | 1 |
author | Sauro, Jeff |
author_GND | (DE-588)1022650173 |
author_facet | Sauro, Jeff |
author_role | aut |
author_sort | Sauro, Jeff |
author_variant | j s js |
building | Verbundindex |
bvnumber | BV045218828 |
classification_rvk | ST 280 ST 278 |
ctrlnum | (OCoLC)1066052464 (DE-599)BVBBV045218828 |
discipline | Informatik |
format | Book |
fullrecord | <?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>02003nam a2200409 c 4500</leader><controlfield tag="001">BV045218828</controlfield><controlfield tag="003">DE-604</controlfield><controlfield tag="005">20190204 </controlfield><controlfield tag="007">t</controlfield><controlfield tag="008">181004s2018 a||| b||| 00||| eng d</controlfield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">9780692149096</subfield><subfield code="9">978-0-692-14909-6</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">0692149090</subfield><subfield code="9">0-692-14909-0</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(OCoLC)1066052464</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)BVBBV045218828</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-604</subfield><subfield code="b">ger</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1="0" ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="049" ind1=" " ind2=" "><subfield code="a">DE-355</subfield><subfield code="a">DE-473</subfield><subfield code="a">DE-898</subfield><subfield code="a">DE-20</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">ST 280</subfield><subfield code="0">(DE-625)143645:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">ST 278</subfield><subfield code="0">(DE-625)143644:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Sauro, Jeff</subfield><subfield code="e">Verfasser</subfield><subfield code="0">(DE-588)1022650173</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Benchmarking the user experience</subfield><subfield code="b">a practical guide to benchmarking websites, software, and product experiences</subfield><subfield code="c">Jeff Sauro</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">Denver, CO</subfield><subfield code="b">MeasuringU Press</subfield><subfield code="c">[2018]</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">xiii, 320 Seiten</subfield><subfield code="b">Illustrationen, Diagramme</subfield><subfield code="c">23 cm</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Benchmark</subfield><subfield code="0">(DE-588)4144457-7</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Statistik</subfield><subfield code="0">(DE-588)4056995-0</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Benutzerfreundlichkeit</subfield><subfield code="0">(DE-588)4005541-3</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">User interfaces 
(Computer systems) / Testing / Statistical methods</subfield></datafield><datafield tag="689" ind1="0" ind2="0"><subfield code="a">Benutzerfreundlichkeit</subfield><subfield code="0">(DE-588)4005541-3</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="1"><subfield code="a">Statistik</subfield><subfield code="0">(DE-588)4056995-0</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="2"><subfield code="a">Benchmark</subfield><subfield code="0">(DE-588)4144457-7</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2=" "><subfield code="5">DE-604</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="m">Digitalisierung UB Regensburg - ADAM Catalogue Enrichment</subfield><subfield code="q">application/pdf</subfield><subfield code="u">http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=030607476&sequence=000003&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA</subfield><subfield code="3">Inhaltsverzeichnis</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="m">Digitalisierung UB Regensburg - ADAM Catalogue Enrichment</subfield><subfield code="q">application/pdf</subfield><subfield code="u">http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=030607476&sequence=000004&line_number=0002&func_code=DB_RECORDS&service_type=MEDIA</subfield><subfield code="3">Klappentext</subfield></datafield><datafield tag="999" ind1=" " ind2=" "><subfield code="a">oai:aleph.bib-bvb.de:BVB01-030607476</subfield></datafield></record></collection> |
id | DE-604.BV045218828 |
illustrated | Illustrated |
indexdate | 2024-07-10T08:11:53Z |
institution | BVB |
isbn | 9780692149096 0692149090 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-030607476 |
oclc_num | 1066052464 |
open_access_boolean | |
owner | DE-355 DE-BY-UBR DE-473 DE-BY-UBG DE-898 DE-BY-UBR DE-20 |
owner_facet | DE-355 DE-BY-UBR DE-473 DE-BY-UBG DE-898 DE-BY-UBR DE-20 |
physical | xiii, 320 Seiten Illustrationen, Diagramme 23 cm |
publishDate | 2018 |
publishDateSearch | 2018 |
publishDateSort | 2018 |
publisher | MeasuringU Press |
record_format | marc |
spelling | Sauro, Jeff Verfasser (DE-588)1022650173 aut Benchmarking the user experience a practical guide to benchmarking websites, software, and product experiences Jeff Sauro Denver, CO MeasuringU Press [2018] xiii, 320 Seiten Illustrationen, Diagramme 23 cm txt rdacontent n rdamedia nc rdacarrier Benchmark (DE-588)4144457-7 gnd rswk-swf Statistik (DE-588)4056995-0 gnd rswk-swf Benutzerfreundlichkeit (DE-588)4005541-3 gnd rswk-swf User interfaces (Computer systems) / Testing / Statistical methods Benutzerfreundlichkeit (DE-588)4005541-3 s Statistik (DE-588)4056995-0 s Benchmark (DE-588)4144457-7 s DE-604 Digitalisierung UB Regensburg - ADAM Catalogue Enrichment application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=030607476&sequence=000003&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA Inhaltsverzeichnis Digitalisierung UB Regensburg - ADAM Catalogue Enrichment application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=030607476&sequence=000004&line_number=0002&func_code=DB_RECORDS&service_type=MEDIA Klappentext |
spellingShingle | Sauro, Jeff Benchmarking the user experience a practical guide to benchmarking websites, software, and product experiences Benchmark (DE-588)4144457-7 gnd Statistik (DE-588)4056995-0 gnd Benutzerfreundlichkeit (DE-588)4005541-3 gnd |
subject_GND | (DE-588)4144457-7 (DE-588)4056995-0 (DE-588)4005541-3 |
title | Benchmarking the user experience a practical guide to benchmarking websites, software, and product experiences |
title_auth | Benchmarking the user experience a practical guide to benchmarking websites, software, and product experiences |
title_exact_search | Benchmarking the user experience a practical guide to benchmarking websites, software, and product experiences |
title_full | Benchmarking the user experience a practical guide to benchmarking websites, software, and product experiences Jeff Sauro |
title_fullStr | Benchmarking the user experience a practical guide to benchmarking websites, software, and product experiences Jeff Sauro |
title_full_unstemmed | Benchmarking the user experience a practical guide to benchmarking websites, software, and product experiences Jeff Sauro |
title_short | Benchmarking the user experience |
title_sort | benchmarking the user experience a practical guide to benchmarking websites software and product experiences |
title_sub | a practical guide to benchmarking websites, software, and product experiences |
topic | Benchmark (DE-588)4144457-7 gnd Statistik (DE-588)4056995-0 gnd Benutzerfreundlichkeit (DE-588)4005541-3 gnd |
topic_facet | Benchmark Statistik Benutzerfreundlichkeit |
url | http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=030607476&sequence=000003&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=030607476&sequence=000004&line_number=0002&func_code=DB_RECORDS&service_type=MEDIA |
work_keys_str_mv | AT saurojeff benchmarkingtheuserexperienceapracticalguidetobenchmarkingwebsitessoftwareandproductexperiences |