Beyond the worst-case analysis of algorithms:
There are no silver bullets in algorithm design, and no single algorithmic idea is powerful and flexible enough to solve every computational problem. Nor are there silver bullets in algorithm analysis, as the most enlightening method for analyzing an algorithm often depends on the problem and the application ...
Saved in:
Other authors: | Roughgarden, Tim 1975- (Editor) |
---|---|
Format: | Book |
Language: | English |
Published: | New York : Cambridge University Press, 2020 |
Summary: | There are no silver bullets in algorithm design, and no single algorithmic idea is powerful and flexible enough to solve every computational problem. Nor are there silver bullets in algorithm analysis, as the most enlightening method for analyzing an algorithm often depends on the problem and the application. However, typical algorithms courses rely almost entirely on a single analysis framework, that of worst-case analysis, wherein an algorithm is assessed by its worst performance on any input of a given size. The purpose of this book is to popularize several alternatives to worst-case analysis and their most notable algorithmic applications, from clustering to linear programming to neural network training. Forty leading researchers have contributed introductions to different facets of this field, emphasizing the most important models and results, many of which can be taught in lectures to beginning graduate students in theoretical computer science and machine learning. |
Description: | This record also covers later, unchanged reprints |
Description: | xvii, 686 pages : diagrams |
ISBN: | 9781108494311 |
Internal format
MARC
LEADER 00000nam a2200000 c 4500
001 BV047080563
003 DE-604
005 20241028
007 t|
008 210108s2020 xx |||| |||| 00||| eng d
020 |a 9781108494311 |9 978-1-108-49431-1
035 |a (OCoLC)1241738294
035 |a (DE-599)HBZHT020537421
040 |a DE-604 |b ger |e rda
041 0 |a eng
049 |a DE-83 |a DE-898
084 |a ST 134 |0 (DE-625)143590: |2 rvk
084 |a 68W40 |2 msc
100 1 |a Roughgarden, Tim |d 1975- |0 (DE-588)1117167275 |4 edt
245 1 0 |a Beyond the worst-case analysis of algorithms |c Tim Roughgarden
264 1 |a New York |b Cambridge university press |c 2020
264 4 |c © 2020
300 |a xvii, 686 Seiten |b Diagramme
336 |b txt |2 rdacontent
337 |b n |2 rdamedia
338 |b nc |2 rdacarrier
500 |a Hier auch später erschienene, unveränderte Nachdrucke
505 8 |a Foreword Dan Spielman; Preface; 1. Introduction Tim Roughgarden; Part I. Refinements of Worst-Case Analysis: 2. Parameterized algorithms Fedor Fomin, Daniel Lokshtanov, Saket Saurabh, and Meirav Zehavi; 3. From adaptive analysis to instance optimality Jérémy Barbay; 4. Resource augmentation Tim Roughgarden; Part II. Deterministic Models of Data: 5. Perturbation resilience Konstantin Makarychev and Yury Makarychev; 6. Approximation stability and proxy objectives Avrim Blum; 7. Sparse recovery Eric Price; Part III. Semi-Random Models: 8. Distributional analysis Tim Roughgarden; 9. Introduction to semi-random models Uriel Feige; 10. Semi-random stochastic block models Ankur Moitra; 11. Random-order models Anupam Gupta and Sahil Singla; 12. Self-improving algorithms C. Seshadhri; Part IV. Smoothed Analysis: 13. Smoothed analysis of local search Bodo Manthey; 14. Smoothed analysis of the simplex method Daniel Dadush and Sophie Huiberts; 15.-
505 8 |a Smoothed analysis of Pareto curves in multiobjective optimization Heiko Röglin; Part V. Applications in Machine Learning and Statistics: 16. Noise in classification Maria-Florina Balcan and Nika Haghtalab; 17. Robust high-dimensional statistics Ilias Diakonikolas and Daniel Kane; 18. Nearest-neighbor classification and search Sanjoy Dasgupta and Samory Kpotufe; 19. Efficient tensor decomposition Aravindan Vijayaraghavan; 20. Topic models and nonnegative matrix factorization Rong Ge and Ankur Moitra; 21. Why do local methods solve nonconvex problems? Tengyu Ma; 22. Generalization in overparameterized models Moritz Hardt; 23. Instance-optimal distribution testing and learning Gregory Valiant and Paul Valiant; Part VI. Further Applications: 24. Beyond competitive analysis Anna R. Karlin and Elias Koutsoupias; 25. On the unreasonable effectiveness of satisfiability solvers Vijay Ganesh and Moshe Vardi; 26.-
505 8 |a When simple hash functions suffice Kai-Min Chung, Michael Mitzenmacher and Salil Vadhan; 27. Prior-independent auctions Inbal Talgam-Cohen; 28. Distribution-free models of social networks Tim Roughgarden and C. Seshadhri; 29. Data-driven algorithm design Maria-Florina Balcan; 30. Algorithms with predictions Michael Mitzenmacher and Sergei Vassilvitskii.
520 3 |a There are no silver bullets in algorithm design, and no single algorithmic idea is powerful and flexible enough to solve every computational problem. Nor are there silver bullets in algorithm analysis, as the most enlightening method for analyzing an algorithm often depends on the problem and the application. However, typical algorithms courses rely almost entirely on a single analysis framework, that of worst-case analysis, wherein an algorithm is assessed by its worst performance on any input of a given size. The purpose of this book is to popularize several alternatives to worst-case analysis and their most notable algorithmic applications, from clustering to linear programming to neural network training. Forty leading researchers have contributed introductions to different facets of this field, emphasizing the most important models and results, many of which can be taught in lectures to beginning graduate students in theoretical computer science and machine learning.
776 0 8 |i Erscheint auch als |n Online-Ausgabe |z 978-1-108-63743-5
943 1 |a oai:aleph.bib-bvb.de:BVB01-032487387
Record in the search index
_version_ | 1817704692130512896 |
---|---|
author2 | Roughgarden, Tim 1975- |
author2_role | edt |
author2_variant | t r tr |
author_GND | (DE-588)1117167275 |
author_facet | Roughgarden, Tim 1975- |
building | Verbundindex |
bvnumber | BV047080563 |
classification_rvk | ST 134 |
contents | Foreword Dan Spielman; Preface; 1. Introduction Tim Roughgarden; Part I. Refinements of Worst-Case Analysis: 2. Parameterized algorithms Fedor Fomin, Daniel Lokshtanov, Saket Saurabh, and Meirav Zehavi; 3. From adaptive analysis to instance optimality Jérémy Barbay; 4. Resource augmentation Tim Roughgarden; Part II. Deterministic Models of Data: 5. Perturbation resilience Konstantin Makarychev and Yury Makarychev; 6. Approximation stability and proxy objectives Avrim Blum; 7. Sparse recovery Eric Price; Part III. Semi-Random Models: 8. Distributional analysis Tim Roughgarden; 9. Introduction to semi-random models Uriel Feige; 10. Semi-random stochastic block models Ankur Moitra; 11. Random-order models Anupam Gupta and Sahil Singla; 12. Self-improving algorithms C. Seshadhri; Part IV. Smoothed Analysis: 13. Smoothed analysis of local search Bodo Manthey; 14. Smoothed analysis of the simplex method Daniel Dadush and Sophie Huiberts; 15. Smoothed analysis of Pareto curves in multiobjective optimization Heiko Röglin; Part V. Applications in Machine Learning and Statistics: 16. Noise in classification Maria-Florina Balcan and Nika Haghtalab; 17. Robust high-dimensional statistics Ilias Diakonikolas and Daniel Kane; 18. Nearest-neighbor classification and search Sanjoy Dasgupta and Samory Kpotufe; 19. Efficient tensor decomposition Aravindan Vijayaraghavan; 20. Topic models and nonnegative matrix factorization Rong Ge and Ankur Moitra; 21. Why do local methods solve nonconvex problems? Tengyu Ma; 22. Generalization in overparameterized models Moritz Hardt; 23. Instance-optimal distribution testing and learning Gregory Valiant and Paul Valiant; Part VI. Further Applications: 24. Beyond competitive analysis Anna R. Karlin and Elias Koutsoupias; 25. On the unreasonable effectiveness of satisfiability solvers Vijay Ganesh and Moshe Vardi; 26. When simple hash functions suffice Kai-Min Chung, Michael Mitzenmacher and Salil Vadhan; 27. Prior-independent auctions Inbal Talgam-Cohen; 28. Distribution-free models of social networks Tim Roughgarden and C. Seshadhri; 29. Data-driven algorithm design Maria-Florina Balcan; 30. Algorithms with predictions Michael Mitzenmacher and Sergei Vassilvitskii. |
ctrlnum | (OCoLC)1241738294 (DE-599)HBZHT020537421 |
discipline | Informatik |
format | Book |
id | DE-604.BV047080563 |
illustrated | Not Illustrated |
index_date | 2024-07-03T16:16:21Z |
indexdate | 2024-12-06T15:17:58Z |
institution | BVB |
isbn | 9781108494311 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-032487387 |
oclc_num | 1241738294 |
owner | DE-83 DE-898 DE-BY-UBR |
owner_facet | DE-83 DE-898 DE-BY-UBR |
physical | xvii, 686 Seiten Diagramme |
publishDate | 2020 |
publishDateSearch | 2020 |
publishDateSort | 2020 |
publisher | Cambridge university press |
record_format | marc |
title | Beyond the worst-case analysis of algorithms |
title_auth | Beyond the worst-case analysis of algorithms |
title_exact_search | Beyond the worst-case analysis of algorithms |
title_exact_search_txtP | Beyond the worst-case analysis of algorithms |
title_full | Beyond the worst-case analysis of algorithms Tim Roughgarden |
title_fullStr | Beyond the worst-case analysis of algorithms Tim Roughgarden |
title_full_unstemmed | Beyond the worst-case analysis of algorithms Tim Roughgarden |
title_short | Beyond the worst-case analysis of algorithms |
title_sort | beyond the worst case analysis of algorithms |
work_keys_str_mv | AT roughgardentim beyondtheworstcaseanalysisofalgorithms |