Mathematical pictures at a data science exhibition
"In the past few decades, heuristic methods adopted by big tech companies have complemented existing scientific disciplines to form the new field of Data Science. This text provides deep and comprehensive coverage of the mathematical theory supporting the field. Composed of 27 lecture-length ch...
Saved in:
Main author: | Foucart, Simon 1977- |
---|---|
Format: | Book |
Language: | English |
Published: | Cambridge ; New York, NY : Cambridge University Press, 2022 |
Edition: | First published |
Subjects: | Data Science ; Angewandte Mathematik |
Online access: | Table of contents |
Summary: | "In the past few decades, heuristic methods adopted by big tech companies have complemented existing scientific disciplines to form the new field of Data Science. This text provides deep and comprehensive coverage of the mathematical theory supporting the field. Composed of 27 lecture-length chapters with exercises, it embarks the readers on an engaging itinerary through key subjects in data science, including machine learning, optimal recovery, compressive sensing (also known as compressed sensing), optimization, and neural networks. While standard material is covered, the book also includes distinctive presentations of topics such as reproducing kernel Hilbert spaces, spectral clustering, optimal recovery, compressive sensing, group testing, and applications of semidefinite programming. Students and data scientists with less mathematical background will appreciate the appendices that supply more details on some of the abstract concepts"-- |
Description: | 2204 |
Description: | XX, 318 pages, illustrations, diagrams |
ISBN: | 9781316518885 1316518884 9781009001854 100900185X |
Internal format
MARC
LEADER | 00000nam a22000008c 4500 | ||
---|---|---|---|
001 | BV048298430 | ||
003 | DE-604 | ||
005 | 20220725 | ||
007 | t | ||
008 | 220624s2022 a||| b||| 00||| eng d | ||
020 | |a 9781316518885 |9 978-1-316-51888-5 | ||
020 | |a 1316518884 |9 1-316-51888-4 | ||
020 | |a 9781009001854 |9 978-1-00-900185-4 | ||
020 | |a 100900185X |9 1-00-900185-X | ||
035 | |a (OCoLC)1334022033 | ||
035 | |a (DE-599)BVBBV048298430 | ||
040 | |a DE-604 |b ger |e rda | ||
041 | 0 | |a eng | |
049 | |a DE-739 | ||
084 | |a SK 990 |0 (DE-625)143278: |2 rvk | ||
100 | 1 | |a Foucart, Simon |d 1977- |e Verfasser |0 (DE-588)1041275412 |4 aut | |
245 | 1 | 0 | |a Mathematical pictures at a data science exhibition |c Simon Foucart |
250 | |a First published | ||
264 | 1 | |a Cambridge ; New York, NY |b Cambridge University Press |c 2022 | |
300 | |a XX, 318 Seiten |b Illustrationen, Diagramme | ||
336 | |b txt |2 rdacontent | ||
337 | |b n |2 rdamedia | ||
338 | |b nc |2 rdacarrier | ||
500 | |a 2204 | ||
520 | 3 | |a "In the past few decades, heuristic methods adopted by big tech companies have complemented existing scientific disciplines to form the new field of Data Science. This text provides deep and comprehensive coverage of the mathematical theory supporting the field. Composed of 27 lecture-length chapters with exercises, it embarks the readers on an engaging itinerary through key subjects in data science, including machine learning, optimal recovery, compressive sensing (also known as compressed sensing), optimization, and neural networks. While standard material is covered, the book also includes distinctive presentations of topics such as reproducing kernel Hilbert spaces, spectral clustering, optimal recovery, compressive sensing, group testing, and applications of semidefinite programming. Students and data scientists with less mathematical background will appreciate the appendices that supply more details on some of the abstract concepts"-- | |
650 | 0 | 7 | |a Data Science |0 (DE-588)1140936166 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Angewandte Mathematik |0 (DE-588)4142443-8 |2 gnd |9 rswk-swf |
653 | 0 | |a Big data / Mathematics | |
653 | 0 | |a Information science / Mathematics | |
653 | 0 | |a Computer science / Mathematics | |
653 | 0 | |a Données volumineuses / Mathématiques | |
653 | 0 | |a Sciences de l'information / Mathématiques | |
653 | 0 | |a Informatique / Mathématiques | |
653 | 0 | |a COMPUTERS / General | |
653 | 0 | |a Computer science / Mathematics | |
689 | 0 | 0 | |a Angewandte Mathematik |0 (DE-588)4142443-8 |D s |
689 | 0 | 1 | |a Data Science |0 (DE-588)1140936166 |D s |
689 | 0 | |5 DE-604 | |
776 | 0 | 8 | |i ebook version |z 9781009003933 |
856 | 4 | 2 | |m Digitalisierung UB Passau - ADAM Catalogue Enrichment |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=033678255&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis |
999 | |a oai:aleph.bib-bvb.de:BVB01-033678255 |
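The MARC 21 record above is meant to be machine-readable. As a minimal sketch of how it could be consumed programmatically, assuming the record has been exported as MARCXML to a local file named record.xml (a hypothetical filename) and that the third-party pymarc library is available, the title, edition, ISBNs, and table-of-contents link could be pulled out like this:

```python
# Minimal sketch: reading this bibliographic record from a local MARCXML export.
# "record.xml" is an assumed filename; pymarc is a third-party library
# (pip install pymarc), not part of the catalogue itself.
from pymarc import parse_xml_to_array

record = parse_xml_to_array("record.xml")[0]   # first (and only) record in the file

# Field 245: title statement; field 250: edition statement
print("Title:  ", record["245"]["a"])          # Mathematical pictures at a data science exhibition
print("Edition:", record["250"]["a"])          # First published

# Field 020 is repeatable: collect every ISBN from subfield $a
isbns = [field["a"] for field in record.get_fields("020")]
print("ISBNs:  ", isbns)

# Field 856 $u holds the link to the digitized table of contents
for field in record.get_fields("856"):
    print("TOC PDF:", field["u"])
```

The field tags used here (020, 245, 250, 856) appear verbatim in the record above, so the output can be checked against it by eye.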
Record in the search index
_version_ | 1804184137870868480 |
---|---|
adam_text | Contents
Preface
Notation
PART ONE: MACHINE LEARNING
Executive Summary
1 Rudiments of Statistical Learning Theory: 1.1 True and Empirical Risks; 1.2 PAC-Learnability; 1.3 Validation; Exercises
2 Vapnik-Chervonenkis Dimension: 2.1 Definitions; 2.2 Examples; 2.3 Sauer Lemma; Exercises
3 Learnability for Binary Classification: 3.1 Uniform Convergence Property; 3.2 Finite VC-Dimension Implies PAC-Learnability; 3.3 No-Free-Lunch Theorem; Exercises
4 Support Vector Machines: 4.1 Linear Separability; 4.2 Hard and Soft SVM; 4.3 Kernel Trick; Exercises
5 Reproducing Kernel Hilbert Spaces: 5.1 Abstract Definition; 5.2 Moore-Aronszajn Theorem; 5.3 Mercer Theorem; Exercises
6 Regression and Regularization: 6.1 Empirical Risk Minimization; 6.2 Regularization; 6.3 Classification via Regression; Exercises
7 Clustering: 7.1 Single-Linkage Clustering; 7.2 Center-Based Clustering; 7.3 Spectral Clustering; Exercises
8 Dimension Reduction: 8.1 Principal Component Analysis; 8.2 Johnson-Lindenstrauss Lemma; 8.3 Locally Linear Embedding; Exercises
PART TWO: OPTIMAL RECOVERY
Executive Summary
9 Foundational Results of Optimal Recovery: 9.1 Models, Errors, and Optimality; 9.2 Linearity of Optimal Recovery Maps; 9.3 An Extremal Property of Splines; Exercises
10 Approximability Models: 10.1 The Model Set; 10.2 Optimality in a Hilbert Setting; 10.3 Optimality for Linear Functionals; Exercises
11 Ideal Selection of Observation Schemes: 11.1 Hilbert Setting; 11.2 Integration of Lipschitz Functions; 11.3 Adaptivity Does Not Help Much; Exercises
12 Curse of Dimensionality: 12.1 Notions of Tractability; 12.2 Integration of Trigonometric Polynomials; 12.3 Integration in Weighted Sobolev Spaces; Exercises
13 Quasi-Monte Carlo Integration: 13.1 Variation and Discrepancy; 13.2 Koksma-Hlawka Inequality; 13.3 Low-Discrepancy Sets; Exercises
PART THREE: COMPRESSIVE SENSING
Executive Summary
14 Sparse Recovery from Linear Observations: 14.1 ℓ0-Minimization; 14.2 ℓ1-Minimization; 14.3 Restricted Isometry Property; Exercises
15 The Complexity of Sparse Recovery: 15.1 Limitations Imposed by Stability and Robustness; 15.2 Gelfand Width of the ℓ1-Ball; 15.3 Irrelevance of ℓ2-Stability; Exercises
16 Low-Rank Recovery from Linear Observations: 16.1 Nuclear Norm Minimization; 16.2 Rank-Restricted Isometry Property; 16.3 Semidefinite Programming Formulation; Exercises
17 Sparse Recovery from One-Bit Observations: 17.1 Estimating the Direction via Hard Thresholding; 17.2 Estimating the Direction via Linear Programming; 17.3 Estimating Both the Direction and the Magnitude; Exercises
18 Group Testing: 18.1 Properties of the Test Matrix; 18.2 Satisfying the Separability Condition; 18.3 Recovery via a Linear Feasibility Program; Exercises
PART FOUR: OPTIMIZATION
Executive Summary
19 Basic Convex Optimization: 19.1 Gradient Descent for Unconstrained Convex Programs; 19.2 Rates of Convergence for Steepest Descent; 19.3 Stochastic Gradient Descent; Exercises
20 Snippets of Linear Programming: 20.1 Maximizers of a Convex Function; 20.2 The Simplex Algorithm; 20.3 Illustrative Linear Programs; Exercises
21 Duality Theory and Practice: 21.1 Duality in Linear Programming; 21.2 Examples of Robust Optimization; 21.3 Duality in Conic Programming; Exercises
22 Semidefinite Programming in Action: 22.1 Schur Complement; 22.2 Sum-of-Squares Technique; 22.3 Method of Moments; Exercises
23 Instances of Nonconvex Optimization: 23.1 Quadratically Constrained Quadratic Programs; 23.2 Dynamic Programming; 23.3 Projected Gradient Descent; Exercises
PART FIVE: NEURAL NETWORKS
Executive Summary
24 First Encounter with ReLU Networks: 24.1 Some Terminology; 24.2 Shallow ReLU Networks and CPwL Functions; 24.3 Deep ReLU Networks and CPwL Functions; Exercises
25 Expressiveness of Shallow Networks: 25.1 Activation Functions and Universal Approximation; 25.2 Approximation Rate with ReLU: Upper Bound; 25.3 Approximation Rate with ReLU: Lower Bound; Exercises
26 Various Advantages of Depth: 26.1 Omnipotent Activation Functions; 26.2 Compact Supports; 26.3 Approximation Power; Exercises
27 Tidbits on Neural Network Training: 27.1 Backpropagation; 27.2 Overparametrized Empirical-Risk Landscapes; 27.3 Convolutional Neural Networks; Exercises
APPENDICES
Appendix A High-Dimensional Geometry: A.1 Volumes; A.2 Covering and Packing Numbers; A.3 Projected Cross-Polytope; Exercises
Appendix B Probability Theory: B.1 Tails and Moment Generating Functions; B.2 Concentration Inequalities; B.3 Restricted Isometry Properties; Exercises
Appendix C Functional Analysis: C.1 Completeness; C.2 Convexity; C.3 Extreme Points; Exercises
Appendix D Matrix Analysis: D.1 Eigenvalues of Self-Adjoint Matrices; D.2 Singular Values; D.3 Matrix Norms; Exercises
Appendix E Approximation Theory: E.1 Classic Uniform Approximation Theorems; E.2 Riesz-Fejér and Carathéodory-Toeplitz Theorems; E.3 Kolmogorov Superposition Theorem; Exercises
References
Index |
any_adam_object | 1 |
any_adam_object_boolean | 1 |
author | Foucart, Simon 1977- |
author_GND | (DE-588)1041275412 |
author_facet | Foucart, Simon 1977- |
author_role | aut |
author_sort | Foucart, Simon 1977- |
author_variant | s f sf |
building | Verbundindex |
bvnumber | BV048298430 |
classification_rvk | SK 990 |
ctrlnum | (OCoLC)1334022033 (DE-599)BVBBV048298430 |
discipline | Mathematik |
discipline_str_mv | Mathematik |
edition | First published |
format | Book |
id | DE-604.BV048298430 |
illustrated | Illustrated |
index_date | 2024-07-03T20:06:00Z |
indexdate | 2024-07-10T09:34:34Z |
institution | BVB |
isbn | 9781316518885 1316518884 9781009001854 100900185X |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-033678255 |
oclc_num | 1334022033 |
open_access_boolean | |
owner | DE-739 |
owner_facet | DE-739 |
physical | XX, 318 Seiten Illustrationen, Diagramme |
publishDate | 2022 |
publishDateSearch | 2022 |
publishDateSort | 2022 |
publisher | Cambridge University Press |
record_format | marc |
spelling | Foucart, Simon 1977- Verfasser (DE-588)1041275412 aut Mathematical pictures at a data science exhibition Simon Foucart First published Cambridge ; New York, NY Cambridge University Press 2022 XX, 318 Seiten Illustrationen, Diagramme txt rdacontent n rdamedia nc rdacarrier 2204 "In the past few decades, heuristic methods adopted by big tech companies have complemented existing scientific disciplines to form the new field of Data Science. This text provides deep and comprehensive coverage of the mathematical theory supporting the field. Composed of 27 lecture-length chapters with exercises, it embarks the readers on an engaging itinerary through key subjects in data science, including machine learning, optimal recovery, compressive sensing (also known as compressed sensing), optimization, and neural networks. While standard material is covered, the book also includes distinctive presentations of topics such as reproducing kernel Hilbert spaces, spectral clustering, optimal recovery, compressive sensing, group testing, and applications of semidefinite programming. Students and data scientists with less mathematical background will appreciate the appendices that supply more details on some of the abstract concepts"-- Data Science (DE-588)1140936166 gnd rswk-swf Angewandte Mathematik (DE-588)4142443-8 gnd rswk-swf Big data / Mathematics Information science / Mathematics Computer science / Mathematics Données volumineuses / Mathématiques Sciences de l'information / Mathématiques Informatique / Mathématiques COMPUTERS / General Angewandte Mathematik (DE-588)4142443-8 s Data Science (DE-588)1140936166 s DE-604 ebook version 9781009003933 Digitalisierung UB Passau - ADAM Catalogue Enrichment application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=033678255&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA Inhaltsverzeichnis |
spellingShingle | Foucart, Simon 1977- Mathematical pictures at a data science exhibition Data Science (DE-588)1140936166 gnd Angewandte Mathematik (DE-588)4142443-8 gnd |
subject_GND | (DE-588)1140936166 (DE-588)4142443-8 |
title | Mathematical pictures at a data science exhibition |
title_auth | Mathematical pictures at a data science exhibition |
title_exact_search | Mathematical pictures at a data science exhibition |
title_exact_search_txtP | Mathematical pictures at a data science exhibition |
title_full | Mathematical pictures at a data science exhibition Simon Foucart |
title_fullStr | Mathematical pictures at a data science exhibition Simon Foucart |
title_full_unstemmed | Mathematical pictures at a data science exhibition Simon Foucart |
title_short | Mathematical pictures at a data science exhibition |
title_sort | mathematical pictures at a data science exhibition |
topic | Data Science (DE-588)1140936166 gnd Angewandte Mathematik (DE-588)4142443-8 gnd |
topic_facet | Data Science Angewandte Mathematik |
url | http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=033678255&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |
work_keys_str_mv | AT foucartsimon mathematicalpicturesatadatascienceexhibition |