High-dimensional data analysis with low-dimensional models: principles, computation, and applications
Connecting theory with practice, this systematic and rigorous introduction covers the fundamental principles, algorithms and applications of key mathematical models for high-dimensional data analysis. Comprehensive in its approach, it provides unified coverage of many different low-dimensional models...
Saved in:

Main authors: Wright, John; Ma, Yi 1972-
Format: Book
Language: English
Published: Cambridge : Cambridge University Press, 2022
Edition: First published
Subjects: Dimensionsreduktion (dimension reduction); Datenanalyse (data analysis)
Online access: Table of contents
Summary: Connecting theory with practice, this systematic and rigorous introduction covers the fundamental principles, algorithms and applications of key mathematical models for high-dimensional data analysis. Comprehensive in its approach, it provides unified coverage of many different low-dimensional models and analytical techniques, including sparse and low-rank models, and both convex and non-convex formulations. Readers will learn how to develop efficient and scalable algorithms for solving real-world problems, supported by numerous examples and exercises throughout, and how to use the computational tools learnt in several application contexts. Applications presented include scientific imaging, communication, face recognition, 3D vision, and deep networks for classification. With code available online, this is an ideal textbook for senior and graduate students in computer science, data science, and electrical engineering, as well as for those taking courses on sparsity, low-dimensional structures, and high-dimensional data. Foreword by Emmanuel Candès.
Physical description: xxix, 685 pages, illustrations, diagrams
ISBN: 9781108489737
Internal format

MARC
LEADER 00000nam a2200000 c 4500
001 BV047919575
003 DE-604
005 20230406
007 t
008 220408s2022 a||| |||| 00||| eng d
020 |a 9781108489737 |9 978-1-108-48973-7
024 3 |a 9781108489737
035 |a (ELiSA)ELiSA-9781108489737
035 |a (OCoLC)1314901405
035 |a (DE-599)HBZHT021209227
040 |a DE-604 |b ger |e rda
041 0 |a eng
049 |a DE-29T |a DE-83 |a DE-739 |a DE-1043
084 |a SK 870 |0 (DE-625)143265: |2 rvk
084 |a 68T09 |2 msc
100 1 |a Wright, John |e Verfasser |0 (DE-588)125879327X |4 aut
245 1 0 |a High-dimensional data analysis with low-dimensional models |b principles, computation, and applications |c John Wright ; Columbia University, New York ; Yi Ma ; University of California, Berkeley
250 |a First published
264 1 |a Cambridge |b Cambridge University Press |c 2022
300 |a xxix, 685 Seiten |b Illustrationen, Diagramme
336 |b txt |2 rdacontent
337 |b n |2 rdamedia
338 |b nc |2 rdacarrier
520 3 |a Connecting theory with practice, this systematic and rigorous introduction covers the fundamental principles, algorithms and applications of key mathematical models for high-dimensional data analysis. Comprehensive in its approach, it provides unified coverage of many different low-dimensional models and analytical techniques, including sparse and low-rank models, and both convex and non-convex formulations. Readers will learn how to develop efficient and scalable algorithms for solving real-world problems, supported by numerous examples and exercises throughout, and how to use the computational tools learnt in several application contexts. Applications presented include scientific imaging, communication, face recognition, 3D vision, and deep networks for classification. With code available online, this is an ideal textbook for senior and graduate students in computer science, data science, and electrical engineering, as well as for those taking courses on sparsity, low-dimensional structures, and high-dimensional data. Foreword by Emmanuel Candès.
650 0 7 |a Dimensionsreduktion |0 (DE-588)4224279-4 |2 gnd |9 rswk-swf
650 0 7 |a Datenanalyse |0 (DE-588)4123037-1 |2 gnd |9 rswk-swf
689 0 0 |a Datenanalyse |0 (DE-588)4123037-1 |D s
689 0 1 |a Dimensionsreduktion |0 (DE-588)4224279-4 |D s
689 0 |5 DE-604
700 1 |a Ma, Yi |d 1972- |0 (DE-588)1047588889 |4 aut
856 4 2 |m Digitalisierung UB Passau - ADAM Catalogue Enrichment |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=033301184&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis
999 |a oai:aleph.bib-bvb.de:BVB01-033301184
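The tagged lines above follow the usual MARC display convention: a three-digit field tag, optional single-character indicators, and subfields introduced by delimiters such as |a, |b, |c. Purely as an illustration of that structure, and not part of the catalogue record itself, here is a minimal Python sketch; the helper name parse_marc_line is invented for this example, and control fields such as 008, which use | as a fill character, would need special-casing:

```python
import re

def parse_marc_line(line: str):
    """Split a tagged MARC *display* line into (tag, indicators, subfields).

    Works on the human-readable rendering shown above, not on binary
    ISO 2709 records; subfields are assumed to look like "|a value".
    """
    # Everything before the first "|" is the tag plus optional indicators.
    head, _, rest = line.partition("|")
    parts = head.split()
    tag, indicators = parts[0], parts[1:]
    # Re-attach the first delimiter, then pick out "|<code> <value>" pairs.
    subfields = re.findall(r"\|(\w) ([^|]*)", "|" + rest)
    return tag, indicators, [(code, value.strip()) for code, value in subfields]

# Example: the title statement (field 245) from the record above.
tag, ind, sub = parse_marc_line(
    "245 1 0 |a High-dimensional data analysis with low-dimensional models "
    "|b principles, computation, and applications "
    "|c John Wright ; Columbia University, New York ; Yi Ma ; "
    "University of California, Berkeley"
)
print(tag, ind)        # 245 ['1', '0']
print(dict(sub)["a"])  # High-dimensional data analysis with low-dimensional models
```

For real MARC processing one would use a dedicated library such as pymarc on the underlying record rather than parsing the display text.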
Record in the search index
_version_ | 1804183543705763840 |
adam_text | Contents

Foreword
Preface
Acknowledgements

1 Introduction
1.1 A Universal Task: Pursuit of Low-Dimensional Structure
1.1.1 Identifying Dynamical Systems and Serial Data
1.1.2 Patterns and Orders in a Man-Made World
1.1.3 Efficient Data Acquisition and Processing
1.1.4 Interpretation of Data with Graphical Models
1.2 A Brief History
1.2.1 Neural Science: Sparse Coding
1.2.2 Signal Processing: Sparse Error Correction
1.2.3 Classical Statistics: Sparse Regression Analysis
1.2.4 Data Analysis: Principal Component Analysis
1.3 The Modern Era
1.3.1 From Curses to Blessings of High Dimensionality
1.3.2 Compressive Sensing, Error Correction, and Deep Learning
1.3.3 High-Dimensional Geometry and Nonasymptotic Statistics
1.3.4 Scalable Optimization: Convex and Nonconvex
1.3.5 A Perfect Storm
1.4 Exercises

Part I Principles of Low-Dimensional Models

2 Sparse Signal Models
2.1 Applications of Sparse Signal Modeling
2.1.1 An Example from Medical Imaging
2.1.2 An Example from Image Processing
2.1.3 An Example from Face Recognition
2.2 Recovering a Sparse Solution
2.2.1 Norms on Vector Spaces
2.2.2 The ℓ⁰ Norm
2.2.3 The Sparsest Solution: Minimizing the ℓ⁰ Norm
2.2.4 Computational Complexity of ℓ⁰ Minimization
2.3 Relaxing the Sparse Recovery Problem
2.3.1 Convex Functions
2.3.2 A Convex Surrogate for the ℓ⁰ Norm: the ℓ¹ Norm
2.3.3 A Simple Test of ℓ¹ Minimization
2.3.4 Sparse Error Correction via Logan's Phenomenon
2.4 Summary
2.5 Notes
2.6 Exercises

3 Convex Methods for Sparse Signal Recovery
3.1 Why Does ℓ¹ Minimization Succeed? Geometric Intuitions
3.2 A First Correctness Result for Incoherent Matrices
3.2.1 Coherence of a Matrix
3.2.2 Correctness of ℓ¹ Minimization
3.2.3 Constructing an Incoherent Matrix
3.2.4 Limitations of Incoherence
3.3 Towards Stronger Correctness Results
3.3.1 The Restricted Isometry Property (RIP)
3.3.2 Restricted Strong Convexity Condition
3.3.3 Success of ℓ¹ Minimization under RIP
3.4 Matrices with Restricted Isometry Property
3.4.1 The Johnson-Lindenstrauss Lemma
3.4.2 RIP of Gaussian Random Matrices
3.4.3 RIP of Non-Gaussian Matrices
3.5 Noisy Observations or Approximate Sparsity
3.5.1 Stable Recovery of Sparse Signals
3.5.2 Recovery of Inexact Sparse Signals
3.6 Phase Transitions in Sparse Recovery
3.6.1 Phase Transitions: Main Conclusions
3.6.2 Phase Transitions via Coefficient-Space Geometry
3.6.3 Phase Transitions via Observation-Space Geometry
3.6.4 Phase Transitions in Support Recovery
3.7 Summary
3.8 Notes
3.9 Exercises

4 Convex Methods for Low-Rank Matrix Recovery
4.1 Motivating Examples of Low-Rank Modeling
4.1.1 3D Shape from Photometric Measurements
4.1.2 Recommendation Systems
4.1.3 Euclidean Distance Matrix Embedding
4.1.4 Latent Semantic Analysis
4.2 Representing Low-Rank Matrix via SVD
4.2.1 Singular Vectors via Nonconvex Optimization
4.2.2 Best Low-Rank Matrix Approximation
4.3 Recovering a Low-Rank Matrix
4.3.1 General Rank Minimization Problems
4.3.2 Convex Relaxation of Rank Minimization
4.3.3 Nuclear Norm as a Convex Envelope of Rank
4.3.4 Success of Nuclear Norm under Rank-RIP
4.3.5 Rank-RIP of Random Measurements
4.3.6 Noise, Inexact Low Rank, and Phase Transition
4.4 Low-Rank Matrix Completion
4.4.1 Nuclear Norm Minimization for Matrix Completion
4.4.2 Algorithm via Augmented Lagrange Multiplier
4.4.3 When Does Nuclear Norm Minimization Succeed?
4.4.4 Proving Correctness of Nuclear Norm Minimization
4.4.5 Stable Matrix Completion with Noise
4.5 Summary
4.6 Notes
4.7 Exercises

5 Decomposing Low-Rank and Sparse Matrices
5.1 Robust PCA and Motivating Examples
5.1.1 Problem Formulation
5.1.2 Matrix Rigidity and Planted Clique
5.1.3 Applications of Robust PCA
5.2 Robust PCA via Principal Component Pursuit
5.2.1 Convex Relaxation for Sparse Low-Rank Separation
5.2.2 Solving PCP via Alternating Directions Method
5.2.3 Numerical Simulations and Experiments of PCP
5.3 Identifiability and Exact Recovery
5.3.1 Identifiability Conditions
5.3.2 Correctness of Principal Component Pursuit
5.3.3 Some Extensions to the Main Result
5.4 Stable Principal Component Pursuit with Noise
5.5 Compressive Principal Component Pursuit
5.6 Matrix Completion with Corrupted Entries
5.7 Summary
5.8 Notes
5.9 Exercises

6 Recovering General Low-Dimensional Models
6.1 Concise Signal Models
6.1.1 Atomic Sets and Examples
6.1.2 Atomic Norm Minimization for Structured Signals
6.2 Geometry, Measure Concentration, and Phase Transition
6.2.1 Success Condition as Two Nonintersecting Cones
6.2.2 Intrinsic Volumes and Kinematic Formula
6.2.3 Statistical Dimension and Phase Transition
6.2.4 Statistical Dimension of Descent Cone of the ℓ¹ Norm
6.2.5 Phase Transition in Decomposing Structured Signals
6.3 Limitations of Convex Relaxation
6.3.1 Suboptimality of Convex Relaxation for Multiple Structures
6.3.2 Intractable Convex Relaxation for High-Order Tensors
6.3.3 Lack of Convex Relaxation for Bilinear Problems
6.3.4 Nonlinear Low-Dimensional Structures
6.3.5 Return of Nonconvex Formulation and Optimization
6.4 Notes
6.5 Exercises

7 Nonconvex Methods for Low-Dimensional Models
7.1 Introduction
7.1.1 Nonlinearity, Symmetry, and Nonconvexity
7.1.2 Symmetry and the Global Geometry of Optimization
7.1.3 A Taxonomy of Symmetric Nonconvex Problems
7.2 Nonconvex Problems with Rotational Symmetries
7.2.1 Minimal Example: Phase Retrieval with One Unknown
7.2.2 Generalized Phase Retrieval
7.2.3 Low-Rank Matrix Recovery
7.2.4 Other Nonconvex Problems with Rotational Symmetry
7.3 Nonconvex Problems with Discrete Symmetries
7.3.1 Minimal Example: Dictionary Learning with One-Sparsity
7.3.2 Dictionary Learning
7.3.3 Sparse Blind Deconvolution
7.3.4 Other Nonconvex Problems with Discrete Symmetry
7.4 Notes and Open Problems
7.5 Exercises

Part II Computation for Large-Scale Problems

8 Convex Optimization for Structured Signal Recovery
8.1 Challenges and Opportunities
8.2 Proximal Gradient Methods
8.2.1 Convergence of Gradient Descent
8.2.2 From Gradient to Proximal Gradient
8.2.3 Proximal Gradient for the Lasso and Stable PCP
8.2.4 Convergence of Proximal Gradient
8.3 Accelerated Proximal Gradient Methods
8.3.1 Acceleration via Nesterov's Method
8.3.2 APG for Basis Pursuit Denoising
8.3.3 APG for Stable Principal Component Pursuit
8.3.4 Convergence of APG
8.3.5 Further Developments on Acceleration
8.4 Augmented Lagrange Multipliers
8.4.1 ALM for Basis Pursuit
8.4.2 ALM for Principal Component Pursuit
8.4.3 Convergence of ALM
8.5 Alternating Direction Method of Multipliers
8.5.1 ADMM for Principal Component Pursuit
8.5.2 Monotone Operators
8.5.3 Convergence of ALM and ADMM
8.6 Leveraging Problem Structures for Better Scalability
8.6.1 Frank-Wolfe for Structured Constraint Set
8.6.2 Frank-Wolfe for Stable Matrix Completion
8.6.3 Connection to Greedy Methods for Sparsity
8.6.4 Stochastic Gradient Descent for Finite Sum
8.7 Notes
8.8 Exercises

9 Nonconvex Optimization for High-Dimensional Problems
9.1 Challenges and Opportunities
9.1.1 Finding Critical Points via Gradient Descent
9.1.2 Finding Critical Points via Newton's Method
9.2 Cubic Regularization of Newton's Method
9.2.1 Convergence to Second-Order Stationary Points
9.2.2 More Scalable Solution to the Subproblem
9.3 Gradient and Negative Curvature Descent
9.3.1 Hybrid Gradient and Negative Curvature Descent
9.3.2 Computing Negative Curvature via the Lanczos Method
9.3.3 Overall Complexity in First-Order Oracle
9.4 Negative Curvature and Newton Descent
9.4.1 Curvature Guided Newton Descent
9.4.2 Inexact Negative Curvature and Newton Descent
9.4.3 Overall Complexity in First-Order Oracle
9.5 Gradient Descent with Small Random Noise
9.5.1 Diffusion Process and Laplace's Method
9.5.2 Noisy Gradient with Langevin Monte Carlo
9.5.3 Negative Curvature Descent with Random Noise
9.5.4 Complexity of Perturbed Gradient Descent
9.6 Leveraging Symmetry Structure: Generalized Power Iteration
9.6.1 Power Iteration for Computing Singular Vectors
9.6.2 Complete Dictionary Learning
9.6.3 Optimization over Stiefel Manifolds
9.6.4 Fixed Point of a Contraction Mapping
9.7 Notes
9.8 Exercises

Part III Applications to Real-World Problems

10 Magnetic Resonance Imaging
10.1 Introduction
10.2 Formation of MR Images
10.2.1 Basic Physics
10.2.2 Selective Excitation and Spatial Encoding
10.2.3 Sampling and Reconstruction
10.3 Sparsity and Compressive Sampling of MR Images
10.3.1 Sparsity of MR Images
10.3.2 Compressive Sampling of MR Images
10.4 Algorithms for MR Image Recovery
10.5 Notes
10.6 Exercises

11 Wideband Spectrum Sensing
11.1 Introduction
11.1.1 Wideband Communications
11.1.2 Nyquist Sampling and Beyond
11.2 Wideband Interferer Detection
11.2.1 Conventional Scanning Approaches
11.2.2 Compressive Sensing in the Frequency Domain
11.3 System Implementation and Performance
11.3.1 Quadrature Analog to Information Converter
11.3.2 A Prototype Circuit Implementation
11.3.3 Recent Developments in Hardware Implementation
11.4 Notes

12 Scientific Imaging Problems
12.1 Introduction
12.2 Data Model and Optimization Formulation
12.3 Symmetry in Short-and-Sparse Deconvolution
12.4 Algorithms for Short-and-Sparse Deconvolution
12.4.1 Alternating Descent Method
12.4.2 Additional Heuristics for Highly Coherent Problems
12.4.3 Computational Examples
12.5 Extensions: Multiple Motifs
12.6 Exercises

13 Robust Face Recognition
13.1 Introduction
13.2 Classification Based on Sparse Representation
13.3 Robustness to Occlusion or Corruption
13.4 Dense Error Correction with the Cross and Bouquet
13.5 Notes
13.6 Exercises

14 Robust Photometric Stereo
14.1 Introduction
14.2 Photometric Stereo via Low-Rank Matrix Recovery
14.2.1 Lambertian Surface under Directional Lights
14.2.2 Modeling Shadows and Specularities
14.3 Robust Matrix Completion Algorithm
14.4 Experimental Evaluation
14.4.1 Quantitative Evaluation with Synthetic Images
14.4.2 Qualitative Evaluation with Real Images
14.5 Notes

15 Structured Texture Recovery
15.1 Introduction
15.2 Low-Rank Textures
15.3 Structured Texture Inpainting
15.4 Transform-Invariant Low-Rank Textures
15.4.1 Deformed and Corrupted Low-Rank Textures
15.4.2 The TILT Algorithm
15.5 Applications of TILT
15.5.1 Rectifying Planar Low-Rank Textures
15.5.2 Rectifying Generalized Cylindrical Surfaces
15.5.3 Calibrating Camera Lens Distortion
15.6 Notes

16 Deep Networks for Classification
16.1 Introduction
16.1.1 Deep Learning in a Nutshell
16.1.2 The Practice of Deep Learning
16.1.3 Challenges with Nonlinearity and Discriminativeness
16.2 Desiderata for Learning Discriminative Representation
16.2.1 Measure of Compactness for a Representation
16.2.2 Principle of Maximal Coding Rate Reduction
16.2.3 Properties of the Rate Reduction Function
16.2.4 Experiments on Real Data
16.3 Deep Networks from First Principles
16.3.1 Deep Networks from Optimizing Rate Reduction
16.3.2 Convolutional Networks from Invariant Rate Reduction
16.3.3 Simulations and Experiments
16.4 Guaranteed Manifold Classification by Deep Networks
16.4.1 Minimal Case: Two 1D Submanifolds
16.4.2 Problem Formulation and Analysis
16.4.3 Main Conclusion
16.5 Epilogue: Open Problems and Future Directions
16.6 Exercises

Appendices
Appendix A Facts from Linear Algebra and Matrix Analysis
Appendix B Convex Sets and Functions
Appendix C Optimization Problems and Optimality Conditions
Appendix D Methods for Optimization
Appendix E Facts from High-Dimensional Statistics

References
List of Symbols
Index
|
any_adam_object | 1 |
any_adam_object_boolean | 1 |
author | Wright, John Ma, Yi 1972- |
author_GND | (DE-588)125879327X (DE-588)1047588889 |
author_facet | Wright, John Ma, Yi 1972- |
author_role | aut aut |
author_sort | Wright, John |
author_variant | j w jw y m ym |
building | Verbundindex |
bvnumber | BV047919575 |
classification_rvk | SK 870 |
ctrlnum | (ELiSA)ELiSA-9781108489737 (OCoLC)1314901405 (DE-599)HBZHT021209227 |
discipline | Mathematik |
discipline_str_mv | Mathematik |
edition | First published |
format | Book |
id | DE-604.BV047919575 |
illustrated | Illustrated |
index_date | 2024-07-03T19:33:23Z |
indexdate | 2024-07-10T09:25:07Z |
institution | BVB |
isbn | 9781108489737 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-033301184 |
oclc_num | 1314901405 |
open_access_boolean | |
owner | DE-29T DE-83 DE-739 DE-1043 |
owner_facet | DE-29T DE-83 DE-739 DE-1043 |
physical | xxix, 685 Seiten Illustrationen, Diagramme |
publishDate | 2022 |
publishDateSearch | 2022 |
publishDateSort | 2022 |
publisher | Cambridge University Press |
record_format | marc |
spelling | Wright, John Verfasser (DE-588)125879327X aut High-dimensional data analysis with low-dimensional models principles, computation, and applications John Wright ; Columbia University, New York ; Yi Ma ; University of California, Berkeley First published Cambridge Cambridge University Press 2022 xxix, 685 Seiten Illustrationen, Diagramme txt rdacontent n rdamedia nc rdacarrier Connecting theory with practice, this systematic and rigorous introduction covers the fundamental principles, algorithms and applications of key mathematical models for high-dimensional data analysis. Comprehensive in its approach, it provides unified coverage of many different low-dimensional models and analytical techniques, including sparse and low-rank models, and both convex and non-convex formulations. Readers will learn how to develop efficient and scalable algorithms for solving real-world problems, supported by numerous examples and exercises throughout, and how to use the computational tools learnt in several application contexts. Applications presented include scientific imaging, communication, face recognition, 3D vision, and deep networks for classification. With code available online, this is an ideal textbook for senior and graduate students in computer science, data science, and electrical engineering, as well as for those taking courses on sparsity, low-dimensional structures, and high-dimensional data. Foreword by Emmanuel Candès. Dimensionsreduktion (DE-588)4224279-4 gnd rswk-swf Datenanalyse (DE-588)4123037-1 gnd rswk-swf Datenanalyse (DE-588)4123037-1 s Dimensionsreduktion (DE-588)4224279-4 s DE-604 Ma, Yi 1972- (DE-588)1047588889 aut Digitalisierung UB Passau - ADAM Catalogue Enrichment application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=033301184&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA Inhaltsverzeichnis |
spellingShingle | Wright, John Ma, Yi 1972- High-dimensional data analysis with low-dimensional models principles, computation, and applications Dimensionsreduktion (DE-588)4224279-4 gnd Datenanalyse (DE-588)4123037-1 gnd |
subject_GND | (DE-588)4224279-4 (DE-588)4123037-1 |
title | High-dimensional data analysis with low-dimensional models principles, computation, and applications |
title_auth | High-dimensional data analysis with low-dimensional models principles, computation, and applications |
title_exact_search | High-dimensional data analysis with low-dimensional models principles, computation, and applications |
title_exact_search_txtP | High-dimensional data analysis with low-dimensional models principles, computation, and applications |
title_full | High-dimensional data analysis with low-dimensional models principles, computation, and applications John Wright ; Columbia University, New York ; Yi Ma ; University of California, Berkeley |
title_fullStr | High-dimensional data analysis with low-dimensional models principles, computation, and applications John Wright ; Columbia University, New York ; Yi Ma ; University of California, Berkeley |
title_full_unstemmed | High-dimensional data analysis with low-dimensional models principles, computation, and applications John Wright ; Columbia University, New York ; Yi Ma ; University of California, Berkeley |
title_short | High-dimensional data analysis with low-dimensional models |
title_sort | high dimensional data analysis with low dimensional models principles computation and applications |
title_sub | principles, computation, and applications |
topic | Dimensionsreduktion (DE-588)4224279-4 gnd Datenanalyse (DE-588)4123037-1 gnd |
topic_facet | Dimensionsreduktion Datenanalyse |
url | http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=033301184&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |
work_keys_str_mv | AT wrightjohn highdimensionaldataanalysiswithlowdimensionalmodelsprinciplescomputationandapplications AT mayi highdimensionaldataanalysiswithlowdimensionalmodelsprinciplescomputationandapplications |