Pattern recognition and machine learning
Saved in:
Main Author: | Bishop, Christopher M. 1959- |
---|---|
Format: | Book |
Language: | English |
Published: | New York [u.a.]: Springer 2013 |
Edition: | 11. (corr. printing) |
Series: | Information science and statistics |
Subjects: | Maschinelles Lernen; Mustererkennung |
Online Access: | Table of contents |
Description: | XX, 738 S. Ill., graph. Darst. |
ISBN: | 0387310738, 9780387310732 |
Internal format
MARC
LEADER | 00000nam a2200000 c 4500 | ||
---|---|---|---|
001 | BV041564301 | ||
003 | DE-604 | ||
005 | 00000000000000.0 | ||
007 | t | ||
008 | 140115s2013 ad|| |||| 00||| eng d | ||
020 | |a 0387310738 |9 0-387-31073-8 | ||
020 | |a 9780387310732 |9 978-0-387-31073-2 | ||
024 | 3 | |a 9780387310732 | |
035 | |a (OCoLC)869873325 | ||
035 | |a (DE-599)BVBBV041564301 | ||
040 | |a DE-604 |b ger | ||
041 | 0 | |a eng | |
049 | |a DE-29T |a DE-1051 |a DE-703 |a DE-1047 |a DE-739 |a DE-824 |a DE-188 |a DE-19 |a DE-898 |a DE-20 | ||
084 | |a ST 300 |0 (DE-625)143650: |2 rvk | ||
084 | |a ST 330 |0 (DE-625)143663: |2 rvk | ||
084 | |a DAT 770f |2 stub | ||
100 | 1 | |a Bishop, Christopher M. |d 1959- |e Verfasser |0 (DE-588)120454165 |4 aut | |
245 | 1 | 0 | |a Pattern recognition and machine learning |c Christopher M. Bishop |
250 | |a 11. (corr. printing) | ||
264 | 1 | |a New York [u.a.] |b Springer |c 2013 | |
300 | |a XX, 738 S. |b Ill., graph. Darst. | ||
336 | |b txt |2 rdacontent | ||
337 | |b n |2 rdamedia | ||
338 | |b nc |2 rdacarrier | ||
490 | 0 | |a Information science and statistics | |
650 | 0 | 7 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Mustererkennung |0 (DE-588)4040936-3 |2 gnd |9 rswk-swf |
689 | 0 | 0 | |a Mustererkennung |0 (DE-588)4040936-3 |D s |
689 | 0 | 1 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |D s |
689 | 0 | |8 1\p |5 DE-604 | |
689 | 1 | 0 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |D s |
689 | 1 | |5 DE-604 | |
856 | 4 | 2 | |m Digitalisierung UB Passau - ADAM Catalogue Enrichment |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=027009840&sequence=000002&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis |
999 | |a oai:aleph.bib-bvb.de:BVB01-027009840 | ||
883 | 1 | |8 1\p |a cgwrk |d 20201028 |q DE-101 |u https://d-nb.info/provenance/plan#cgwrk |
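The block above is the record in MARC 21 internal format; the same data appears again as MARCXML in the fullrecord index field further below. As a minimal sketch of how such a record could be read programmatically, assuming the MARCXML export has been saved locally under the hypothetical filename record.xml, and using only the Python standard library:

```python
# Minimal sketch: parse the MARCXML export of this record (assumed to be saved
# as "record.xml" -- a hypothetical local filename) and pull out a few fields.
import xml.etree.ElementTree as ET

# Namespace declared in the fullrecord field of this record.
NS = {"marc": "http://www.loc.gov/MARC21/slim"}

def subfields(record, tag, code):
    """Return all subfield values for a given datafield tag and subfield code."""
    return [
        sf.text
        for df in record.findall(f"marc:datafield[@tag='{tag}']", NS)
        for sf in df.findall(f"marc:subfield[@code='{code}']", NS)
        if sf.text
    ]

tree = ET.parse("record.xml")                 # hypothetical path to the saved MARCXML
record = tree.find(".//marc:record", NS)

title = subfields(record, "245", "a")[0]      # "Pattern recognition and machine learning"
author = subfields(record, "100", "a")[0]     # "Bishop, Christopher M."
isbns = subfields(record, "020", "a")         # ["0387310738", "9780387310732"]
toc_url = subfields(record, "856", "u")[0]    # link to the digitized table of contents

print(title, "/", author)
print("ISBNs:", ", ".join(isbns))
print("TOC:", toc_url)
```

A dedicated MARC library such as pymarc would normally handle this more robustly; the standard-library version is shown only to keep the sketch self-contained.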
Record in the search index
_version_ | 1804151746239397888 |
---|---|
adam_text | Contents

Preface vii
Mathematical notation xi
Contents xiii

1 Introduction 1
1.1 Example: Polynomial Curve Fitting 4
1.2 Probability Theory 12
1.2.1 Probability densities 17
1.2.2 Expectations and covariances 19
1.2.3 Bayesian probabilities 21
1.2.4 The Gaussian distribution 24
1.2.5 Curve fitting re-visited 28
1.2.6 Bayesian curve fitting 30
1.3 Model Selection 32
1.4 The Curse of Dimensionality 33
1.5 Decision Theory 38
1.5.1 Minimizing the misclassification rate 39
1.5.2 Minimizing the expected loss 41
1.5.3 The reject option 42
1.5.4 Inference and decision 42
1.5.5 Loss functions for regression 46
1.6 Information Theory 48
1.6.1 Relative entropy and mutual information 55
Exercises 58

2 Probability Distributions 67
2.1 Binary Variables 68
2.1.1 The beta distribution 71
2.2 Multinomial Variables 74
2.2.1 The Dirichlet distribution 76
2.3 The Gaussian Distribution 78
2.3.1 Conditional Gaussian distributions 85
2.3.2 Marginal Gaussian distributions 88
2.3.3 Bayes' theorem for Gaussian variables 90
2.3.4 Maximum likelihood for the Gaussian 93
2.3.5 Sequential estimation 94
2.3.6 Bayesian inference for the Gaussian 97
2.3.7 Student's t-distribution 102
2.3.8 Periodic variables 105
2.3.9 Mixtures of Gaussians 110
2.4 The Exponential Family 113
2.4.1 Maximum likelihood and sufficient statistics 116
2.4.2 Conjugate priors 117
2.4.3 Noninformative priors 117
2.5 Nonparametric Methods 120
2.5.1 Kernel density estimators 122
2.5.2 Nearest-neighbour methods 124
Exercises 127

3 Linear Models for Regression 137
3.1 Linear Basis Function Models 138
3.1.1 Maximum likelihood and least squares 140
3.1.2 Geometry of least squares 143
3.1.3 Sequential learning 143
3.1.4 Regularized least squares 144
3.1.5 Multiple outputs 146
3.2 The Bias-Variance Decomposition 147
3.3 Bayesian Linear Regression 152
3.3.1 Parameter distribution 152
3.3.2 Predictive distribution 156
3.3.3 Equivalent kernel 159
3.4 Bayesian Model Comparison 161
3.5 The Evidence Approximation 165
3.5.1 Evaluation of the evidence function 166
3.5.2 Maximizing the evidence function 168
3.5.3 Effective number of parameters 170
3.6 Limitations of Fixed Basis Functions 172
Exercises 173

4 Linear Models for Classification 179
4.1 Discriminant Functions 181
4.1.1 Two classes 181
4.1.2 Multiple classes 182
4.1.3 Least squares for classification 184
4.1.4 Fisher's linear discriminant 186
4.1.5 Relation to least squares 189
4.1.6 Fisher's discriminant for multiple classes 191
4.1.7 The perceptron algorithm 192
4.2 Probabilistic Generative Models 196
4.2.1 Continuous inputs 198
4.2.2 Maximum likelihood solution 200
4.2.3 Discrete features 202
4.2.4 Exponential family 202
4.3 Probabilistic Discriminative Models 203
4.3.1 Fixed basis functions 204
4.3.2 Logistic regression 205
4.3.3 Iterative reweighted least squares 207
4.3.4 Multiclass logistic regression 209
4.3.5 Probit regression 210
4.3.6 Canonical link functions 212
4.4 The Laplace Approximation 213
4.4.1 Model comparison and BIC 216
4.5 Bayesian Logistic Regression 217
4.5.1 Laplace approximation 217
4.5.2 Predictive distribution 218
Exercises 220

5 Neural Networks 225
5.1 Feed-forward Network Functions 227
5.1.1 Weight-space symmetries 231
5.2 Network Training 232
5.2.1 Parameter optimization 236
5.2.2 Local quadratic approximation 237
5.2.3 Use of gradient information 239
5.2.4 Gradient descent optimization 240
5.3 Error Backpropagation 241
5.3.1 Evaluation of error-function derivatives 242
5.3.2 A simple example 245
5.3.3 Efficiency of backpropagation 246
5.3.4 The Jacobian matrix 247
5.4 The Hessian Matrix 249
5.4.1 Diagonal approximation 250
5.4.2 Outer product approximation 251
5.4.3 Inverse Hessian 252
5.4.4 Finite differences 252
5.4.5 Exact evaluation of the Hessian 253
5.4.6 Fast multiplication by the Hessian 254
5.5 Regularization in Neural Networks 256
5.5.1 Consistent Gaussian priors 257
5.5.2 Early stopping 259
5.5.3 Invariances 261
5.5.4 Tangent propagation 263
5.5.5 Training with transformed data 265
5.5.6 Convolutional networks 267
5.5.7 Soft weight sharing 269
5.6 Mixture Density Networks 272
5.7 Bayesian Neural Networks 277
5.7.1 Posterior parameter distribution 278
5.7.2 Hyperparameter optimization 280
5.7.3 Bayesian neural networks for classification 281
Exercises 284

6 Kernel Methods 291
6.1 Dual Representations 293
6.2 Constructing Kernels 294
6.3 Radial Basis Function Networks 299
6.3.1 Nadaraya-Watson model 301
6.4 Gaussian Processes 303
6.4.1 Linear regression revisited 304
6.4.2 Gaussian processes for regression 306
6.4.3 Learning the hyperparameters 311
6.4.4 Automatic relevance determination 312
6.4.5 Gaussian processes for classification 313
6.4.6 Laplace approximation 315
6.4.7 Connection to neural networks 319
Exercises 320

7 Sparse Kernel Machines 325
7.1 Maximum Margin Classifiers 326
7.1.1 Overlapping class distributions 331
7.1.2 Relation to logistic regression 336
7.1.3 Multiclass SVMs 338
7.1.4 SVMs for regression 339
7.1.5 Computational learning theory 344
7.2 Relevance Vector Machines 345
7.2.1 RVM for regression 345
7.2.2 Analysis of sparsity 349
7.2.3 RVM for classification 353
Exercises 357

8 Graphical Models 359
8.1 Bayesian Networks 360
8.1.1 Example: Polynomial regression 362
8.1.2 Generative models 365
8.1.3 Discrete variables 366
8.1.4 Linear-Gaussian models 370
8.2 Conditional Independence 372
8.2.1 Three example graphs 373
8.2.2 D-separation 378
8.3 Markov Random Fields 383
8.3.1 Conditional independence properties 383
8.3.2 Factorization properties 384
8.3.3 Illustration: Image de-noising 387
8.3.4 Relation to directed graphs 390
8.4 Inference in Graphical Models 393
8.4.1 Inference on a chain 394
8.4.2 Trees 398
8.4.3 Factor graphs 399
8.4.4 The sum-product algorithm 402
8.4.5 The max-sum algorithm 411
8.4.6 Exact inference in general graphs 416
8.4.7 Loopy belief propagation 417
8.4.8 Learning the graph structure 418
Exercises 418

9 Mixture Models and EM 423
9.1 K-means Clustering 424
9.1.1 Image segmentation and compression 428
9.2 Mixtures of Gaussians 430
9.2.1 Maximum likelihood 432
9.2.2 EM for Gaussian mixtures 435
9.3 An Alternative View of EM 439
9.3.1 Gaussian mixtures revisited 441
9.3.2 Relation to K-means 443
9.3.3 Mixtures of Bernoulli distributions 444
9.3.4 EM for Bayesian linear regression 448
9.4 The EM Algorithm in General 450
Exercises 455

10 Approximate Inference 461
10.1 Variational Inference 462
10.1.1 Factorized distributions 464
10.1.2 Properties of factorized approximations 466
10.1.3 Example: The univariate Gaussian 470
10.1.4 Model comparison 473
10.2 Illustration: Variational Mixture of Gaussians 474
10.2.1 Variational distribution 475
10.2.2 Variational lower bound 481
10.2.3 Predictive density 482
10.2.4 Determining the number of components 483
10.2.5 Induced factorizations 485
10.3 Variational Linear Regression 486
10.3.1 Variational distribution 486
10.3.2 Predictive distribution 488
10.3.3 Lower bound 489
10.4 Exponential Family Distributions 490
10.4.1 Variational message passing 491
10.5 Local Variational Methods 493
10.6 Variational Logistic Regression 498
10.6.1 Variational posterior distribution 498
10.6.2 Optimizing the variational parameters 500
10.6.3 Inference of hyperparameters 502
10.7 Expectation Propagation 505
10.7.1 Example: The clutter problem 511
10.7.2 Expectation propagation on graphs 513
Exercises 517

11 Sampling Methods 523
11.1 Basic Sampling Algorithms 526
11.1.1 Standard distributions 526
11.1.2 Rejection sampling 528
11.1.3 Adaptive rejection sampling 530
11.1.4 Importance sampling 532
11.1.5 Sampling-importance-resampling 534
11.1.6 Sampling and the EM algorithm 536
11.2 Markov Chain Monte Carlo 537
11.2.1 Markov chains 539
11.2.2 The Metropolis-Hastings algorithm 541
11.3 Gibbs Sampling 542
11.4 Slice Sampling 546
11.5 The Hybrid Monte Carlo Algorithm 548
11.5.1 Dynamical systems 548
11.5.2 Hybrid Monte Carlo 552
11.6 Estimating the Partition Function 554
Exercises 556

12 Continuous Latent Variables 559
12.1 Principal Component Analysis 561
12.1.1 Maximum variance formulation 561
12.1.2 Minimum-error formulation 563
12.1.3 Applications of PCA 565
12.1.4 PCA for high-dimensional data 569
12.2 Probabilistic PCA 570
12.2.1 Maximum likelihood PCA 574
12.2.2 EM algorithm for PCA 577
12.2.3 Bayesian PCA 580
12.2.4 Factor analysis 583
12.3 Kernel PCA 586
12.4 Nonlinear Latent Variable Models 591
12.4.1 Independent component analysis 591
12.4.2 Autoassociative neural networks 592
12.4.3 Modelling nonlinear manifolds 595
Exercises 599

13 Sequential Data 605
13.1 Markov Models 607
13.2 Hidden Markov Models 610
13.2.1 Maximum likelihood for the HMM 615
13.2.2 The forward-backward algorithm 618
13.2.3 The sum-product algorithm for the HMM 625
13.2.4 Scaling factors 627
13.2.5 The Viterbi algorithm 629
13.2.6 Extensions of the hidden Markov model 631
13.3 Linear Dynamical Systems 635
13.3.1 Inference in LDS 638
13.3.2 Learning in LDS 642
13.3.3 Extensions of LDS 644
13.3.4 Particle filters 645
Exercises 646

14 Combining Models 653
14.1 Bayesian Model Averaging 654
14.2 Committees 655
14.3 Boosting 657
14.3.1 Minimizing exponential error 659
14.3.2 Error functions for boosting 661
14.4 Tree-based Models 663
14.5 Conditional Mixture Models 666
14.5.1 Mixtures of linear regression models 667
14.5.2 Mixtures of logistic models 670
14.5.3 Mixtures of experts 672
Exercises 674

Appendix A Data Sets 677
Appendix B Probability Distributions 685
Appendix C Properties of Matrices 695
Appendix D Calculus of Variations 703
Appendix E Lagrange Multipliers 707
References 711
Index 729
|
any_adam_object | 1 |
author | Bishop, Christopher M. 1959- |
author_GND | (DE-588)120454165 |
author_facet | Bishop, Christopher M. 1959- |
author_role | aut |
author_sort | Bishop, Christopher M. 1959- |
author_variant | c m b cm cmb |
building | Verbundindex |
bvnumber | BV041564301 |
classification_rvk | ST 300 ST 330 |
classification_tum | DAT 770f |
ctrlnum | (OCoLC)869873325 (DE-599)BVBBV041564301 |
discipline | Informatik |
edition | 11. (corr. printing) |
format | Book |
fullrecord | <?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01834nam a2200445 c 4500</leader><controlfield tag="001">BV041564301</controlfield><controlfield tag="003">DE-604</controlfield><controlfield tag="005">00000000000000.0</controlfield><controlfield tag="007">t</controlfield><controlfield tag="008">140115s2013 ad|| |||| 00||| eng d</controlfield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">0387310738</subfield><subfield code="9">0-387-31073-8</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">9780387310732</subfield><subfield code="9">978-0-387-31073-2</subfield></datafield><datafield tag="024" ind1="3" ind2=" "><subfield code="a">9780387310732</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(OCoLC)869873325</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)BVBBV041564301</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-604</subfield><subfield code="b">ger</subfield></datafield><datafield tag="041" ind1="0" ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="049" ind1=" " ind2=" "><subfield code="a">DE-29T</subfield><subfield code="a">DE-1051</subfield><subfield code="a">DE-703</subfield><subfield code="a">DE-1047</subfield><subfield code="a">DE-739</subfield><subfield code="a">DE-824</subfield><subfield code="a">DE-188</subfield><subfield code="a">DE-19</subfield><subfield code="a">DE-898</subfield><subfield code="a">DE-20</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">ST 300</subfield><subfield code="0">(DE-625)143650:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">ST 330</subfield><subfield code="0">(DE-625)143663:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">DAT 770f</subfield><subfield code="2">stub</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Bishop, Christopher M.</subfield><subfield code="d">1959-</subfield><subfield code="e">Verfasser</subfield><subfield code="0">(DE-588)120454165</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Pattern recognition and machine learning</subfield><subfield code="c">Christopher M. Bishop</subfield></datafield><datafield tag="250" ind1=" " ind2=" "><subfield code="a">11. (corr. printing)</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">New York [u.a.]</subfield><subfield code="b">Springer</subfield><subfield code="c">2013</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">XX, 738 S.</subfield><subfield code="b">Ill., graph. 
Darst.</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="490" ind1="0" ind2=" "><subfield code="a">Information science and statistics</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Maschinelles Lernen</subfield><subfield code="0">(DE-588)4193754-5</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Mustererkennung</subfield><subfield code="0">(DE-588)4040936-3</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="689" ind1="0" ind2="0"><subfield code="a">Mustererkennung</subfield><subfield code="0">(DE-588)4040936-3</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="1"><subfield code="a">Maschinelles Lernen</subfield><subfield code="0">(DE-588)4193754-5</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2=" "><subfield code="8">1\p</subfield><subfield code="5">DE-604</subfield></datafield><datafield tag="689" ind1="1" ind2="0"><subfield code="a">Maschinelles Lernen</subfield><subfield code="0">(DE-588)4193754-5</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="1" ind2=" "><subfield code="5">DE-604</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="m">Digitalisierung UB Passau - ADAM Catalogue Enrichment</subfield><subfield code="q">application/pdf</subfield><subfield code="u">http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=027009840&sequence=000002&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA</subfield><subfield code="3">Inhaltsverzeichnis</subfield></datafield><datafield tag="999" ind1=" " ind2=" "><subfield code="a">oai:aleph.bib-bvb.de:BVB01-027009840</subfield></datafield><datafield tag="883" ind1="1" ind2=" "><subfield code="8">1\p</subfield><subfield code="a">cgwrk</subfield><subfield code="d">20201028</subfield><subfield code="q">DE-101</subfield><subfield code="u">https://d-nb.info/provenance/plan#cgwrk</subfield></datafield></record></collection> |
id | DE-604.BV041564301 |
illustrated | Illustrated |
indexdate | 2024-07-10T00:59:43Z |
institution | BVB |
isbn | 0387310738 9780387310732 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-027009840 |
oclc_num | 869873325 |
open_access_boolean | |
owner | DE-29T DE-1051 DE-703 DE-1047 DE-739 DE-824 DE-188 DE-19 DE-BY-UBM DE-898 DE-BY-UBR DE-20 |
owner_facet | DE-29T DE-1051 DE-703 DE-1047 DE-739 DE-824 DE-188 DE-19 DE-BY-UBM DE-898 DE-BY-UBR DE-20 |
physical | XX, 738 S. Ill., graph. Darst. |
publishDate | 2013 |
publishDateSearch | 2013 |
publishDateSort | 2013 |
publisher | Springer |
record_format | marc |
series2 | Information science and statistics |
spelling | Bishop, Christopher M. 1959- Verfasser (DE-588)120454165 aut Pattern recognition and machine learning Christopher M. Bishop 11. (corr. printing) New York [u.a.] Springer 2013 XX, 738 S. Ill., graph. Darst. txt rdacontent n rdamedia nc rdacarrier Information science and statistics Maschinelles Lernen (DE-588)4193754-5 gnd rswk-swf Mustererkennung (DE-588)4040936-3 gnd rswk-swf Mustererkennung (DE-588)4040936-3 s Maschinelles Lernen (DE-588)4193754-5 s 1\p DE-604 DE-604 Digitalisierung UB Passau - ADAM Catalogue Enrichment application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=027009840&sequence=000002&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA Inhaltsverzeichnis 1\p cgwrk 20201028 DE-101 https://d-nb.info/provenance/plan#cgwrk |
spellingShingle | Bishop, Christopher M. 1959- Pattern recognition and machine learning Maschinelles Lernen (DE-588)4193754-5 gnd Mustererkennung (DE-588)4040936-3 gnd |
subject_GND | (DE-588)4193754-5 (DE-588)4040936-3 |
title | Pattern recognition and machine learning |
title_auth | Pattern recognition and machine learning |
title_exact_search | Pattern recognition and machine learning |
title_full | Pattern recognition and machine learning Christopher M. Bishop |
title_fullStr | Pattern recognition and machine learning Christopher M. Bishop |
title_full_unstemmed | Pattern recognition and machine learning Christopher M. Bishop |
title_short | Pattern recognition and machine learning |
title_sort | pattern recognition and machine learning |
topic | Maschinelles Lernen (DE-588)4193754-5 gnd Mustererkennung (DE-588)4040936-3 gnd |
topic_facet | Maschinelles Lernen Mustererkennung |
url | http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=027009840&sequence=000002&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |
work_keys_str_mv | AT bishopchristopherm patternrecognitionandmachinelearning |
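The url field above repeats the 856 link to the digitized table of contents, which is delivered as application/pdf according to the 856 $q subfield. A minimal sketch of fetching that PDF with the Python standard library; the output filename prml_toc.pdf is a hypothetical choice, while the URL is copied verbatim from the record:

```python
# Minimal sketch: download the scanned table of contents linked in the 856/url
# field of this record. "prml_toc.pdf" is a hypothetical output filename.
import urllib.request

TOC_URL = (
    "http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01"
    "&local_base=BVB01&doc_number=027009840&sequence=000002"
    "&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA"
)

urllib.request.urlretrieve(TOC_URL, "prml_toc.pdf")  # served as application/pdf per the record
print("saved table of contents to prml_toc.pdf")
```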