Kernel methods and machine learning
Saved in:
Main author: | Kung, S. Y. 1950- |
---|---|
Format: | Book |
Language: | English |
Published: | Cambridge : Cambridge Univ. Press, 2014 |
Edition: | 1. publ. |
Subjects: | Maschinelles Lernen ; Kernel (Informatik) |
Online access: | Table of contents |
Description: | Bibliography: p. [561]-577 |
Description: | XXIV, 591 p. : graphs |
ISBN: | 9781107024960 |
Internal format
MARC
LEADER | 00000nam a2200000 c 4500 | ||
---|---|---|---|
001 | BV041837185 | ||
003 | DE-604 | ||
005 | 20140617 | ||
007 | t | ||
008 | 140509s2014 d||| |||| 00||| eng d | ||
020 | |a 9781107024960 |c hbk |9 978-1-107-02496-0 | ||
035 | |a (OCoLC)881777272 | ||
035 | |a (DE-599)OBVAC11444061 | ||
040 | |a DE-604 |b ger |e rakwb | ||
041 | 0 | |a eng | |
049 | |a DE-473 |a DE-29T | ||
084 | |a ST 302 |0 (DE-625)143652: |2 rvk | ||
100 | 1 | |a Kung, S. Y. |d 1950- |e Verfasser |0 (DE-588)172201314 |4 aut | |
245 | 1 | 0 | |a Kernel methods and machine learning |c S. Y. Kung |
250 | |a 1. publ. | ||
264 | 1 | |a Cambridge |b Cambridge Univ. Press |c 2014 | |
300 | |a XXIV, 591 S. |b graph. Darst. | ||
336 | |b txt |2 rdacontent | ||
337 | |b n |2 rdamedia | ||
338 | |b nc |2 rdacarrier | ||
500 | |a Literaturverz. S. [561] - 577 | ||
650 | 0 | 7 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Kernel |g Informatik |0 (DE-588)4338679-9 |2 gnd |9 rswk-swf |
689 | 0 | 0 | |a Kernel |g Informatik |0 (DE-588)4338679-9 |D s |
689 | 0 | 1 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |D s |
689 | 0 | |5 DE-604 | |
856 | 4 | 2 | |m Digitalisierung UB Bamberg - ADAM Catalogue Enrichment |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=027281924&sequence=000002&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis |
999 | |a oai:aleph.bib-bvb.de:BVB01-027281924 |
Record in the search index
_version_ | 1804152176908435456 |
---|---|
adam_text | Contents

Preface

Part I Machine learning and kernel vector spaces

1 Fundamentals of kernel-based machine learning
1.1 Introduction
1.2 Feature representation and dimension reduction
1.2.1 Feature representation in vector space
1.2.2 Conventional similarity metric: Euclidean inner product
1.2.3 Feature dimension reduction
1.3 The learning subspace property (LSP) and kernelization of learning models
1.3.1 The LSP
1.3.2 Kernelization of the optimization formulation for learning models
1.3.3 The LSP is necessary and sufficient for kernelization
1.4 Unsupervised learning for cluster discovery
1.4.1 Characterization of similarity metrics
1.4.2 The LSP and kernelization of K-means learning models
1.4.3 The LSP and kernelization of ℓ2 elastic nets
1.5 Supervised learning for linear classifiers
1.5.1 Learning and prediction phases
1.5.2 Learning models and linear system of equations
1.5.3 Kernelized learning models for under-determined systems
1.5.4 The vital role of the ℓ2-norm for the LSP
1.5.5 The LSP condition of one-class SVM for outlier detection
1.6 Generalized inner products and kernel functions
1.6.1 Mahalanobis inner products
1.6.2 Nonlinear inner product: Mercer kernel functions
1.6.3 Effective implementation of kernel methods
1.7 Performance metrics
1.7.1 Accuracy and error rate
1.7.2 Sensitivity, specificity, and precision
1.7.3 The receiver operating characteristic (ROC)
1.8 Highlights of chapters
1.9 Problems

2 Kernel-induced vector spaces
2.1 Introduction
2.2 Mercer kernels and kernel-induced similarity metrics
2.2.1 Distance axioms in metric space
2.2.2 Mercer kernels
2.2.3 Construction of Mercer kernels
2.2.4 Shift-invariant kernel functions
2.3 Training-data-independent intrinsic feature vectors
2.3.1 Intrinsic spaces associated with kernel functions
2.3.2 Intrinsic-space-based learning models
2.4 Training-data-dependent empirical feature vectors
2.4.1 The LSP: from intrinsic space to empirical space
2.4.2 Kernelized learning models
2.4.3 Implementation cost comparison of two spaces
2.5 The kernel trick for nonvectorial data analysis
2.5.1 Nonvectorial data analysis
2.5.2 The Mercer condition and kernel tricks
2.6 Summary
2.7 Problems

Part II Dimension-reduction: PCA/KPCA and feature selection

3 PCA and kernel PCA
3.1 Introduction
3.2 Why dimension reduction?
3.3 Subspace projection and PCA
3.3.1 Optimality criteria for subspace projection
3.3.2 PCA via spectral decomposition of the covariance matrix
3.3.3 The optimal PCA solution: the mean-square-error criterion
3.3.4 The optimal PCA solution: the maximum-entropy criterion
3.4 Numerical methods for computation of PCA
3.4.1 Singular value decomposition of the data matrix
3.4.2 Spectral decomposition of the scatter matrix
3.4.3 Spectral decomposition of the kernel matrix
3.4.4 Application studies of the subspace projection approach
3.5 Kernel principal component analysis (KPCA)
3.5.1 The intrinsic-space approach to KPCA
3.5.2 The kernelization of KPCA learning models
3.5.3 PCA versus KPCA
3.5.4 Center-adjusted versus unadjusted KPCAs
3.5.5 Spectral vector space
3.6 Summary
3.7 Problems

4 Feature selection
4.1 Introduction
4.2 The filtering approach to feature selection
4.2.1 Supervised filtering methods
4.2.2 Feature-weighted linear classifiers
4.2.3 Unsupervised filtering methods
4.2.4 Consecutive search methods
4.3 The wrapper approach to feature selection
4.3.1 Supervised wrapper methods
4.3.2 Unsupervised wrapper methods
4.3.3 The least absolute shrinkage and selection operator
4.4 Application studies of the feature selection approach
4.5 Summary
4.6 Problems

Part III Unsupervised learning models for cluster analysis

5 Unsupervised learning for cluster discovery
5.1 Introduction
5.2 The similarity metric and clustering strategy
5.3 K-means clustering models
5.3.1 K-means clustering criterion
5.3.2 The K-means algorithm
5.3.3 Monotonic convergence of K-means
5.3.4 The local optimum problem of K-means
5.3.5 The evaluation criterion for multiple trials of K-means
5.3.6 The optimal number of clusters
5.3.7 Application examples
5.4 Expectation-maximization (EM) learning models
5.4.1 EM clustering criterion
5.4.2 The iterative EM algorithm for basic GMM
5.4.3 Convergence of the EM algorithm with fixed σ
5.4.4 Annealing EM (AEM)
5.5 Self-organizing-map (SOM) learning models
5.5.1 Input and output spaces in the SOM
5.5.2 The SOM learning algorithm
5.5.3 The evaluation criterion for multiple-trial SOM
5.5.4 Applications of SOM learning models
5.6 Bi-clustering data analysis
5.6.1 Coherence models for bi-clustering
5.6.2 Applications of bi-clustering methods
5.7 Summary
5.8 Problems

6 Kernel methods for cluster analysis
6.1 Introduction
6.2 Kernel-based K-means learning models
6.2.1 Kernel K-means in intrinsic space
6.2.2 The K-means clustering criterion in terms of kernel matrix
6.3 Kernel K-means for nonvectorial data analysis
6.3.1 The similarity matrix for nonvectorial training datasets
6.3.2 Clustering criteria for network segmentation
6.3.3 The Mercer condition and convergence of kernel K-means
6.4 K-means learning models in kernel-induced spectral space
6.4.1 Discrepancy on optimal solution due to spectral truncation
6.4.2 Computational complexities
6.5 Kernelized K-means learning models
6.5.1 Solution invariance of spectral-shift on the kernel matrix
6.5.2 Kernelized K-means algorithms
6.5.3 A recursive algorithm modified to exploit sparsity
6.6 Kernel-induced SOM learning models
6.6.1 SOM learning models in intrinsic or spectral space
6.6.2 Kernelized SOM learning models
6.7 Neighbor-joining hierarchical cluster analysis
6.7.1 Divisive and agglomerative approaches
6.7.2 An NJ method that is based on centroid update
6.7.3 Kernelized hierarchical clustering algorithm
6.7.4 Case studies: hierarchical clustering of microarray data
6.8 Summary
6.9 Problems

Part IV Kernel ridge regressors and variants

7 Kernel-based regression and regularization analysis
7.1 Introduction
7.2 Linear least-squares-error analysis
7.2.1 Linear-least-MSE and least-squares-error (LSE) regressors
7.2.2 Ridge regression analysis
7.3 Kernel-based regression analysis
7.3.1 LSE regression analysis: intrinsic space
7.3.2 Kernel ridge regression analysis: intrinsic space
7.3.3 The learning subspace property (LSP): from intrinsic to empirical space
7.3.4 KRR learning models: empirical space
7.3.5 Comparison of KRRs in intrinsic and empirical spaces
7.4 Radial basis function (RBF) networks for regression analysis
7.4.1 RBF approximation networks
7.4.2 The Nadaraya-Watson regression estimator (NWRE)
7.4.3 Back-propagation neural networks
7.5 Multi-kernel regression analysis
7.6 Summary
7.7 Problems

8 Linear regression and discriminant analysis for supervised classification
8.1 Introduction
8.2 Characterization of supervised learning models
8.2.1 Binary and multiple classification
8.2.2 Learning, evaluation, and prediction phases
8.2.3 Off-line and inductive learning models
8.2.4 Linear and nonlinear learning models
8.2.5 Basic supervised learning strategies
8.3 Supervised learning models: over-determined formulation
8.3.1 Direct derivation of LSE solution
8.3.2 Fisher's discriminant analysis (FDA)
8.4 Supervised learning models: under-determined formulation
8.5 A regularization method for robust classification
8.5.1 The ridge regression approach to linear classification
8.5.2 Perturbational discriminant analysis (PDA): an extension of FDA
8.5.3 Equivalence between RR and PDA
8.5.4 Regularization effects of the ridge parameter ρ
8.6 Kernelized learning models in empirical space: linear kernels
8.6.1 Kernelized learning models for under-determined systems
8.6.2 Kernelized formulation of KRR in empirical space
8.6.3 Comparison of formulations in original versus empirical spaces
8.7 Summary
8.8 Problems

9 Kernel ridge regression for supervised classification
9.1 Introduction
9.2 Kernel-based discriminant analysis (KDA)
9.3 Kernel ridge regression (KRR) for supervised classification
9.3.1 KRR and LS-SVM models: the intrinsic-space approach
9.3.2 Kernelized learning models: the empirical-space approach
9.3.3 A proof of equivalence of two formulations
9.3.4 Complexities of intrinsic and empirical models
9.4 Perturbational discriminant analysis (PDA)
9.5 Robustness and the regression ratio in spectral space
9.5.1 The decision vector of KDA in spectral space
9.5.2 Resilience of the decision components of KDA classifiers
9.5.3 Component magnitude and component resilience
9.5.4 Regression ratio: KDA versus KRR
9.6 Application studies: KDA versus KRR
9.6.1 Experiments on UCI data
9.6.2 Experiments on microarray cancer diagnosis
9.6.3 Experiments on subcellular localization
9.7 Trimming detrimental (anti-support) vectors in KRR learning models
9.7.1 A pruned-KRR learning model: pruned PDA (PPDA)
9.7.2 Case study: ECG arrhythmia detection
9.8 Multi-class and multi-label supervised classification
9.8.1 Multi-class supervised classification
9.8.2 Multi-label classification
9.9 Supervised subspace projection methods
9.9.1 Successively optimized discriminant analysis (SODA)
9.9.2 Trace-norm optimization for subspace projection
9.9.3 Discriminant component analysis (DCA)
9.9.4 Comparisons between PCA, DCA, PC-DCA, and SODA
9.9.5 Kernelized DCA and SODA learning models
9.10 Summary
9.11 Problems

Part V Support vector machines and variants

10 Support vector machines
10.1 Introduction
10.2 Linear support vector machines
10.2.1 The optimization formulation in original vector space
10.2.2 The Wolfe dual optimizer in empirical space
10.2.3 The Karush-Kuhn-Tucker (KKT) condition
10.2.4 Support vectors
10.2.5 Comparison between separation margins of LSE and SVM
10.3 SVM with fuzzy separation: roles of slack variables
10.3.1 Optimization in original space
10.3.2 The learning subspace property and optimization in empirical space
10.3.3 Characterization of support vectors and WEC analysis
10.4 Kernel-induced support vector machines
10.4.1 Primal optimizer in intrinsic space
10.4.2 Dual optimizer in empirical space
10.4.3 Multi-class SVM learning models
10.4.4 SVM learning softwares
10.5 Application case studies
10.5.1 SVM for cancer data analysis
10.5.2 Prediction performances w.r.t. size of training datasets
10.5.3 KRR versus SVM: application studies
10.6 Empirical-space SVM for trimming of support vectors
10.6.1 ℓ1-norm SVM in empirical space
10.6.2 ℓ2-norm SVM in empirical space
10.6.3 Empirical learning models for vectorial and nonvectorial data analysis
10.6.4 Wrapper methods for empirical learning models
10.6.5 Fusion of filtering and wrapper methods
10.7 Summary
10.8 Problems

11 Support vector learning models for outlier detection
11.1 Introduction
11.2 Support vector regression (SVR)
11.3 Hyperplane-based one-class SVM learning models
11.3.1 Hyperplane-based ν-SV classifiers
11.3.2 Hyperplane-based one-class SVM
11.4 Hypersphere-based one-class SVM
11.5 Support vector clustering
11.6 Summary
11.7 Problems

12 Ridge-SVM learning models
12.1 Introduction
12.2 Roles of ρ and C on WECs of KRR and SVM
12.2.1 Roles of ρ and C
12.2.2 WECs of KDA, KRR, and SVM
12.3 Ridge-SVM learning models
12.3.1 Ridge-SVM: a unifying supervised learning model
12.3.2 Important special cases of Ridge-SVM models
12.3.3 Subset selection: KKT and the termination condition
12.4 Impacts of design parameters on the WEC of Ridge-SVM
12.4.1 Transition ramps and the number of support vectors
12.4.2 Effects of ρ and Cmin on the transition ramp
12.4.3 The number of support vectors w.r.t. Cmin
12.5 Prediction accuracy versus training time
12.5.1 The tuning of the parameter C
12.5.2 The tuning of the parameter Cmin
12.5.3 The tuning of the parameter ρ
12.6 Application case studies
12.6.1 Experiments on UCI data
12.6.2 Experiments on microarray cancer diagnosis
12.6.3 Experiments on subcellular localization
12.6.4 Experiments on the ischemic stroke dataset
12.7 Summary
12.8 Problems

Part VI Kernel methods for green machine learning technologies

13 Efficient kernel methods for learning and classification
13.1 Introduction
13.2 System design considerations
13.2.1 Green processing technologies for local or client computing
13.2.2 Cloud computing platforms
13.2.3 Local versus centralized processing
13.3 Selection of cost-effective kernel functions
13.3.1 The intrinsic degree J
13.3.2 Truncated-RBF (TRBF) kernels
13.4 Classification complexities: empirical and intrinsic degrees
13.4.1 The discriminant function in the empirical representation
13.4.2 The discriminant function in the intrinsic representation
13.4.3 Tensor representation of discriminant functions
13.4.4 Complexity comparison of RBF and TRBF classifiers
13.4.5 Case study: ECG arrhythmia detection
13.5 Learning complexities: empirical and intrinsic degrees
13.5.1 Learning complexities for KRR and SVM
13.5.2 A scatter-matrix-based KRR algorithm
13.5.3 KRR learning complexity: RBF versus TRBF kernels
13.5.4 A learning and classification algorithm for big data size N
13.5.5 Case study: ECG arrhythmia detection
13.6 The tradeoff between complexity and prediction performance
13.6.1 Comparison of prediction accuracies
13.6.2 Prediction-complexity tradeoff analysis
13.7 Time-adaptive updating algorithms for KRR learning models
13.7.1 Time-adaptive recursive KRR algorithms
13.7.2 The intrinsic-space recursive KRR algorithm
13.7.3 A time-adaptive KRR algorithm with a forgetting factor
13.8 Summary
13.9 Problems

Part VII Kernel methods and statistical estimation theory

14 Statistical regression analysis and errors-in-variables models
14.1 Introduction
14.2 Statistical regression analysis
14.2.1 The minimum mean-square-error (MMSE) estimator/regressor
14.2.2 Linear regression analysis
14.3 Kernel ridge regression (KRR)
14.3.1 Orthonormal basis functions: single-variate cases
14.3.2 Orthonormal basis functions: multivariate cases
14.4 The perturbation-regulated regressor (PRR) for errors-in-variables models
14.4.1 MMSE solution for errors-in-variables models
14.4.2 Linear perturbation-regulated regressors
14.4.3 Kernel-based perturbation-regulated regressors
14.5 The kernel-based perturbation-regulated regressor (PRR): Gaussian cases
14.5.1 Orthonormal basis functions: single-variate cases
14.5.2 Single-variate Hermite estimators
14.5.3 Error-order tradeoff
14.5.4 Simulation results
14.5.5 Multivariate PRR estimators: Gaussian distribution
14.6 Two-projection theorems
14.6.1 The two-projection theorem: general case
14.6.2 The two-projection theorem: polynomial case
14.6.3 Two-projection for the PRR
14.6.4 Error analysis
14.7 Summary
14.8 Problems

15 Kernel methods for estimation, prediction, and system identification
15.1 Introduction
15.2 Kernel regressors for deterministic generation models
15.3 Kernel regressors for statistical generation models
15.3.1 The prior model and training data set
15.3.2 The Gauss-Markov theorem for statistical models
15.3.3 KRR regressors in empirical space
15.3.4 KRR regressors with Gaussian distribution
15.4 Kernel regressors for errors-in-variables (EiV) models
15.4.1 The Gauss-Markov theorem for EiV learning models
15.4.2 EiV regressors in empirical space
15.4.3 EiV regressors with Gaussian distribution
15.4.4 Finite-order EiV regressors
15.5 Recursive KRR learning algorithms
15.5.1 The recursive KRR algorithm in intrinsic space
15.5.2 The recursive KRR algorithm in empirical space
15.5.3 The recursive KRR algorithm in intrinsic space with a forgetting factor
15.5.4 The recursive KRR algorithm in empirical space with a forgetting factor and a finite window
15.6 Recursive EiV learning algorithms
15.6.1 Recursive EiV learning models in intrinsic space
15.6.2 The recursive EiV algorithm in empirical space
15.7 Summary
15.8 Problems

Part VIII Appendices

Appendix A Validation and testing of learning models
A.1 Cross-validation techniques
A.2 Hypothesis testing and significance testing
A.2.1 Hypothesis testing based on the likelihood ratio
A.2.2 Significance testing from the distribution of the null hypothesis
A.3 Problems

Appendix B kNN, PNN, and Bayes classifiers
B.1 Bayes classifiers
B.1.1 The GMM-based classifier
B.1.2 The basic Bayes classifier
B.2 Classifiers with no prior learning process
B.2.1 k nearest neighbors (kNN)
B.2.2 Probabilistic neural networks (PNN)
B.2.3 The log-likelihood classifier (LLC)
B.3 Problems

References
Index
|
any_adam_object | 1 |
author | Kung, S. Y. 1950- |
author_GND | (DE-588)172201314 |
author_facet | Kung, S. Y. 1950- |
author_role | aut |
author_sort | Kung, S. Y. 1950- |
author_variant | s y k sy syk |
building | Verbundindex |
bvnumber | BV041837185 |
classification_rvk | ST 302 |
ctrlnum | (OCoLC)881777272 (DE-599)OBVAC11444061 |
discipline | Informatik |
edition | 1. publ. |
format | Book |
id | DE-604.BV041837185 |
illustrated | Illustrated |
indexdate | 2024-07-10T01:06:33Z |
institution | BVB |
isbn | 9781107024960 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-027281924 |
oclc_num | 881777272 |
open_access_boolean | |
owner | DE-473 DE-BY-UBG DE-29T |
owner_facet | DE-473 DE-BY-UBG DE-29T |
physical | XXIV, 591 S. graph. Darst. |
publishDate | 2014 |
publishDateSearch | 2014 |
publishDateSort | 2014 |
publisher | Cambridge Univ. Press |
record_format | marc |
spelling | Kung, S. Y. 1950- Verfasser (DE-588)172201314 aut Kernel methods and machine learning S. Y. Kung 1. publ. Cambridge Cambridge Univ. Press 2014 XXIV, 591 S. graph. Darst. txt rdacontent n rdamedia nc rdacarrier Literaturverz. S. [561] - 577 Maschinelles Lernen (DE-588)4193754-5 gnd rswk-swf Kernel Informatik (DE-588)4338679-9 gnd rswk-swf Kernel Informatik (DE-588)4338679-9 s Maschinelles Lernen (DE-588)4193754-5 s DE-604 Digitalisierung UB Bamberg - ADAM Catalogue Enrichment application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=027281924&sequence=000002&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA Inhaltsverzeichnis |
spellingShingle | Kung, S. Y. 1950- Kernel methods and machine learning Maschinelles Lernen (DE-588)4193754-5 gnd Kernel Informatik (DE-588)4338679-9 gnd |
subject_GND | (DE-588)4193754-5 (DE-588)4338679-9 |
title | Kernel methods and machine learning |
title_auth | Kernel methods and machine learning |
title_exact_search | Kernel methods and machine learning |
title_full | Kernel methods and machine learning S. Y. Kung |
title_fullStr | Kernel methods and machine learning S. Y. Kung |
title_full_unstemmed | Kernel methods and machine learning S. Y. Kung |
title_short | Kernel methods and machine learning |
title_sort | kernel methods and machine learning |
topic | Maschinelles Lernen (DE-588)4193754-5 gnd Kernel Informatik (DE-588)4338679-9 gnd |
topic_facet | Maschinelles Lernen Kernel Informatik |
url | http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=027281924&sequence=000002&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |
work_keys_str_mv | AT kungsy kernelmethodsandmachinelearning |