Model selection and multimodel inference: a practical information-theoretic approach
Saved in:

| Main authors: | Burnham, Kenneth P.; Anderson, David Raymond |
---|---|
| Format: | Book |
| Language: | English |
| Published: | New York, NY [u.a.]: Springer, [ca. 2008] |
| Edition: | 2. ed., [Nachdr.] |
| Subjects: | Datenanalyse; Modellwahl; Biologie; Statistik; Mathematisches Modell |
| Online access: | Blurb (Klappentext); Table of contents (Inhaltsverzeichnis) |
| Note: | 1st ed. published under the title: Burnham, Kenneth P.: Model selection and inference |
| Physical description: | XXVI, 488 pp., illustrations, diagrams |
| ISBN: | 0387953647; 9780387953649 |
Internal format
MARC
LEADER | 00000nam a2200000 c 4500 | ||
---|---|---|---|
001 | BV023104292 | ||
003 | DE-604 | ||
005 | 20090821 | ||
007 | t | ||
008 | 080129s2008 ad|| |||| 00||| eng d | ||
020 | |a 0387953647 |9 0-387-95364-7 | ||
020 | |a 9780387953649 |9 978-0-387-95364-9 | ||
035 | |a (OCoLC)634630617 | ||
035 | |a (DE-599)BVBBV023104292 | ||
040 | |a DE-604 |b ger |e rakwb | ||
041 | 0 | |a eng | |
049 | |a DE-355 |a DE-20 | ||
084 | |a QH 234 |0 (DE-625)141549: |2 rvk | ||
084 | |a SK 850 |0 (DE-625)143263: |2 rvk | ||
084 | |a SK 950 |0 (DE-625)143273: |2 rvk | ||
084 | |a WC 7000 |0 (DE-625)148142: |2 rvk | ||
084 | |a BIO 110f |2 stub | ||
100 | 1 | |a Burnham, Kenneth P. |e Verfasser |0 (DE-588)1018922032 |4 aut | |
245 | 1 | 0 | |a Model selection and multimodel inference |b a practical information-theoretic approach |c Kenneth P. Burnham ; David R. Anderson |
250 | |a 2. ed., [Nachdr.] | ||
264 | 1 | |a New York, NY [u.a.] |b Springer |c [ca. 2008] | |
300 | |a XXVI, 488 S. |b Ill., graph. Darst. | ||
336 | |b txt |2 rdacontent | ||
337 | |b n |2 rdamedia | ||
338 | |b nc |2 rdacarrier | ||
500 | |a 1. Aufl. u.d.T.: Burnham, Kenneth P.: Model selection and inference | ||
650 | 0 | 7 | |a Datenanalyse |0 (DE-588)4123037-1 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Modellwahl |0 (DE-588)4304786-5 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Biologie |0 (DE-588)4006851-1 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Statistik |0 (DE-588)4056995-0 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Mathematisches Modell |0 (DE-588)4114528-8 |2 gnd |9 rswk-swf |
689 | 0 | 0 | |a Modellwahl |0 (DE-588)4304786-5 |D s |
689 | 0 | 1 | |a Datenanalyse |0 (DE-588)4123037-1 |D s |
689 | 0 | |5 DE-604 | |
689 | 1 | 0 | |a Biologie |0 (DE-588)4006851-1 |D s |
689 | 1 | 1 | |a Mathematisches Modell |0 (DE-588)4114528-8 |D s |
689 | 1 | |5 DE-604 | |
689 | 2 | 0 | |a Biologie |0 (DE-588)4006851-1 |D s |
689 | 2 | 1 | |a Statistik |0 (DE-588)4056995-0 |D s |
689 | 2 | |5 DE-604 | |
700 | 1 | |a Anderson, David Raymond |d 1942- |e Verfasser |0 (DE-588)122291727 |4 aut | |
856 | 4 | 2 | |m Digitalisierung UB Regensburg |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=016306982&sequence=000003&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Klappentext |
856 | 4 | 2 | |m Digitalisierung UB Regensburg |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=016306982&sequence=000004&line_number=0002&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis |
999 | |a oai:aleph.bib-bvb.de:BVB01-016306982 |
Contents

Preface vii
About the Authors xxi
Glossary xxiii

1 Introduction 1
1.1 Objectives of the Book 1
1.2 Background Material 5
1.2.1 Inference from Data, Given a Model 5
1.2.2 Likelihood and Least Squares Theory 6
1.2.3 The Critical Issue: "What Is the Best Model to Use?" 13
1.2.4 Science Inputs: Formulation of the Set of Candidate Models 15
1.2.5 Models Versus Full Reality 20
1.2.6 An Ideal Approximating Model 22
1.3 Model Fundamentals and Notation 23
1.3.1 Truth or Full Reality, f 23
1.3.2 Approximating Models gi(x | θ) 23
1.3.3 The Kullback-Leibler Best Model gi(x | θ0) 25
1.3.4 Estimated Models gi(x | θ̂) 25
1.3.5 Generating Models 26
1.3.6 Global Model 26
1.3.7 Overview of Stochastic Models in the Biological Sciences 27
1.4 Inference and the Principle of Parsimony 29
1.4.1 Avoid Overfitting to Achieve a Good Model Fit 29
1.4.2 The Principle of Parsimony 31
1.4.3 Model Selection Methods 35
1.5 Data Dredging, Overanalysis of Data, and Spurious Effects 37
1.5.1 Overanalysis of Data 38
1.5.2 Some Trends 40
1.6 Model Selection Bias 43
1.7 Model Selection Uncertainty 45
1.8 Summary 47

2 Information and Likelihood Theory: A Basis for Model Selection and Inference 49
2.1 Kullback-Leibler Information or Distance Between Two Models 50
2.1.1 Examples of Kullback-Leibler Distance 54
2.1.2 Truth, f, Drops Out as a Constant 58
2.2 Akaike's Information Criterion: 1973 60
2.3 Takeuchi's Information Criterion: 1976 65
2.4 Second-Order Information Criterion: 1978 66
2.5 Modification of Information Criterion for Overdispersed Count Data 67
2.6 AIC Differences, Δi 70
2.7 A Useful Analogy 72
2.8 Likelihood of a Model, L(gi | data) 74
2.9 Akaike Weights, wi 75
2.9.1 Basic Formula 75
2.9.2 An Extension 76
2.10 Evidence Ratios 77
2.11 Important Analysis Details 80
2.11.1 AIC Cannot Be Used to Compare Models of Different Data Sets 80
2.11.2 Order Not Important in Computing AIC Values 81
2.11.3 Transformations of the Response Variable 81
2.11.4 Regression Models with Differing Error Structures 82
2.11.5 Do Not Mix Null Hypothesis Testing with Information-Theoretic Criteria 83
2.11.6 Null Hypothesis Testing Is Still Important in Strict Experiments 83
2.11.7 Information-Theoretic Criteria Are Not a "Test" 84
2.11.8 Exploratory Data Analysis 84
2.12 Some History and Further Insights 85
2.12.1 Entropy 86
2.12.2 A Heuristic Interpretation 87
2.12.3 More on Interpreting Information-Theoretic Criteria 87
2.12.4 Nonnested Models 88
2.12.5 Further Insights 89
2.13 Bootstrap Methods and Model Selection Frequencies πi 90
2.13.1 Introduction 91
2.13.2 The Bootstrap in Model Selection: The Basic Idea 93
2.14 Return to Flather's Models 94
2.15 Summary 96

3 Basic Use of the Information-Theoretic Approach 98
3.1 Introduction 98
3.2 Example 1: Cement Hardening Data 100
3.2.1 Set of Candidate Models 101
3.2.2 Some Results and Comparisons 102
3.2.3 A Summary 106
3.3 Example 2: Time Distribution of an Insecticide Added to a Simulated Ecosystem 106
3.3.1 Set of Candidate Models 108
3.3.2 Some Results 110
3.4 Example 3: Nestling Starlings 111
3.4.1 Experimental Scenario 112
3.4.2 Monte Carlo Data 113
3.4.3 Set of Candidate Models 113
3.4.4 Data Analysis Results 117
3.4.5 Further Insights into the First Fourteen Nested Models 120
3.4.6 Hypothesis Testing and Information-Theoretic Approaches Have Different Selection Frequencies 121
3.4.7 Further Insights Following Final Model Selection 124
3.4.8 Why Not Always Use the Global Model for Inference? 125
3.5 Example 4: Sage Grouse Survival 126
3.5.1 Introduction 126
3.5.2 Set of Candidate Models 127
3.5.3 Model Selection 129
3.5.4 Hypothesis Tests for Year-Dependent Survival Probabilities 131
3.5.5 Hypothesis Testing Versus AIC in Model Selection 132
3.5.6 A Class of Intermediate Models 134
3.6 Example 5: Resource Utilization of Anolis Lizards 137
3.6.1 Set of Candidate Models 138
3.6.2 Comments on Analytic Method 138
3.6.3 Some Tentative Results 139
3.7 Example 6: Sakamoto et al.'s (1986) Simulated Data 141
3.8 Example 7: Models of Fish Growth 142
3.9 Summary 143

4 Formal Inference From More Than One Model: Multimodel Inference (MMI) 149
4.1 Introduction to Multimodel Inference 149
4.2 Model Averaging 150
4.2.1 Prediction 150
4.2.2 Averaging Across Model Parameters 151
4.3 Model Selection Uncertainty 153
4.3.1 Concepts of Parameter Estimation and Model Selection Uncertainty 155
4.3.2 Including Model Selection Uncertainty in Estimator Sampling Variance 158
4.3.3 Unconditional Confidence Intervals 164
4.4 Estimating the Relative Importance of Variables 167
4.5 Confidence Set for the K-L Best Model 169
4.5.1 Introduction 169
4.5.2 Δi, Model Selection Probabilities, and the Bootstrap 171
4.6 Model Redundancy 173
4.7 Recommendations 176
4.8 Cement Data 177
4.9 Pine Wood Data 183
4.10 The Durban Storm Data 187
4.10.1 Models Considered 188
4.10.2 Consideration of Model Fit 190
4.10.3 Confidence Intervals on Predicted Storm Probability 191
4.10.4 Comparisons of Estimator Precision 193
4.11 Flour Beetle Mortality: A Logistic Regression Example 195
4.12 Publication of Research Results 201
4.13 Summary 203

5 Monte Carlo Insights and Extended Examples 206
5.1 Introduction 206
5.2 Survival Models 207
5.2.1 A Chain Binomial Survival Model 207
5.2.2 An Example 210
5.2.3 An Extended Survival Model 215
5.2.4 Model Selection if Sample Size Is Huge, or Truth Known 219
5.2.5 A Further Chain Binomial Model 221
5.3 Examples and Ideas Illustrated with Linear Regression 224
5.3.1 All-Subsets Selection: A GPA Example 225
5.3.2 A Monte Carlo Extension of the GPA Example 229
5.3.3 An Improved Set of GPA Prediction Models 235
5.3.4 More Monte Carlo Results 238
5.3.5 Linear Regression and Variable Selection 244
5.3.6 Discussion 248
5.4 Estimation of Density from Line Transect Sampling 255
5.4.1 Density Estimation Background 255
5.4.2 Line Transect Sampling of Kangaroos at Wallaby Creek 256
5.4.3 Analysis of Wallaby Creek Data 256
5.4.4 Bootstrap Analysis 258
5.4.5 Confidence Interval on D 258
5.4.6 Bootstrap Samples: 1,000 Versus 10,000 260
5.4.7 Bootstrap Versus Akaike Weights: A Lesson on QAICc 261
5.5 Summary 264

6 Advanced Issues and Deeper Insights 267
6.1 Introduction 267
6.2 An Example with 13 Predictor Variables and 8,191 Models 268
6.2.1 Body Fat Data 268
6.2.2 The Global Model 269
6.2.3 Classical Stepwise Selection 269
6.2.4 Model Selection Uncertainty for AICc and BIC 271
6.2.5 An A Priori Approach 274
6.2.6 Bootstrap Evaluation of Model Uncertainty 276
6.2.7 Monte Carlo Simulations 279
6.2.8 Summary Messages 281
6.3 Overview of Model Selection Criteria 284
6.3.1 Criteria That Are Estimates of K-L Information 284
6.3.2 Criteria That Are Consistent for K 286
6.3.3 Contrasts 288
6.3.4 Consistent Selection in Practice: Quasi-true Models 289
6.4 Contrasting AIC and BIC 293
6.4.1 A Heuristic Derivation of BIC 293
6.4.2 A K-L-Based Conceptual Comparison of AIC and BIC 295
6.4.3 Performance Comparison 298
6.4.4 Exact Bayesian Model Selection Formulas 301
6.4.5 Akaike Weights as Bayesian Posterior Model Probabilities 302
6.5 Goodness-of-Fit and Overdispersion Revisited 305
6.5.1 Overdispersion c and Goodness-of-Fit: A General Strategy 305
6.5.2 Overdispersion Modeling: More Than One c 307
6.5.3 Model Goodness-of-Fit After Selection 309
6.6 AIC and Random Coefficient Models 310
6.6.1 Basic Concepts and Marginal Likelihood Approach 310
6.6.2 A Shrinkage Approach to AIC and Random Effects 313
6.6.3 On Extensions 316
6.7 Selection When Probability Distributions Differ by Model 317
6.7.1 Keep All the Parts 317
6.7.2 A Normal Versus Log-Normal Example 318
6.7.3 Comparing Across Several Distributions: An Example 320
6.8 Lessons from the Literature and Other Matters 323
6.8.1 Use AICc, Not AIC, with Small Sample Sizes 323
6.8.2 Use AICc, Not AIC, When K Is Large 325
6.8.3 When Is AICc Suitable: A Gamma Distribution Example 326
6.8.4 Inference from a Less Than Best Model 328
6.8.5 Are Parameters Real? 330
6.8.6 Sample Size Is Often Not a Simple Issue 332
6.8.7 Judgment Has a Role 333
6.9 Tidbits About AIC 334
6.9.1 Irrelevance of Between-Sample Variation of AIC 334
6.9.2 The G-Statistic and K-L Information 336
6.9.3 AIC Versus Hypothesis Testing: Results Can Be Very Different 337
6.9.4 A Subtle Model Selection Bias Issue 339
6.9.5 The Dimensional Unit of AIC 340
6.9.6 AIC and Finite Mixture Models 342
6.9.7 Unconditional Variance 344
6.9.8 A Baseline for w+(i) 345
6.10 Summary 347

7 Statistical Theory and Numerical Results 352
7.1 Useful Preliminaries 352
7.2 A General Derivation of AIC 362
7.3 General K-L-Based Model Selection: TIC 371
7.3.1 Analytical Computation of TIC 371
7.3.2 Bootstrap Estimation of TIC 372
7.4 AICc: A Second-Order Improvement 374
7.4.1 Derivation of AICc 374
7.4.2 Lack of Uniqueness of AICc 379
7.5 Derivation of AIC for the Exponential Family of Distributions 380
7.6 Evaluation of tr(J(θ0)[I(θ0)]⁻¹) and Its Estimator 384
7.6.1 Comparison of AIC Versus TIC in a Very Simple Setting 385
7.6.2 Evaluation Under Logistic Regression 390
7.6.3 Evaluation Under Multinomially Distributed Count Data 397
7.6.4 Evaluation Under Poisson-Distributed Data 405
7.6.5 Evaluation for Fixed-Effects Normality-Based Linear Models 406
7.7 Additional Results and Considerations 412
7.7.1 Selection Simulation for Nested Models 412
7.7.2 Simulation of the Distribution of Δp 415
7.7.3 Does AIC Overfit? 417
7.7.4 Can Selection Be Improved Based on All the Δi? 419
7.7.5 Linear Regression, AIC, and Mean Square Error 421
7.7.6 AICc and Models for Multivariate Data 424
7.7.7 There Is No True TICc 426
7.7.8 Kullback-Leibler Information Relationship to the Fisher Information Matrix 426
7.7.9 Entropy and Jaynes' Maxent Principle 427
7.7.10 Akaike Weights wi Versus Selection Probabilities πi 428
7.8 Kullback-Leibler Information Is Always ≥ 0 429
7.9 Summary 434

8 Summary 437
8.1 The Scientific Question and the Collection of Data 439
8.2 Actual Thinking and A Priori Modeling 440
8.3 The Basis for Objective Model Selection 442
8.4 The Principle of Parsimony 443
8.5 Information Criteria as Estimates of Expected Relative Kullback-Leibler Information 444
8.6 Ranking Alternative Models 446
8.7 Scaling Alternative Models 447
8.8 MMI: Inference Based on Model Averaging 448
8.9 MMI: Model Selection Uncertainty 449
8.10 MMI: Relative Importance of Predictor Variables 451
8.11 More on Inferences 451
8.12 Final Thoughts 454

References 455
Index 485

The second edition of this book is unique in that it focuses on methods for making
formal statistical inferences from all the models in an a priori set (multimodel
inference). A philosophy is presented for model-based data analysis, and a general
strategy is outlined for the analysis of empirical data. The book invites increased
attention to a priori science hypotheses and modeling.

Kullback-Leibler information represents a fundamental quantity in science and is
Hirotugu Akaike's basis for model selection. The maximized log-likelihood function
can be bias-corrected as an estimator of expected, relative Kullback-Leibler
information. This leads to Akaike's Information Criterion (AIC) and various
extensions. These methods are relatively simple and easy to use in practice, but are
based on deep statistical theory. The information-theoretic approaches provide a
unified and rigorous theory, an extension of likelihood theory, and an important
application of information theory, and they are objective and practical to employ
across a very wide class of empirical problems.

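The bias-corrected log-likelihood idea described above can be illustrated with a small numerical sketch. The Python below (not from the book; the candidate set, RSS values, and sample size are invented for illustration) computes the least-squares form of AIC, n·log(RSS/n) + 2K, and then the Akaike weights obtained by normalizing exp(-Δi/2) over the candidate models:

```python
import math

def aic_ls(n, rss, k):
    """AIC for a least-squares fit: n * log(RSS / n) + 2K.

    K counts every estimated parameter, including the residual
    variance, following the usual information-theoretic convention."""
    return n * math.log(rss / n) + 2 * k

def akaike_weights(aic_values):
    """Akaike weights: normalized exp(-Delta_i / 2) over the set,
    where Delta_i = AIC_i - min(AIC)."""
    best = min(aic_values)
    rel_likelihoods = [math.exp(-(a - best) / 2.0) for a in aic_values]
    total = sum(rel_likelihoods)
    return [r / total for r in rel_likelihoods]

# Hypothetical candidate set: (RSS, K) for three regressions fitted
# to the same n = 30 observations.
n = 30
candidates = [(12.0, 3), (9.5, 4), (9.4, 6)]

aics = [aic_ls(n, rss, k) for rss, k in candidates]
weights = akaike_weights(aics)
```

Adding parameters always lowers RSS, but the 2K penalty means the most complex model need not win; here the middle model attains the lowest AIC and carries most of the weight, which is interpreted as the relative evidence that it is the K-L best model of the set.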
Model Selection and Multimodel Inference presents several new ways to incorporate
model selection uncertainty into parameter estimates and estimates of precision. An
array of challenging examples is given to illustrate various technical issues. This
is an applied book written primarily for biologists and statisticians who want to
make inferences from multiple models, and it is suitable as a graduate text or as a
reference for professional analysts.

DR. KENNETH P. BURNHAM (a statistician) has applied and developed statistical
theory for thirty years in several areas of the life sciences, especially ecology and
wildlife. He is the recipient of numerous professional awards, including the
Distinguished Achievement Medal from the American Statistical Association, Section on
Statistics and the Environment, and the Distinguished Statistical Ecologist Award
from INTECOL (International Congress of Ecology). Dr. Burnham is a fellow of the
American Statistical Association.

DR. DAVID R. ANDERSON is a senior scientist with the Biological Resources Division
of the U.S. Geological Survey and a professor in the Department of Fishery and
Wildlife Biology, Colorado State University. He is the recipient of numerous
professional awards for scientific and academic contributions, including the
Meritorious Service Award given by the U.S. Department of the Interior.
adam_txt |
Contents
Preface
vii
About the Authors
xxi
Glossary
xxiii
1
Introduction
1
1.1
Objectives of the Book
. 1
1.2
Background Material
. 5
1.2.1
Inference from Data, Given a Model
. 5
1.2.2
Likelihood and Least Squares Theory
. 6
1.2.3
The Critical Issue: "What Is the Best Model
to Use?"
. 13
1.2.4
Science Inputs: Formulation of the Set of
Candidate Models
. 15
1.2.5
Models Versus Full Reality
. 20
1.2.6
An Ideal Approximating Model
. 22
1.3
Model Fundamentals and Notation
. 23
1.3.1
Truth or Full Reality
ƒ. 23
1.3.2
Approximating Models gi(x\0)
. 23
1.3.3
The Kullback-Leibler Best Model g,
{χ\θ0)
. 25
1.3.4
Estimated Models
g¡(x\e)
. 25
1.3.5
Generating Models
. 26
1.3.6
Global Model
. 26
xiv Contents
1.3.7
Overview of Stochastic Models in the
Biological Sciences
. 27
1.4
Inference and the Principle of Parsimony
. 29
1.4.
1 Avoid Overfitting to Achieve a Good Model Fit
. . 29
1.4.2
The Principle of Parsimony
. 31
1.4.3
Model Selection Methods
. 35
1.5
Data Dredging, Overanalysis of Data, and
Spurious Effects
. 37
1.5.1
Overanalysis of Data
. 38
1.5.2
Some Trends
. 40
1.6
Model Selection Bias
. 43
1.7
Model Selection Uncertainty
. 45
1.8
Summary
. 47
2
Information and Likelihood Theory: A Basis for Model
Selection and Inference
49
2.1
Kullback-Leibler Information or Distance Between
Two Models
. 50
2.1.1
Examples of Kullback-Leibler Distance
. 54
2.1.2
Truth,
ƒ,
Drops Out as a Constant
. 58
2.2
Akaike's Information Criterion:
1973. 60
2.3
Takeuchi's Information Criterion:
1976. 65
2.4
Second-Order Information Criterion:
1978. 66
2.5
Modification of Information Criterion for Overdispersed
Count Data
. 67
2.6
AIC Differences,
Δ,
. 70
2.7
A Useful Analogy
. 72
2.8
Likelihood of a Model,
HgĄdata)
. 74
2.9
Akaiké
Weights, w,
. 75
2.9.1
Basic Formula
. 75
2.9.2
An Extension
. 76
2.10
Evidence Ratios
. 77
2.11
Important Analysis Details
. 80
2.11.1
AIC Cannot Be Used to Compare Models of
Different Data Sets
. 80
2.11.2
Order Not Important in Computing AIC Values
. . 81
2.11.3
Transformations of the Response Variable
. 81
2.11.4
Regression Models with Differing
Error Structures
. 82
2.11.5
Do Not Mix Null Hypothesis Testing with
Information-Theoretic Criteria
. 83
2.11.6
Null Hypothesis Testing Is Still Important in
Strict Experiments
. 83
2.11.7
Information-Theoretic Criteria Are Not a "Test"
. . 84
2.11.8
Exploratory Data Analysis
. 84
Contents xv
2.12
Some History and Further Insights
. 85
2.12.1
Entropy
. 86
2.12.2
A Heuristic Interpretation
. 87
2.12.3
More on Interpreting Information-
Theoretic Criteria
. 87
2.12.4
Nonnested Models
. 88
2.12.5
Further Insights
. 89
2.13
Bootstrap Methods and Model Selection Frequencies
я,-
. . 90
2.13.1
Introduction
. 91
2.13.2
The Bootstrap in Model Selection:
The Basic Idea
. 93
2.14
Return to Flather's Models
. 94
2.15
Summary
. 96
Basic Use of the Information-Theoretic Approach
98
3.1
Introduction
. 98
3.2
Example
1 :
Cement Hardening Data
. 100
3.2.1
Set of Candidate Models
. 101
3.2.2
Some Results and Comparisons
. 102
3.2.3
A Summary
. 106
3.3
Example
2:
Time Distribution of an Insecticide Added to a
Simulated Ecosystem
. 106
3.3.1
Set of Candidate Models
. 108
3.3.2
Some Results
. 110
3.4
Example
3:
Nestling Starlings
.
Ill
3.4.1
Experimental Scenario
. 112
3.4.2
Monte Carlo Data
. 113
3.4.3
Set of Candidate Models
. 113
3.4.4
Data Analysis Results
. 117
3.4.5
Further Insights into the First Fourteen
Nested Models
. 120
3.4.6
Hypothesis Testing and Information-Theoretic
Approaches Have Different
Selection Frequencies
. 121
3.4.7
Further Insights Following Final
Model Selection
. 124
3.4.8
Why Not Always Use the Global Model
for Inference?
. 125
3.5
Example
4:
Sage Grouse Survival
. 126
3.5.1
Introduction
. 126
3.5.2
Set of Candidate Models
. 127
3.5.3
Model Selection
. 129
3.5.4
Hypothesis Tests for Year-Dependent
Survival Probabilities
. 131
xvi Contents
3.5.5
Hypothesis Testing Versus AIC
in
Model Selection
. 132
3.5.6
A Class of Intermediate Models
. 134
3.6
Example
5:
Resource Utilization of Anolis Lizards
. 137
3.6.1
Set of Candidate Models
. 138
3.6.2
Comments on Analytic Method
. 138
3.6.3
Some Tentative Results
. 139
3.7
Example
6:
Sakamoto
et
al.'s
(1986)
Simulated Data
. 141
3.8
Example
7:
Models of Fish Growth
. 142
3.9
Summary
. 143
4
Formal Inference From More Than One Model:
Multimodel Inference
(MMI)
149
4.1
Introduction to Multimodel Inference
. 149
4.2
Model Averaging
. 150
4.2.1
Prediction
. 150
4.2.2
Averaging Across Model Parameters
. 151
4.3
Model Selection Uncertainty
. 153
4.3.1
Concepts of Parameter Estimation and
Model Selection Uncertainty
. 155
4.3.2
Including Model Selection Uncertainty in
Estimator Sampling Variance
. 158
4.3.3
Unconditional Confidence Intervals
. 164
4.4
Estimating the Relative Importance of Variables
. 167
4.5
Confidence Set for the K-L Best Model
. 169
4.5.1
Introduction
. 169
4.5.2
Δ,,
Model Selection Probabilities,
and the Bootstrap
. 171
4.6
Model Redundancy
. 173
4.7
Recommendations
. 176
4.8
Cement Data
. 177
4.9
Pine Wood Data
. 183
4.10
The Durban Storm Data
. 187
4.10.1
Models Considered
. 188
4.10.2
Consideration of Model Fit
. 190
4.10.3
Confidence Intervals on Predicted
Storm Probability
. 191
4.10.4
Comparisons of Estimator Precision
. 193
4.11
Flour Beetle Mortality: A Logistic Regression Example
. 195
4.12
Publication of Research Results
. 201
4.13
Summary
. 203
5
Monte Carlo Insights and Extended Examples
206
5.1
Introduction
. 206
5.2
Survival Models
. 207
Contents xvii
5.2.1
A
Chain
Binomial
Survival
Model. 207
5.2.2
An Example
. 210
5.2.3 An
Extended Survival
Model. 215
5.2.4
Model Selection if Sample Size Is Huge,
or Truth Known
. 219
5.2.5
A Further Chain Binomial Model
. 221
5.3
Examples and Ideas Illustrated with Linear Regression
. . . 224
5.3.1
All-Subsets Selection: A GPA Example
. 225
5.3.2
A Monte Carlo Extension of the GPA Example
. . 229
5.3.3
An Improved Set of GPA Prediction Models
. 235
5.3.4
More Monte Carlo Results
. 238
5.3.5
Linear Regression and Variable Selection
. 244
5.3.6
Discussion
. 248
5.4
Estimation of Density from Line Transect Sampling
. 255
5.4.
1 Density Estimation Background
. 255
5.4.2
Line Transect Sampling of Kangaroos at
Wallaby Creek
. 256
5.4.3
Analysis of Wallaby Creek Data
. 256
5.4.4
Bootstrap Analysis
. 258
5.4.5
Confidence Interval on
D
. 258
5.4.6
Bootstrap Samples:
1,000
Versus
10,000. 260
5.4.7
Bootstrap Versus
Akaiké
Weights: A Lesson
onQAIQ.
. 261
5.5
Summary
. 264
Advanced Issues and Deeper Insights
267
6.1
Introduction
. 267
6.2
An Example with
13
Predictor Variables and
8,191
Models
. 268
6.2.1
Body Fat Data
. 268
6.2.2
The Global Model
. 269
6.2.3
Classical Stepwise Selection
. 269
6.2.4
Model Selection Uncertainty for AICf and
BIC
. . 271
6.2.5
An A Priori Approach
. 274
6.2.6
Bootstrap Evaluation of Model Uncertainty
. 276
6.2.7
Monte Carlo Simulations
. 279
6.2.8
Summary Messages
. 281
6.3
Overview of Model Selection Criteria
. 284
6.3.1
Criteria That Are Estimates of K-L Information
. . 284
6.3.2
Criteria That Are Consistent for
К
. 286
6.3.3
Contrasts
. 288
6.3.4
Consistent Selection in Practice:
Quasi-true Models
. 289
6.4
Contrasting AIC and
BIC
. 293
6.4.1
A Heuristic Derivation of
BIC
. 293
xviii Contents
6.4.2
A K-L-Based Conceptual Comparison of
AIC and
BIC .
295
6.4.3
Performance Comparison
. 298
6.4.4
Exact Bayesian Model Selection Formulas
. 301
6.4.5
Akaiké
Weights as Bayesian Posterior
Model Probabilities
. 302
6.5
Goodness-of-Fit and Overdispersion Revisited
. 305
6.5.1
Overdispersion
с
and Goodness-of-Fit:
A General Strategy
. 305
6.5.2
Overdispersion Modeling: More Than One
с
. . . . 307
6.5.3
Model Goodness-of-Fit After Selection
. 309
6.6
AIC and Random Coefficient Models
. 310
6.6.1
Basic Concepts and Marginal
Likelihood Approach
. 310
6.6.2
A Shrinkage Approach to AIC and
Random Effects
. 313
6.6.3
On Extensions
. 316
6.7
Selection When Probability Distributions Differ
by Model
. 317
6.7.1
Keep All the Parts
. 317
6.7.2
A Normal Versus Log-Normal Example
. 318
6.7.3
Comparing Across Several Distributions:
An Example
. 320
6.8
Lessons from the Literature and Other Matters
. 323
6.8.1
Use AIC,., Not AIC, with Small Sample Sizes
. 323
6.8.2
Use AICC, Not AIC, When
К
Is Large
. 325
6.8.3
When Is AIC, Suitable: A Gamma
Distribution Example
. 326
6.8.4
Inference from a Less Than Best Model
. 328
6.8.5
Are Parameters Real?
. 330
6.8.6
Sample Size Is Often Not a Simple Issue
. 332
6.8.7
Judgment Has a Role
. 333
6.9
Tidbits About AIC
. 334
6.9.1
Irrelevance of Between-Sample Variation
of AIC
. 334
6.9.2
The G-Statistic and K-L Information
. 336
6.9.3
AIC Versus Hypothesis Testing: Results Can Be
Very Different
. 337
6.9.4
A Subtle Model Selection Bias Issue
. 339
6.9.5
The Dimensional Unit of AIC
. 340
6.9.6
AIC and Finite Mixture Models
. 342
6.9.7
Unconditional Variance
. 344
6.9.8
A Baseline for w+(i)
. 345
6.10
Summary
. 347
Contents xix
7
Statistical Theory and Numerical Results
352
7.1
Useful Preliminaries
. 352
7.2
A General Derivation of AIC
. 362
7.3
General K-L-Based Model Selection: TIC
. 371
7.3.1
Analytical Computation of TIC
. 371
7.3.2
Bootstrap Estimation of TIC
. 372
7.4
AIC,.: A Second-Order Improvement
. 374
7.4.1
Derivation of AIQ
. 374
7.4.2
Lack of Uniqueness of AIC,
. 379
7.5
Derivation of AIC for the Exponential Family
of Distributions
. 380
7.6
Evaluation of tr(
J
(£„)[/(£„)Г')
and Its Estimator
. 384
7.6.1
Comparison of AIC Versus TIC in a
Very Simple Setting
. 385
7.6.2
Evaluation Under Logistic Regression
. 390
7.6.3
Evaluation Under Multinomially Distributed
Count Data
. 397
7.6.4
Evaluation Under Poisson-Distributed Data
. 405
7.6.5
Evaluation for Fixed-Effects Normality-Based
Linear Models
. 406
7.7
Additional Results and Considerations
. 412
7.7.1
Selection Simulation for Nested Models
. 412
7.7.2
Simulation of the Distribution of Ap
. 415
7.7.3
Does AIC Overfit?
. 417
7.7.4
Can Selection Be Improved Based on
All the
Δ,·?
. 419
7.7.5
Linear Regression, AIC, and Mean Square Error
. . 421
7.7.6
AIC,. and Models for Multivariate Data
. 424
7.7.7
There Is No True TIQ.
. 426
7.7.8
Kullback-Leibler Information Relationship to the
Fisher Information Matrix
. 426
7.7.9
Entropy and Jaynes Maxent Principle
. 427
7.7.10
Akaiké
Weights
w¡
Versus Selection
Probabilities
7Г,
. 428
7.8
Kullback-Leibler Information Is Always
> 0. 429
7.9
Summary
. 434
8
Summary
437
8.1
The Scientific Question and the Collection of Data
. 439
8.2
Actual Thinking and A Priori Modeling
. 440
8.3
The Basis for Objective Model Selection
. 442
8.4
The Principle of Parsimony
. 443
8.5
Information Criteria as Estimates of Expected Relative
Kullback-Leibler Information
. 444
8.6
Ranking Alternative Models
. 446
xx Contents
8.7
Scaling Alternative Models
. 447
8.8
MMI:
Inference Based on Model Averaging
. 448
8.9
MMI:
Model Selection Uncertainty
. 449
8.10
MMI:
Relative Importance of Predictor Variables
. 451
8.11
More on Inferences
. 451
8.12
Final Thoughts
. 454
References
455
Index
485
The second edition of this book is unique in that it focuses on methods for making
formal statistical inferences from all the models in an a priori set (multimodel infer¬
ence). A philosophy is presented for model-based data analysis, and a general strategy
is outlined for the analysis of empirical data. The book invites increased attention to a
priori science hypotheses and modeling.
Kullback Leibler information represents a fundamental quantity in science and is
Hirotugu Akaike's basis for model selection. The maximized log-likelihood function
can be bias-corrected as an estimator of expected, relative Kullback-Leibler informa¬
tion. This leads to Akaike's Information Criterion (AIC) and various extensions. These
methods are relatively simple and easy to use in practice, but based on deep statistical
theory. The information-theoretic approaches provide a unified and rigorous theory, an
extension of likelihood theory, and an important application of information theory, and
are objective and practical to employ across a very wide class of empirical problems.
Model Selection and Multimodel Inference presents several new ways to incorporate
model selection uncertainty into parameter estimates and estimates of precision. An
array of challenging examples is given to illustrate various technical issues. This is an
applied book written primarily for biologists and statisticians who want to make inferences
from multiple models and is suitable as a graduate text or as a reference for
professional analysts.
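One common way such selection uncertainty enters an estimate is model averaging: weight each model's estimate by its Akaike weight, and inflate each model's conditional variance by the squared deviation from the averaged value. The numbers and names below are hypothetical, a sketch of the weighting idea rather than any particular analysis from the book.

```python
import math

# Hypothetical per-model estimates of a parameter theta, their
# conditional variances, and Akaike weights (illustrative values only).
estimates = [1.10, 1.25, 0.95]
variances = [0.04, 0.05, 0.03]
weights = [0.50, 0.30, 0.20]

# Model-averaged estimate: weight-weighted mean of the per-model estimates.
theta_bar = sum(w * t for w, t in zip(weights, estimates))

# Unconditional standard error: each model's conditional variance is
# augmented by (estimate - theta_bar)^2, a model-selection-uncertainty term.
se = sum(w * math.sqrt(v + (t - theta_bar) ** 2)
         for w, t, v in zip(weights, estimates, variances))
```

The augmented term makes the reported precision honest about the fact that the model itself was selected from data, not known in advance.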
DR. KENNETH P. BURNHAM (a statistician) has applied and developed statistical
theory for thirty years in several areas of life sciences, especially ecology and
wildlife. He is the recipient of numerous professional awards, including the Distinguished
Achievement Medal from the American Statistical Association, Section on
Statistics and the Environment, and the Distinguished Statistical Ecologist Award from
INTECOL (International Congress of Ecology). Dr. Burnham is a fellow of the American
Statistical Association.
DR. DAVID R. ANDERSON is a senior scientist with the Biological Resources Division
within the U.S. Geological Survey and a professor in the Department of Fishery and
Wildlife Biology, Colorado State University. He is the recipient of numerous professional
awards for scientific and academic contributions, including the Meritorious
Service Award given by the U.S. Department of the Interior.
any_adam_object | 1 |
any_adam_object_boolean | 1 |
author | Burnham, Kenneth P. Anderson, David Raymond 1942- |
author_GND | (DE-588)1018922032 (DE-588)122291727 |
author_role | aut aut |
author_sort | Burnham, Kenneth P. |
author_variant | k p b kp kpb d r a dr dra |
building | Verbundindex |
bvnumber | BV023104292 |
classification_rvk | QH 234 SK 850 SK 950 WC 7000 |
classification_tum | BIO 110f |
ctrlnum | (OCoLC)634630617 (DE-599)BVBBV023104292 |
discipline | Biologie Informatik Mathematik Wirtschaftswissenschaften |
edition | 2. ed., [Nachdr.] |
format | Book |
id | DE-604.BV023104292 |
illustrated | Illustrated |
index_date | 2024-07-02T19:45:58Z |
indexdate | 2024-07-09T21:11:07Z |
institution | BVB |
isbn | 0387953647 9780387953649 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-016306982 |
oclc_num | 634630617 |
open_access_boolean | |
owner | DE-355 DE-BY-UBR DE-20 |
physical | XXVI, 488 S. Ill., graph. Darst. |
publishDate | 2008 |
publisher | Springer |
record_format | marc |
spelling | Burnham, Kenneth P. Verfasser (DE-588)1018922032 aut Model selection and multimodel inference a practical information-theoretic approach Kenneth P. Burnham ; David R. Anderson 2. ed., [Nachdr.] New York, NY [u.a.] Springer [ca. 2008] XXVI, 488 S. Ill., graph. Darst. txt rdacontent n rdamedia nc rdacarrier 1. Aufl. u.d.T.: Burnham, Kenneth P.: Model selection and inference Datenanalyse (DE-588)4123037-1 gnd rswk-swf Modellwahl (DE-588)4304786-5 gnd rswk-swf Biologie (DE-588)4006851-1 gnd rswk-swf Statistik (DE-588)4056995-0 gnd rswk-swf Mathematisches Modell (DE-588)4114528-8 gnd rswk-swf Modellwahl (DE-588)4304786-5 s Datenanalyse (DE-588)4123037-1 s DE-604 Biologie (DE-588)4006851-1 s Mathematisches Modell (DE-588)4114528-8 s Statistik (DE-588)4056995-0 s Anderson, David Raymond 1942- Verfasser (DE-588)122291727 aut Digitalisierung UB Regensburg application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=016306982&sequence=000003&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA Klappentext Digitalisierung UB Regensburg application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=016306982&sequence=000004&line_number=0002&func_code=DB_RECORDS&service_type=MEDIA Inhaltsverzeichnis |
subject_GND | (DE-588)4123037-1 (DE-588)4304786-5 (DE-588)4006851-1 (DE-588)4056995-0 (DE-588)4114528-8 |
title | Model selection and multimodel inference a practical information-theoretic approach |
title_full | Model selection and multimodel inference a practical information-theoretic approach Kenneth P. Burnham ; David R. Anderson |
title_short | Model selection and multimodel inference |
title_sub | a practical information-theoretic approach |
topic | Datenanalyse (DE-588)4123037-1 gnd Modellwahl (DE-588)4304786-5 gnd Biologie (DE-588)4006851-1 gnd Statistik (DE-588)4056995-0 gnd Mathematisches Modell (DE-588)4114528-8 gnd |
topic_facet | Datenanalyse Modellwahl Biologie Statistik Mathematisches Modell |
url | http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=016306982&sequence=000003&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=016306982&sequence=000004&line_number=0002&func_code=DB_RECORDS&service_type=MEDIA |
work_keys_str_mv | AT burnhamkennethp modelselectionandmultimodelinferenceapracticalimformationtheoreticapproach AT andersondavidraymond modelselectionandmultimodelinferenceapracticalimformationtheoreticapproach |