Model Selection and Multi-Model Inference
The second edition of this book is unique in that it focuses on methods for making formal statistical inference from all the models in an a priori set (Multi-Model Inference). A philosophy is presented for model-based data analysis and a general strategy outlined for the analysis of empirical data. The book invites increased attention to a priori science hypotheses and modeling. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected as an estimator of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions. These methods are relatively simple and easy to use in practice, but are based on deep statistical theory. The information-theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, and an important application of information theory, and are objective and practical to employ across a very wide class of empirical problems. The book presents several new ways to incorporate model selection uncertainty into parameter estimates and estimates of precision. An array of challenging examples is given to illustrate various technical issues. This is an applied book written primarily for biologists and statisticians wanting to make inferences from multiple models, and it is suitable as a graduate text or as a reference for professional analysts.
An authoritative and thorough treatment, December 18, 2000
Burnham and Anderson have put together a scholarly account of the developments in model selection techniques from the information-theoretic viewpoint. This is an important practical subject. As computer algorithms for fitting models become more and more available, and as data mining and exploratory analysis become more popular and are used more by novices, problems with overfitting models will again raise their ugly heads. This has been an issue for statisticians for decades, but the problems and the art of model selection have not been commonly covered in elementary courses on statistics and regression. George Box puts proper emphasis on the iterative nature of model selection and the importance of applying the principle of parsimony in many of his books. Classic texts on regression like Draper and Smith point out the pitfalls of goodness-of-fit measures like R-square and explain Mallows' Cp and adjusted R-square. There are now also a few good books devoted to model selection, including the book by McQuarrie and Tsai (that I recently reviewed for Amazon) and the Chapman and Hall monograph by A. J. Miller.
Burnham and Anderson address all these issues and provide the best coverage to date of bootstrap and cross-validation approaches. They are also careful in their historical account and in bringing some coherence to the scattered literature, and they are thorough in their references. Their theme is the information-theoretic measures based on the Kullback-Leibler distance. The breakthrough in this theory came from Akaike in the 1970s, with improvements and refinements coming later. The authors provide the theory, but more importantly, they provide many real examples to illustrate the problems and show how the methods work.
They also refer to the recent work in Bayesian methods. Chapter 1 is a great introduction that everyone should read. Being a fan of the bootstrap I was interested in their coverage of it in chapters 4, 5 and 6 (much of which is the authors' own work).
Because the authors work in biological fields they cover survival models as well as the standard time series and regression models where most of the emphasis has been placed on model selection in the past.
It is a great reference source and an important book for learning about model selection as part of the inferential process. The pictures of famous contributors inserted throughout the book are also nice to see: Akaike, Boltzmann, Shibata, Kullback, and Leibler are brought to life in photographs or sketches.
Rating: not rated | Added on: 8 Dec 2006