Peter Mörters, Roger Moser, Mathew Penrose, Hartmut Schwetlick, and Johannes Zimmer (eds)
- Published in print:
- 2008
- Published Online:
- September 2008
- ISBN:
- 9780199239252
- eISBN:
- 9780191716911
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199239252.001.0001
- Subject:
- Mathematics, Probability / Statistics, Analysis
There has recently been a significant increase in activity at the interface between applied analysis and probability theory. With the potential of a combined approach to the study of various physical systems in view, this book is a collection of topical survey articles by leading researchers in both fields, working on the mathematical description of growth phenomena in the broadest sense. The main aim of the book is to foster interaction between researchers in probability and analysis, and to inspire joint efforts to attack important physical problems. Mathematical methods discussed in the book include large deviation theory, the lace expansion, harmonic analysis, multi-scale techniques, and homogenization of partial differential equations. Models based on the physics of individual particles are discussed alongside models based on the continuum description of large collections of particles, and the mathematical theories are used to describe physical phenomena such as droplet formation, Bose–Einstein condensation, Anderson localization, Ostwald ripening, and the formation of the early universe.
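Among the methods named above, large deviation theory admits a one-line flagship statement; as a taste of the subject (a standard result, not specific to this volume), Cramér's theorem quantifies the exponential rarity of an atypical sample mean:

```latex
% Cramér's theorem: for i.i.d. X_1, X_2, ... with finite logarithmic
% moment generating function, and for x > E[X_1],
\[
  \lim_{n \to \infty} \frac{1}{n} \log
  \mathbb{P}\!\Bigl( \tfrac{1}{n}\textstyle\sum_{i=1}^{n} X_i \ge x \Bigr)
  = -\Lambda^{*}(x),
  \qquad
  \Lambda^{*}(x) = \sup_{\lambda \in \mathbb{R}}
  \bigl[ \lambda x - \log \mathbb{E}\, e^{\lambda X_1} \bigr].
\]
```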
Ludwig Fahrmeir and Thomas Kneib
- Published in print:
- 2011
- Published Online:
- September 2011
- ISBN:
- 9780199533022
- eISBN:
- 9780191728501
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199533022.001.0001
- Subject:
- Mathematics, Probability / Statistics, Biostatistics
Several recent advances in smoothing and semiparametric regression are presented in this book from a unifying, Bayesian perspective. Simulation-based full Bayesian Markov chain Monte Carlo (MCMC) inference is considered, as well as empirical Bayes procedures closely related to penalized likelihood estimation and mixed models. Throughout, the focus is on semiparametric regression and smoothing based on basis expansions of unknown functions and effects, in combination with smoothness priors for the basis coefficients. The book begins with a review of basic methods for smoothing and mixed models; longitudinal data, spatial data, and event history data are then treated in separate chapters. Worked examples from fields such as forestry, development economics, medicine, and marketing illustrate the statistical methods covered. Most of these examples have been analysed using implementations in the Bayesian software BayesX, and some using R code.
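To fix ideas, the recurring construction in this approach is a basis expansion with a Gaussian smoothness prior; the following schematic statement (generic notation, not the authors') shows the link to penalized likelihood:

```latex
% An unknown function is expanded in J basis functions B_j (e.g. B-splines),
% and the coefficient vector gamma receives a Gaussian smoothness prior
% with (possibly rank-deficient) penalty matrix K:
\[
  f(x) = \sum_{j=1}^{J} \gamma_j B_j(x),
  \qquad
  \gamma \mid \tau^{2} \sim N\bigl(0,\; \tau^{2} K^{-}\bigr).
\]
% The posterior mode for gamma then coincides with a penalized likelihood
% estimate with penalty gamma' K gamma / (2 tau^2), which is the bridge
% to mixed-model and empirical Bayes inference.
```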
José M. Bernardo, M. J. Bayarri, James O. Berger, A. P. Dawid, David Heckerman, Adrian F. M. Smith, and Mike West (eds)
- Published in print:
- 2011
- Published Online:
- January 2012
- ISBN:
- 9780199694587
- eISBN:
- 9780191731921
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199694587.001.0001
- Subject:
- Mathematics, Probability / Statistics
The Valencia International Meetings on Bayesian Statistics – established in 1979 and held every four years – have been the forum for a definitive overview of current concerns and activities in Bayesian statistics. These are the edited proceedings of the ninth meeting, and contain the invited papers, each followed by its discussion and a rejoinder by the author(s). In the tradition of the earlier editions, the volume encompasses an enormous range of theoretical and applied research, highlighting the breadth, vitality, and impact of Bayesian thinking in interdisciplinary research across many fields, as well as the corresponding growth and vitality of core theory and methodology. The Valencia 9 invited papers cover a broad range of topics, including foundational and core theoretical issues in statistics, the continued development of new and refined computational methods for complex Bayesian modelling, substantive applications of flexible Bayesian modelling, and new developments in the theory and methodology of graphical modelling. They also describe advances in methodology for specific applied fields, including financial econometrics and portfolio decision making, public policy applications for drug surveillance, studies in the physical and environmental sciences, astronomy and astrophysics, climate change studies, molecular biosciences, statistical genetics, and stochastic dynamic networks in systems biology.
Paul Damien, Petros Dellaportas, Nicholas G. Polson, and David A. Stephens (eds)
- Published in print:
- 2013
- Published Online:
- May 2013
- ISBN:
- 9780199695607
- eISBN:
- 9780191744167
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199695607.001.0001
- Subject:
- Mathematics, Probability / Statistics
The development of hierarchical models and Markov chain Monte Carlo (MCMC) techniques forms one of the most profound advances in Bayesian analysis since the 1970s and provides the basis for advances in virtually all areas of applied and theoretical Bayesian statistics. The book takes the reader on a statistical journey that begins with the basic structure of Bayesian theory and then details most of the past and present advances in the field. It honours the contributions of Sir Adrian F. M. Smith, one of the seminal Bayesian researchers, with his work on hierarchical models, sequential Monte Carlo, and Markov chain Monte Carlo, and his mentoring of numerous graduate students.
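As a reminder of the machinery being honoured, here is a minimal random-walk Metropolis sampler; it is a generic textbook sketch, not code from the book:

```python
import numpy as np

def random_walk_mh(log_target, x0, n_iter=10_000, step=1.0, seed=0):
    """Random-walk Metropolis: sample a 1-D target given its log density."""
    rng = np.random.default_rng(seed)
    x = x0
    out = np.empty(n_iter)
    for i in range(n_iter):
        proposal = x + step * rng.normal()   # symmetric Gaussian proposal
        # accept with probability min(1, target(proposal) / target(x))
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        out[i] = x                           # repeat current state if rejected
    return out

# Example: draws approximate a standard normal target.
draws = random_walk_mh(lambda x: -0.5 * x**2, x0=0.0)
print(draws.mean(), draws.var())
```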
A. C. Davison, Yadolah Dodge, and N. Wermuth (eds)
- Published in print:
- 2005
- Published Online:
- September 2007
- ISBN:
- 9780198566540
- eISBN:
- 9780191718038
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198566540.001.0001
- Subject:
- Mathematics, Probability / Statistics
Sir David Cox is among the most important statisticians of the past half-century, having made pioneering and highly influential contributions to a wide range of topics in statistics and applied probability. This book contains summaries of the invited talks at a meeting held at the University of Neuchâtel in July 2004 to celebrate David Cox’s 80th birthday. The chapters, written by numerous well-known statisticians, describe current developments across a wide range of topics, from statistical theory and methods, through applied probability and modelling, to applications in areas including finance, epidemiology, hydrology, medicine, and social science. Together they provide a summary of current thinking across a wide front by leading statistical thinkers.
Geoffrey Grimmett and Colin McDiarmid (eds)
- Published in print:
- 2007
- Published Online:
- September 2007
- ISBN:
- 9780198571278
- eISBN:
- 9780191718885
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198571278.001.0001
- Subject:
- Mathematics, Probability / Statistics
Professor Dominic Welsh has made significant contributions to the fields of combinatorics and discrete probability, including matroids, complexity, and percolation. He has taught, influenced, and inspired generations of students and researchers in mathematics. This book summarizes and reviews the consistent themes from his work through a series of articles written by renowned experts. These articles, presented as chapters, contain original research work, set in a broader context by the inclusion of review material.
Stéphane Boucheron, Gábor Lugosi, and Pascal Massart
- Published in print:
- 2013
- Published Online:
- May 2013
- ISBN:
- 9780199535255
- eISBN:
- 9780191747106
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199535255.001.0001
- Subject:
- Mathematics, Probability / Statistics, Applied Mathematics
This monograph presents a mathematical theory of concentration inequalities for functions of independent random variables. The basic phenomenon under investigation is that if a function of many independent random variables does not depend too much on any one of them, then it is concentrated around its expected value. This book offers a host of inequalities to quantify this statement. The authors describe the interplay between the probabilistic structure (independence) and a variety of tools, ranging from functional inequalities and transportation arguments to information theory. Applications to the study of empirical processes, random projections, random matrix theory, and threshold phenomena are presented. The book offers a self-contained introduction to concentration inequalities, including a survey of concentration of sums of independent random variables, variance bounds, the entropy method, and the transportation method. Deep connections with isoperimetric problems are revealed. Special attention is paid to applications to the suprema of empirical processes.
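The “does not depend too much on any one variable” phenomenon has a crisp standard instance, the bounded differences (McDiarmid) inequality, representative of the bounds developed in the book:

```latex
% Bounded differences inequality: if X_1, ..., X_n are independent and
% changing the i-th coordinate alters f by at most c_i, then for t > 0
\[
  \mathbb{P}\bigl( f(X_1,\dots,X_n) - \mathbb{E}\, f(X_1,\dots,X_n) \ge t \bigr)
  \le \exp\!\Bigl( -\, \frac{2 t^{2}}{\sum_{i=1}^{n} c_i^{2}} \Bigr).
\]
```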
Florence Merlevède, Magda Peligrad, and Sergey Utev
- Published in print:
- 2019
- Published Online:
- April 2019
- ISBN:
- 9780198826941
- eISBN:
- 9780191865961
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198826941.001.0001
- Subject:
- Mathematics, Probability / Statistics
This book has its origin in the need to develop and analyze mathematical models for phenomena that evolve in time and influence one another, and aims at a better understanding of the structure and asymptotic behavior of stochastic processes. The monograph has a twofold scope: first, to present tools for dealing with dependent structures directed toward obtaining normal approximations; second, to apply those normal approximations to various examples. The main tools are inequalities for dependent sequences of random variables, leading to limit theorems, including the functional central limit theorem (CLT) and the functional moderate deviation principle (MDP). The results identify large classes of dependent random variables which satisfy invariance principles, making possible the statistical study of data coming from stochastic processes with both short and long memory. Over the course of the book different types of dependence structures are considered, ranging from traditional mixing structures to martingale-like structures and weakly negatively dependent structures, which link the notion of mixing to the notions of association and negative dependence. Several applications have been carefully selected to exhibit the importance of the theoretical results; they include random walks in random scenery and determinantal processes. In addition, owing to their importance in analyzing new data in economics, linear processes with dependent innovations are also considered and analyzed.
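For orientation, the prototype of the dependence measures treated in this literature is the classical strong mixing coefficient; a sequence is strongly mixing when α(n) → 0:

```latex
% Strong (alpha-) mixing coefficient of a sequence (X_i): the maximal
% departure from independence between the past up to time k and the
% future from time k + n onwards.
\[
  \alpha(n) = \sup_{k \ge 1}\; \sup\Bigl\{
    \lvert P(A \cap B) - P(A)\,P(B) \rvert :
    A \in \sigma(X_i,\, i \le k),\;
    B \in \sigma(X_i,\, i \ge k+n)
  \Bigr\}.
\]
```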
Jon Williamson
- Published in print:
- 2010
- Published Online:
- September 2010
- ISBN:
- 9780199228003
- eISBN:
- 9780191711060
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199228003.001.0001
- Subject:
- Mathematics, Probability / Statistics, Logic / Computer Science / Mathematical Philosophy
Bayesian epistemology aims to answer the following question: how strongly should an agent believe the various propositions expressible in her language? Subjective Bayesians hold that it is largely (though not entirely) up to the agent which degrees of belief to adopt. Objective Bayesians, on the other hand, maintain that appropriate degrees of belief are largely (though not entirely) determined by the agent's evidence. This book states and defends a version of objective Bayesian epistemology. According to this version, objective Bayesianism is characterized by three norms: (i) Probability: degrees of belief should be probabilities; (ii) Calibration: they should be calibrated with evidence; and (iii) Equivocation: they should otherwise equivocate between basic outcomes. Objective Bayesianism has been challenged on a number of fronts: for example, it has been accused of being poorly motivated, of failing to handle qualitative evidence, of yielding counter-intuitive degrees of belief after updating, of suffering from a failure to learn from experience, of being computationally intractable, of being susceptible to paradox, of being language dependent, and of not being objective enough. The book argues that these criticisms can be met and that objective Bayesianism is a promising theory with an exciting agenda for further research.
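The Equivocation norm is standardly cashed out as entropy maximization; a schematic statement (notation mine, not necessarily the book's) reads:

```latex
% Among the probability functions calibrated with the evidence (the set E),
% the agent should adopt one that maximizes entropy over the basic
% outcomes omega of her language:
\[
  P^{*} \in \arg\max_{P \in \mathbb{E}} H(P),
  \qquad
  H(P) = -\sum_{\omega \in \Omega} P(\omega) \log P(\omega).
\]
```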
Ray Chambers and Robert Clark
- Published in print:
- 2012
- Published Online:
- May 2012
- ISBN:
- 9780198566625
- eISBN:
- 9780191738449
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198566625.001.0001
- Subject:
- Mathematics, Probability / Statistics
This book is an introduction to the model-based approach to survey sampling. It consists of three parts, with Part I focusing on estimation of population totals. Chapters 1 and 2 introduce survey sampling and the model-based approach, respectively. Chapter 3 considers the simplest possible model, the homogeneous population model, which is then extended to stratified populations in Chapter 4. Chapter 5 discusses simple linear regression models for populations, and Chapter 6 considers clustered populations. The general linear population model is then used to integrate these results in Chapter 7. Part II considers the properties of estimators based on incorrectly specified models. Chapter 8 develops robust sample designs that lead to unbiased predictors under model misspecification, and shows how flexible modelling methods like non-parametric regression can be used in survey sampling. Chapter 9 extends this development to misspecification-robust prediction variance estimators, and Chapter 10 completes Part II with an exploration of outlier-robust sample survey estimation. Chapters 11 to 17 constitute Part III and show how model-based methods can be used in a variety of problem areas of modern survey sampling. They cover (in order) prediction of non-linear population quantities, sub-sampling approaches to prediction variance estimation, design and estimation for multipurpose surveys, prediction for domains, small area estimation, efficient prediction of population distribution functions, and the use of transformations in survey inference. The book is designed to be accessible to undergraduate and graduate students with a good grounding in statistics, and to applied survey statisticians seeking an introduction to model-based survey design and estimation.
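As a flavour of the model-based calculus that Chapter 3 introduces, under the homogeneous population model the best linear unbiased predictor of a population total reduces to the familiar expansion estimator (a standard result, stated here in generic notation):

```latex
% Homogeneous model: E(y_i) = mu, Var(y_i) = sigma^2, units uncorrelated.
% Given a sample s of size n from a population U of size N, each
% non-sampled value is predicted by the sample mean, so the predictor
% of the total T = sum_{i in U} y_i is
\[
  \hat{T} = \sum_{i \in s} y_i + (N - n)\,\bar{y}_s = N\,\bar{y}_s .
\]
```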
Steve Selvin
- Published in print:
- 2019
- Published Online:
- May 2019
- ISBN:
- 9780198833444
- eISBN:
- 9780191872280
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198833444.001.0001
- Subject:
- Mathematics, Probability / Statistics, Applied Mathematics
The Joy of Statistics consists of a series of 42 “short stories,” each illustrating how elementary statistical methods are applied to data to produce insight and solutions to the questions the data were collected to answer. The text contains brief histories of the evolution of statistical methods and a number of brief biographies of the most famous statisticians of the 20th century. Scattered throughout are a few statistical jokes, puzzles, and traditional stories. The level of the book is elementary; it explores a variety of statistical applications using graphs and plots, along with detailed and intuitive descriptions, occasionally calling on a bit of 10th-grade mathematics. Examples of its topics are gambling games such as roulette, blackjack, and lotteries, as well as more serious subjects such as the comparison of black/white infant mortality rates, coronary heart disease risk, and ethnic differences in Hodgkin’s disease. The statistical descriptions of these methods and topics are accompanied by easy-to-understand explanations labeled “how it works.”
Peter Grindrod
- Published in print:
- 2014
- Published Online:
- March 2015
- ISBN:
- 9780198725091
- eISBN:
- 9780191792526
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198725091.001.0001
- Subject:
- Mathematics, Analysis, Probability / Statistics
This book presents analytics within a framework of mathematical theory and concepts, building upon firm foundations in probability theory, graphs and networks, random matrices, linear algebra, optimization, forecasting, discrete dynamical systems, and more. Following on from the theoretical considerations, applications are given to data from commercially relevant settings: supermarket baskets; loyalty cards; mobile phone call records; smart meters; ‘omic’ data; sales promotions; social media; and microblogging. Each chapter tackles a topic in analytics: social networks and digital marketing; forecasting; clustering and segmentation; inverse problems; Markov models of behavioural changes; multiple hypothesis testing and decision-making; and so on. Chapters start with background mathematical theory explained with a strong narrative, then give way to practical considerations and exemplar applications.
Russell Cheng
- Published in print:
- 2017
- Published Online:
- September 2017
- ISBN:
- 9780198505044
- eISBN:
- 9780191746390
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198505044.001.0001
- Subject:
- Mathematics, Probability / Statistics
This book discusses the fitting of parametric statistical models to data samples. Emphasis is placed on (i) how to recognize situations where the problem is non-standard and parameter estimates behave unusually, and (ii) the use of parametric bootstrap resampling methods in analysing such problems. Simple and practical model building is an underlying theme. A frequentist viewpoint based on likelihood is adopted, for which there is a well-established and very practical theory. The standard situation is where certain widely applicable regularity conditions hold. However, there are many apparently innocuous situations where standard theory breaks down, sometimes spectacularly. Most of the departures from regularity are described geometrically, with mathematical detail only sufficient to clarify the non-standard nature of a problem and to allow formulation of practical solutions. The book is intended for anyone with a basic knowledge of statistical methods, typically covered in a university statistical inference course, who wishes to understand or study how standard methodology might fail. Simple, easy-to-understand statistical methods that overcome these difficulties are presented and illustrated by detailed examples drawn from real applications. Parametric bootstrap resampling is used throughout for analysing the properties of fitted models, illustrating its ease of implementation even in non-standard situations. Distributional properties are obtained numerically for estimators or statistics not previously considered in the literature because their theoretical distributional properties are intractable. Bootstrap results are presented mainly graphically, providing easy-to-understand demonstrations of the sampling behaviour of estimators.
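For readers new to the idea, a minimal parametric bootstrap looks like the following sketch (a generic exponential-rate example, not taken from the book): fit by maximum likelihood, simulate from the fitted model, and refit on each simulated sample.

```python
import numpy as np

def parametric_bootstrap(data, n_boot=2000, seed=0):
    """Parametric bootstrap for the MLE of an exponential rate.

    The spread of the refitted estimates approximates the sampling
    distribution of the estimator under the fitted model.
    """
    rng = np.random.default_rng(seed)
    rate_hat = 1.0 / data.mean()              # MLE of the rate parameter
    boot = np.empty(n_boot)
    for b in range(n_boot):
        sim = rng.exponential(1.0 / rate_hat, size=data.size)
        boot[b] = 1.0 / sim.mean()            # re-estimate on simulated data
    return rate_hat, boot

data = np.random.default_rng(1).exponential(scale=2.0, size=50)
rate_hat, boot = parametric_bootstrap(data)
print(rate_hat, np.percentile(boot, [2.5, 97.5]))  # estimate and 95% interval
```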
Christopher G. Small and Jinfang Wang
- Published in print:
- 2003
- Published Online:
- September 2007
- ISBN:
- 9780198506881
- eISBN:
- 9780191709258
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198506881.001.0001
- Subject:
- Mathematics, Probability / Statistics
Nonlinearity arises in statistical inference in various ways, and with varying degrees of severity, as an obstacle to statistical analysis. More entrenched forms of nonlinearity often require intensive numerical methods to construct estimators; root search algorithms and one-step estimators are standard methods of solution. This book provides a comprehensive study of nonlinear estimating equations and artificial likelihoods for statistical inference. It provides extensive coverage and comparison of hill-climbing algorithms, which, when started at points of nonconcavity, often have very poor convergence properties. For additional flexibility, a number of modifications to the standard methods are proposed. The book also goes beyond simple root search algorithms to discuss the testing of roots for consistency and the modification of available estimating functions to provide greater stability in inference. A variety of examples from practical applications are included to illustrate the problems and possibilities.
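In its simplest form, a root search for an estimating equation is Newton–Raphson iteration; the sketch below (a generic example, not the authors' code) solves the score equation for an exponential rate:

```python
import numpy as np

def newton_root(g, dg, theta0, tol=1e-10, max_iter=50):
    """Newton-Raphson search for a root of the estimating equation g(theta) = 0."""
    theta = theta0
    for _ in range(max_iter):
        step = g(theta) / dg(theta)
        theta -= step
        if abs(step) < tol:
            return theta
    raise RuntimeError("no convergence; try a different starting point")

# Score equation for the rate of exponential data: n/theta - sum(x) = 0.
x = np.random.default_rng(0).exponential(scale=2.0, size=100)
g = lambda t: x.size / t - x.sum()
dg = lambda t: -x.size / t**2
print(newton_root(g, dg, theta0=1.0))   # converges to the MLE 1/mean(x)
```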
Raphaël Mourad (ed.)
- Published in print:
- 2014
- Published Online:
- December 2014
- ISBN:
- 9780198709022
- eISBN:
- 9780191779619
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198709022.001.0001
- Subject:
- Mathematics, Probability / Statistics, Biostatistics
At the crossroads between statistics and machine learning, probabilistic graphical models provide a powerful formal framework for modelling complex data. Probabilistic graphical models are probabilistic models whose graphical components denote conditional independence structures between random variables. The probabilistic framework makes it possible to deal with data uncertainty, while the conditional independence assumptions help process high-dimensional and complex data. Bayesian networks and Markov random fields represent two of the most popular classes of such models. With the rapid advancement of high-throughput technologies and the ever-decreasing costs of these next-generation technologies, a fast-growing volume of biological data of various types — the so-called omics — is in need of accurate and efficient methods for modelling, prior to further downstream analysis. Network reconstruction from gene expression data represents perhaps the most emblematic area of research where probabilistic graphical models have been successfully applied. However, these models have also created renewed interest in genetics, in particular in association genetics, causality discovery, prediction of outcomes, detection of copy number variations, and epigenetics. For all these reasons, it is foreseeable that such models will have a prominent role to play in advances in genome-wide analyses.
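The defining property of a Bayesian network, for instance, is the factorization of the joint distribution along the directed graph; in standard notation:

```latex
% Each variable is conditionally independent of its non-descendants given
% its parents pa(X_i) in the directed acyclic graph, so the joint
% distribution factorizes as
\[
  P(X_1, \dots, X_n) = \prod_{i=1}^{n} P\bigl( X_i \mid \mathrm{pa}(X_i) \bigr).
\]
```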
John C Gower and Garmt B Dijksterhuis
- Published in print:
- 2004
- Published Online:
- September 2007
- ISBN:
- 9780198510581
- eISBN:
- 9780191708961
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198510581.001.0001
- Subject:
- Mathematics, Probability / Statistics
Procrustean methods are used to transform one set of data to represent another set of data as closely as possible. This book unifies several strands in the literature and contains new algorithms. It focuses on matching two or more configurations by using orthogonal, projection, and oblique axes transformations. Group-average summaries play an important part, and links with other group-average methods are discussed. The text is multidisciplinary and presents a unifying ANOVA framework.
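The classical orthogonal case has a closed-form solution via the singular value decomposition; the following sketch (standard textbook material, not the book's own code) matches one configuration onto another:

```python
import numpy as np

def orthogonal_procrustes(A, B):
    """Return the orthogonal matrix R minimizing ||A @ R - B|| (Frobenius norm)."""
    U, _, Vt = np.linalg.svd(A.T @ B)   # classical SVD solution
    return U @ Vt

rng = np.random.default_rng(0)
A = rng.normal(size=(10, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # a random orthogonal transformation
B = A @ Q                                      # transformed copy of A
R = orthogonal_procrustes(A, B)
print(np.allclose(A @ R, B))                   # True: the transformation is recovered
```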
Mathew Penrose
- Published in print:
- 2003
- Published Online:
- September 2007
- ISBN:
- 9780198506263
- eISBN:
- 9780191707858
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198506263.001.0001
- Subject:
- Mathematics, Probability / Statistics
This book sets out a body of rigorous mathematical theory for finite graphs with nodes placed randomly in Euclidean d-space according to a common probability density, and edges added to connect points that are close to each other. As an alternative to classical random graph models, these geometric graphs are relevant to the modelling of real networks having spatial content, arising for example in wireless communications, parallel processing, classification, epidemiology, astronomy, and the internet. Their study illustrates numerous techniques of modern stochastic geometry, including Stein's method, martingale methods, and continuum percolation. Typical results in the book concern properties of a graph G on n random points with edges included for interpoint distances up to r, with the parameter r dependent on n and typically small for large n. Asymptotic distributional properties are derived for numerous graph quantities. These include the number of copies of a given finite graph embedded in G, the number of isolated components isomorphic to a given graph, the empirical distributions of vertex degrees, the clique number, the chromatic number, the maximum and minimum degree, the size of the largest component, the total number of components, and the connectivity of the graph.
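Such a graph is easy to simulate; the sketch below (illustrative only) places n points uniformly in the unit square and joins pairs at distance at most r:

```python
import numpy as np

def random_geometric_graph(n, r, d=2, seed=0):
    """Uniform points in the unit d-cube; edges join pairs at distance <= r."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n, d))
    diffs = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diffs ** 2).sum(axis=-1))
    upper = np.triu(np.ones((n, n), dtype=bool), k=1)   # count each pair once
    i, j = np.where((dist <= r) & upper)
    return pts, list(zip(i.tolist(), j.tolist()))

pts, edges = random_geometric_graph(200, r=0.1)
print(len(edges))   # edge count at this connectivity radius
```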
Shoutir Kishore Chatterjee
- Published in print:
- 2003
- Published Online:
- September 2007
- ISBN:
- 9780198525318
- eISBN:
- 9780191711657
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198525318.001.0001
- Subject:
- Mathematics, Probability / Statistics
The book examines the distinguishing features of the various contending approaches to statistical inference (including decision-making) that are currently available in the statistical literature, and traces the historical evolution of the concepts underlying these approaches and their applications. The first part, entitled Perspective, shows that statistical inference is really a prolongation of the philosophical problem of induction, and that in it, probability is involved both in the input (in the form of the model) and the output (for quantifying uncertainty). Four different approaches (behavioural, instantial, pro-subjective Bayesian, and purely subjective) to such statistical induction arise from the invocation of different conceptions of probability (objective and subjective) at the two stages of the process. The comparative characteristics, advantages, and disadvantages of the different approaches are considered, and it is concluded that each is appropriate in its natural setting. The second part, entitled History, discusses how the different types of probability originated and evolved, and how their application to statistical induction gave rise to the variety of concepts and principles associated with the different approaches. After some reference to pre-history, the developments made by the principal contributors to probability and statistics during the 17th to 20th centuries (from Cardano, Pascal, Fermat, Huygens, and James Bernoulli through Daniel Bernoulli, Bayes, Laplace, and Gauss to Galton, Karl Pearson, Fisher, Jeffreys, de Finetti, Neyman, E. S. Pearson, Wald, and their successors) are delineated.
Carsten Wiuf and Claus L. Andersen (eds)
- Published in print:
- 2009
- Published Online:
- September 2009
- ISBN:
- 9780199532872
- eISBN:
- 9780191714467
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199532872.001.0001
- Subject:
- Mathematics, Probability / Statistics, Biostatistics
This book discusses novel advances in informatics and statistics in molecular cancer research. Through eight chapters it discusses specific topics in cancer research, shows how these topics give rise to the development of new informatics and statistics tools, and explains how the tools can be applied. The focus of the book is on providing an understanding of key concepts and tools, rather than on technical issues. A main theme is the extensive use of array technologies in modern cancer research — gene expression and exon arrays, SNP and copy number arrays, and methylation arrays — to derive quantitative and qualitative statements about cancer, its progression, and its aetiology, and to understand how these technologies on the one hand allow us to learn about cancer tissue as a complex system and on the other allow us to pinpoint key genes and events as crucial for the development of the disease. Cancer is characterized by genetic and genomic alterations that influence all levels of the cell's machinery and function.
Peter J. Diggle and Amanda G. Chetwynd
- Published in print:
- 2011
- Published Online:
- December 2013
- ISBN:
- 9780199543182
- eISBN:
- 9780191774867
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199543182.001.0001
- Subject:
- Mathematics, Probability / Statistics, Biostatistics
An antidote to technique-oriented service courses, this book studiously avoids the recipe-book style and keeps algebraic details of specific statistical methods to the minimum necessary to understand the underlying concepts. Instead, it aims to give the reader a clear understanding of how the core statistical ideas of experimental design, modelling, and data analysis are integral to the scientific method. Aimed primarily at a range of scientific disciplines (albeit with a bias towards the biological, environmental, and health sciences), the book assumes some maturity of understanding of scientific method, but does not require any prior knowledge of statistics, or any mathematical knowledge beyond basic algebra and a willingness to come to terms with mathematical notation. Any statistical analysis of a realistically sized data-set requires the use of specially written computer software. An appendix introduces the reader to our open-source software of choice, but all of the material in the book can be understood without using either R or any other computer software.