Bayesian statistics : (Record no. 9502)
000 -LEADER | |
fixed length control field | 18579cam a2200229 a 4500 |
005 - DATE AND TIME OF LATEST TRANSACTION | |
control field | 20230518161617.0 |
008 - FIXED-LENGTH DATA ELEMENTS--GENERAL INFORMATION | |
fixed length control field | 120228s2012 nju b 001 0 eng |
010 ## - LIBRARY OF CONGRESS CONTROL NUMBER | |
LC control number | 2012007007 |
020 ## - ISBN | |
International Standard Book Number | 9781118332573 (pbk.) |
Price | 6073.00 |
040 ## - CATALOGING SOURCE | |
Original cataloging agency | S.X.U.K |
041 ## - LANGUAGE | |
Language | English |
082 00 - DDC NUMBER | |
Classification number | R 519.542 LEE(BAY) |
100 1# - MAIN ENTRY--PERSONAL NAME | |
Personal name | Lee, Peter M. |
245 10 - TITLE STATEMENT | |
Title | Bayesian statistics : |
Sub Title | an introduction / |
Statement of responsibility | Peter M. Lee. |
250 ## - EDITION STATEMENT | |
Edition statement | 4th ed. |
260 ## - PUBLICATION, DISTRIBUTION, ETC. (IMPRINT) | |
Place of publication, distribution, etc | Chichester, West Sussex ; |
-- | Hoboken, N.J. : |
Name of publisher, distributor, etc | Wiley, |
Date of publication, distribution, etc | 2012. |
300 ## - PHYSICAL DESCRIPTION | |
Pages | xxiii, 462 p. ; |
Dimension | 23 cm. |
Other Details | P.B. |
500 ## - GENERAL NOTE | |
General note | TABLE OF CONTENTS
Preface xix
Preface to the First Edition xxi
1 Preliminaries 1
1.1 Probability and Bayes’ Theorem 1
1.1.1 Notation 1
1.1.2 Axioms for probability 2
1.1.3 ‘Unconditional’ probability 5
1.1.4 Odds 6
1.1.5 Independence 7
1.1.6 Some simple consequences of the axioms; Bayes’ Theorem 7
1.2 Examples on Bayes’ Theorem 9
1.2.1 The Biology of Twins 9
1.2.2 A political example 10
1.2.3 A warning 10
1.3 Random variables 12
1.3.1 Discrete random variables 12
1.3.2 The binomial distribution 13
1.3.3 Continuous random variables 14
1.3.4 The normal distribution 16
1.3.5 Mixed random variables 17
1.4 Several random variables 17
1.4.1 Two discrete random variables 17
1.4.2 Two continuous random variables 18
1.4.3 Bayes’ Theorem for random variables 20
1.4.4 Example 21
1.4.5 One discrete variable and one continuous variable 21
1.4.6 Independent random variables 22
1.5 Means and variances 23
1.5.1 Expectations 23
1.5.2 The expectation of a sum and of a product 24
1.5.3 Variance, precision and standard deviation 25
1.5.4 Examples 25
1.5.5 Variance of a sum; covariance and correlation 27
1.5.6 Approximations to the mean and variance of a function of a random variable 28
1.5.7 Conditional expectations and variances 29
1.5.8 Medians and modes 31
1.6 Exercises on Chapter 1 31
2 Bayesian inference for the normal distribution 36
2.1 Nature of Bayesian inference 36
2.1.1 Preliminary remarks 36
2.1.2 Post is prior times likelihood 36
2.1.3 Likelihood can be multiplied by any constant 38
2.1.4 Sequential use of Bayes’ Theorem 38
2.1.5 The predictive distribution 39
2.1.6 A warning 39
2.2 Normal prior and likelihood 40
2.2.1 Posterior from a normal prior and likelihood 40
2.2.2 Example 42
2.2.3 Predictive distribution 43
2.2.4 The nature of the assumptions made 44
2.3 Several normal observations with a normal prior 44
2.3.1 Posterior distribution 44
2.3.2 Example 46
2.3.3 Predictive distribution 47
2.3.4 Robustness 47
2.4 Dominant likelihoods 48
2.4.1 Improper priors 48
2.4.2 Approximation of proper priors by improper priors 49
2.5 Locally uniform priors 50
2.5.1 Bayes’ postulate 50
2.5.2 Data translated likelihoods 52
2.5.3 Transformation of unknown parameters 52
2.6 Highest density regions 54
2.6.1 Need for summaries of posterior information 54
2.6.2 Relation to classical statistics 55
2.7 Normal variance 55
2.7.1 A suitable prior for the normal variance 55
2.7.2 Reference prior for the normal variance 58
2.8 HDRs for the normal variance 59
2.8.1 What distribution should we be considering? 59
2.8.2 Example 59
2.9 The role of sufficiency 60
2.9.1 Definition of sufficiency 60
2.9.2 Neyman’s factorization theorem 61
2.9.3 Sufficiency principle 63
2.9.4 Examples 63
2.9.5 Order statistics and minimal sufficient statistics 65
2.9.6 Examples on minimal sufficiency 66
2.10 Conjugate prior distributions 67
2.10.1 Definition and difficulties 67
2.10.2 Examples 68
2.10.3 Mixtures of conjugate densities 69
2.10.4 Is your prior really conjugate? 71
2.11 The exponential family 71
2.11.1 Definition 71
2.11.2 Examples 72
2.11.3 Conjugate densities 72
2.11.4 Two-parameter exponential family 73
2.12 Normal mean and variance both unknown 73
2.12.1 Formulation of the problem 73
2.12.2 Marginal distribution of the mean 75
2.12.3 Example of the posterior density for the mean 76
2.12.4 Marginal distribution of the variance 77
2.12.5 Example of the posterior density of the variance 77
2.12.6 Conditional density of the mean for given variance 77
2.13 Conjugate joint prior for the normal distribution 78
2.13.1 The form of the conjugate prior 78
2.13.2 Derivation of the posterior 80
2.13.3 Example 81
2.13.4 Concluding remarks 82
2.14 Exercises on Chapter 2 82
3 Some other common distributions 85
3.1 The binomial distribution 85
3.1.1 Conjugate prior 85
3.1.2 Odds and log-odds 88
3.1.3 Highest density regions 90
3.1.4 Example 91
3.1.5 Predictive distribution 92
3.2 Reference prior for the binomial likelihood 92
3.2.1 Bayes’ postulate 92
3.2.2 Haldane’s prior 93
3.2.3 The arc-sine distribution 94
3.2.4 Conclusion 95
3.3 Jeffreys’ rule 96
3.3.1 Fisher’s information 96
3.3.2 The information from several observations 97
3.3.3 Jeffreys’ prior 98
3.3.4 Examples 98
3.3.5 Warning 100
3.3.6 Several unknown parameters 100
3.3.7 Example 101
3.4 The Poisson distribution 102
3.4.1 Conjugate prior 102
3.4.2 Reference prior 103
3.4.3 Example 104
3.4.4 Predictive distribution 104
3.5 The uniform distribution 106
3.5.1 Preliminary definitions 106
3.5.2 Uniform distribution with a fixed lower endpoint 107
3.5.3 The general uniform distribution 108
3.5.4 Examples 110
3.6 Reference prior for the uniform distribution 110
3.6.1 Lower limit of the interval fixed 110
3.6.2 Example 111
3.6.3 Both limits unknown 111
3.7 The tramcar problem 113
3.7.1 The discrete uniform distribution 113
3.8 The first digit problem; invariant priors 114
3.8.1 A prior in search of an explanation 114
3.8.2 The problem 114
3.8.3 A solution 115
3.8.4 Haar priors 117
3.9 The circular normal distribution 117
3.9.1 Distributions on the circle 117
3.9.2 Example 119
3.9.3 Construction of an HDR by numerical integration 120
3.9.4 Remarks 122
3.10 Approximations based on the likelihood 122
3.10.1 Maximum likelihood 122
3.10.2 Iterative methods 123
3.10.3 Approximation to the posterior density 123
3.10.4 Examples 124
3.10.5 Extension to more than one parameter 126
3.10.6 Example 127
3.11 Reference posterior distributions 128
3.11.1 The information provided by an experiment 128
3.11.2 Reference priors under asymptotic normality 130
3.11.3 Uniform distribution of unit length 131
3.11.4 Normal mean and variance 132
3.11.5 Technical complications 134
3.12 Exercises on Chapter 3 134
4 Hypothesis testing 138
4.1 Hypothesis testing 138
4.1.1 Introduction 138
4.1.2 Classical hypothesis testing 138
4.1.3 Difficulties with the classical approach 139
4.1.4 The Bayesian approach 140
4.1.5 Example 142
4.1.6 Comment 143
4.2 One-sided hypothesis tests 143
4.2.1 Definition 143
4.2.2 P-values 144
4.3 Lindley’s method 145
4.3.1 A compromise with classical statistics 145
4.3.2 Example 145
4.3.3 Discussion 146
4.4 Point (or sharp) null hypotheses with prior information 146
4.4.1 When are point null hypotheses reasonable? 146
4.4.2 A case of nearly constant likelihood 147
4.4.3 The Bayesian method for point null hypotheses 148
4.4.4 Sufficient statistics 149
4.5 Point null hypotheses for the normal distribution 150
4.5.1 Calculation of the Bayes’ factor 150
4.5.2 Numerical examples 151
4.5.3 Lindley’s paradox 152
4.5.4 A bound which does not depend on the prior distribution 154
4.5.5 The case of an unknown variance 155
4.6 The Doogian philosophy 157
4.6.1 Description of the method 157
4.6.2 Numerical example 157
4.7 Exercises on Chapter 4 158
5 Two-sample problems 162
5.1 Two-sample problems – both variances unknown 162
5.1.1 The problem of two normal samples 162
5.1.2 Paired comparisons 162
5.1.3 Example of a paired comparison problem 163
5.1.4 The case where both variances are known 163
5.1.5 Example 164
5.1.6 Non-trivial prior information 165
5.2 Variances unknown but equal 165
5.2.1 Solution using reference priors 165
5.2.2 Example 167
5.2.3 Non-trivial prior information 167
5.3 Variances unknown and unequal (Behrens–Fisher problem) 168
5.3.1 Formulation of the problem 168
5.3.2 Patil’s approximation 169
5.3.3 Example 170
5.3.4 Substantial prior information 170
5.4 The Behrens–Fisher controversy 171
5.4.1 The Behrens–Fisher problem from a classical standpoint 171
5.4.2 Example 172
5.4.3 The controversy 173
5.5 Inferences concerning a variance ratio 173
5.5.1 Statement of the problem 173
5.5.2 Derivation of the F distribution 174
5.5.3 Example 175
5.6 Comparison of two proportions; the 2 × 2 table 176
5.6.1 Methods based on the log-odds ratio 176
5.6.2 Example 177
5.6.3 The inverse root-sine transformation 178
5.6.4 Other methods 178
5.7 Exercises on Chapter 5 179
6 Correlation, regression and the analysis of variance 182
6.1 Theory of the correlation coefficient 182
6.1.1 Definitions 182
6.1.2 Approximate posterior distribution of the correlation coefficient 184
6.1.3 The hyperbolic tangent substitution 186
6.1.4 Reference prior 188
6.1.5 Incorporation of prior information 189
6.2 Examples on the use of the correlation coefficient 189
6.2.1 Use of the hyperbolic tangent transformation 189
6.2.2 Combination of several correlation coefficients 189
6.2.3 The squared correlation coefficient 190
6.3 Regression and the bivariate normal model 190
6.3.1 The model 190
6.3.2 Bivariate linear regression 191
6.3.3 Example 193
6.3.4 Case of known variance 194
6.3.5 The mean value at a given value of the explanatory variable 194
6.3.6 Prediction of observations at a given value of the explanatory variable 195
6.3.7 Continuation of the example 195
6.3.8 Multiple regression 196
6.3.9 Polynomial regression 196
6.4 Conjugate prior for the bivariate regression model 197
6.4.1 The problem of updating a regression line 197
6.4.2 Formulae for recursive construction of a regression line 197
6.4.3 Finding an appropriate prior 199
6.5 Comparison of several means – the one way model 200
6.5.1 Description of the one way layout 200
6.5.2 Integration over the nuisance parameters 201
6.5.3 Derivation of the F distribution 203
6.5.4 Relationship to the analysis of variance 203
6.5.5 Example 204
6.5.6 Relationship to a simple linear regression model 206
6.5.7 Investigation of contrasts 207
6.6 The two way layout 209
6.6.1 Notation 209
6.6.2 Marginal posterior distributions 210
6.6.3 Analysis of variance 212
6.7 The general linear model 212
6.7.1 Formulation of the general linear model 212
6.7.2 Derivation of the posterior 214
6.7.3 Inference for a subset of the parameters 215
6.7.4 Application to bivariate linear regression 216
6.8 Exercises on Chapter 6 217
7 Other topics 221
7.1 The likelihood principle 221
7.1.1 Introduction 221
7.1.2 The conditionality principle 222
7.1.3 The sufficiency principle 223
7.1.4 The likelihood principle 223
7.1.5 Discussion 225
7.2 The stopping rule principle 226
7.2.1 Definitions 226
7.2.2 Examples 226
7.2.3 The stopping rule principle 227
7.2.4 Discussion 228
7.3 Informative stopping rules 229
7.3.1 An example on capture and recapture of fish 229
7.3.2 Choice of prior and derivation of posterior 230
7.3.3 The maximum likelihood estimator 231
7.3.4 Numerical example 231
7.4 The likelihood principle and reference priors 232
7.4.1 The case of Bernoulli trials and its general implications 232
7.4.2 Conclusion 233
7.5 Bayesian decision theory 234
7.5.1 The elements of game theory 234
7.5.2 Point estimators resulting from quadratic loss 236
7.5.3 Particular cases of quadratic loss 237
7.5.4 Weighted quadratic loss 238
7.5.5 Absolute error loss 238
7.5.6 Zero-one loss 239
7.5.7 General discussion of point estimation 240
7.6 Bayes linear methods 240
7.6.1 Methodology 240
7.6.2 Some simple examples 241
7.6.3 Extensions 243
7.7 Decision theory and hypothesis testing 243
7.7.1 Relationship between decision theory and classical hypothesis testing 243
7.7.2 Composite hypotheses 245
7.8 Empirical Bayes methods 245
7.8.1 Von Mises’ example 245
7.8.2 The Poisson case 246
7.9 Exercises on Chapter 7 247
8 Hierarchical models 253
8.1 The idea of a hierarchical model 253
8.1.1 Definition 253
8.1.2 Examples 254
8.1.3 Objectives of a hierarchical analysis 257
8.1.4 More on empirical Bayes methods 257
8.2 The hierarchical normal model 258
8.2.1 The model 258
8.2.2 The Bayesian analysis for known overall mean 259
8.2.3 The empirical Bayes approach 261
8.3 The baseball example 262
8.4 The Stein estimator 264
8.4.1 Evaluation of the risk of the James–Stein estimator 267
8.5 Bayesian analysis for an unknown overall mean 268
8.5.1 Derivation of the posterior 270
8.6 The general linear model revisited 272
8.6.1 An informative prior for the general linear model 272
8.6.2 Ridge regression 274
8.6.3 A further stage to the general linear model 275
8.6.4 The one way model 276
8.6.5 Posterior variances of the estimators 277
8.7 Exercises on Chapter 8 277
9 The Gibbs sampler and other numerical methods 281
9.1 Introduction to numerical methods 281
9.1.1 Monte Carlo methods 281
9.1.2 Markov chains 282
9.2 The EM algorithm 283
9.2.1 The idea of the EM algorithm 283
9.2.2 Why the EM algorithm works 285
9.2.3 Semi-conjugate prior with a normal likelihood 287
9.2.4 The EM algorithm for the hierarchical normal model 288
9.2.5 A particular case of the hierarchical normal model 290
9.3 Data augmentation by Monte Carlo 291
9.3.1 The genetic linkage example revisited 291
9.3.2 Use of R 291
9.3.3 The genetic linkage example in R 292
9.3.4 Other possible uses for data augmentation 293
9.4 The Gibbs sampler 294
9.4.1 Chained data augmentation 294
9.4.2 An example with observed data 296
9.4.3 More on the semi-conjugate prior with a normal likelihood 299
9.4.4 The Gibbs sampler as an extension of chained data augmentation 301
9.4.5 An application to change-point analysis 302
9.4.6 Other uses of the Gibbs sampler 306
9.4.7 More about convergence 309
9.5 Rejection sampling 311
9.5.1 Description 311
9.5.2 Example 311
9.5.3 Rejection sampling for log-concave distributions 311
9.5.4 A practical example 313
9.6 The Metropolis–Hastings algorithm 317
9.6.1 Finding an invariant distribution 317
9.6.2 The Metropolis–Hastings algorithm 318
9.6.3 Choice of a candidate density 320
9.6.4 Example 321
9.6.5 More realistic examples 322
9.6.6 Gibbs as a special case of Metropolis–Hastings 322
9.6.7 Metropolis within Gibbs 323
9.7 Introduction to WinBUGS and OpenBUGS 323
9.7.1 Information about WinBUGS and OpenBUGS 323
9.7.2 Distributions in WinBUGS and OpenBUGS 324
9.7.3 A simple example using WinBUGS 324
9.7.4 The pump failure example revisited 327
9.7.5 DoodleBUGS 327
9.7.6 coda 329
9.7.7 R2WinBUGS and R2OpenBUGS 329
9.8 Generalized linear models 332
9.8.1 Logistic regression 332
9.8.2 A general framework 334
9.9 Exercises on Chapter 9 335
10 Some approximate methods 340
10.1 Bayesian importance sampling 340
10.1.1 Importance sampling to find HDRs 343
10.1.2 Sampling importance re-sampling 344
10.1.3 Multidimensional applications 344
10.2 Variational Bayesian methods: simple case 345
10.2.1 Independent parameters 347
10.2.2 Application to the normal distribution 349
10.2.3 Updating the mean 350
10.2.4 Updating the variance 351
10.2.5 Iteration 352
10.2.6 Numerical example 352
10.3 Variational Bayesian methods: general case 353
10.3.1 A mixture of multivariate normals 353
10.4 ABC: Approximate Bayesian Computation 356
10.4.1 The ABC rejection algorithm 356
10.4.2 The genetic linkage example 358
10.4.3 The ABC Markov Chain Monte Carlo algorithm 360
10.4.4 The ABC Sequential Monte Carlo algorithm 362
10.4.5 The ABC local linear regression algorithm 365
10.4.6 Other variants of ABC 366
10.5 Reversible jump Markov chain Monte Carlo 367
10.5.1 RJMCMC algorithm 367
10.6 Exercises on Chapter 10 369
Appendix A Common statistical distributions 373
A.1 Normal distribution 374
A.2 Chi-squared distribution 375
A.3 Normal approximation to chi-squared 376
A.4 Gamma distribution 376
A.5 Inverse chi-squared distribution 377
A.6 Inverse chi distribution 378
A.7 Log chi-squared distribution 379
A.8 Student’s t distribution 380
A.9 Normal/chi-squared distribution 381
A.10 Beta distribution 382
A.11 Binomial distribution 383
A.12 Poisson distribution 384
A.13 Negative binomial distribution 385
A.14 Hypergeometric distribution 386
A.15 Uniform distribution 387
A.16 Pareto distribution 388
A.17 Circular normal distribution 389
A.18 Behrens’ distribution 391
A.19 Snedecor’s F distribution 393
A.20 Fisher’s z distribution 393
A.21 Cauchy distribution 394
A.22 The probability that one beta variable is greater than another 395
A.23 Bivariate normal distribution 395
A.24 Multivariate normal distribution 396
A.25 Distribution of the correlation coefficient 397
Appendix B Tables 399
B.1 Percentage points of the Behrens–Fisher distribution 399
B.2 Highest density regions for the chi-squared distribution 402
B.3 HDRs for the inverse chi-squared distribution 404
B.4 Chi-squared corresponding to HDRs for log chi-squared 406
B.5 Values of F corresponding to HDRs for log F 408
Appendix C R programs 430
Appendix D Further reading 436
D.1 Robustness 436
D.2 Nonparametric methods 436
D.3 Multivariate estimation 436
D.4 Time series and forecasting 437
D.5 Sequential methods 437
D.6 Numerical methods 437
D.7 Bayesian networks 437
D.8 General reading 438
References 439
Index 455 |
650 #0 - SUBJECT | |
Subject | Bayesian statistical decision theory. |
650 #7 - SUBJECT | |
Subject | MATHEMATICS / Probability & Statistics / Bayesian Analysis. |
942 ## - ADDED ENTRY ELEMENTS (KOHA) | |
Koha item type | REFERENCE STATISTICS |
952 ## - LOCATION AND ITEM INFORMATION (KOHA) | |
Withdrawn status | |
Lost status | |
Source of classification or shelving scheme | Dewey Decimal Classification |
Damaged status | |
Not for loan | Not For Loan |
Koha collection | Reference |
Location (home branch) | St. Xavier's University, Kolkata |
Sublocation or collection (holding branch) | St. Xavier's University, Kolkata |
Shelving location | Reference Section |
Date acquired | 05/18/2023 |
Source of acquisition | Segment book distributors |
Cost, normal purchase price | 6073.00 |
Serial Enumeration / chronology | S.X.U.K |
Koha issues (times borrowed) | |
Koha full call number | R 519.542 LEE(BAY) |
Barcode (Accession No.) | US9596 |
Koha date last seen | 05/18/2023 |
Copy Number | 9596 |
Price effective from | 05/18/2023 |
Koha item type | REFERENCE STATISTICS |