MC460 Statistical Inference

Credits: 20 Convenor: Mr. B. English Semester: 2


Prerequisites: essential: MC160, MC161, MC260; desirable: MC261
Assessment: Coursework: 10%; three-hour examination: 90%

Lectures: 36 Classes: none
Tutorials: 12 Private Study: 102
Labs: none Seminars: none
Project: none Other: none
Total: 150

Explanation of Pre-requisites

Modules MC160 and MC260 provide the core probability and distribution theory for this course, while MC161 provides necessary introductory material on the likelihood function, hypothesis testing and confidence intervals. The module MC261 reinforces and extends material from these earlier modules and is therefore, given the importance of this material, a desirable prerequisite. Further, methods covered informally in MC261 provide motivation for a more formal analysis in this module.

Course Description

This module discusses general principles which may be used to derive classical procedures introduced informally in earlier modules. For example, apart from their intuitive reasonableness, can the t-test and the $\chi^2$ goodness-of-fit test be given formal foundations within a more general theory of hypothesis testing?

In a more formal appraisal of estimation, we consider what it means to find a `best' estimator, how the best unbiased estimator (if it exists) may be found, and whether such estimators are in fact desirable. The theoretical support for the method of maximum likelihood estimation is considered, and some of its limitations are identified.
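
As a brief illustration of how a `best' unbiased estimator may be found (a sketch only, using standard notation not defined above): if $T$ is a complete sufficient statistic and $W$ is any unbiased estimator of $g(\theta)$, the Rao-Blackwell and Lehmann-Scheffé theorems show that
\[ \tilde{g}(T) = E\left[\, W \mid T \,\right] \]
is the (essentially unique) minimum-variance unbiased estimator of $g(\theta)$.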

Detailed consideration is given to inferences based on the large-sample properties of the maximum likelihood estimator, and their asymptotic equivalents. Such methods play a key role in much modern applied statistical analysis.
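
For illustration (a sketch of the standard result, not a formal statement): under suitable regularity conditions the maximum likelihood estimator $\hat{\theta}$ from a sample of size $n$ satisfies, approximately for large $n$,
\[ \hat{\theta} \;\approx\; N\!\left(\theta,\ \{n\,i(\theta)\}^{-1}\right), \]
where $i(\theta)$ is the Fisher information per observation. This yields approximate confidence intervals of the form $\hat{\theta} \pm z_{\alpha/2}\,\{n\,i(\hat{\theta})\}^{-1/2}$.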

We also discuss a number of standard inferential topics from a Bayesian standpoint, an approach that has grown markedly in importance over the last two decades, and consider some aspects of the debate between adherents of the Bayesian and Frequentist approaches to inference.
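
The basic Bayesian calculation, sketched here for orientation, combines a prior density $\pi(\theta)$ with the likelihood $L(\theta; x)$ via Bayes' theorem to give the posterior
\[ \pi(\theta \mid x) \;=\; \frac{L(\theta; x)\,\pi(\theta)}{\int L(\theta'; x)\,\pi(\theta')\,d\theta'} \;\propto\; L(\theta; x)\,\pi(\theta), \]
on which point estimates, credible intervals and tests are then based.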

Aims

To discuss and illustrate some of the general principles which may be exploited to derive various classical statistical procedures introduced informally in earlier modules. To expose students to some of the elegant results, and some of the thornier and more fascinating questions, of statistical inference, so as to encourage further study. To this end, we discuss some aspects of the debate between adherents of the Bayesian and Frequentist approaches to inference.

To provide a solid grounding for inferences based on the large-sample properties of the maximum likelihood estimator, and their asymptotic equivalents. Such methods play a key role in much modern applied statistical analysis.

Objectives

On completion of this module, students should:

Transferable Skills

Syllabus

A review and extensions of some distribution theory; bivariate distributions for variables of mixed types; the multinomial distribution and its basic properties; order statistics. Chebyshev's inequality; the Weak and Strong Laws of Large Numbers.
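
For reference, Chebyshev's inequality states that for a random variable $X$ with mean $\mu$ and finite variance $\sigma^2$,
\[ P\left(\,|X - \mu| \ge k\,\right) \;\le\; \frac{\sigma^2}{k^2} \qquad \text{for all } k > 0; \]
the Weak Law of Large Numbers follows by applying this to the sample mean $\bar{X}_n$, which has variance $\sigma^2/n$.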

The likelihood function; the weak and strong likelihood principles. Competing approaches to inference: the frequentist and Bayesian approaches. The specification of prior distributions and computation of posterior distributions; an example of Bayesian inference; other approaches. Sufficient statistics; frequentist and Bayesian definitions; the factorisation theorem.

Point estimation: loss functions, risk functions, admissibility, unbiasedness and consistency. Unbiased estimators; the Cramér-Rao inequality, the Rao-Blackwell and Lehmann-Scheffé theorems; the pros and cons of unbiased estimators. The maximum likelihood estimator and its asymptotic distribution (for one or more parameters, under suitable regularity conditions); asymptotic equivalents, and their use in providing approximate tests and confidence intervals; the pros and cons of maximum likelihood estimation. Bayesian and minimax estimators, and their calculation.
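
As one example from this material (stated informally): under regularity conditions, the Cramér-Rao inequality bounds the variance of any unbiased estimator $T$ of $\theta$ based on $n$ independent observations by the reciprocal of the Fisher information,
\[ \operatorname{Var}(T) \;\ge\; \frac{1}{n\,i(\theta)}, \qquad i(\theta) = E\!\left[\left(\frac{\partial}{\partial \theta}\log f(X;\theta)\right)^{2}\right]; \]
an unbiased estimator attaining this bound is necessarily of minimum variance.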

Hypothesis testing; the Neyman-Pearson Lemma, uniformly most powerful tests, unbiased tests. Tests based on the large-sample properties of the maximum likelihood estimator, the likelihood ratio test (Wilks' Theorem), and the score statistic. Fisher's approach. The Bayesian approach to significance tests and Lindley's paradox. Confidence intervals and regions, and their relationship to hypothesis testing; the Fieller-Creasy method, and recognisable subsets. Bayesian credible intervals.
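
To illustrate the large-sample tests mentioned above (an informal statement): for a null hypothesis imposing $r$ restrictions on the parameters, Wilks' Theorem gives, under regularity conditions,
\[ -2\log\Lambda \;=\; 2\left\{\ell(\hat{\theta}) - \ell(\hat{\theta}_0)\right\} \;\xrightarrow{d}\; \chi^{2}_{r} \quad \text{under } H_0, \]
where $\ell$ is the log-likelihood and $\hat{\theta}$, $\hat{\theta}_0$ are the unrestricted and restricted maximum likelihood estimates; large values of the statistic count against $H_0$.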

Ancillary statistics, the ancillarity principle, and conditional likelihoods.
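
For orientation: a statistic $A = A(X)$ is ancillary for $\theta$ if its distribution does not depend on $\theta$; the ancillarity principle then directs that inference about $\theta$ be based on the conditional distribution of the data given the observed value $A = a$, that is on $f(x \mid A = a;\, \theta)$, rather than on the unconditional model.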

Reading list

Background:

V. Barnett, Comparative Statistical Inference, J. Wiley, 1973.

D. R. Cox and D. V. Hinkley, Theoretical Statistics, Chapman and Hall, 1974.

M. H. DeGroot, Probability and Statistics, 2nd edition, Addison-Wesley, 1986.

S. D. Silvey, Statistical Inference, Chapman and Hall, 1975.

