UNIVERSITY OF SOUTHAMPTON MATH6174W1
SEMESTER 1 EXAMINATION 2022/23
MATH6174: Likelihood and Bayesian Inference
Due Date: January 12, 2023
• This coursework is worth 25% of the overall assessment.
• Answer all the questions.
• Standard university guidelines will be followed for plagiarism (copying).
1. [25 marks.] Suppose that we have a random sample of normal data
$$Y_i \sim N(\mu, \sigma^2), \quad i = 1, \ldots, n,$$
where $\sigma^2$ is known but $\mu$ is unknown. Thus for $\mu$ the likelihood function comes from the distribution
$$\bar{Y} \sim N(\mu, \sigma^2/n),$$
where $\bar{Y} = \frac{1}{n} \sum_{i=1}^{n} Y_i$. Assume the prior distribution for $\mu$ is given by $N(\gamma, \sigma^2/n_0)$, where $\gamma$ and $n_0$ are known constants.
(a) [10 marks] Show that the posterior distribution for $\mu$ is normal with
$$\mathrm{E}(\mu \mid \bar{y}) = \frac{n_0 \gamma + n \bar{y}}{n_0 + n}, \qquad \mathrm{var}(\mu \mid \bar{y}) = \frac{\sigma^2}{n_0 + n}.$$
(b) [4 marks] Provide an interpretation of each of $\mathrm{E}(\mu \mid \bar{y})$ and $\mathrm{var}(\mu \mid \bar{y})$ in terms of the prior and data means and the prior and data sample sizes.
(c) [4 marks] By writing a future observation $\tilde{Y} = \mu + \epsilon$, where $\epsilon \sim N(0, \sigma^2)$ independently of the posterior distribution $\pi(\mu \mid \bar{y})$, explain why the posterior predictive distribution of $\tilde{Y}$ given $\bar{y}$ is normally distributed. Obtain the mean and variance of this posterior predictive distribution.
(d) [7 marks] Suppose that in an experiment $n = 2$, $\bar{y} = 130$, $n_0 = 0.25$, $\gamma = 120$ and $\sigma^2 = 25$. Obtain:
(i) the posterior mean $\mathrm{E}(\mu \mid \bar{y})$ and variance $\mathrm{var}(\mu \mid \bar{y})$,
(ii) a 95% credible interval for $\mu$ given $\bar{y}$,
(iii) the mean and variance of the posterior predictive distribution of a future observation $\tilde{Y}$,
(iv) a 95% prediction interval for a future observation $\tilde{Y}$.
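As a numerical cross-check for part (d), a minimal R sketch; it assumes the posterior from part (a) and the predictive-variance decomposition from part (c), and the variable names are illustrative rather than prescribed:

n <- 2; ybar <- 130; n0 <- 0.25; gamma0 <- 120; sigma2 <- 25

post_mean <- (n0 * gamma0 + n * ybar) / (n0 + n)   # E(mu | ybar) = 128.89
post_var  <- sigma2 / (n0 + n)                     # var(mu | ybar) = 11.11

## 95% credible interval for mu given ybar (normal posterior)
ci95 <- post_mean + c(-1, 1) * qnorm(0.975) * sqrt(post_var)

## Posterior predictive of Y~ = mu + eps: same mean, variances add
pred_mean <- post_mean
pred_var  <- sigma2 + post_var
pi95 <- pred_mean + c(-1, 1) * qnorm(0.975) * sqrt(pred_var)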
2. [25 marks.]
Suppose that $y_1, \ldots, y_n$ are i.i.d. observations from a Poisson distribution with mean $\theta$, where $\theta > 0$ is unknown. Consider the following three models, which differ in the specification of the prior distribution of $\theta$:
$$M_1: \theta = 1,$$
$$M_2: \pi(\theta) = \frac{b^a}{\Gamma(a)} \theta^{a-1} e^{-b\theta}, \quad a > 0, \ b > 0,$$
$$M_3: \pi(\theta) \propto \sqrt{I(\theta)},$$
where $I(\theta)$ is the Fisher information number (see the formula sheet for its definition).
(a) [8 marks] Write down the likelihood function. Hence obtain the Jeffreys prior for $\theta$, given by $\pi(\theta) = \sqrt{I(\theta)}$.
(b) [10 marks] Derive an expression for the Bayes factor for comparing models $M_1$ and $M_2$. If $a = b = 1$, $n = 2$ and $y_1 + y_2 = 4$, find the value of the Bayes factor. Which model is preferred?
(c) [7 marks] Explain why it is problematic to use the Bayes factor to compare $M_3$ with either of the other two models. Describe an alternative approach for comparing $M_3$ with any other model and discuss how it can be implemented using Monte Carlo methods.
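One possible direction (an assumption on my part, since the paper leaves the approach open) is a posterior predictive comparison: although the Jeffreys prior in $M_3$ is improper, the resulting posterior is a proper Gamma$(s + 1/2, n)$ with $s = \sum y_i$, so predictive quantities can be estimated by Monte Carlo. A sketch with illustrative data:

set.seed(1)
y <- c(1, 3)                     # illustrative data with y1 + y2 = 4
n <- length(y); s <- sum(y)

theta_m3 <- rgamma(10000, shape = s + 0.5, rate = n)   # posterior draws under M3

## Monte Carlo estimate of the posterior predictive pmf at a new value
ppd <- function(ynew) mean(dpois(ynew, theta_m3))
sapply(0:6, ppd)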
3. Assume that we want to use a Gibbs sampler to estimate $P(X_1 \geq 0, X_2 \geq 0)$ for a normal distribution $N(\mu, \Sigma)$, where
$$\mu = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix} \quad \text{and} \quad \Sigma = \begin{pmatrix} \sigma_1^2 & \sigma_{12} \\ \sigma_{12} & \sigma_2^2 \end{pmatrix}.$$
The pdf of this distribution is
$$f(x_1, x_2) \propto \exp\left\{ -\frac{1}{2} \begin{pmatrix} x_1 - \mu_1 \\ x_2 - \mu_2 \end{pmatrix}^{T} \begin{pmatrix} \sigma_1^2 & \sigma_{12} \\ \sigma_{12} & \sigma_2^2 \end{pmatrix}^{-1} \begin{pmatrix} x_1 - \mu_1 \\ x_2 - \mu_2 \end{pmatrix} \right\}.$$
(a) Show that
$$f(x_1 \mid x_2) \propto \exp\left( -\frac{\left[ x_1 - \left\{ \mu_1 + \frac{\sigma_{12}}{\sigma_2^2}(x_2 - \mu_2) \right\} \right]^2}{2\left( \sigma_1^2 - \frac{\sigma_{12}^2}{\sigma_2^2} \right)} \right),$$
i.e.
$$X_1 \mid (X_2 = x_2) \sim N\left( \mu_1 + \frac{\sigma_{12}}{\sigma_2^2}(x_2 - \mu_2), \ \sigma_1^2 - \frac{\sigma_{12}^2}{\sigma_2^2} \right).$$
(b) Write down the conditional distribution of $X_2 \mid (X_1 = x_1)$.
(c) Write down the Gibbs sampler step for some $t = 1, 2, \ldots$
(d) Consider the special case $\mu_1 = \mu_2 = 0$, $\sigma_1^2 = \sigma_2^2 = 1$ and $\sigma_{12} = 0.4$. Write R code to implement the Gibbs sampler for this case (consider $t = 1, 2, \ldots, 4000$ and $x_2^{(0)} \sim N(\mu_2, \sigma_2^2)$).
(e) Plot the resulting chains. Estimate $P(X_1 \geq 0, X_2 \geq 0)$. From the resulting chains, calculate and plot the evolution of $P(X_1 \geq 0, X_2 \geq 0)$ over time $t = 1, 2, \ldots, 4000$. Run the sampler 100 times and plot the replications of $P(X_1 \geq 0, X_2 \geq 0)$ over time in grey (all in one plot). Add the original estimate of $P(X_1 \geq 0, X_2 \geq 0)$ as a black line over the plotted range.
END OF PAPER
