
Analytic Marginalization for Posteriors

Consider the model

$$X_{t+1} = X_t r e^{-\beta X_t + \sigma Z_t}$$

with $Z_t$ a unit normal random variable. The likelihood of the sequence of $T$ observations of $X$ under this model is thus

$$P(X \mid r, \beta, \sigma) = \left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^{T-1} \exp\left(-\frac{\sum_t^{T-1}\left(\log X_{t+1} - \log X_t - \log r + \beta X_t\right)^2}{2\sigma^2}\right)$$
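As a concrete (if entirely illustrative) check of this setup, here is a minimal Python sketch that simulates the model and evaluates the log of the likelihood above; the parameter values and initial condition are arbitrary choices for illustration, not anything fixed by the derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
r, beta, sigma, T = 2.0, 0.02, 0.1, 50   # arbitrary illustrative values

# simulate X_{t+1} = r X_t exp(-beta X_t + sigma Z_t)
X = np.empty(T)
X[0] = 50.0
for t in range(T - 1):
    X[t + 1] = r * X[t] * np.exp(-beta * X[t] + sigma * rng.standard_normal())

def log_lik(X, r, beta, sigma):
    """Log of P(X | r, beta, sigma): T-1 Gaussian terms in the log-differences."""
    resid = np.log(X[1:]) - np.log(X[:-1]) - np.log(r) + beta * X[:-1]
    n = len(resid)  # T - 1 transitions
    return -0.5 * n * np.log(2 * np.pi * sigma**2) - np.sum(resid**2) / (2 * sigma**2)

print(log_lik(X, r, beta, sigma))
```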

To integrate out $r$, $P(X \mid \beta, \sigma) = \int P(X \mid r, \beta, \sigma) P(r)\,dr$, we'll make this look like a Gaussian in $\log r$ by completing the square, getting the square on the outside of the sum. First we collect all the other terms into the factor $M_t$:

$$M_t := \log X_{t+1} - \log X_t + \beta X_t$$

Also define $a = \log r$; then after expanding the square inside the sum we have

$$\sum_t^{T-1}\left(\log r - M_t\right)^2 = \sum_t^{T-1} a^2 - 2\sum_t^{T-1} a M_t + \sum_t^{T-1} M_t^2$$

(using the linearity of the summation operator). Use the trick of adding and subtracting $\left(\sum_t^{T-1} M_t\right)^2 / (T-1)$ to get:

$$= \sum_t^{T-1} M_t^2 - \frac{\left(\sum_t^{T-1} M_t\right)^2}{T-1} + \frac{\left(\sum_t^{T-1} M_t\right)^2}{T-1} - 2a\sum_t^{T-1} M_t + (T-1)a^2$$

$$= \sum_t^{T-1} M_t^2 - \frac{\left(\sum_t^{T-1} M_t\right)^2}{T-1} + (T-1)\left(\left(\frac{\sum_t^{T-1} M_t}{T-1}\right)^2 - \frac{2a\sum_t^{T-1} M_t}{T-1} + a^2\right)$$

$$= \sum_t^{T-1} M_t^2 - \frac{\left(\sum_t^{T-1} M_t\right)^2}{T-1} + (T-1)\left(\frac{\sum_t^{T-1} M_t}{T-1} - a\right)^2$$
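The completed square is easy to verify numerically; a quick sketch with arbitrary stand-in values for the $M_t$ and for $a$ (nothing model-specific is assumed here):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=49)      # stands in for M_t, t = 1, ..., T-1
a = 0.7                      # stands in for log r
n = len(M)                   # T - 1

lhs = np.sum((a - M) ** 2)                                          # expanded square
rhs = np.sum(M**2) - M.sum() ** 2 / n + n * (M.sum() / n - a) ** 2  # completed square
print(np.allclose(lhs, rhs))  # True
```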

Substituting this expression back into our exponential in place of the sum of squares, we have

$$P(r, \beta, \sigma \mid X) = \left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^{T-1} \exp\left(-\frac{(T-1)\left(\frac{\sum_t^{T-1} M_t}{T-1} - a\right)^2}{2\sigma^2}\right) \exp\left(-\frac{\sum_t^{T-1} M_t^2 - \frac{\left(\sum_t^{T-1} M_t\right)^2}{T-1}}{2\sigma^2}\right)$$

Note that the second exponential term does not depend on $a$. The remaining argument has Gaussian form in $a$, so after pulling out the constant terms we can easily integrate over $a$. (Note that we have an implicit uniform prior on $a$ here.)

$$\int \exp\left(-\frac{\left(\frac{\sum_t^{T-1} M_t}{T-1} - a\right)^2}{2\sigma^2 (T-1)^{-1}}\right) da = \sqrt{\frac{2\pi\sigma^2}{T-1}}$$

which we can combine with the remaining terms to recover

$$\frac{1}{\sqrt{(T-1)\left(2\pi\sigma^2\right)^{T-2}}} \exp\left(-\frac{\sum_t^{T-1} M_t^2 - \frac{\left(\sum_t^{T-1} M_t\right)^2}{T-1}}{2\sigma^2}\right)$$
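One way to sanity-check this marginal is to integrate the likelihood over $a$ numerically (flat prior) and compare with the closed form above; the $M_t$ and $\sigma$ values below are arbitrary placeholders.

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(2)
M = rng.normal(size=9)   # placeholder M_t values, so T - 1 = 9
sigma = 1.0
n = len(M)

def lik(a):
    # the likelihood written in terms of a = log r and the M_t
    return (2 * np.pi * sigma**2) ** (-n / 2) * np.exp(-np.sum((a - M) ** 2) / (2 * sigma**2))

numeric, _ = quad(lik, M.mean() - 6, M.mean() + 6)   # integral over a (flat prior)
analytic = np.exp(-(np.sum(M**2) - M.sum() ** 2 / n) / (2 * sigma**2)) / np.sqrt(
    n * (2 * np.pi * sigma**2) ** (n - 1)
)
print(numeric, analytic)   # the two agree to quadrature accuracy
```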

Marginalizing over σ

Now that we have effectively eliminated the parameter $r$ from our posterior calculation, we wish to also integrate out the second parameter, $\sigma$. Once again we can "integrate by analogy": written in terms of $x = 1/\sigma^2$ (with an implicit uniform prior on $1/\sigma^2$), the expression above looks like a Gamma integral,

$$\int_0^\infty x^{\alpha - 1} e^{-\beta x}\,dx = \beta^{-\alpha}\Gamma(\alpha)$$

where we take

$$\alpha = T/2$$

and

$$\beta = \frac{1}{2}\left(\sum_t^{T-1} M_t^2 - \frac{\left(\sum_t^{T-1} M_t\right)^2}{T-1}\right),$$

leaving us with

$$\frac{1}{\sqrt{(T-1)\left(2\pi\right)^{T-2}}}\left(\frac{1}{2}\left(\sum_t^{T-1} M_t^2 - \frac{\left(\sum_t^{T-1} M_t\right)^2}{T-1}\right)\right)^{-T/2}\Gamma(T/2)$$
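The same kind of numerical sanity check works for this step: evaluate the Gamma integral with the $\alpha$ and $\beta$ above and compare against $\beta^{-\alpha}\Gamma(\alpha)$, again with placeholder $M_t$ values.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

rng = np.random.default_rng(3)
M = rng.normal(size=9)   # placeholder M_t values, T - 1 = 9
n = len(M)
T = n + 1

alpha = T / 2
beta = 0.5 * (np.sum(M**2) - M.sum() ** 2 / n)

numeric, _ = quad(lambda x: x ** (alpha - 1) * np.exp(-beta * x), 0, np.inf)
analytic = beta ** (-alpha) * gamma(alpha)
print(np.allclose(numeric, analytic))  # True
```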

Additional recruitment functions

The above derivation can be followed identically for the three-parameter recruitment functions I refer to as the Allen and Myers models, after an appropriate choice of $M_t$. In both the Ricker and Allen models we must first reparameterize the model to isolate the additive constant (the $a$ term) correctly.

Ricker

The original parameterization

$$X_{t+1} = X_t e^{r\left(1 - \frac{X_t}{K}\right)}$$

does not partition into the form above. Taking $\beta = r/K$ and $a = r$, we can write $M_t$ as above,

$$M_t := \log X_{t+1} - \log X_t + \beta X_t$$
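A short sketch of this choice of $M_t$, with made-up Ricker parameters and multiplicative lognormal noise $e^{\sigma Z_t}$ as in the first model; each $M_t$ should then sit near $a = r$ with spread $\sigma$.

```python
import numpy as np

rng = np.random.default_rng(4)
r, K, sigma, T = 1.5, 100.0, 0.1, 500   # arbitrary illustrative values

# simulate X_{t+1} = X_t exp(r (1 - X_t/K) + sigma Z_t)
X = np.empty(T)
X[0] = 20.0
for t in range(T - 1):
    X[t + 1] = X[t] * np.exp(r * (1 - X[t] / K) + sigma * rng.standard_normal())

beta = r / K
M = np.log(X[1:]) - np.log(X[:-1]) + beta * X[:-1]
print(M.mean())   # should be close to a = r = 1.5
```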

Myers

$$X_{t+1} = \frac{r X_t^{\theta}}{1 + \frac{X_t^{\theta}}{K}} Z_t$$

For $Z_t$ lognormal with log-mean zero and log-standard-deviation $\sigma$, the likelihood takes the same form as above, and thus we can write $M_t$ as

$$M_t := \log X_{t+1} - \theta\log X_t + \log\left(1 + \frac{X_t^{\theta}}{K}\right)$$
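An analogous sketch for the Myers form, again with made-up parameter values; here each $M_t$ should sit near $\log r$ with spread $\sigma$.

```python
import numpy as np

rng = np.random.default_rng(5)
r, K, theta, sigma, T = 2.0, 100.0, 2.0, 0.1, 500   # arbitrary illustrative values

X = np.empty(T)
X[0] = 5.0
for t in range(T - 1):
    Z = np.exp(sigma * rng.standard_normal())   # lognormal, log-mean 0, log-sd sigma
    X[t + 1] = r * X[t] ** theta / (1 + X[t] ** theta / K) * Z

M = np.log(X[1:]) - theta * np.log(X[:-1]) + np.log(1 + X[:-1] ** theta / K)
print(M.mean(), np.log(r))   # M_t scatters around log r with spread sigma
```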

Allen

The original parameterization

$$X_{t+1} = Z_t X_t e^{\frac{r}{K}\left(1 - \frac{X_t}{K}\right)\left(X_t - C\right)}$$

does not let us isolate an additive constant (log-mean term) as we did in the example above. Writing the argument of the exponent in standard quadratic form,

$$X_{t+1} = Z_t X_t e^{c + b X_t + a X_t^2}$$

where

$$c = -\frac{rC}{K}, \qquad b = \frac{r}{K}\left(\frac{C}{K} + 1\right), \qquad a = -\frac{r}{K^2},$$

then

$$M_t := \log X_{t+1} - \log X_t - b X_t - a X_t^2$$
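And a corresponding sketch for the Allen form, with made-up values of $r$, $K$, $C$, and $\sigma$; each $M_t$ should sit near the constant $c$.

```python
import numpy as np

rng = np.random.default_rng(6)
r, K, C, sigma, T = 1.0, 100.0, 20.0, 0.1, 500   # arbitrary illustrative values
c = -r * C / K
b = (r / K) * (C / K + 1)
a = -r / K**2

# simulate X_{t+1} = Z_t X_t exp(c + b X_t + a X_t^2), Z_t lognormal as above
X = np.empty(T)
X[0] = 50.0
for t in range(T - 1):
    X[t + 1] = X[t] * np.exp(c + b * X[t] + a * X[t] ** 2 + sigma * rng.standard_normal())

M = np.log(X[1:]) - np.log(X[:-1]) - b * X[:-1] - a * X[:-1] ** 2
print(M.mean(), c)   # M_t scatters around c with spread sigma
```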