• Complete IACUC training for fish room. File forms:
  • Online training – possibly see
  • Shiny:

  • Comment piece
  • Prep exit seminar

  • Applied math club: Networks. Reading

Discussion of when network structure has made an important difference to our understanding of a problem. When have common summary statistics of a network (e.g., the degree distribution) allowed us to compute network dynamics without fully resolving the network?
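One standard example of the second question, sketched in Python (the sketch is mine, not from the reading): in heterogeneous mean-field theory, the SIS epidemic threshold on an uncorrelated network depends on the degree distribution only through its first two moments, \(\lambda_c = \langle k\rangle / \langle k^2\rangle\), so the onset of the dynamics can be computed without resolving the network itself.

```python
import numpy as np

# Truncated power-law degree distribution, p(k) ~ k^(-2.5), k = 1..1000.
kmax = 1000
k = np.arange(1, kmax + 1, dtype=float)
pk = k**-2.5
pk /= pk.sum()

# Heterogeneous mean-field SIS threshold for an uncorrelated network:
# lambda_c = <k> / <k^2>.
k1 = (k * pk).sum()
k2 = (k**2 * pk).sum()
lam_c = k1 / k2

# Compare with a homogeneous network of the same mean degree, where
# lambda_c = 1/<k>: degree heterogeneity drives the threshold far lower.
print(lam_c, 1 / k1)
```

The point for discussion: only \(\langle k\rangle\) and \(\langle k^2\rangle\) enter, yet the qualitative conclusion (a vanishing threshold as the degree variance diverges) differs sharply from the homogeneous prediction.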


Comparing methods for Gaussian process fits

  • mlegp package appears more carefully constructed, but the notation is terrible. Also unclear how to change the covariance kernel. The example shows fitting with zero noise only, though presumably that can be adjusted (via the nugget parameter?). Has a vignette, but it has not been particularly enlightening for me. kernlab::gausspr seems to be the only sensible package function for GPs I’ve seen.

  • What explains the discrepancy in the Cholesky approach?
  • What explains the discrepancy in the sequential approach?
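For comparison, the "Cholesky approach" here is taken to mean the standard batch GP posterior computed via a Cholesky factorization of \(K(x,x) + \sigma_n^2 \mathbb{I}\) rather than an explicit inverse. A minimal numpy sketch (squared-exponential kernel and zero prior mean assumed; the function names are mine):

```python
import numpy as np

def sqexp(a, b, ell=1.0):
    """Squared-exponential covariance between 1-D point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def gp_posterior_chol(x, y, X, sigma_n=0.1, ell=1.0):
    """Batch GP posterior mean/covariance on grid X, using a Cholesky
    factor of K(x, x) + sigma_n^2 I instead of an explicit inverse."""
    K = sqexp(x, x, ell) + sigma_n**2 * np.eye(len(x))
    L = np.linalg.cholesky(K)                      # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    m = sqexp(X, x, ell) @ alpha                   # posterior mean
    V = np.linalg.solve(L, sqexp(x, X, ell))       # solve L V = K(x, X)
    c = sqexp(X, X, ell) - V.T @ V                 # posterior covariance
    return m, c

x = np.array([-1.0, 0.0, 1.0])
y = np.sin(x)
X = np.linspace(-2.0, 2.0, 9)
m, c = gp_posterior_chol(x, y, X)
```

Since the posterior covariance subtracts the positive-semidefinite term \(V^\top V\), the predictive variance can never exceed the prior variance; checking that is a quick sanity test on any implementation.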

Sequential updating algorithm avoids inverting the covariance matrix, which can become unstable as the state space becomes large. Instead of dealing with the vector of observations \(\vec y\) at states \(\vec x\) all at once, we update the GP mean \(m_i\) and covariance \(c_i\) with each observed point \((x_i, y_i)\) in turn. Because we can still compute this over the desired prediction grid \(X\) simultaneously, \(m_i\) is a vector of the same length as the prediction grid (\(n_x\)) and \(c_i\) is an \(n_x \times n_x\) matrix. Let \(K(x_i, x_j)\) be the covariance function, and let each observation carry independent Gaussian noise, \(y_i = f(x_i) + \varepsilon_i\) with \(\varepsilon_i \sim \mathcal{N}(0, \sigma_n^2)\); then each observation updates the posterior by standard Gaussian conditioning:

\[ m_{i+1} = m_i + \frac{c_i(X, x_i)\,\bigl(y_i - m_i(x_i)\bigr)}{ c_i(x_i,x_i)+\sigma_n^2} \] \[ c_{i+1} = c_i - \frac{c_i(X, x_i)\, c_i(x_i, X)}{c_i(x_i,x_i)+\sigma_n^2} \]

where \(m_0 = 0\) and \(c_0 = K(X, X)\). Note the residual \(y_i - m_i(x_i)\) in the mean update and the subtraction in the covariance update (conditioning can only shrink uncertainty); evaluating \(m_i(x_i)\) and \(c_i(x_i, x_i)\) requires tracking the posterior at the training points as well as on the grid, e.g. by appending \(\vec x\) to \(X\).
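A numpy sketch of this sequential update (squared-exponential kernel and zero prior mean assumed; function names are mine). The working grid is the prediction grid augmented with the training points, so the running mean and variance at each \(x_i\) can be read off directly:

```python
import numpy as np

def sqexp(a, b, ell=1.0):
    """Squared-exponential covariance between 1-D point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def gp_sequential(x, y, X, sigma_n=0.1, ell=1.0):
    """Condition a GP prior on one observation at a time; no matrix
    inverse or solve is needed, only scalar divisions."""
    Z = np.concatenate([X, x])            # prediction grid plus training points
    m = np.zeros(len(Z))                  # prior mean (assumed zero)
    c = sqexp(Z, Z, ell)                  # prior covariance c_0 = K(Z, Z)
    for i, yi in enumerate(y):
        j = len(X) + i                    # position of x_i within Z
        k = c[:, j].copy()                # c_i(Z, x_i)
        denom = c[j, j] + sigma_n**2      # c_i(x_i, x_i) + noise variance
        m = m + k * (yi - m[j]) / denom   # mean update with residual
        c = c - np.outer(k, k) / denom    # covariance update (shrinks c)
    return m[:len(X)], c[:len(X), :len(X)]

x = np.array([-1.0, 0.0, 1.0])
y = np.sin(x)
X = np.linspace(-2.0, 2.0, 9)
m, c = gp_sequential(x, y, X)
```

Because Gaussian conditioning on independent observations is order-free and exact, the result should agree with the batch posterior up to round-off, which makes for a direct check on any discrepancy between the two approaches.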

  • Work out the multi-dimensional Gaussian process algorithm
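As a possible starting point for the multi-dimensional case: the updating algorithm itself is unchanged, and only the covariance function needs to accept vector inputs. A hypothetical sketch of an anisotropic (ARD) squared-exponential kernel over rows of an \(n \times d\) input matrix:

```python
import numpy as np

def sqexp_nd(A, B, ell):
    """Squared-exponential kernel between rows of A (n, d) and B (m, d),
    with a separate length scale per input dimension (ARD)."""
    A = np.asarray(A) / ell               # scale each dimension by its length scale
    B = np.asarray(B) / ell
    # Pairwise squared distances via |a - b|^2 = |a|^2 + |b|^2 - 2 a.b,
    # clipped at zero to guard against negative round-off.
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * np.clip(d2, 0.0, None))
```

Passing `sqexp_nd` matrices of points in place of a 1-D kernel leaves the rest of the sequential or Cholesky machinery untouched.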

Multiple uncertainty