Exploring GP Model Space

Trying to think of a more systematic way to go about varying the parameters: the underlying parametric model has 3 parameters for the stock-recruitment curve's deterministic skeleton, plus growth noise. (My first exploratory phase has just been to try different things; see my various tweaks in the history log. Clearly it's time to be more systematic about both running and visualizing the various cases.)
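For concreteness, here is a sketch of one possible three-parameter skeleton with multiplicative lognormal growth noise. The choice of curve (a Myers-type stock-recruitment function), the parameter values, and the function names are all my own illustration, not necessarily the model actually in use:

```python
import numpy as np

def myers(x, r, K, theta):
    """Deterministic skeleton: Myers-type stock-recruitment curve.

    Three parameters (r, K, theta); theta > 1 produces an Allee effect.
    """
    return r * x**theta / (1.0 + x**theta / K)

def step(x, r, K, theta, sigma, rng):
    """One stochastic step: skeleton times lognormal growth noise."""
    return myers(x, r, K, theta) * rng.lognormal(mean=0.0, sigma=sigma)

# Simulate a short training trajectory (illustrative values only)
rng = np.random.default_rng(0)
x = 1.0
trajectory = [x]
for _ in range(50):
    x = step(x, r=2.0, K=8.0, theta=2.0, sigma=0.1, rng=rng)
    trajectory.append(x)
```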

Should I just choose a handful of parameter combinations to test? (Trying to think of a way to do this that is easy to summarize; at least I can summarize expected profit under each set.) Presumably, for each parameter set, I'd want a few (many?) stochastic realizations of the calibration/training data.
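One lightweight way to organize this is a full factorial of a few values per parameter crossed with replicate seeds; the parameter names and values below are placeholders:

```python
from itertools import product

# Hypothetical sweep: a handful of values per parameter, each
# combination run under several stochastic replicates (seeds).
r_vals = [1.5, 2.0, 3.0]
K_vals = [5.0, 10.0]
theta_vals = [1.0, 2.0]
n_replicates = 20

cases = [
    {"r": r, "K": K, "theta": th, "seed": seed}
    for r, K, th in product(r_vals, K_vals, theta_vals)
    for seed in range(n_replicates)
]
# 3 x 2 x 2 parameter sets, 20 replicates each: 240 runs total
```

Summarizing expected profit per parameter set then just means averaging over the replicates that share the same (r, K, theta).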

Would it be worth digging up some real-world data sets and basing the selection of underlying model parameters on them?

Then there's a variety of nuisance parameters: grid size; discount rate; price of fish (non-dimensionalization eliminates that one, I guess); cost of fishing (and whether the cost is on effort or harvest, whether linear or quadratic, etc.); harvest grid size and possible constraints on maximum or minimum allowable levels for the control; and the length of the calibration period (and related dynamics, if we use any of the variable fishing-effort models you showed me today).
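It may help to collect these nuisance parameters in one place so each run is fully specified by a single record; the field names and default values below are illustrative assumptions, not the actual settings:

```python
from dataclasses import dataclass

@dataclass
class ProblemSpec:
    """Hypothetical container for the nuisance parameters of one run."""
    grid_size: int = 100          # state-space discretization
    harvest_grid_size: int = 100  # control (harvest) discretization
    discount: float = 0.95        # per-step discount factor
    price: float = 1.0            # fish price (non-dimensionalized away?)
    cost: float = 0.01            # cost coefficient
    cost_on_effort: bool = False  # cost on effort vs. harvest
    quadratic_cost: bool = False  # quadratic vs. linear cost
    calibration_length: int = 40  # length of the training time series

spec = ProblemSpec()
```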

Additionally, there are the MCMC-related nuisance parameters: parameters for the priors, possibly hyperpriors, and the MCMC convergence analysis (selecting the burn-in period; currently 2000 steps out of 16,000, etc.). Also the distributional shapes for the priors and, perhaps more meaningfully, the GP covariance function (using a Gaussian kernel for simplicity, but I might want to look at the Matérn, and the various linear + Gaussian covariances).
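To make the covariance-function comparison concrete, here is a sketch of the squared-exponential ("Gaussian") kernel next to a Matérn 5/2, written as stationary covariances of the distance d; the hyperparameter values are arbitrary placeholders:

```python
import numpy as np

def se_cov(d, sigma=1.0, ell=1.0):
    """Squared-exponential ("Gaussian") covariance on distance d."""
    return sigma**2 * np.exp(-d**2 / (2.0 * ell**2))

def matern52_cov(d, sigma=1.0, ell=1.0):
    """Matern 5/2 covariance: rougher sample paths, heavier tail
    than the squared-exponential at the same length-scale."""
    s = np.sqrt(5.0) * d / ell
    return sigma**2 * (1.0 + s + s**2 / 3.0) * np.exp(-s)

# Covariance matrices on a small grid, for comparison
x = np.linspace(0.0, 10.0, 5)
d = np.abs(np.subtract.outer(x, x))
K_se = se_cov(d)
K_m52 = matern52_cov(d)
```

Both kernels agree at d = 0 (variance sigma squared), but the Matérn decays more slowly at large distances, which is one concrete way the covariance choice could matter for extrapolation.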

New and progressing issues

From the commit log today

Misc: ropensci