Robust control & uncertainty - reading notes

Ecologists/TREE (Fischer et al. 2009), (Polasky et al. 2011) give a nice overview/introduction to the problem. Perhaps most interestingly, both set up resilience approaches as a foil or alternative in contrast to a decision-theoretic problem. Fischer's group does a particularly nice job making the case that these two are separate approaches – it's easy to dismiss resilience thinking as fuzzy optimization of fuzzy objective functions.

The Polasky piece does a nice job refocusing us on the obvious but brushed-over question of the objective function. Without explicit probabilities on outcomes, we switch immediately to approaches like maximin; their simple example does a much better job of calling attention to the weakness of such assumptions than do the richer examples usually formulated (below).

Entertainingly, max-min goes back to von Neumann's minimax theorem for two-player zero-sum games; in the case of uncertainty, "Nature" or "chance" takes the role of the other player – not knowing how they will play, you seek a strategy that maximizes the minimum payoff you will receive. Clearly this was invented in a discrete world with limited outcomes and options – for most realistic problems there must always be some probability of no payoff, regardless of strategy, so you must be doing a maximin conditional on the really unlikely things not happening (hit by an asteroid, sudden collapse of your fishery, etc.).
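The weakness Polasky's example highlights can be sketched in a few lines (all payoffs and probabilities here are invented for illustration, not from the paper): maximin picks the action with the best worst case, no matter how improbable that worst case is.

```python
# Toy illustration of the maximin rule (all numbers invented).
# Rows: actions; columns: payoffs in each state of nature.
payoffs = {
    "conservative": [4, 4, 4],   # safe but mediocre in every state
    "ambitious":    [0, 9, 9],   # bad only in one (possibly rare) state
}

# Maximin: choose the action whose worst-case payoff is largest.
maximin_action = max(payoffs, key=lambda a: min(payoffs[a]))

# With explicit (hypothetical) state probabilities, expected value disagrees:
probs = [0.05, 0.45, 0.5]
expected = {a: sum(p * x for p, x in zip(probs, payoffs[a])) for a in payoffs}
best_ev_action = max(expected, key=expected.get)

print(maximin_action)   # 'conservative'
print(best_ev_action)   # 'ambitious'
```

Maximin throws away everything but the worst column, so a vanishingly unlikely bad state dominates the decision – which is exactly the problem with conditioning on asteroid strikes.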

Their critique of decision theory focuses only on the fact that it requires probabilities of outcomes, which in practice won't be known – or, in the case of maximin, at least knowledge of the possible states. The first critique is quite just, and is, as they say, the reason for maximin approaches. These approaches probably founder more on their choice of objective function than on any inability to enumerate possible states. Surprisingly, they make no mention of computational complexity, which is probably the more common limitation in practice.

Info Gap Theory

Reading (Regan et al. 2005), this sounds like a sensitivity analysis on the uncertainty parameter. The authors refer to it as "info-gap theory," though it's not particularly different. While sensitivity analysis would do this parametrically (using, say, the width of the distribution about a parameter), this, rather more crudely it seems, just varies the best-estimate value of the parameter until the decision changes.

Whoops (reading more), apparently it's a thing (i.e. gets its own Wikipedia page, and also appears in the literature without Dr. Ben-Haim: Nicholson & Possingham, 2007; Halpern et al. 2006), though this still seems to support my characterization (the second author of the Eco Apps paper seems to be behind the term). Info-gap can also be framed as a max-min approach.

Whoops again, apparently it's a controversy too (as mentioned in the Fischer piece). Nicholson & Possingham, 2007 gives a rather nice list of papers whose single-species analyses have found the ranking of management options to be robust to uncertainty, and others where it is not (hardly surprising, but nice to have concrete examples). It takes a simple info-gap approach to the multi-species case.
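The info-gap recipe described above – grow an uncertainty horizon around the best-estimate parameter until performance drops below a requirement, and prefer the action that tolerates the largest horizon – can be sketched as follows. The model, numbers, and threshold are all invented for illustration:

```python
# Crude info-gap robustness sketch (toy model, all numbers invented):
# robustness of an action = largest uncertainty horizon alpha around the
# best-estimate parameter r0 for which worst-case performance still meets
# the requirement.
r0 = 1.0             # best estimate of an uncertain growth-like parameter
requirement = 0.5    # minimum acceptable performance

def performance(action, r):
    # hypothetical payoff: bigger actions pay more but are more
    # sensitive to the uncertain parameter r
    return action * r - 0.3 * action ** 2

def robustness(action, step=0.01, alpha_max=2.0):
    alpha = 0.0
    while alpha <= alpha_max:
        worst = min(performance(action, r0 - alpha),
                    performance(action, r0 + alpha))
        if worst < requirement:
            return max(alpha - step, 0.0)   # last horizon that still met it
        alpha += step
    return alpha_max

actions = [1.0, 2.0, 3.0]
alphas = {a: robustness(a) for a in actions}
most_robust = max(alphas, key=alphas.get)
```

In this toy setup the action with the highest best-estimate payoff (2.0) is not the most robust one (1.0) – the usual info-gap trade-off between nominal performance and robustness.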

More rigorous maximin approaches

Bill Brock's work in this area is particularly nice. A cute, properly technical example of model uncertainty in Hansen's recursive max-min expected utility appears in (Brock & Xepapadeas, 2010). Hansen's treatment of these is dense (below), and it's not at all clear that this is what we want to be optimizing. See the mathematical-economics treatment of model uncertainty & misspecification in (Hansen & Sargent, 2001), (Hansen et al. 2006).
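Stripped of the recursive machinery, the core max-min expected utility idea is: take a set of candidate probability models, compute each action's expected utility under each model, and maximize the worst of these. A minimal static sketch, with made-up models, outcomes, and utility:

```python
# Max-min expected utility over a small set of candidate models
# (everything invented for illustration): the decision maker maximizes
# the worst expected utility across the model set.
outcomes = [0.0, 5.0, 10.0]   # possible payoffs

# Two candidate models disagreeing about outcome probabilities
models = {
    "optimistic":  [0.1, 0.3, 0.6],
    "pessimistic": [0.5, 0.4, 0.1],
}

def expected_utility(action, probs):
    # made-up utility: action scales exposure to the outcome,
    # with a quadratic cost of exposure
    return sum(p * (action * x - 2.0 * action ** 2)
               for p, x in zip(probs, outcomes))

actions = [0.0, 0.25, 0.5, 0.75, 1.0]

def worst_eu(a):
    return min(expected_utility(a, probs) for probs in models.values())

robust_action = max(actions, key=worst_eu)
best_if_optimistic = max(actions,
                         key=lambda a: expected_utility(a, models["optimistic"]))
```

Here the robust choice (0.75) hedges below the action you would pick if you committed to the optimistic model (1.0). Hansen's recursive formulation does this at every stage of a dynamic program, with the model set defined by an entropy penalty, but the max-over-actions / min-over-models structure is the same.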

Brock also has a beautiful, simple paper pointing out how uncertainties lead to conflicting conclusions, such as the debate over the fisheries collapse (Biggs et al. 2009). There is also a nice example in Peterson et al. 2003 of an apparently optimally managed system collapsing. It applies a passive adaptive management solution, learning about a choice between two given models (with fixed parameters), one describing the system around the eutrophic stable state, the other around the oligotrophic. Note the actual dynamics cannot be represented as a sum of these beliefs, so this approach is doomed and the result is hardly surprising mathematically – which kinda makes it all the more clever as an example.
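The passive-adaptive setup can be sketched as Bayesian weight updating between the two fixed models; the specific observation model and numbers below are invented, not taken from the paper. The point is structural: when the true dynamics sit between the two candidates, the belief weights still churn (and typically collapse onto one model), yet no mixture of the candidates describes the system.

```python
# Toy passive-adaptive belief updating between two fixed models
# (models and numbers invented). The manager holds a belief weight over
# the candidates and updates it by Bayes' rule from each observation.
import math
import random

random.seed(1)

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Two candidate models of next year's observed state (fixed parameters)
model_means = {"oligotrophic": 1.0, "eutrophic": 4.0}
sigma = 1.0

belief = {"oligotrophic": 0.5, "eutrophic": 0.5}   # prior weights

# The 'true' system sits between the candidates: no belief-weighted
# combination of the two models reproduces it.
true_mean = 2.5

for _ in range(50):
    obs = random.gauss(true_mean, sigma)
    posts = {m: belief[m] * gaussian_pdf(obs, mu, sigma)
             for m, mu in model_means.items()}
    z = sum(posts.values())
    belief = {m: p / z for m, p in posts.items()}
```

Because the per-observation log-likelihood ratio between the two wrong models has mean zero here, the belief performs a random walk and tends to settle confidently on one of them – confident learning, wrong model, which is the mathematical sense in which the approach is doomed.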

(Brozović & Schlenker, 2011) is probably a richer example of an optimally managed system failing under uncertainty. It shows the outcome can be very sensitive to assumptions about uncertainty in systems with alternative stable states – a nice example to further the point of (Biggs et al. 2009). With moderate uncertainty the optimal behavior is precautionary, while under more severe uncertainty precaution isn't worth it, since it cannot diminish the risk adequately.

(Meir et al. 2004) seems to give rise to the Possingham position of offering only rules of thumb from decision-theoretic models. It comes closest to implementation uncertainty, considering the case in which a policy cannot be implemented immediately as it is formed. This is really a red herring – if implementation takes place over time, in principle that should be framed in the dynamic optimization. The real bugbear here is not implementation uncertainty per se, but the fact that dynamic solutions (SDP) are not computationally feasible; so they test rules of thumb against the optimum for a small problem, in hopes that the extrapolation is valid. The rules of thumb are "more effective" than a static solution, the optimal dynamic solution being infeasible. It seems we'd be on firmer ground using small problems to provide counterexamples of when simple rules fail than using the lack of a counterexample in a particular case as grounds for extrapolating.
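The infeasibility claim is easy to make concrete with a back-of-envelope count (generic numbers, not from the paper): discretizing each of k species into n levels gives n^k joint states, and a single sweep of value iteration over a dense transition matrix costs on the order of |S|² × |A| operations.

```python
# Back-of-envelope on why full SDP blows up (generic numbers): joint state
# space is n_levels ** k_species, and one value-iteration sweep over a dense
# transition matrix costs roughly n_states**2 * n_actions operations.
def sdp_cost(n_levels, k_species, n_actions):
    n_states = n_levels ** k_species
    return n_states, n_states ** 2 * n_actions

for k in (1, 2, 4, 8):
    states, ops = sdp_cost(n_levels=10, k_species=k, n_actions=5)
    print(f"{k} species: {states:.0e} states, ~{ops:.0e} ops per sweep")
```

At 8 species this is already ~10^8 states and ~10^16 operations per sweep – which is why small-problem benchmarks plus extrapolated rules of thumb are the fallback, and why counterexamples from small problems may be the more defensible use of them.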