Writing: Warning Signals Manuscript

Summary Paragraph

  1. Critical transitions exist (Holling, 1973; May, 1977; Scheffer et al., 2001)
  2. Warning signals exist (Scheffer et al., 2009; Drake & Griffen, 2010)
  3. Warning signals are summary statistics reflecting critical slowing down (Wissel, 1984).
  4. Statement of the general problem: a double-edged sword; there is no quantification of the chance that the detection scheme will fail.
  5. We provide a way to do this
  6. We find existing methods lack sufficient power and have high false-alarm potential
  7. We provide a model-based solution using machinery of modern likelihood statistics


  • Define CSD, standard detection scheme. Figure 1.
  • What’s wrong with this approach: Figure 2
  • We do not have a quantification of risk → related to:
  • We do not have a method designed for single replicates.

Being precise: expected value, stationary process, statistical independence

Consider an ensemble of time-series data, which we represent as a matrix whose columns are replicates and whose rows are observations at each point in time.  If the replicates are produced by the same stationary process, then computing a scalar-valued function f across each row yields an estimate of the expected value of that statistic at each time point.
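The matrix convention above can be sketched in a few lines — a minimal illustration (hypothetical simulated data, not the manuscript's actual analysis), using variance as the scalar-valued function f:

```python
import numpy as np

# Hypothetical ensemble: columns are replicates, rows are observations in time.
rng = np.random.default_rng(0)
n_time, n_reps = 100, 50
ensemble = rng.normal(loc=0.0, scale=1.0, size=(n_time, n_reps))

# Apply a scalar-valued function f (here, the variance) across each row:
# one estimate per time point, pooled over replicates.
row_variance = ensemble.var(axis=1)

# Under a common stationary process, the row estimates fluctuate
# around a single constant value.
print(row_variance.mean())
```

With genuinely independent replicates from the same stationary process, the row estimates are exchangeable; trends across rows are what the warning-signal statistics look for.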

Limits to Detection

We estimate the probability of a missed event and of a false alarm.  Even negative correlations can correspond to real events, while stable systems are quite likely to produce strong (spurious) signals.

To do this we need models.
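A toy version of the missed-event / false-alarm estimate — a sketch under assumed models (AR(1) null vs. an AR(1) drifting toward instability; the trend statistic and all parameters are illustrative choices, not the manuscript's):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ar1(phi, n=200):
    """AR(1) series; phi may be a scalar (stationary) or an array (drifting)."""
    phi = np.broadcast_to(phi, (n,))
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi[t] * x[t - 1] + rng.normal()
    return x

def trend_statistic(x, window=40):
    """Correlation of windowed variance with time: the standard
    'increasing variance' warning-signal statistic."""
    var = np.array([x[i:i + window].var() for i in range(len(x) - window)])
    return np.corrcoef(np.arange(len(var)), var)[0, 1]

# Null: stationary system.  Alternative: approaching a transition.
null = [trend_statistic(simulate_ar1(0.3)) for _ in range(200)]
alt = [trend_statistic(simulate_ar1(np.linspace(0.3, 0.99, 200)))
       for _ in range(200)]

threshold = np.quantile(null, 0.95)          # 5% false-alarm rate by design
missed = np.mean(np.array(alt) < threshold)  # estimated missed-event rate
print(threshold, missed)
```

The point of the exercise: both error rates are only defined relative to an explicit pair of models, which is why the next section introduces them.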

Models for Critical Transitions

  • General classes: Linearized transcritical, Linearized saddle node
  • Models are: general, stochastic (can simulate), analytically tractable (can calculate likelihood)
  • Apply parametric bootstrap
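A minimal sketch of the parametric bootstrap on a linearized model — assuming (illustratively) that a discretely sampled Ornstein-Uhlenbeck linearization reduces to an AR(1) process; the estimator and parameter values are placeholders, not the manuscript's fitted models:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ar1(phi, sigma, n=500):
    """Discretely sampled OU process (linearization near a stable
    equilibrium) is an AR(1) process."""
    x = np.empty(n)
    x[0] = 0.0
    for t in range(1, n):
        x[t] = phi * x[t - 1] + sigma * rng.normal()
    return x

def fit_ar1(x):
    """Conditional maximum-likelihood estimates of (phi, sigma)."""
    phi = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
    sigma = np.std(x[1:] - phi * x[:-1])
    return phi, sigma

# Fit the model to 'observed' data, then parametric bootstrap: resimulate
# from the fitted model and refit, giving the sampling distribution of the
# estimator without requiring replicate observations.
obs = simulate_ar1(phi=0.9, sigma=0.5)
phi_hat, sigma_hat = fit_ar1(obs)
boot = [fit_ar1(simulate_ar1(phi_hat, sigma_hat))[0] for _ in range(200)]
lo, hi = np.quantile(boot, [0.025, 0.975])
print(phi_hat, lo, hi)
```

Because the fitted model can be simulated cheaply, the same machinery delivers confidence intervals for any derived statistic, not just the raw parameters.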

Improving Power by Likelihood

Given that we are comparing models, there is a full machinery for doing this in a way that quantifies uncertainty.  We follow the Neyman-Pearson, Cox, McLachlan, Goldman, and Huelsenbeck style (compare Cox's delta statistic for maximum-likelihood estimates of models).  Same basic Monte Carlo framework, but more powerful.

Avoid other bifurcation types:

e.g. Hastings & Wysham (2010), or a supercritical Hopf; see Sebastian's examples (Schreiber, 2003; Schreiber & Rudolf, 2008).  These would fail model-adequacy checks, while summary-statistic approaches would not flag them.  (May need to outline this better with examples.)


Serious application of warning signals must check for power.

We must quantify uncertainty.

The future lies in Bayesian approaches.

Methods options:

  • put 200 words in main text under “Methods.” Or:
  • put 300 words in “Methods Summary” at end of text (following figure legends).
  • put 1000 words in “Additional Methods”.

You still need 300 word version as a “Methods Summary” for print.

“Additional” appears in the online HTML only.  This must repeat any critical info from the Summary (along with references).  It cannot use figures or tables, but should have short bold headings.  Cannot duplicate anything in the Supplement.

Option (B) is probably best, and just say “see supplement for details”.  (As Drake 2010 does).



Parametric Bootstrap

Likelihood, AIC
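For the methods stub above, a minimal AIC comparison — assuming nothing beyond the textbook definition AIC = 2k − 2 log L, with a toy Gaussian example (fixed mean vs. fitted mean) in place of the actual models:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(loc=2.0, scale=1.0, size=200)  # toy 'observed' data

def gaussian_loglik(x, mu, sigma):
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - mu)**2 / (2 * sigma**2))

# Model A: mean fixed at 0 (one free parameter, sigma).
# Model B: fitted mean (two free parameters).  Both use MLE sigma.
ll_a = gaussian_loglik(x, 0.0, np.sqrt(np.mean(x**2)))
ll_b = gaussian_loglik(x, x.mean(), x.std())

def aic(k, loglik):
    return 2 * k - 2 * loglik

print(aic(1, ll_a), aic(2, ll_b))
```

AIC gives a quick relative ranking; the parametric-bootstrap machinery above is what turns such a ranking into error rates.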



Quantifying risk → realizing we often don’t have power to detect → improving power with likelihood approach.

But also mention issues we address, such as the single-replicate problem.  (The likelihood approach is also less vulnerable to being applied when the bifurcation is of the wrong type, because the models will not fit well; though everything has a variance, so one may not realize the wrong theory is being applied…)

Replacing prophecy with science

risk – probability – chance