Lab Notebook

Notes

10 Dec 2013

Consider the model

$d X_t = \alpha \left(\theta - X_t\right)dt + \sigma dB_t$
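For completeness, the standard solution by integrating factor, from which the moments below follow:

$X_t = \theta + \left(X_0 - \theta\right)e^{-\alpha t} + \sigma \int_0^t e^{-\alpha(t-s)} dB_s$

The stochastic integral has mean zero, and by the Itô isometry its variance is $$\sigma^2 \int_0^t e^{-2\alpha(t-s)} ds = \tfrac{\sigma^2}{2\alpha}\left(1 - e^{-2\alpha t}\right)$$.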

From the exact solution of this Ornstein-Uhlenbeck process (the Itô isometry gives the variance of its stochastic-integral term), we have:

$\langle X \rangle_t = E_t(X) = \theta \left(1 - e^{-\alpha t} \right) + X_0 e^{-\alpha t}$

and

$\langle X^2 \rangle_t - \langle X \rangle_t^2 = V_t(X) = \frac{\sigma^2}{2 \alpha }\left(1 - e^{-2 \alpha t} \right)$
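These moments are easy to sanity-check numerically. A minimal sketch (my own check, not part of the original notebook; Euler-Maruyama simulation with illustrative parameters, pure Python for portability):

```python
import math
import random

def ou_moments_mc(x0, theta, alpha, sigma, t, n_paths=4000, n_steps=100, seed=1):
    """Monte Carlo estimate of E[X_t] and Var[X_t] for
    dX = alpha*(theta - X) dt + sigma dB, via Euler-Maruyama steps."""
    rng = random.Random(seed)
    dt = t / n_steps
    finals = []
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            x += alpha * (theta - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        finals.append(x)
    m = sum(finals) / n_paths
    v = sum((xi - m) ** 2 for xi in finals) / (n_paths - 1)
    return m, v

def ou_moments_exact(x0, theta, alpha, sigma, t):
    """Closed-form mean and variance quoted above."""
    m = theta * (1 - math.exp(-alpha * t)) + x0 * math.exp(-alpha * t)
    v = sigma**2 / (2 * alpha) * (1 - math.exp(-2 * alpha * t))
    return m, v
```

With, e.g., `x0=0.5, theta=1, alpha=2, sigma=0.5, t=1`, the two agree to within Monte Carlo and discretization error.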

If we assume discrete, uniform sampling with spacing $$\tau$$, the likelihood is a product of Gaussian transition densities,

$P(X | \theta; \alpha, \sigma) = \sqrt{\frac{1}{2\pi V_{\tau} }}^{T-1} \exp\left(-\frac{\sum_t^{T-1} \left(X_{t+1} - E_{\tau}\right)^2 }{2V_{\tau}}\right)$

or more explicitly (taking $$\tau = 1$$ without loss of generality),

$P(X | \theta; \alpha, \sigma) = \sqrt{\frac{\alpha}{\pi \sigma^2 (1- e^{-2\alpha}) }}^{T-1} \exp\left( - \frac{\sum_t^{T-1} \left(X_{t+1} - ( \theta (1 - e^{-\alpha}) + X_t e^{-\alpha}) \right)^2 }{\tfrac{\sigma^2}{\alpha}(1-e^{-2\alpha})}\right)$
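For concreteness, a sketch of this likelihood as code (my own illustration, not from the notebook; names are arbitrary). Each observation is treated as a Gaussian draw conditioned on the previous one, with the transition mean and variance above:

```python
import math

def ou_loglik(x, theta, alpha, sigma, tau=1.0):
    """Gaussian transition log-likelihood of an evenly sampled OU path x,
    using the discrete-time mean/variance formulas with spacing tau."""
    decay = math.exp(-alpha * tau)
    v = sigma**2 / (2 * alpha) * (1 - math.exp(-2 * alpha * tau))
    ll = 0.0
    for prev, curr in zip(x[:-1], x[1:]):
        mean = theta * (1 - decay) + prev * decay
        ll += -0.5 * math.log(2 * math.pi * v) - (curr - mean) ** 2 / (2 * v)
    return ll
```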

To integrate out $$\theta$$, $$P(X | \alpha, \sigma) = \int P(X | \theta, \alpha, \sigma ) P(\theta) d\theta$$, we’ll make this look like a Gaussian in $$\theta$$ by completing the square. First, let us introduce a more compact notation to manipulate terms independent of $$\theta$$:

$A_t := X_{t+1} - X_t e^{-\alpha}$ and $B := 1-e^{-\alpha}$

Our sum in the exponent can then be written more succinctly:

$\sum_t^{T-1} (A_t - \theta B)^2$

Squaring out inside the sum, distributing the summation operator (by linearity), and summing terms constant in $$t$$, we have:

$\sum A_t^2 - 2 \theta B \sum A_t + \theta^2 B^2 (T-1)$

For which we complete the square in $$\theta$$,

$B^2 (T-1) \left( \left(\theta - \frac{\sum_t A_t}{B(T-1)}\right)^2 + \frac{\sum A_t ^2 - \tfrac{(\sum A_t)^2}{T-1}}{B^2 (T-1)} \right)$
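A quick numerical check of the completed square (my own sanity check, not in the original; note the $$(\sum A_t)^2$$ term carries a factor $$1/(T-1)$$):

```python
def expanded(theta, A, B):
    """sum_t (A_t - theta*B)^2, expanded term by term."""
    return sum((a - theta * B) ** 2 for a in A)

def completed(theta, A, B):
    """Completed-square form: B^2 n (theta - mu)^2 + (sum A^2 - (sum A)^2 / n),
    where n = len(A) plays the role of T-1 and mu = sum(A) / (B n)."""
    n = len(A)
    s1 = sum(A)
    s2 = sum(a * a for a in A)
    mu = s1 / (B * n)
    return B**2 * n * (theta - mu) ** 2 + s2 - s1**2 / n
```

The two forms agree for any `theta`, `A`, and `B`, confirming the algebra.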

Which lets us write $$P(X | \alpha, \sigma)$$ as a normal distribution in $$\theta$$ times terms constant in $$\theta$$, taking a uniform prior $$P(\theta) \propto 1$$:

$P(X | \alpha, \sigma) = \exp \left(\frac{\tfrac{(\sum A_t)^2}{T-1} - \sum A_t^2 }{2 V_{\tau} }\right) \sqrt{\frac{1}{2\pi V_{\tau} }}^{T-1} \int d\theta\exp\left( -\frac{ \left(\theta - \frac{\sum_t A_t}{B(T-1)} \right)^2 }{\tfrac{2V_{\tau}}{B^2 (T-1)}}\right)$

In which we recognize the integrand as a Gaussian with mean $$\mu = \frac{\sum_t A_t}{B(T-1)}$$ and variance $$\nu = V_{\tau}/(B^2(T-1))$$, and thus can replace the integral with $$\sqrt{2 \pi \nu} = \sqrt{\frac{2 \pi V_{\tau} }{B^2(T-1)}}$$,

$P(X | \alpha, \sigma) = \exp \left(\frac{\tfrac{(\sum A_t)^2}{T-1} - \sum A_t^2 }{2 V_{\tau} }\right) \sqrt{\frac{1}{2\pi V_{\tau} }}^{T-1} \sqrt{\frac{2 \pi V_{\tau} }{B^2(T-1)}}$

Collecting common terms in $$V_{\tau}$$,

$P(X | \alpha, \sigma) = \exp \left(\frac{f(A_t)}{2 V_{\tau}}\right) \sqrt{\frac{1}{2\pi}}^{T-1} \sqrt{\frac{2 \pi }{B^2(T-1)}} V_{\tau}^{\tfrac{1}{2}} V_{\tau}^{\tfrac{1-T}{2}}$

$= \exp \left(\frac{f(A_t)}{2 V_{\tau}}\right) \sqrt{\frac{(2 \pi)^{2-T}}{B ^ 2(T-1)}} V_{\tau}^{1 - \tfrac{T}{2}}$

where

$f(A_t) = \frac{\left( \sum A_t \right)^2}{T-1} - \sum A_t^2$

Then substituting in $$V_{\tau} = \tfrac{\sigma^2}{2\alpha} \left(1 - e^{-2\alpha}\right)$$,

$= \exp \left(\frac{\tfrac{\alpha f(A_t)}{1 - e^{-2\alpha}} }{\sigma^2}\right) \sqrt{\frac{(2 \pi)^{2-T}}{B ^ 2(T-1)}} \left(\tfrac{1}{2\alpha} \left(1 - e^{-2\alpha}\right)\right)^{1 - \tfrac{T}{2}}\sigma^{2(1 - T/2)}$

Which can be written to look like an Inverse-Gamma integrand in $$\sigma^2$$,

$= \mathcal{N} e^{\tfrac{-\beta}{\sigma^2}} (\sigma^2)^{-\gamma - 1}$

where

$\gamma = T/2 -2,$

$\beta = \frac{\alpha \left( \sum A_t^2 - \tfrac{\left( \sum A_t \right)^2}{T-1} \right) }{1 - e^{-2\alpha}}$ and

$\mathcal{N} = \sqrt{\frac{(2 \pi)^{2-T}}{B ^ 2(T-1)}} \left(\tfrac{1}{2\alpha} \left(1 - e^{-2\alpha}\right)\right)^{1 - \tfrac{T}{2}}$

Which we can integrate as an Inverse Gamma (again assuming a uniform prior, now in $$\sigma^2$$; the integral converges for $$\gamma > 0$$, i.e. $$T > 4$$),

$= \mathcal{N} \int_0^{\infty} e^{\tfrac{-\beta}{\sigma^2}} (\sigma^2)^{-\gamma - 1} d\sigma^2 = \mathcal{N} \frac{\Gamma(\gamma)}{\beta^{\gamma}}$

with $$\beta$$, $$\gamma$$, and $$\mathcal{N}$$ as defined above.
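As a numerical sanity check of the Inverse-Gamma normalization $$\int_0^\infty e^{-\beta/\sigma^2}(\sigma^2)^{-\gamma-1}d\sigma^2 = \Gamma(\gamma)/\beta^{\gamma}$$ (my own check, pure Python; the substitution $$u = 1/\sigma^2$$ turns it into a Gamma integral):

```python
import math

def inv_gamma_integral(beta, gamma, n=200000):
    """Evaluate the integral of exp(-beta/s) * s**(-gamma-1) over s in (0, inf).
    Substituting u = 1/s gives the Gamma integral of u**(gamma-1) * exp(-beta*u),
    approximated here by a Riemann sum on a truncated range (needs gamma > 0)."""
    upper = 50.0 / beta          # exp(-beta*u) is negligible beyond this point
    du = upper / n
    total = 0.0
    for i in range(1, n + 1):
        u = i * du
        total += u ** (gamma - 1) * math.exp(-beta * u) * du
    return total
```

For example, with illustrative values $$\beta = 1.3$$, $$\gamma = 2.5$$, this matches $$\Gamma(2.5)/1.3^{2.5}$$ to a few decimal places.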

26 Nov 2013

RNeXML

EML provides a nice illustration of the kind of metadata we would typically want to accompany character trait data; at minimum, at the attribute level.

Some of this is already there – we know if states are continuous or discrete, what possible states they can take on, and can add labels to state nodes to define what the symbols mean (however, perhaps an additional meta annotation node would be preferred). We don’t have a way to specify the units of a continuous trait, or longer definitions of the traits themselves, etc.

Obviously this is a hard problem, particularly in building any semantic reasoning around these traits. I gather that the phenoscape project is tackling this, but I’m not really up to speed on that. Long-term it would be nice to allow users to read and write trait metadata from R even without a working knowledge of the ontologies.

One idea I’m pondering is whether we might be able to generate an EML metadata file from the NeXML. The most trivial and straightforward approach is simply to use the EML vocabulary itself in the metadata.

Software to integrate into NeXML (long-term)

• Visualization: automatically generate the appropriate HTML+javascript page to render a tree with jsPhyloSVG/phylotouch (example)
• Support for phenoscape ontology tools, e.g. see data as generated by phenex
• PhyloWS: generate a REST web service based on the NeXML library(?)

Issues

• Trying to understand if using nexml for comparative trait data will generate any confusion. (#44)

19 Nov 2013

RNeXML

• Feedback from Rutger, need to add about attributes so that RDFa abstraction references the right level of the DOM (issue #35).
• Looking for strategy for distilling RDF from RDFa in R, see my question on SO. Hopefully don’t have to wrap some C library myself…

nonparametric-bayes

Writing writing.

• Update pandoc templates to use yaml metadata for author, affiliation, abstract, etc. Avoids having to manually edit the elsarticle.latex template with this metadata. Added fork for my templates, e.g. see my elsarticle.latex. Example metadata in manuscript.

• fixing xtable caption (as argument)

• Extended discussion. Adjustments to figures. See commit log /diffs for details.

Mace (2013), e.g.

a new kind of ecology is needed that is predicated on scaling up efforts, data sharing and collaboration

hear hear.

• PNAS with a somewhat confused take on error rates, suggesting a revised threshold p-value…

• AmNat Asilomar schedule (pdf) is up.

17 Nov 2013

(From issue #20)

a question of how the user queries that metadata. Currently we have a metadata function that simply extracts all the metadata at the specified level (nexml, otus, trees, tree, etc) and returns a named character string in which the name corresponds to the rel or property and the value corresponds to the content or href, e.g.:

birds <- read.nexml("birdOrders.xml")
meta <- get_metadata(birds) 

prints the named string with the top-level (default-level) metadata elements as so:

> meta
##                                             dc:date
##                                        "2013-11-17"
## "https://creativecommons.org/publicdomain/zero/1.0/"

Which we can subset by name, e.g. meta["dc:date"]. This is probably simplest for most R users; though exactly what the namespace prefix means may be unclear if they haven’t worked with namespaces before. (The user can always print a summary of the namespaces and prefixes in the nexml file using birds@namespaces).

This approach is simple, albeit a bit limited.

XPath queries

The R user has a much more natural and powerful way to handle these issues of prefixes and namespaces using either the XML or rrdf libraries. For instance, if we extract the meta nodes into RDF-XML, we can handle queries like so:

xpathSApply(meta, "//dc:title", xmlValue)

which uses the namespace prefix defined in the nexml; or

xpathSApply(meta, "//x:title", xmlValue, namespaces=c(x = "https://purl.org/dc/elements/1.1/"))

which defines the custom prefix x for the URI.

Sparql queries

Pretty exciting that we can make arbitrary SPARQL queries of the metadata as well.

library(rrdf)
sparql.rdf(ex, "SELECT ?title WHERE { ?x <https://purl.org/dc/elements/1.1/title> ?title }")

Obviously the XPath or SPARQL queries are more expressive / powerful than drawing out the metadata from the S4 structure directly. On the other hand, because both of these approaches use just the distilled metadata, the original connection between metadata elements and the structure of the XML tree is lost unless stated explicitly. An in-between solution is to use XPath on the nexml XML instead, though I think we cannot make use of the namespaces in that case, since they appear in attribute values rather than structure.

Anyway, it’s nice to have these options in R, particularly for more complex queries where we might want to make some use of the ontology as well. On the other hand, simple presentation of basic metadata is probably necessary for most users.

Would be nice to illustrate with a query that required some logical deduction from the ontology.

So, you're active on Research Gate?

14 Nov 2013

I have occasionally been getting this question:

So, you’re active on ResearchGate?

Sounds like being accused of some scandal, doesn’t it?

I’m not generally active on it - my impression is that the open science community is mostly skeptical about ResearchGate and any other “Social Network” for scientists, largely on the grounds that “we already use the same social networks everyone else uses.” Some object on more philosophical grounds (profit, Mendeley, etc), but heck I publish in Elsevier/Springer/Wiley so I won’t preach. That’s perhaps a US/elite institution centric view though; it seems more popular with a more international audience where things like basic access to pdfs may be more of an issue. I present no data to back any of that up.

Personally, it hasn’t added any value for me, in contrast to the value I get from interacting with other researchers on Github, G+ or Twitter. Still, as RG recently got $35 million from Bill Gates, they might actually build something useful. Certainly traditional publishers have left plenty of room for innovation in the space of sharing data, networking, etc. So I have a profile there to wait and see, next to a disclaimer that says “please see my website for updated information.” However, I was actually impressed by ResearchGate this morning. While I thought I had blocked most of their email notifications, one arrived this morning announcing that RG had found the full text of a recent paper of mine (albeit a few months after it had appeared). Instead of asking me to upload something, RG was able to obtain the full text from the publisher (Springer). In so doing, it also asked me if I would like to “follow” several of the researchers I cited who are also on ResearchGate. Why is that impressive? Mendeley, for all its much more natural fit into most researchers’ workflows, never automatically discovers papers I publish. If I want them in my Mendeley profile, I have to add them manually. Manually maintaining profiles across different networks is so entirely a waste of time, and so much the antithesis of a linked-data web in which I have already made this information machine readable, that I find it the most annoying feature by far of any of these sites. Here, ResearchGate is actually doing the intelligent thing, whether by connecting my RG identity to my ORCID ID, or something more heuristic. (Google Scholar automatically adds things to my profile, but with a far less selective algorithm that can be easily gamed, see 10.1002/asi.23056). By obtaining the full text directly from the publisher, they show the considerable advantage of a well-funded network.
Presumably this indicates that access was negotiated directly with the publisher, who agrees to and even facilitates me sharing the full text of my otherwise paywalled article on my RG profile. That’s a non-trivial contribution towards open access. Contrast this to Mendeley’s murkier policy, which encourages me to provide full-text access through my user profile but places the legal responsibility directly on me to confirm that this is permissible, or an organization like ORCID, which despite (because of?) its more non-profit and utilitarian values does not have permission to distribute my paywalled pdfs on my profile. (Sure, my papers are on arXiv already, but that isn’t the point). Likewise, using the citation data against the RG data on which researchers have profiles shows a vaguely intelligent use of data other platforms mostly ignore (providing useful suggestions using some understanding of the academic process rather than mindless application of some friend-of-a-friend network algorithm). (The fact that RG pings unfortunate souls who might have signed up once but have no desire to see my “Activity” on RG is one of its potentially effective marketing but more pernicious decisions. Use does not necessarily imply trust).

Notes

14 Nov 2013

ropensci

• ropensci strategic planning: wrote personal vision statement

RNeXML

• Comments on issues #12 and #15, thinking about character matrices
• Comments on #20, thinking about metadata parsing. (If only everyone knew xpath…)

Commit details on Github

Do we need a culture of Data Science in Academia?

13 Nov 2013

Just my draft copy of a guest blog post I wrote for Dynamic Ecology.

On Tuesday the White House Office of Science and Technology Policy announced the creation of a $37.8 million initiative to promote a “Data Science Culture” in academic institutions, funded by the Gordon and Betty Moore Foundation and the Alfred P. Sloan Foundation, and hosted in centers at UC Berkeley, the University of Washington, and New York University. Sadly, these announcements give little description of just what such a center would do, beyond repeating the usual hype of “Big Data.”

Fernando Perez, a research scientist at UC Berkeley closely involved with the process, paints a rather more provocative picture in his own perspective on what this initiative might mean by a “Data Science Culture.” Rather than motivating the need for such a Center merely by expressing terabytes in scientific notation, Perez focuses on something not mentioned in the press releases. In his view, the objective of such a center stems from the observation that:

the incentive mechanisms of academic research are at sharp odds with the rising need for highly collaborative interdisciplinary research, where computation and data are first-class citizens

His list of problems to be tackled by this Data Science Initiative includes some particularly catching references to issues that have raised themselves on Dynamic Ecology before:

• people grab methods like shirts from a rack, to see if they work with the pants they are wearing that day
• methodologists tend to only offer proof-of-concept, synthetic examples, staying largely shielded from real-world concerns

Well that’s a different tune than the usual big data hype [1]. While it is easy to find anecdotes that support each of these charges, it is more difficult to assess just how rare or pervasive they really are. Though these are not new complaints among ecologists, the solutions (or at least antidotes) proposed in a Data Science Culture are given a rather different emphasis. At first glance, the Data Science Culture sounds like the more familiar call for an interdisciplinary culture, emphasizing that the world would be a better place if only domain scientists learned more mathematics, statistics and computer science. It is not.

the problem, part 1: statistical machismo?

As to whether ecologists choose methods to match their pants, we have at least some data beyond anecdote. A survey earlier this year by Joppa et al. (2013, Science) has indeed shown that most ecologists select methods software guided primarily by concerns of fashion (in other words, whatever everybody else uses). The recent expansion of readily available statistical software has greatly increased the number of shirts on the rack. Titles in Ecology reflect the trend of rising complexity in ecological models, such as Living dangerously with big fancy models and Are exercises like this a good use of anybody’s time?. Because software enables researchers to make use of methods without the statistical knowledge to implement them from the ground up, many echo the position so memorably articulated by Jim Clark that we are “handing guns to children.” This belittling position usually leads to a call for improved education and training in mathematical and statistical underpinnings (see each of the 9 articles in another Ecology Forum on this topic), or the occasional wistful longing for a simpler time.

the solution, part 1: data publication?

What is most interesting to me in Perez’s perspective on the Data Science Institute is an emphasis on changing incentives more than changing educational practices. Perez characterizes the fundamental objective of the initiative as a cultural shift in which

“The creation of usable, robust computational tools, and the work of data acquisition and analysis must be treated as equal partners to methodological advances or domain-specific results”

While this does not tackle the problem of misuse or misinterpretation of statistical methodology head-on, I believe it is a rather thought-provoking approach to mitigating the consequences of mistakes or limiting assumptions. By atomizing the traditional publication into its component parts: data, text, and software implementation, it becomes easier to recognize each for its own contributions. A brilliantly executed experimental manipulation need not live or die on some minor flaw in a routine statistical analysis when the data is a product in its own right. Programmatic access to raw data and computational libraries of statistical tools could make it easy to repeat or alter the methods chosen by the original authors, allowing the consequences of these mistakes to be both understood and corrected. In the current system, in which access to the raw data is rare, statistical mistakes can be difficult to detect and even harder to remedy. This in turn places a high premium on the selection of appropriate statistical methods, while putting little selective pressure on the details of the data management or implementation of those methods. Allowing the data to stand by itself places a higher premium on careful collection and annotation of data (e.g. the adoption of metadata standards). To the extent that misapplication of statistical and modeling approaches could place a substantial error rate on the literature (Economist, Ioannidis 2005), independent data publication might be an intriguing antidote.

the problem, part 2: junk software

As Perez is careful to point out, those implementing and publishing methods aren’t helping either. Unreliable, inextensible and opaque computational implementations act both as barriers to adoption and validation. Trouble with scientific software has been well recognized by the literature (e.g. Merali (2010), Nature; Ince et al. (2012), Nature), the news (Times Higher Education) and funding agencies (National Science Foundation). While it is difficult to assess the frequency of software bugs that may really alter the results (though see Ince et al.), designs that will make software challenging or impossible to maintain, scale to larger tasks, or extend as methods evolve are more readily apparent. Cultural challenges around software run as deep as they do around data. When Mozilla’s Science Lab undertook a review of code associated with scientific publications, they took some criticism from other advocates of publishing code. I encountered this first hand in replies from authors, editors and reviewers on my own blog post suggesting we raise the bar on the review of methodological implementations. Despite disagreement about where that bar should be, I think we all felt the community could benefit from clearer guidance or consensus on how to review papers in which the software implementation plays an essential part and contribution.

the solution, part 2: software publication?

As in the case of data, educational practices are the route usually suggested to address better programming practices, and no doubt these are important. Once again though, it is interesting to think how a higher incentive on such research products might also improve their quality, or at least make it easier to distill the good from the bad from the ugly. Yet in this case, I think there is a potential downside as well.

Or not?

While widespread recognition of its importance will no doubt help bring us faster software, fewer bugs and more user-friendly interfaces, it may also do some harm. Promotion of software as a product can lead to empire-building, for which ESRI’s ArcGIS might be a poster child. The scientific concepts become increasingly opaque, while training in a conceptually rich academic field gives way to more mindless training in the user interface of a single giant software tool. I believe that good scientific software should be modular – small code bases that can be easily understood, inter-operable, and perform a single task well (the Unix model). This lets us build more robust computational infrastructure tailored to the problem at hand, just as individual Lego bricks may be assembled and reassembled. Unfortunately, I do not see how recognition for software products would promote small modules over vast software platforms, or interoperability with other software instead of an exclusive walled garden.

So, change incentives how?

If this provides some argument as to why one might want to change incentives around data and software publication, I have said nothing to suggest how. After all, as ecologists we’re trained to reflect on the impact a policy would have, not advocate for what should be done about it. If the decision-makers agree about the effects of the given incentives, then choosing what to reward should be easier.

1. Probably for reasons discussed recently on Dynamic Ecology about politicians and dirty laundry.

MBI Day Five Notes

08 Nov 2013

Panel discussion

• Hugh’s question on the usefulness of dynamic vs static models: do we have dynamical systems envy?
• Chris: are temporal dynamics historical artefact, and space the new frontier?
• Hugh: though decision theory is fundamentally temporal. really question of sequential decision vs single decision
• Hugh, on what would be his priority if he had time for new question: Solve the 2 player, 2 step SDP competition closed form.
• Paul: the narrow definitions of “math biology” with 1980s flavor.
• @mathbiopaul: Formulating the hard problems arising in application in an appropriate abstraction that mathematicians will attack.
• Leah raises issue of publishing software and reproducibility
• Julia mentions Environmental modeling and software journal

pdg-control

Trying to understand pattern of increasing ENPV with increasing stochasticity. Despite having the same optimal policy inferred under increasing stochasticity (i.e. still in Reed’s self-sustaining criterion, below $$\sigma_g$$ of 0.2 or so) the average over simulated replicates is higher. We don’t seem to obtain the theoretical ENPV, but something less, in either case. See code noise_effects.md.

ropensci

Schema.org defines a vocabulary for datasets (microdata/rdfa)

Rutger gives a one-liner solution for tolweb to nexml using bio-phylo perl library:

perl -MBio::Phylo::IO=parse -e 'print parse->to_xml' format tolweb as_project 1 url 'https://tolweb.org/onlinecontributors/app?service=external&page=xml/TreeStructureService&node_id=52643'

Hmm, there’s a journal of Ecological Informatics.


07 Nov 2013

Morning session

• Chadès, I., Carwardine, J., Martin, T.G., Nicol, S., Sabbadin, R. & Buffet, O. (2012) MOMDPs: a solution for modelling adaptive management problems. The Twenty-Sixth AAAI Conference on Artificial Intelligence (AAAI-12), pp. 267-273. Toronto, Canada.

• 10.1098/rspb.2013.0325 Migratory connectivity magnifies the consequences of habitat loss from sea-level rise for shorebird populations

Jake LaRiviera

presents the challenges of the uncertainty table. Additional challenges in making an apples-to-apples comparison of the benefit of decreasing noise of different systems (e.g. in pricing information?)

Me

Some good questions following talk, primarily on BNP part.

• Where does the risk-averse vs risk-prone behavior come from? Adjusting curvature of the uncertainty appropriately.
• Any lessons after stock collapsed, e.g. rebuild a stock rather than maintain it? (Perhaps, but may face hysteresis in a way the initial collapse does not).
• Brute-force passive learning?

Afternoon discussion

1. Is an active learning approach more or less valuable in a changing environment?
2. Embracing surprise: how do we actually do this mathematically?
3. Limitations due to constraints on frequency of updating. (e.g. we don’t get to change harvest, we get to set a TAC once every ten years).
4. Uncertainty affecting net present value vs affecting model behavior