Lab Notebook


Entries

Reflecting on five years of the open lab notebook

31 Dec 2015

I have been keeping an open lab notebook now since February of 2010 [1]. Those years have seen 271, 301, 164, 136, 86, and 44 posts, respectively. The notebook started on OpenWetWare, with code on (the now defunct) Google Code (using svn!). By October of that year I had moved to a Wordpress.org platform running on DreamHost, with code moving to GitHub earlier that year. Wordpress meant my own domain name, control of layout and plugins, better mobile support, and, perhaps most importantly, avoided the occasional day-long down-times I had seen on OpenWetWare. By the end of 2010 I had also cobbled together a system in which my scripts committed themselves to GitHub and automatically posted the figures they generated to flickr, along with the Git hash. knitr was first released to CRAN in January of 2012, and by March I announced a new phase for my lab notebook focused on notes written in knitr markdown and living on GitHub. For the first time, part of the daily research narrative lived not directly in the lab notebook itself (thus the decline in total notebook posts starting that year). Most of my posts in the first two years are much more journal-like: reflecting on what I had been working on or reading earlier that day, describing some equation or graph, or muddling over a sticking point. Some of this remained in the notebook, particularly for work unconnected to code (thoughts on some literature, mathematical derivations, etc.), but more migrated to knitr documents outside of the notebook. While knitr brought the code, results, and discussion elements closer together, it also often meant I was writing with less distance from the results, and so the text shifted towards more technical and less reflective content. Notebook entries became briefer and more bullet-pointed, though also punctuated by an increasing frequency of entries that really were blog posts, aimed at an audience and sometimes drawing lengthy comment sections, primarily on topics related to open science and research workflow.

In May of 2012, I shifted platforms again, this time moving to Jekyll, primarily for reasons of performance and cost (as detailed in that post). I have always experimented heavily with different features and technology connected to the lab notebook, and Jekyll also opened new possibilities. Meanwhile, it fit nicely into the git/markdown-centric workflow I had already adopted. This made it easy to turn knitr outputs into notebook entries, though I waffled back and forth on whether it was more natural to post such material in the relevant project repository on GitHub or directly into the notebook. The notebook remained the home for work not (yet) connected to a project, for blog posts, and occasionally for cross-posted material on a particular project, often capturing a key result. Meanwhile more of my other work became open – in particular, I began writing my manuscripts in public GitHub project repositories from the start, and also making greater use of GitHub issues in projects. The end of 2012 also saw me leave graduate school and begin a post-doc, luckily with advisers who were equally neutral to my open notebook habit.

The start of 2015 saw the next significant evolution in platform. Though still based on Jekyll, I first broke up what had been growing into one rather large labnotebook repo on GitHub into separate repos by year. This left me with a more lightweight repository and freed me to experiment with infrastructure without breaking all my old posts. I began writing all posts directly in knitr’s .Rmd format, allowing the code to be run automatically when the site was published, rather than executing it locally and copying a .md file into the notebook. That this was possible is thanks to my adoption of Docker (making the myriad dependencies portable), continuous integration platforms that could run the code (using the Docker container) and automatically push the compiled site, and servr, another Yihui Xie R package, which facilitates knitr+Jekyll integration. Some entries involved particularly heavy computation, including a few that required several hours on a large Amazon EC2 machine to complete. knitr’s caching feature made it easy to snapshot these caches (which I did in a somewhat convoluted way with another Docker container) so that the whole site could still be rebuilt on a public CI service (primarily Circle-CI) within the permitted 50-minute window. The Docker+knitr+CI combination brought reproducibility to a new level, though still not a perfectly automated one, since changing dependencies can still have some effect, particularly because I allow the container to be rebuilt from the Dockerfile rather than committing it as a binary image. I’ve been pleased that this system has worked remarkably well the entire year, though I’m frequently nervous that some piece of its increasingly complex architecture will break. Moreover, the complicated re-running of all the R code was increasingly redundant, as most of my work continued to happen in GitHub repositories that had largely adopted their own systems of continuous integration and unit testing (for basic functions and final results, if not for individual notebook posts).

More and more, GitHub, knitr, and other such tools have allowed me to actually perform my research openly, not just discuss it in an open notebook. Perhaps ironically, this means less and less content in the ‘lab notebook’ itself (as I wrote in 2012, ‘the real notebook is on GitHub’). Interestingly, this has the side effect of making the ‘lab notebook’ much less visible. Living on my website, my open notebook has been hard to miss; a visitor could easily page through the entries to see what I had done recently, without any of the familiarity required to navigate to the same material on GitHub. The idea that this was a scientist’s notebook, a concept both immediately familiar and yet so rarely public, perhaps gave it an out-sized significance compared to having the same material scattered around GitHub (some under my account and other work in various GitHub Organization accounts), where openness was already the norm. It attracted the attention of reporters and, I’m told, my faculty search committee. So while I think I have benefited from this visibility, for the science itself it is perhaps much the same.

Halfway through 2015 also saw a different kind of change to my notebook as I transitioned from a post-doc to a faculty position. Just as my homepage (now maintained in its own GitHub repo, though still appearing in the same theme as my notebook) shifted from being “me” to being “my research group,” part of my time also shifted from “my research” to mentoring the research of others. While I continue to believe in transparency in scientific process and results, I have also appreciated that, as a graduate student, open science was something I was allowed to choose for myself. I want those I mentor to make their own choices as well. Thus we have begun projects with students primarily in private GitHub repositories (now hosted on github.com/boettiger-lab).

So what does this mean for 2016 and the future of my open lab notebook? I suspect it will continue in its role as an occasional blog and communication tool, where its greater visibility has a practical use, while becoming simpler technically, with the day-to-day research (& teaching, etc.) activities confined to their own repositories.


  [1] Though some material going back to 2009 and 2008 was posted later.




Docker Workflows

17 Dec 2015

I was recently asked to describe what my typical workflow looks like for running R on a cloud machine, such as DigitalOcean.
So, here’s a typical use:


## Create digitalocean machine
docker-machine create --driver digitalocean --digitalocean-size 1gb --digitalocean-access-token $DO_TOKEN dev

## Point docker-engine at the new machine
eval $(docker-machine env dev)

## Launch the Rocker container running RStudio
docker run -d -p 8787:8787 -e PASSWORD=$PASSWORD rocker/hadleyverse

## Open browser at the IP address (e.g. mac-based terminal command)
open https://$(docker-machine ip dev):8787

From there, log in with the username “rstudio” and the password you chose (it defaults to rstudio if nothing was given) and you’re good to go. RStudio has a nice git GUI, which is a good way to move content on and off the instance, but I would also teach docker commit and docker push at the end as a way to capture the whole runtime (e.g. any packages installed, cached binaries from knitr, etc.):


docker commit <hash> username/myimage
docker push username/myimage 

(Use the private image slot if you don’t want your image & results public.)

It would probably be good to cover the non-interactive case too, which is helpful when you want to do a really long run of something that needs a bigger computer than your laptop. IMHO this is where docker-machine excels because it’s easy to have the machine shut down when all is finished! e.g.

## As before
docker-machine create --driver digitalocean --digitalocean-size 1gb --digitalocean-access-token $DO_TOKEN dev
eval $(docker-machine env dev)

## Let's use the committed image since it can already have my script included
docker run --name batch username/myimage Rscript myscript.R

## commit using container name & push back up
docker commit batch username/myimage
docker push username/myimage 

## Remove the machine now that the script is done
docker-machine rm -f dev

Clearly that can be modified to pull scripts down with git and push results back up to GitHub rather than using docker commit to snapshot and push the whole image every time. Just add env vars for authentication.

Provided runs are < 1 hr, this workflow can rather easily be applied to a continuous integration system such as Travis or Circle-CI (which both support Docker), instead of using docker-machine to create a private cloud instance. This provides a very simple way to bring continuous integration testing and content (re)generation to individual research results.




Download and Parse Github Issues

20 Nov 2015

# devtools::install_github("cscheid/rgithub")
library("github")
library("dplyr")
library("stringr")
library("lubridate")
library("readr")
ctx = interactive.login(Sys.getenv("GH_CLIENT_ID"),Sys.getenv("GH_CLIENT_SECRET"))

Thanks to @realAkhmed for this one, see: https://github.com/cscheid/rgithub/issues/30#issuecomment-150354560

auto.page <- function(f) {
  f_call <- substitute(f)
  stopifnot(is.call(f_call))

  i <- 1
  req <- list()
  result_lst <- list()

  repeat {
    # Specify the page to download
    f_call$page <- i

    req <- eval(f_call, parent.frame())

    # Last page has empty content
    if (length(req$content)<=0) break

    result_lst[[i]] <- req$content
    i <- i+1
  }

  result_req <- req
  result_req$content <- unlist(result_lst, recursive = FALSE)

  (result_req)
}
issues <- auto.page(github::get.all.repository.issues.comments("ropensci", "RNeXML", ctx=ctx))
length(issues$content)

Here we get the content of interest.

to_df <- function(entry){
        dplyr::data_frame(
             issue = stringr::str_replace(entry$issue_url, ".*/(.*$)", "\\1"), 
             comment_id = entry$id, 
             user = entry$user$login, 
             created_at = lubridate::parse_date_time(entry$created_at,"%Y-%m-%d %H:%M:%S"),
             updated_at = lubridate::parse_date_time(entry$updated_at,"%Y-%m-%d %H:%M:%S"),
             body = entry$body)
}

Minor problem: This doesn’t actually include the issue’s title and opening comment (or tags, status, or other metadata that all come from the issues endpoint):

issue_meta <- auto.page(github::get.repository.issues("ropensci", "RNeXML", state="all", filter="all", ctx=ctx))
meta_to_df <- function(entry){
        dplyr::data_frame(
             issue = stringr::str_replace(entry$html_url, ".*/(.*$)", "\\1"), 
             comment_id = entry$id, 
             user = entry$user$login, 
             state = entry$state,
             comments = entry$comments,
             created_at = lubridate::parse_date_time(entry$created_at,"%Y-%m-%d %H:%M:%S"),
             updated_at = lubridate::parse_date_time(entry$updated_at,"%Y-%m-%d %H:%M:%S"),
             title = entry$title,
             body = entry$body)
}
df <- dplyr::bind_rows(lapply(issues$content, to_df))
meta_df <- dplyr::bind_rows(lapply(issue_meta$content, meta_to_df))
issue_tbl <- dplyr::full_join(df, meta_df)
readr::write_csv(issue_tbl, "../_data/rnexml.csv")
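For a quick sanity check, the table can be read back in and comments tallied per user; a minimal example (readr::read_csv and dplyr::count are the only pieces assumed here, and the object name is arbitrary):

rnexml_comments <- readr::read_csv("../_data/rnexml.csv")
dplyr::count(rnexml_comments, user, sort = TRUE)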

Now let’s do EML

issues <- auto.page(github::get.all.repository.issues.comments("ropensci", "EML", ctx=ctx))
issue_meta <- auto.page(github::get.repository.issues("ropensci", "EML", state="all", filter="all", ctx=ctx))
df <- dplyr::bind_rows(lapply(issues$content, to_df))
meta_df <- dplyr::bind_rows(lapply(issue_meta$content, meta_to_df))
issue_tbl <- dplyr::full_join(df, meta_df)

readr::write_csv(issue_tbl, "../_data/eml.csv")




Nimble Model Construction

27 Aug 2015

library("nimble")
library("lazyeval")

Can we declare the nimbleCode parts in sections, such as defining the model and priors in separate, reusable code blocks? Here’s a “model” block:

model <- nimbleCode({
  for (i in 1:N){
    theta[i] ~ dgamma(alpha,beta)
    lambda[i] <- theta[i]*t[i]
    x[i] ~ dpois(lambda[i])
  }
})

We may also wish to define the priors separately, with a specified list of hyperparameters for the priors. Perhaps the need to use quote and to specify each line as a list item, instead of as a block defined with {, is not ideal, but non-standard evaluation is still a bit of a wild west.

hyperparameters <- list(lambda = 1.0, a = 0.1, b = 1.0)

priors <- nimbleCode({
  alpha ~ dexp(lambda)
  beta ~ dgamma(a,b)
})

Having done so, we can construct a nimbleCode block by passing our chosen hyperparameters to the priors and concatenating the model and priors.

P <- as.expression(sapply(priors, lazyeval::interp, .values = hyperparameters)[-1])
pumpCode <- c(model, P)

The result appears to work as a functional nimbleCode block, though note that the resulting expression is not quite identical to the one we would typically write (which would have no commas, with the priors inside the block):

pumpCode
expression({
    for (i in 1:N) {
        theta[i] ~ dgamma(alpha, beta)
        lambda[i] <- theta[i] * t[i]
        x[i] ~ dpois(lambda[i])
    }
}, alpha ~ dexp(1), beta ~ dgamma(0.1, 1))

Here we just specify the rest of the model:

pumpConsts <- list(N = 10, 
                   t = c(94.3, 15.7, 62.9, 126, 5.24,
                         31.4, 1.05, 1.05, 2.1, 10.5))
pumpData <- list(x = c(5, 1, 5, 14, 3, 19, 1, 1, 4, 22))
pumpInits <- list(alpha = 1, 
                  beta = 1,
                  theta = rep(0.1, pumpConsts$N))

pump <- nimbleModel(code = pumpCode, 
                    name = 'pump', 
                    constants = pumpConsts,
                    data = pumpData, 
                    inits = pumpInits)

and then verify that it runs.

Cmodel <- compileNimble(pump)
mcmcspec <- configureMCMC(pump, print=FALSE)
mcmc <- buildMCMC(mcmcspec)
Cmcmc <- compileNimble(mcmc, project = Cmodel)
Cmcmc$run(1000)
NULL
samples <- as.data.frame(as.matrix(Cmcmc$mvSamples))
plot(samples[ , 'alpha'], samples[ , 'beta'], xlab = expression(alpha), ylab = expression(beta))

(The plot shows the same correlation issue, illustrated in the manual, that arises without using a block sampler.)




a drat repository for rOpenSci (draft for ropensci blog)

03 Aug 2015

We’re happy to announce the launch of a CRAN-style repository for rOpenSci at https://packages.ropensci.org

This repository contains the latest nightly builds from the master branch of all rOpenSci packages currently on GitHub. This allows users to install development versions of our software without specialized functions such as install_github(), allows dependencies not hosted on CRAN to still be resolved automatically, and permits the use of update.packages().

Using the repository

To use, simply add packages.ropensci.org to your existing list of R repos, such as:

options(repos = c("https://packages.ropensci.org", getOption("repos")))

(If you don’t have any default CRAN mirrors selected yet in getOption("repos"), you may want to add one now.) You can also include this repository in specific install.packages() requests:

install.packages("taxize", repos = c("https://packages.ropensci.org", "https://cran.rstudio.com"))

Design

Our goal in creating a CRAN-style package repository (yes, it’s confusing that we use the word “repository” to describe both an individual package source in a GitHub repo as well as a collection of package binaries on a CRAN-like repo… sorry) was to provide users with a way to install the latest development versions of rOpenSci packages that offered an easier and more seamless alternative to the widely used devtools::install_github(). This is particularly useful for updating all packages at once, or for installing development versions that depend on versions of other packages not yet released to CRAN. As an added benefit, we also wanted a system that would allow us to compute anonymized download statistics, analogous to what RStudio provides for its CRAN mirror. Getting this all to work required the introduction of a few additional technologies.

drat and drat.builder

While the basic structure of a CRAN-like R repository is simple, and is used both by platforms such as rForge and by individual developers, kudos really goes to Dirk Eddelbuettel’s drat package for making this automated, simple, and fun. While drat makes it easy to toss individual packages into a CRAN-like repo (which we often refer to as a drat repo), we needed an easy & automatic way to add a whole list of packages, given their GitHub repos. Rich FitzJohn’s new drat.builder package does precisely this, handling the downloading of packages with some clever record-keeping to avoid rebuilding and re-adding packages that have not changed since the last time the drat repo was built. The sources for building the rOpenSci packages repository can be found in our “drat” GitHub repo: https://github.com/ropensci/drat

Dynamic package lists: ropkgs

With rOpenSci, we wanted to take this one step further. No one wants to have to maintain one more list that must be updated every time a package is successfully on-boarded to the project. Scott Chamberlain’s work with ropkgs provides a convenient way to query the rOpenSci software suite, automatically generating a list of available rOpenSci packages, and filtering them on relevant metadata, such as those that are in good status and installable condition, like so:

library("ropkgs")
out <- ro_pkgs()
good <- out$packages$status == "good"
installable <- out$packages$installable
pkgs <- out$packages$name[installable & good]

The magic of continuous integration: CircleCI

With ropkgs, drat, and drat.builder, we have all the software we need to automate the building of the CRAN-like package repository. Now we just need a computing resource that can do the hard work of pulling down all the GitHub packages, building the repository, and securely sending the binaries off somewhere they can be downloaded. Continuous integration systems turn out to be perfect for this. My favorite CI platform at the moment is CircleCI, for several reasons particularly relevant here:

  • it has a rich API which includes support for POST requests which can trigger a build without making commits to GitHub
  • it supports custom Docker containers, allowing us to just download a container with most or all the dependencies we need to build packages, etc., without having to wait for them to install manually from source first.
  • it has a convenient web interface for providing the secure credentials we’ll need to publish the binary repository to GitHub or Amazon.

Circle has other advantages too, like great live help and the ability to ssh into your CI run to troubleshoot when all else fails, but otherwise it works like most other CI platforms. More on that another day. You can see the daily builds on CircleCI; they are triggered by a simple POST request running as a cron job. The circle.yml configuration file appears in the project’s drat repo – check out how simple it is!
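To give a concrete sense of that trigger, here is a minimal sketch rather than the actual cron job, assuming the CircleCI v1.1 build-trigger endpoint, that the builds run off the master branch, and that a personal API token is stored in a CIRCLE_TOKEN environment variable:

library("httr")

## Hypothetical nightly trigger to rebuild the drat repository on CircleCI
resp <- POST(
  "https://circleci.com/api/v1.1/project/github/ropensci/drat/tree/master",
  query = list(`circle-token` = Sys.getenv("CIRCLE_TOKEN"))
)
stop_for_status(resp)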

Publishing to Amazon

The last step would be getting download logs, which is somewhat more complicated than it sounds. drat conveniently already handles pushing packages to GitHub’s gh-pages, a free and easy way to provide static hosting. However, that isn’t ideal, particularly for large and frequently updated package collections, and it is impossible to get download logs from this approach. To avoid these issues, we settled on pushing our package repository to a static site hosted through Amazon’s S3 data storage “buckets.” It’s not free, but for the few gigabytes of space we’ll need it’s still very cost effective. In particular, S3 buckets can generate their own log files, which provides a way to count package downloads.

Secure communication with the Amazon S3 system is accomplished using the nascent and actively developing aws.s3 R package from the awesome cloudyr project.
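For a flavor of what that looks like, here is a minimal sketch (not the actual build script) of pushing a single built package into the bucket with aws.s3, assuming AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set in the CI environment; the file name and bucket key below are purely illustrative:

library("aws.s3")

## Upload one package tarball into the CRAN-style tree of the bucket
put_object(file   = "drat/src/contrib/taxize_0.7.0.tar.gz",  # local path (illustrative)
           object = "src/contrib/taxize_0.7.0.tar.gz",       # key within the bucket
           bucket = "packages.ropensci.org")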

Parsing the download logs

Amazon’s S3 logs are rather raw and require some good ol’ data-tidying work to transform them into the conveniently parsed, tidied, and IP-address-anonymized format used by RStudio’s download logs. Eventually this too can be accomplished by the CircleCI builds, but at the moment it is too computationally intensive for them. A script for this workflow can be found in the project repo, parse_s3_logs.R. As the data accumulate we should be able to start publishing the tidy logs.
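To give a rough idea of the kind of tidying involved, here is a simplified sketch (not the parse_s3_logs.R workflow itself), assuming the standard space-delimited S3 server-access-log layout and a local logs/ directory of downloaded log files:

library("dplyr")
library("stringr")

## One character vector of raw log lines
lines <- unlist(lapply(list.files("logs", full.names = TRUE), readLines))

## Keep GET requests for package source or binary files
hits <- lines[str_detect(lines, "REST.GET.OBJECT") &
              str_detect(lines, "\\.(tar\\.gz|tgz|zip)")]

## Extract the package name from keys like src/contrib/taxize_0.7.0.tar.gz
pkg <- str_match(hits, "([A-Za-z0-9.]+)_[0-9][0-9.-]*\\.(tar\\.gz|tgz|zip)")[, 2]

downloads <- count(data_frame(package = pkg), package, sort = TRUE)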

This project is still in its early days, and, as ever, we welcome feedback, problems, or ideas on the issues tracker.

Now go ahead and install or update some packages from the shiny new https://packages.ropensci.org!




Gpdp Via Mdptoolbox Cont

30 Jul 2015

knitr::opts_chunk$set(comment=NA)
#devtools::install_github("cboettig/gpmanagement@b3b765cbceb51c9b0b8cb2724e395353ec365df9")
library("MDPtoolbox")
library("gpmanagement")
library("tidyr")
library("dplyr")
library("ggplot2")
library("lazyeval")

True model

p <- c(2, 100, 50)
f <- function (x, h){
  sapply(x, function(x) {
    x <- pmax(0, x - h)
    x * exp(p[1] * (1 - x/p[2]) * (x - p[3])/p[2])
  })
}

sigma_g <- 0.1
pdfn <- function(x, mu, sigma = sigma_g){
  dlnorm(x, log(mu), sdlog = sigma)
}

z_g <- function() rlnorm(1, 0, sigma_g)
set.seed(0)
Tobs <- 40

W <- Tobs

x <- numeric(Tobs)
x[1] <- 60
for(t in 1:(Tobs-1))
  x[t+1] = z_g() * f(x[t], h=0)
obs <- data.frame(x = c(rep(0, W), 
                        pmax(rep(0,Tobs-1), x[1:(Tobs-1)])), 
                  y = c(rep(0, W), 
                        x[2:Tobs]))
xObs <- obs$x
yObs <- obs$y
xPred <- seq(0, max(xObs), length = 50)
qplot(seq_along(x), x) + geom_line()
xObs <- rep(0,5)
yObs <- rep(0,5)

Now the GP estimation from NIMBLE. Updated to compute the prior for the length scale using Daniel’s scaling argument.

priors_template <- nimbleCode({
  rho ~ dunif(0, X)
  sigGP ~ dunif(0, 1e5)
  sigOE ~ dunif(0, 1e5)
})

nimbleCodeSub <- function(code, .values){
  as.expression(sapply(code, lazyeval::interp, .values = .values)[-1])
}
  
priors <- nimbleCodeSub(priors_template, 
                        .values = list(X = diff(range(xObs))/(2*sqrt(6))) )

(Note that at this time all nimble run calls must be made in one block since we cannot cache nimble pointers).

fit <- gp_setup(xObs, yObs, xPred, priors = priors)
NaNs were detected in model variable: logProb_rho.
## Fit the GP
system.time(fit$Cmcmc$run(100000))
   user  system elapsed 
  5.476   0.024   5.498 
samples <- as.matrix(fit$Cmcmc$mvSamples)

## Perform prediction
system.time(fit$Cpred$run(samples))
   user  system elapsed 
  5.792   0.000   5.790 
## Extract variables for future use
E <- fit$Cpred$getE()
C <- fit$Cpred$getC()
samples <- as.data.frame(samples)

Posteriors

df <- tidyr::gather(samples)
ggplot(df) + 
  geom_density(aes(value)) + 
  facet_wrap(~key, scale='free')
obs <- data.frame(x = xObs, y = yObs)
pred <- data.frame(x = xPred, y = E, ymin = E - sqrt(diag(C)), ymax = E + sqrt(diag(C)))
ggplot2::ggplot(pred) + 
  geom_ribbon(aes(x = x,y = y, ymin = ymin, ymax = ymax), fill = "grey80") +
  geom_line(aes(x = x, y = y), size=1) + 
  geom_point(data = obs, aes(x,y)) +
  geom_line(data = data.frame(x = xPred, true = f(xPred,0)), aes(x,true), col="red") + 
  coord_cartesian(xlim = range(c(xObs, xPred)), ylim = range(c(yObs,E))) +
  theme_bw() 

Decision theory

states <- xPred # Vector of all possible states
actions <- states # Vector of actions: harvest

Let’s consider a slight variation of the most trivial utility function: one which explicitly adds a cost to completely exhausting the stock (or reducing the stock by more than, say, 95% in this case). This should be somewhat similar to the impact of having no discount rate.

# Utility function
discount = 0.99

#get_utility <- function(x,h) pmin(x,h)
#R <- outer(states, actions, get_utility)

R <- sapply(actions, function(h){
      sapply(states, function(x){
  if(h < 0.95 * x)
    h
  else 
     - 1 * max(states)
  })
})

Implementing policy

z <- function() rlnorm(1, meanlog = 0, sdlog = sigma_g)

simulate_policy <- function(states, actions, policy, f, z, s0, steps = 50, utility = function(s,a) NA, discount = 1){
  s <- numeric(steps)
  a <- numeric(steps)
  u <- numeric(steps)
  s[1] <- s0
  for(t in 1:(steps-1)){
    
    a[t] <- actions[policy[which.min(abs(states - s[t]))]]
    s[t+1] <- z() * f(s[t], a[t])
    u[t] <- utility(s[t], a[t]) * discount ^ t
  }
  
  # Final action determined but not implemented
  a[steps] <- actions[policy[which.min(abs(states - s[steps]))]]

  data.frame(time = 1:steps, state = s, action = a, utility = u)
}

GP model

gp_matrix <- function(states, actions, E, C){
  
  transition <- array(0, dim = c(length(states), length(states), length(actions)))
  K <- length(states)
  sigmas <- sqrt(diag(C))
  
  for (k in 1:length(states)) {
    for (i in 1:length(actions)) {
      nextpop <- E[k] - actions[i]
      if(nextpop <= 0) {
        transition[k, , i] <- c(1, rep(0, K - 1))
      } else {
        transition[k, , i] <- dnorm(states, nextpop, sigmas[i]) / sum(dnorm(states, nextpop, sigmas[i]))
      }
    }
  }
  transition
}

P_gp <- gp_matrix(states, actions, E, C)
mdp_check(P = P_gp, R = R)
[1] ""
gp <- mdp_value_iteration(P_gp, R, discount = discount, epsilon = 0.00001, max_iter = 5e3, V0 = numeric(length(states)))
[1] "MDP Toolbox: iterations stopped, epsilon-optimal policy found"
ggplot(data.frame(stock = states, escapement = states - actions[gp$policy])) +
  geom_point(aes(stock, escapement)) + xlab("Population size") + ylab("Escapement")
data.frame(reps = 1:50) %>% 
  group_by(reps) %>% 
  do(simulate_policy(states,actions, gp$policy, f, z, s0 = 100, steps = 20, utility = pmin, discount = discount)[c("time", "state", "utility")]) ->
  sims 

mean(sims$utility)
[1] 4.751511
ggplot(sims) + geom_line(aes(time, state, group = reps), alpha = 0.3, col = "darkblue")

Simulate under the true model

data.frame(reps = 1:50) %>% 
  group_by(reps) %>% 
  do(simulate_policy(states, actions, gp$policy, f, z, s0 = 100, steps = 20, utility = pmin, discount = discount)[c("time", "state", "utility")]) ->
  sims 

mean(sims$utility)
[1] 4.751511

(The average utility is approximate here since it does not include the penalty: simulate_policy() requires a utility function rather than a matrix at this time.)

ggplot(sims) + geom_line(aes(time, state, group = reps), alpha = 0.3, col = "darkblue")

Comparison to true model

# transition_matrix() is defined in the "Mdptoolbox Allen Model" entry (29 Jul 2015) below
P <- transition_matrix(states, actions, f, pdfn)
#get_utility <- function(x,h) pmin(x,h)
#R <- outer(states, actions, get_utility)
mdp_check(P = P, R = R)
[1] ""
mdp <- mdp_value_iteration(P, R, discount = discount, epsilon = 0.001, max_iter = 5e3, V0 = numeric(length(states)))
[1] "MDP Toolbox: iterations stopped, epsilon-optimal policy found"
ggplot(data.frame(stock = states, 
                  escapement = states - actions[gp$policy], 
                  optimal_escapement = states - actions[mdp$policy])) +
  geom_point(aes(stock, escapement)) + 
  geom_point(aes(stock, optimal_escapement), col="red", alpha=.4) + 
  xlab("Population size") + ylab("Escapement")

Note that the altered reward structure has almost no effect on the optimal policy given the true model, other than to avoid harvesting directly to zero even when the stock cannot persist, due to the explicit penalty for doing so.

data.frame(reps = 1:50) %>% 
  group_by(reps) %>% 
  do(simulate_policy(states,actions, mdp$policy, f, z, s0 = 100, steps = 20, utility = pmin, discount = discount)[c("time", "state", "utility")]) ->
  sims 

mean(sims$utility)
[1] 10.19602
ggplot(sims) + geom_line(aes(time, state, group = reps), alpha = 0.3, col = "darkblue")




Mdptoolbox Allen Model

29 Jul 2015

library("MDPtoolbox", quietly = TRUE)
library("ggplot2", quietly = TRUE)
K <- 150 # state space limit
states <- 0:K # Vector of all possible states
actions <- states # Vector of actions: harvest


sigma_g = 0.1
p <- c(2, 100, 50)

f <- function (x, h){
  sapply(x, function(x) {
    x <- pmax(0, x - h)
    x * exp(p[1] * (1 - x/p[2]) * (x - p[3])/p[2])
  })
}

pdfn <- function(x, mu, sigma = sigma_g){
  dlnorm(x, log(mu), sdlog = sigma)
}

# Utility function
discount = 0.95
get_utility <- function(x,h) {
    pmin(x,h)
}
R <- outer(states, actions, get_utility)
transition_matrix <- function(states, actions, f, pdfn){
  # Initialize
  transition <- array(0, dim = c(length(states), length(states), length(actions)))
  
  K <- length(states)
  
  for (k in 1:length(states)) {
    for (i in 1:length(actions)) {
  
  # Calculate the transition state at the next step, given the 
  # current state k and action i (harvest H[i])
        nextpop <- f(states[k], actions[i])
        
        ## Population always goes extinct if this is negative, since the multiplicative shock z_t * f(n) < 0 for all f(n) < 0
        if(nextpop <= 0)
          transition[k, , i] <- c(1, rep(0, length(states) - 1))
    # Implement demographic stochasticity 
        else {
  
          # Continuous distributions need a long-tailed denominator as a normalizing factor:
          fine_states <- seq(min(states), 10 * max(states), by = states[2] - states[1])
          N <- sum(pdfn(fine_states, nextpop))
          transition[k, , i] <- pdfn(states, nextpop) / N

          # Correct this density for the final capping state ("pile on boundary"); this
          # correction can be a tiny but negative value due to floating-point errors,
          # so we take max(v, 0) to avoid negative probabilities.
          transition[k, K, i] <- max(1 - sum(transition[k, -K, i]), 0)
        }
    } 
  }
  transition
}

P <- transition_matrix(states, actions, f, pdfn)

Using toolbox

mdp_check(P = P, R = R)
[1] ""
mdp <- mdp_value_iteration(P, R, discount = discount, epsilon = 0.001, max_iter = 5e3, V0 = numeric(length(states)))
[1] "MDP Toolbox: iterations stopped, epsilon-optimal policy found"
plot(states, states - actions[mdp$policy],  xlab="Population size", ylab="Escapement")

Compare to Reed

From Reed (1979) we know that the optimal solution is a constant-escapement rule when the growth function is concave. Note that this condition is violated by the growth function with alternative stable states (Allen/Ricker-Allee model), resulting in a very different optimal policy:

\[f'(s^*) = 1/\alpha\]

For growth-rate function \(f\), where \(\alpha\) is the discount factor and \(s^*\) the stock size for the constant escapement. Analytic solutions are clearly possible for certain growth functions, but here I’ve just implemented a generic numerical solution.

fun <- function(x) - f(x,0) + x / discount
out <- optimize(f = fun, interval = c(0,K))
S_star <- out$minimum

exact_policy <- sapply(states, 
                       function(x) 
                        if(x < S_star) 0
                        else x - S_star)
plot(states, states - actions[mdp$policy],  xlab="Population size", ylab="Escapement")

# Compare against the constant-escapement rule implied by naively applying Reed's condition:
lines(states, states - exact_policy)




Fastgp Explore

29 Jul 2015

library(FastGP)
library(mvtnorm)
library(MASS)
library(rbenchmark)

Demo of elliptical slice sampling

Relevant Parameters:

A <- 1 #amplitude of the sin function used for our signal
T <- 5 #period of the sin function used as our signal
sigma <- 10 #variance parameter in the covariance function
phi <- 100 #decay parameter for the exponential kernel
N <- 100 #number of time points 

A function that creates a signal, which can be toggled for multiple shapes. In the present iteration we use a sin function with variable period.

signal <- function(t) {
  return(A*sin(t/T))
}

Define our covariance matrix using the exponential kernel

S <- as.matrix(sigma*exp(-as.matrix(dist(seq(1,N)))^2/phi))

Generate a copy of the signal on the time scale of 1…N

t <- mvrnorm(n = 1, mu = rep(0,100), Sigma = S , tol = 1e-6)
s <- signal(seq(1,N)+t)+rnorm(N,sd=sqrt(.001))

log-lik functions

log_lik <- function(w,s){
  return (dmvnorm(s, mean = signal(seq(1:N)+w),sigma=.001*diag(1,N),log=T))
}

#rcpp based log-lik function
log_lik_rcpp <- function(w,s){
  return (rcpp_log_dmvnorm(S = .001*diag(1,N),mu=signal(seq(1:N)+w),s,istoep=TRUE))
}
X <- matrix(seq(1,N),nrow=N)
Sig <- sigma*exp(-rcpp_distance(X,dim(X)[1],dim(X)[2])^2/phi)

and here we go:

mcmc_samples <- ess(log_lik_rcpp,s,Sig,1000,250,100,TRUE)
[1] "Running elliptical slice sampling..."
par(mfrow=c(1,1))
plot(colMeans(mcmc_samples), type="l")
points(t, type="l", lwd=3, col="red")
points(colMeans(mcmc_samples) + 2* apply(mcmc_samples, 2, sd), type="l", lty="dashed")
points(colMeans(mcmc_samples) - 2* apply(mcmc_samples, 2, sd), type="l", lty="dashed")

test ess with rcpp likelihood versus non rcpp likelihood

benchmark(ess(log_lik_rcpp,s,Sig,1000,250,100,FALSE), ess(log_lik,s,Sig,1000,250,100,TRUE),replications=2)
[1] "Running elliptical slice sampling..."
[1] "Running elliptical slice sampling..."
[1] "Running elliptical slice sampling..."
[1] "Running elliptical slice sampling..."
[1] "Running elliptical slice sampling..."
[1] "Running elliptical slice sampling..."
                                              test replications
1 ess(log_lik_rcpp, s, Sig, 1000, 250, 100, FALSE)            2
2       ess(log_lik, s, Sig, 1000, 250, 100, TRUE)            2
  elapsed relative user.self sys.self user.child sys.child
1  19.480    1.000    19.592    0.636          0         0
2  33.818    1.736    46.504   87.208          0         0




Mdptoolbox Ex 2

14 Jul 2015

Adapted from Marescot et al., Appendix 5, to the Reed optimal control problem, including a direct comparison against the (semi-)analytic optimum.

step 1: define objectives

This is a conceptual step which does not require coding

step 2: define states

K <- 150 # state space limit
states <- 0:K # Vector of all possible states

step 3: define control actions

# Vector of actions: harvest
H <- states

step 4: define dynamic model (with demographic parameters)

p <- c(6,0.05)
f <- function(x, h){
  A <- p[1] 
  B <- p[2] 
  s <- pmax(x-h, 0)
  A * s/(1 + B * s)
}

sigma_g = 0.1

step 5: define utility

# Utility function
get_utility <- function(x,h) {
    pmin(x,h)
}

step 6: solve bellman equation with value iteration

# Initialize transition matrix
transition <- array(0, dim = c(length(states), length(states), length(H)))

# Initialize utility matrix
utility <- array(0, dim = c(length(states), length(H)))
# Fill in the transition and utility matrix
# Loop on all states
for (k in 0:K) {

    # Loop on all actions
    for (i in 1:length(H)) {

# Calculate the transition state at the next step, given the 
# current state k and the harvest H[i]
        nextpop <- f(k, H[i])
        if(nextpop <= 0)
          transition[k+1, , i] <- c(1, rep(0, length(states) - 1))
    # Implement demographic stochasticity by drawing 
  # probability from a density function
        else {

# We need to correct this density for the final capping state ("pile on boundary").
# For a discrete probability distribution, this is easy if `states` includes all possible
# discrete states below the capping state (e.g. all non-negative integers less than K).
# For a continuous distribution, this is more problematic, as we have to first normalize the densities.
# EDIT: this can be negative due to floating-point errors, so we take max(v, 0) to avoid
# negative probabilities.

# Get long-tailed denominator as normalizing factor (continuous distributions only):
          fine_states <- seq(min(states), 10 * max(states), by = states[2]-states[1])
        N <- sum(dlnorm(fine_states, log(nextpop), sdlog = sigma_g))

      transition[k+1, , i] <- dlnorm(states, log(nextpop), sdlog = sigma_g) / N
      
        # We need to correct this density for the final capping state ("Pile on boundary")
        transition[k+1, K+1, i] <- max(1 - sum(transition[k+1, -(K+1), i]), 0)

        }
        
        # Compute utility
        utility[k+1, i] <- get_utility(k, H[i])

    } # end of action loop
} # end of state loop
# Discount factor
discount <- 0.95

# Action value vector at tmax
Vtmax <- numeric(length(states))

# Action value vector at t and t+1
Vt <- numeric(length(states))
Vtplus <- numeric(length(states))

# Optimal policy vector
D <- numeric(length(states))

# Time horizon
Tmax <- 150

Solution calculated explicitly:

The backward iteration consists of storing action values in the vector Vt, which holds, for each state, the maximum over actions of the utility plus the discounted expected future action values over all possible next states. Knowing the final action values, we can then work backward, resetting the next action value Vtplus to the new value Vt. We start the backward iteration at time Tmax - 1 since we already defined the action value at Tmax.

for (t in (Tmax - 1):1) {

# We define a matrix Q that stores the updated action values for 
# all states (rows)
# actions (columns)
    Q <- array(0, dim = c(length(states), length(H)))
    
    for (i in 1:length(H)) {
    
# For each harvest rate we fill for all states values (row) 
# the ith column (Action) of matrix Q
# The utility of the ith action recorded for all states is 
# added to the product of the transition matrix of the ith 
# action by the action value of all states 
        Q[,i] <- utility[, i] + discount * (transition[,,i] %*% Vtplus)
    
    } # end of the harvest loop

    # Find the optimal action value at time t, which is the maximum of Q over actions
    Vt <- apply(Q, 1, max)

# After filling vector Vt of the action values at all states, we 
# update the vector Vt+1 to Vt and we go to the next step standing 
# for previous time t-1, since we iterate backward
    Vtplus <- Vt

} # end of the time loop

# Find optimal action for each state
for (k in 0:K) {
# We look for each state which column of Q corresponds to the 
# maximum of the last updated value 
# of Vt (the one at time t + 1). If the index vector is longer than 1 
# (if there is more than one optimal value, we choose the minimum 
# harvest rate)
    D[k + 1] <- H[(min(which(Q[k + 1, ] == Vt[k + 1])))]
}

plot solution

plot(states, states - D, xlab="Population size", ylab="Escapement")

proof of optimality: compare with analytical solution

From Reed (1979) we know that the optimal solution is a constant-escapement rule:

\[f'(s^*) = 1/\alpha\]

For growth-rate function \(f\), where \(\alpha\) is the discount factor and \(s^*\) the stock size for the constant escapement. Analytic solutions are clearly possible for certain growth functions, but here I’ve just implemented a generic numerical solution.

fun <- function(x) - f(x,0) + x / discount
out <- optimize(f = fun, interval = c(0,K))
S_star <- out$minimum

exact_policy <- sapply(states, 
                       function(x) 
                        if(x < S_star) 0
                        else x - S_star)
plot(states, states - D, xlab="Population size", ylab="Escapement")

# The difference between the Bellman-equation solution and the analytical 
# solution is small:
lines(states, states - exact_policy)
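As one more check on that numerical optimum: for the Beverton-Holt growth function used here, \(f(s) = A s / (1 + B s)\), the escapement condition has the closed form \(s^* = (\sqrt{A \alpha} - 1)/B\). This closed-form comparison is an extra aside (a short sketch reusing the parameters p, discount, and S_star defined above):

## Closed-form constant escapement for Beverton-Holt growth
A <- p[1]; B <- p[2]; alpha <- discount
s_star_analytic <- (sqrt(A * alpha) - 1) / B
c(analytic = s_star_analytic, numerical = S_star)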

Using toolbox

library("MDPtoolbox")
mdp_check(P = transition, R = utility)
[1] ""
out <- mdp_value_iteration(transition, utility, discount = discount, epsilon = 0.001, max_iter = 5e3, V0 = Vtmax)
[1] "MDP Toolbox: iterations stopped, epsilon-optimal policy found"
plot(states, states - D, xlab="Population size", ylab="Escapement")
lines(states, states - H[out$policy], col="red", lty=2)
