*Random idea for possible further exploration:*

The use of ‘big data’ by UPS to achieve many small efficiency gains seems to be everybody’s favorite example (NPR, The Economist). During a talk yesterday on typical applications of optimal control for ecological conservation, I couldn’t help thinking back to that story. The paradigm shift is not so much the kind or amount of data being used as it is the control levers themselves. As the Economist (rightly) argues, everyone typically assumes that a few principal actions are responsible for 80% of the possible improvement.

Optimal control tends to focus on these big things, which are also usually particularly thorny optimizations. Most of the classic textbook hard optimization problems could have come straight from the UPS case: the traveling salesman problem, the inventory packing/set cover problems, and so forth. Since these are impossible to solve exactly on large networks, approximate dynamic programming has been the usual work-around. Yet the “Big Data” approach takes a rather different strategy altogether, tackling many small problems instead of one big one.

Our typical approach of theoretical abstraction to simple models is designed to focus on these big overarching problems. In abstracting the problem, we concentrate on the big-picture stuff that should matter most – figuring out the optimal route to travel, and so forth. But when the gains from further optimizing these things are marginal, focusing on the “other 20%” can make more sense. That means abandoning the abstraction and going back to the original messy problem. It means knowing about all the other little levers and switches we can control. In the UPS context, this means thinking about how many times a truck backs up, or idles at a stoplight, or which hand the deliveryman holds the pen in. Given both the data and the ability to control so many of these little things, optimizing each one can be more valuable than focusing on the big abstract optimizations.
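The contrast can be sketched in code. This is a toy illustration, not anything from the actual UPS system: the routing data, the nearest-neighbour heuristic, and the “lever” cost tables below are all made up. The point is only structural – the big route problem is coupled and needs a heuristic, while many independent little levers can each be enumerated and optimized exactly, in linear time overall.

```python
import math
import random

def route_length(stops, order):
    """Total distance of visiting stops in the given order."""
    return sum(math.dist(stops[order[i]], stops[order[i + 1]])
               for i in range(len(order) - 1))

def nearest_neighbour(stops):
    """Crude heuristic for the 'big' TSP-style problem: the decisions are
    coupled (each choice constrains the rest), so we settle for greedily
    driving to the closest unvisited stop rather than solving exactly."""
    unvisited = set(range(1, len(stops)))
    order = [0]
    while unvisited:
        last = order[-1]
        nxt = min(unvisited, key=lambda j: math.dist(stops[last], stops[j]))
        unvisited.remove(nxt)
        order.append(nxt)
    return order

def optimize_levers(levers):
    """The 'many little levers' alternative: each lever (idling policy,
    reversing rule, which hand holds the pen...) is an independent choice
    among a few options, so each can simply be enumerated and the cheapest
    option picked -- no coupling, no hard combinatorics."""
    return {i: min(options, key=options.get)
            for i, options in enumerate(levers)}

random.seed(1)
stops = [(random.random(), random.random()) for _ in range(50)]
order = nearest_neighbour(stops)

# A thousand independent small decisions, each with hypothetical option costs.
levers = [{"a": random.random(), "b": random.random()} for _ in range(1000)]
best = optimize_levers(levers)
```

Note the design difference: the heuristic route is approximate and its quality plateaus, whereas each little lever is optimized exactly, so the aggregate gain scales with how many levers you can observe and control.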

So, does this work only once the heuristic solutions to the big problems are nearly optimal, so that improved approximations offer very limited gains? Or can this also be a route forward when the big problems remain largely intractable? The former certainly seems the more likely, but if the latter is true, it could prove very interesting.

So this got me thinking – if we accept the latter premise, we find a case closely analogous to the very messy optimizations we face in conservation decision-making. Could the many little levers be an alternative? It’s unlikely, given both the need for the kind of arbitrarily detailed, almost-free data available in the UPS problem, and the kind of totalitarian control UPS can exert over all the little levers, while the conservation problem more frequently has nothing but a scrawny blunt stick to toggle in the first place. Nevertheless, it’s hard to know what possible gains we have already excluded when we focus only on the big abstractions and the controls relevant to them. Could conservation decision-making think more outside the box about the many little things we might be able to more effectively influence?