Thursday, March 26, 2020

Ten Years and Beyond, Ten Years Ago: NSF's long-term research agenda

A criticism of my first two posts (coming from myself, as the lone reader of this blog) is that it is easier to criticize than to build. As the Wikipedia summary of the Cambridge capital controversy states, "it was much easier to destroy neoclassical theory than to develop a full-scale alternative that can help us understand the world."

One group working to support innovative economic modeling is the National Science Foundation. Today, NSF's Directorate for Social, Behavioral and Economic Sciences (SBE) "supports research and infrastructure to advance understanding of a full range of human networks", through its Human Networks and Data Science (HNDS) program.

In this post I summarize an earlier effort. In 2010, NSF's SBE invited economists to write white papers describing the questions "likely to drive next generation research in the social, behavioral, and economic sciences." The effort was titled "Ten Years and Beyond: Economists Answer NSF's Call for Long-Term Research Agendas". NSF received 252 papers from economists including Daron Acemoglu, David Autor, Andrew Lo, Raj Chetty, Stanley Fischer, and Hal Varian.

First, I highlight a few quotes from the papers and my thoughts on them, in particular their relevance to interdisciplinary agent-based models and theoretical macro. Then, I summarize in more detail a few of the papers I found especially interesting.

Tuesday, March 24, 2020

Coronavirus: Making policy outside the database

In the middle of the coronavirus pandemic, fiscal policymakers, health professionals, and others are focused on critically important short-term decisions – whether to cut payroll taxes, send checks, act as the payer-of-last-resort, and so on – and rightly so. When policymakers were making similarly difficult decisions in 2009, Doyne Farmer and Duncan Foley wrote that one would assume that leaders in the US and abroad “are using sophisticated quantitative computer models to guide us out of the current economic crisis. They are not.” The same is true today.

Farmer and Foley pointed out that policymakers rely on two types of models to determine their response: empirical statistical models – which are fit to past data – and general equilibrium models – which assume a perfect world, thereby ruling out crises. These models have less-than-perfect explanatory ability due to their strong assumptions. Their main strength is high predictive power in stable periods. If GDP grew by 2 percent last year and we all maintain our routines, it’s generally a pretty good guess that GDP will grow by 2 percent this year.
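The "last year predicts this year" logic above can be made concrete. Below is a minimal sketch (my own illustration, not Farmer and Foley's, with made-up stylized growth numbers) of a naive persistence forecast fit to past data: it does fine during a stable stretch and fails badly at a structural break.

```python
def persistence_forecast(series):
    """Predict each value as simply the previous observation."""
    return [series[i - 1] for i in range(1, len(series))]

def mean_abs_error(actual, predicted):
    """Average absolute gap between forecasts and outcomes."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(predicted)

# Stylized annual GDP growth rates (percent): a calm decade, then a crisis year.
stable = [2.0, 2.1, 1.9, 2.0, 2.2, 2.0, 1.9, 2.1]
crisis = stable + [-5.0]  # a sudden collapse "outside the database"

err_stable = mean_abs_error(stable[1:], persistence_forecast(stable))
err_crisis = mean_abs_error(crisis[1:], persistence_forecast(crisis))

print(f"stable-period error:     {err_stable:.2f}")  # small
print(f"error with crisis year:  {err_crisis:.2f}")  # dominated by the break
```

The point is not this toy model but the general one: any model fit only to the stable rows of the data table inherits their regularities, and a crisis year contributes a single enormous residual the model had no way to anticipate.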

However, as we saw in 2007-09 and see again today, the predictive power of these models all but vanishes once the relationship between the models’ dependent and supposedly independent variables stops reflecting the behavior of the “complex networks of agents and institutions, stocks and flows, goods and services, money and credit”. Beyond the human tragedy, it is hard to know the full impact of empty streets, shuttered local retailers, and halted international supply lines. But clearly neither the statistical relationships of the empirical models nor the assumptions of the general equilibrium models provide usable information to leaders during a pandemic. In the words of Bill Janeway in Doing Capitalism in the Innovation Economy, we are “living outside the database”.