5 questions for Daniel Gross on whether World War II-era science policy is applicable to today


By James Pethokoukis and Daniel P. Gross

Policymakers often look back on World War II-era R&D support as a model for promoting innovation today. To what extent, however, should science policy in a time of crisis inform how we fund research on a regular basis? Recently, I explored this question with Daniel Gross.

Daniel is an assistant professor at Duke’s Fuqua School of Business and a faculty research fellow at the National Bureau of Economic Research. He’s the author of several papers examining innovation policy in the World War II era, the most recent of which is “Organizing Crisis Innovation: Lessons from World War II,” co-authored with Bhaven Sampat.

Below is an abbreviated transcript of our conversation. You can read our full discussion here. You can also subscribe to my podcast on Apple Podcasts or Stitcher, or download the podcast on Ricochet.

What spurred innovation in World War II, and how are the products of that innovation still impacting us today?

World War II was one of the most acute emergencies in US history, and we initially lagged behind the technological frontier of modern warfare. Fortunately, a handful of science administrators went to the president and basically asked him to fund R&D in military technology. That R&D, and the technology it produced, proved immensely important both in the war effort and in the many dual-use technologies it yielded in the post-war era.

The scale and breadth of the effort were just amazing. And because so many fields were invested in and substantial progress was made, the innovations of the 1940s have continued to shape contemporary society. Everything from applied technology and digital communications to electronics and radar, all the way to medicine and surgical techniques — these are all things we continue to use today.


What was so unique about the context surrounding World War II that allowed for these lasting breakthroughs to be made? Can that environment be replicated today?

First of all, it was the urgency and time horizon. For example, while climate change is an urgent crisis today in the sense that if we don’t take measures now, consequences will come to bear in the future, the threat isn’t necessarily imminent. The impending-crisis scenario of World War II is more like the way the COVID-19 pandemic has presented a truly imminent threat to wellbeing.

Secondly, there was primarily a single customer for World War II R&D: the US military. This is actually part of what made such great developments possible. You had military advisors and liaisons sitting on research committees to help come up with specific research proposals while also being involved in the translation efforts from bench to battlefield.

Now, in the case of the pandemic, that kind of model might also apply to coordination between the scientists or science-funding agencies and, say, hospitals, as they are actually on the front lines fighting this battle. For something like climate change, this relationship is harder to facilitate because the actual user base is more diffuse, meaning it’s more difficult to apply this “command-and-control” funding approach.

I’m concerned that we’ve gotten so confident in thinking that we know where funding should go — whether it be to vaccines, AI, or clean energy technology — that we’re going to forget about basic research. Do you agree?

Yes. There is certainly the risk that we could forget about basic research entirely, but there is also the risk that we might divert resources away from other fields that might yet hold promise. One of the questions that I reflect on in the World War II context is “What might’ve been left behind?”

In that situation, you had the scientific establishment — say, the country’s physicists — shifting from whatever work they were doing before to instead focus their energies on atomic fission and radar. While we might celebrate that effort because it’s easy to see what we got from it, it’s difficult to know, and easy to overlook, what we might also have left behind. We have to consider what might have been crowded out as the technological developments we now celebrate were crowded in.

What do you feel confident saying about how federal R&D funding crowds out private funding? How does that interaction work?

That’s a difficult question to answer because counterfactuals are hard to observe, but we can approximate. While there’s mixed evidence here, there are certainly cases where public investment attracts and complements private investment. That’s not to mention the many other settings where public investment seeds basic research which can then be commercialized through a variety of avenues. The complementarities vary a bit from setting to setting.

However, there’s not a single number we can point to and say, “Here’s the degree of crowd-in or crowd-out.” It varies across settings. What I ultimately want an audience to be mindful of is that there can be crowd-out. It’s very easy to overlook that, especially when we think about some of these crisis moments and how productive we were in actually meeting the moment, as we have been with vaccines today.

There are a lot of ideas right now about spending more on basic research. Jonathan Gruber has a plan to try and create technology hubs around the country. Would moving innovation away from the coasts — from tech hubs like Silicon Valley — to create more top-down science hubs across America be successful?

Going back to World War II, the Office of Scientific Research and Development (OSRD) really emphasized funding the best scientists and institutions available. Its goal was to deliver the best results as quickly as possible. And because of that, much of that era’s R&D funding ended up concentrated in specific locations, with different technologies centered in different places.

That ultimately led to policy and political debates in the 1940s over funding the best scientists versus ensuring that scientific research was being widely supported. Now, those two choices inevitably present trade-offs. I can tell you from my own work that the OSRD’s R&D investments ultimately seeded technology hubs in the different places where it operated, which had long-run effects that have carried into the modern day.

But non-crisis innovation objectives are different, meaning other goals can be adopted rather than merely prioritizing speed and efficiency. For example, policy might aim to invest in scientific infrastructure to grow the number of regions that are R&D hubs, which could in turn increase competition nationally and prepare the scientific establishment for the next crisis it will need to solve. Pulling that all together, I can make a better case for a Gruber-and-Johnson type of policy, where an objective is to invest in more regions and to distribute R&D funding more evenly.

James Pethokoukis is the DeWitt Wallace Fellow at the American Enterprise Institute, where he writes and edits the AEIdeas blog and hosts a weekly podcast, “Political Economy with James Pethokoukis.” Daniel P. Gross is an assistant professor at Duke’s Fuqua School of Business and a faculty research fellow at the National Bureau of Economic Research.
