Health
The U.S. innovation system is the envy of the world. Here's how it began.

During World War II, government-funded research enabled scientists to mass-produce penicillin efficiently. In this photo from a U.S. Department of Agriculture research laboratory, circa 1943, workers screen mold strains for those that yield the most of the antibiotic.
USDA file photo
Economist who studies technological change discusses the public-private research partnership amid growing concerns over federal funding
The federal government's role in the nation's innovation ecosystem has recently come under scrutiny. For decades, federal funding has underpinned academic research, which in turn has driven private-sector innovation and spurred new breakthroughs in healthcare, technology, and other fields. The Trump administration is seeking to cap reimbursements for indirect research costs in biomedicine, a move that could cut billions of dollars in funding from the National Institutes of Health.
The issue has drawn new attention to the nation's public-private research partnership, which has been credited with advances across many fields and emulated around the world. The Gazette spoke with Daniel P. Gross, an associate professor of business administration at Duke University's Fuqua School of Business and a former professor at Harvard Business School. Gross, with Bhaven Sampat of Arizona State University, co-authored a recent National Bureau of Economic Research working paper on the postwar expansion of biomedicine.
In this edited conversation, Gross explained that the partnership arose in response to the urgent demands of World War II, helped the U.S. and its allies win the war, and laid the foundation for the system that thrives today.
How would you describe the partnership between the federal government and universities, and how did it come about?
The partnership has effectively been in place since World War II. Its origins date to June 1940, when a handful of leaders from U.S. universities and industrial R&D labs approached President Franklin D. Roosevelt and proposed harnessing civilian scientists to develop new technologies for the U.S. military, which at the time lagged well behind in the technology of warfare.
That was more than a year before the U.S. entered the war, but it marked the beginning of an effort that drew tens of thousands of scientists at companies and universities into the war effort and produced numerous breakthroughs. The model was refined and expanded through the Cold War and has continued to flourish since. It has been a cornerstone of U.S. technological leadership over the past 80 years, especially in biomedicine.
The National Institutes of Health existed at the time, but it was very different from what it is today?
The U.S. innovation system, and the biomedical innovation system in particular, looked very different in 1940.
The three pillars of U.S. biomedicine today are universities, the life sciences industry, and the NIH. Today they work together and build on one another's advances. In the 1930s, however, all three were far less developed. Universities were less research-focused and had little research funding. The pharmaceutical industry was not well organized and consisted largely of chemical firms with small pharmaceutical subsidiaries rather than the large, specialized drug makers we know today.
Drug development in that era was driven more by empirical trial and error than by science; drugs weren't even subject to FDA review for safety until 1938, or for efficacy until 1962. And the NIH was small and exclusively intramural. It had not yet begun funding extramural research as we know it today.
“In nearly every war before World War II, infectious disease killed more soldiers than battlefield injuries. Suddenly there was an urgent need for innovation with immediate practical payoffs, yet no real infrastructure for producing it.”

And that was seen as inadequate as the war approached?
The war posed a huge range of technological problems, from detecting enemy aircraft to keeping soldiers healthy. In nearly every war before World War II, infectious disease killed more soldiers than battlefield injuries. Suddenly there was an urgent need for innovation with immediate practical payoffs, yet no real infrastructure for producing it.
The war demanded organizational innovation to make technological innovation possible. That led to the creation of a new agency to coordinate and fund wartime research, the Office of Scientific Research and Development, or OSRD. It also gave rise to federal R&D contracts, new patent rules, peer review practices, and even indirect cost recovery.
Most important of all, though, was the acceptance of the idea that funding R&D is a federal responsibility, and the development of a new partnership model linking government, private firms, and universities.
Was it largely successful? Penicillin is an example that's often cited.
Most people would say yes. After all, the Allies won the war, and advances in technology, medical and otherwise, were vital to that victory. New drugs may not be the first thing that comes to mind when you think of military technology. But disease and other ailments could take the military's frontline units out of action, requiring more manpower to replace them. Tuberculosis, measles, and sexually transmitted infections were among the most widespread health problems for soldiers of that era. Malaria was rampant in the Pacific theater and North Africa.
The sheer range of fronts on which this global war was fought, along with the new weapons it introduced, also widened the set of problems demanding attention: protecting soldiers from extreme heat and cold and from oxygen deprivation at high altitude, controlling disease vectors, treating wounds and burns, developing blood substitutes, and more.
The OSRD's Committee on Medical Research (CMR) coordinated and funded numerous projects on these problems and made remarkable progress on many of them. You're right that one of the most significant and best remembered breakthroughs was penicillin. Although penicillin was discovered in the 1920s, at the start of World War II there was still no practical way to produce it in quantities sufficient for clinical trials, let alone treatment.
CMR began with two approaches to developing penicillin as a drug, unsure which would pan out. One was to try to synthesize it. The other was to culture it in large volumes from the mold that produces it. Scientists initially believed the synthetic route held more promise, but in the end large-scale fermentation of natural penicillin won out.
The advance was revolutionary, and not only for military health but for civilian health as well. The numbers tell the story: Between World War I and World War II, military hospital admission and death rates from the most common infectious diseases fell by 90 to 100 percent. World War II research effectively solved the military's bacterial disease problem.
Perhaps more important, it launched a golden age of drug development. The antibiotic revolution of the 1950s and 1960s can be traced directly to the successes of the war years.
Much of this work flourished in the postwar era. What enabled the effort to endure?
Across the CMR portfolio, what was done to meet the urgent needs of war laid a foundation on which postwar biomedical science and technology began to grow. That foundation included new research tools and methods, new therapies and therapeutic candidates, new drug development platforms, expanded capabilities at established and emerging pharmaceutical firms (including expertise in specific drug classes and broader familiarity with science-driven approaches to drug discovery, such as rational drug design), and, most important, deeper scientific understanding.
What about cultivating a new generation of scientists?
That's a great question. Many readers probably assume public R&D funding mainly supports research projects. But scientific training matters just as much. The wartime mobilization drew in not only established scientists but also thousands of graduate students, predoctoral researchers, and newly minted Ph.D.s. That was true of medical and nonmedical research alike: The labs doing the work were full of young people. We don't quantify these students' contributions in biomedicine, but it's reasonable to assume that for many of them it was a transformative experience.
In related work with Maria Roche, an assistant professor and former colleague of mine at HBS, we've shown this effect for researchers who worked on World War II radar projects. More broadly, if you look at university and policy leadership across U.S. science in the first 25 years after World War II, you see OSRD alumni everywhere. The war was a catalyst for building technical and administrative capacity that the U.S. drew on afterward.
Speaking of CMR funding, it included reimbursement for indirect costs, which is a subject of debate today. What was the rationale at the time?
It helps to remember the context: OSRD needed to persuade firms and universities to take on military R&D projects. Doing so meant redirecting ongoing research and displacing future work, which was disruptive. Companies were being asked to devote their own facilities, equipment, and sometimes their best people to national problems rather than commercial ones. Some were reluctant to do so without full reimbursement. Medical researchers, too, were initially wary of government funding and bureaucratic oversight.
Compensating these R&D performers for overhead, on top of the direct incremental costs of OSRD-contracted work, was one way to bring them to the table. Indeed, the policy goal was for OSRD research to be “no gain, no loss” for its contractors. Although the structure of and rationale for indirect cost recovery have evolved somewhat since then, its basic principles date to this era.
Today the situation is a bit different, in that we're not building something new but trying to sustain something that has worked?
It has worked remarkably well. I wouldn't claim there's no room to make the system more efficient, but on the whole, when you look at what this 80-year partnership among U.S. universities, federal research funders, and industry has produced, it's a success story. We should take care that, in pursuing science policy reforms, we don't kill the golden goose.
The U.S. innovation system, and the biomedical innovation system in particular, is the envy of the world. It has driven decades of progress that has strengthened national defense, health, and economic growth. Dismantling it would have serious negative consequences for the U.S. and the world.