It’s easy to take progress for granted. The rule is that once countries pull onto the innovation highway, they stay on it. Sure, sometimes they hit traffic, other times they race forward, but the direction never changes: progress seems so automatic that we hardly ever question it.
The COVID-19 pandemic would seem to justify our complacency. Less than a year after lockdowns radically altered our way of life, scientists had created multiple vaccines shockingly effective at quelling the disease. Just one more example of our inexorable ability to solve problems with science and technology.
It’s the wrong lesson to draw.
The real lesson is that human ingenuity is the most powerful problem-solving force in the universe—but that ingenuity can function only under certain conditions. And these conditions are not always honored. Yes, we are continuing to advance—but not as much as we could. And, more worrisome, the areas in which we can advance are shrinking.
Vaccine development: When ingenuity is liberated
It’s hard to say where the story of the COVID-19 vaccines started, but a good place to begin is with Hungarian-born scientist Katalin Karikó. Inspired by research such as a 1990 study from the University of Wisconsin, where scientists were able to design messenger RNA (mRNA) and use it to create specific proteins in mice, Karikó wanted to develop mRNA technology to fight disease in humans.
It wasn’t obvious this would work. As StatNews points out, “synthetic RNA was notoriously vulnerable to the body’s natural defenses, meaning it would likely be destroyed before reaching its target cells.” It took a decade, but Karikó and her collaborator at the University of Pennsylvania, Drew Weissman, finally figured out how to solve the problem.
Every strand of mRNA is made up of four molecular building blocks called nucleosides. But in its altered, synthetic form, one of those building blocks, like a misaligned wheel on a car, was throwing everything off by signaling the immune system. So Karikó and Weissman simply subbed it out for a slightly tweaked version, creating a hybrid mRNA that could sneak its way into cells without alerting the body’s defenses.
Karikó and Weissman published their findings without much fanfare in 2005. But some people did notice—the future founders of Moderna and BioNTech, which would create the first two effective COVID-19 vaccines.
One of them was Stanford scientist Derrick Rossi, who recognized the potential of the new technology. His main interest was in taking adult stem cells and using mRNA to reprogram them so they behaved like embryonic stem cells. After a year of work, Rossi succeeded.
Rossi, by then at Harvard, teamed up with a fellow scientist, Timothy Springer, and an entrepreneur, Robert Langer, in 2010. During their first meeting, Langer pushed them to look beyond the stem cell application. According to StatNews:
As he listened to Rossi describe his use of modified mRNA, Langer recalled, he realized the young professor had discovered something far bigger than a novel way to create stem cells. Cloaking mRNA so it could slip into cells to produce proteins had a staggering number of applications, Langer thought, and might even save millions of lives.
“I think you can do a lot better than that,” Langer recalled telling Rossi, referring to stem cells. “I think you could make new drugs, new vaccines—everything.”
Soon they joined forces with the venture capital firm Flagship Ventures to create Moderna.
A similar collaboration started in Germany, where the husband-and-wife team of serial entrepreneurs Ugur Sahin and Özlem Türeci joined with investors Thomas and Andreas Strungmann to create BioNTech, whose aim was to develop cancer vaccines using mRNA technology. (They later expanded their mission to target other serious diseases.)
The cancer vaccines hadn’t failed, but they hadn’t succeeded either. When COVID-19 hit, mRNA was still an unproven technology.
Past experience with vaccines suggested that if a vaccine did arrive, it would take years. Here, for instance, is a New York Times chart from April 2020. Historical precedent suggested it would be four years before scientists could even start Phase I trials.
But that’s not what happened.
On January 10, 2020, Chinese scientists posted the COVID-19 genetic sequence online. According to StatNews:
Because companies that work with messenger RNA don’t need the virus itself to create a vaccine, just a computer that tells scientists what chemicals to put together and in what order, researchers at Moderna, BioNTech, and other companies got to work. . . .
Moderna and BioNTech were able to develop vaccines in a matter of days. In less than two months—before most of us were paying attention to COVID-19—Moderna had a vaccine ready for clinical trials.
mRNA technology didn’t develop in a vacuum, of course. In particular, the SARS and MERS outbreaks earlier in the 21st century led to breakthroughs in understanding coronaviruses and in developing vaccines against them. According to Nature:
“A lot went into the mRNA platform that we have today,” says immunologist Akiko Iwasaki at the Yale School of Medicine in New Haven, Connecticut, who has worked on nucleic-acid vaccines — those based on lengths of DNA or RNA—for more than two decades. . . .
For instance, researchers at the US National Institute of Allergy and Infectious Diseases (NIAID) in Bethesda, Maryland, knew from their research on MERS and SARS that it was best to tune the RNA sequence to stabilize the resulting spike protein in the form it adopts before it docks with a host cell. “If you can trap it in its original pre-fusion state, it becomes a much better vaccine antigen,” says Barney Graham, deputy director of NIAID’s vaccine research centre. That work gave the NIAID team, which worked with Moderna, a head start once SARS-CoV-2 was sequenced in January. “The fact that people had been paying close attention to coronaviruses really allowed this whole process to accelerate,” says Dean.
By the time COVID-19 arrived, scientists had been exploring coronaviruses and coronavirus vaccines for years. And, importantly, as COVID-19 hit, different scientists took different approaches to developing a vaccine. Traditional approaches included injecting a weakened version of the virus, an inactivated virus, or a subunit of the virus. The Johnson & Johnson vaccine, meanwhile, operates on a principle similar to the mRNA vaccines, but uses a disabled adenovirus to deliver the instructions that teach our immune system to recognize the virus that causes COVID-19.
This is what innovation looks like. It’s a creative, trial-and-error process that gets magnified by connection and collaboration. Thousands of minds across the globe, each with its own unique store of knowledge and know-how, followed distinct and intersecting paths toward a solution—with no one sure exactly what the solution was going to look like or where it was going to come from.
It’s a process we did not follow across the board.
Vaccine approval and distribution: Stopping inventions from becoming innovations
Innovation “means much more than invention,” Matt Ridley notes, “because the word implies developing an invention to the point where it catches on because it is sufficiently practical, affordable, reliable and ubiquitous to be worth using.” Progress requires both steps: the creation of something new—and a process for delivering it at scale.
Bottom-up problem solving led to the invention of an effective COVID-19 vaccine in two days—and it created four effective vaccines in a matter of months. And yet it was a year before we began vaccinating large numbers of people. From late 2020 through late April 2021, most Americans who wanted a vaccine could not get one. Something held back the vaccines from becoming genuine innovations. It’s no mystery what that something was.
While scientists were largely left free to pursue creative solutions to a COVID-19 vaccine, a small cadre of government experts controlled approval and distribution decisions. We did not honor the same independent decision-making that had led to the development of the vaccines—and we paid dearly for it.
In a pandemic, time is of the essence. The exponential growth of a disease like COVID-19 means that the costs of delaying a cure are exponential. In such a situation, there are strong reasons why individuals should be given full discretion to take unproven vaccines, so long as they are fully informed that the risks and benefits are unknown. At minimum, they should be free to take a vaccine as soon as there is evidence for its safety, if not its efficacy. That didn’t happen.
Not only did the government insist that the vaccines be proven both safe and effective, it also prohibited challenge trials. In a challenge trial, young and healthy volunteers would have received the vaccine and then been deliberately exposed to COVID-19 in a lab. This would have established the efficacy of COVID-19 vaccines in weeks, not months. The FDA said no.
Why? Why was it okay to watch hundreds of thousands die rather than allow a small number of well-informed individuals to make an independent risk/reward calculation? Because of a conviction that only the experts at the FDA could properly judge risk/reward. And what was their risk/reward calculus? They would be blamed if the results went poorly, and a dozen 30-year-olds got sick. They would not be blamed for withholding an effective vaccine from the public while hundreds of thousands of people died.
But that’s not where the story ends. Because even after approval, the distribution was botched. We could have relied on the same distribution method we use for vital goods like food, clothing, and cold medicine: the price system. In the price system, individuals make buying decisions based on how much they value something in the context of their own lives and budgets. On the other side of the equation, the profit motive encourages producers to move goods to where they’re most valued, to expand production so they can reap even more profits, and to keep costs low so they can outcompete rivals. But in the case of the COVID-19 vaccines, distribution was politicized. Selling the vaccine was illegal: the distribution would be rationed by the government.
The rationing of vaccines meant that untold doses were thrown out, because administering them in defiance of bureaucratic rules carried severe penalties. In New York, for example, anyone who violated the state’s rules for vaccine distribution faced draconian fines and the loss of their medical license. Result? According to the New York Times, state medical providers “had been forced to throw out precious vaccine doses because of difficulties finding patients who matched precisely with the state’s strict vaccination guidelines.”
And as bad as the U.S. rollout has been, the European rollout has been even worse. Paul Krugman rightly fumes:
European officials were not just risk averse, but averse to the wrong risks. They seemed deeply worried about the possibility that they might end up paying drug companies too much, or discover that they had laid out money for vaccines that either proved ineffective or turned out to have dangerous side effects. . . . They seemed far less worried about the risk that many Europeans might get sick or die because the vaccine rollout was too slow. . . . [T]he most disturbing thing about this whole fiasco is that it can’t be blamed merely on a few bad leaders. Instead, it seems to reflect fundamental flaws in institutions and attitudes.
The point is not that experts are useless. It’s that when experts act as authorities instead of advisors, when they get to override individual decision-making, they do so without much of the knowledge and incentives that make creative problem solving possible.
We want experts to inform us of the benefits and risks of vaccines, including how much uncertainty there is about those benefits and risks. But nothing justifies allowing experts to overrule individual decision-making. Individuals have skin in the game—and they have access to specific knowledge far-off experts necessarily lack, including knowledge of their unique situations, values, and risk tolerances.
Testing: When ingenuity is throttled
As bad as our failures in vaccine approval and distribution have been, our most conspicuous failure has been in testing. And it’s this failure that should most trouble us when we reflect on the wider prospects for progress.
The basic challenge of a pandemic is to prevent exponential growth. If each infected person infects, on average, more than one other person, the result is disaster. If each infects fewer than one, the disease is beaten back.
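To make the arithmetic concrete, here is a minimal sketch in Python, using made-up round numbers rather than real epidemiological estimates, of how small differences in the average number of onward infections compound over successive generations of spread:

```python
# Illustrative only: compound growth of cases across infection
# "generations." The reproduction numbers here are invented round
# numbers, not estimates for any real disease.
def project_cases(initial_cases: float, r: float, generations: int) -> float:
    """Expected new cases after `generations` rounds of spread,
    given an average of `r` onward infections per case."""
    cases = initial_cases
    for _ in range(generations):
        cases *= r
    return cases

for r in (1.2, 0.8):  # just above vs. just below one
    final = project_cases(1000, r, generations=10)
    print(f"R = {r}: 1000 cases -> {final:.0f} after 10 generations")
# R = 1.2: 1000 cases -> 6192 after 10 generations
# R = 0.8: 1000 cases -> 107 after 10 generations
```

Ten generations at 1.2 multiplies the caseload sixfold; ten generations at 0.8 shrinks it by nearly 90 percent. Small nudges to the transmission rate make the difference between catastrophe and containment.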
The most powerful tool for preventing an outbreak from becoming a pandemic is testing. In the early stages of a disease, widespread testing in at-risk areas allows the government to test, trace, and isolate carriers, keeping the virus from spiraling out of control.
On this front, the U.S. completely failed. As one epidemiology professor put it, “To me, it all starts with the lack of early testing. The process was so bungled, it took so long to set up any kind of testing capacity, and by the time we could properly test people, there was already widespread community transmission.”
Why did the U.S. fail?
As the first suspected cases of COVID-19 popped up in the U.S., federal officials made it illegal for clinical labs to create their own tests. Scientists were ordered to wait for “better” tests developed by the Centers for Disease Control (CDC). For example:
In January, University of Washington epidemiologists were hot on the trail of Covid-19. Virologist Alex Greninger had begun developing a test soon after Chinese officials published the viral genome. But while the coronavirus was in a hurry, the Food and Drug Administration (FDA) was not. Greninger spent 100 hours filling out an application for an FDA “emergency-use authorization” (EUA) to deploy his test in-house. He submitted the application by email. Then he was told that the application was not complete until he mailed a hard copy to the FDA Document Control Center. After a few more days, FDA officials told Greninger that they would not approve his EUA until he verified that his test did not cross-react with other viruses in his lab, and until he agreed also to test for MERS and SARS. The Centers for Disease Control (CDC) then refused to release samples of SARS to Greninger because it’s too virulent.
In fact, the tests that first identified U.S. COVID-19 cases were created by scientists who simply ignored the government’s testing restrictions.
In January, a Seattle team led by infectious disease specialist Dr. Helen Chu was denied permission to use its own COVID-19 tests. But when the CDC’s tests finally arrived in early February, it was immediately clear they didn’t work. Nevertheless, the government continued to rely on those tests and to prohibit alternatives. According to the New York Times, “By Feb. 25, Dr. Chu and her colleagues could not bear to wait any longer. They began performing coronavirus tests, without government approval.” Their fears were confirmed: COVID-19 had hit America.
A few days later, on February 29, the government finally decided to ease restrictions on testing. But it was too late to stop the spread.
Once the disease spread throughout the U.S. population, testing could still have suppressed the virus: if everyone had access to cheap, rapid home tests that detected the virus before people were symptomatic, the effect would have been similar to herd immunity. Infectious people would be able to shelter in place, keeping the transmission rate below 1.
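As a back-of-the-envelope sketch (the numbers below are illustrative, not estimates), the logic is simple: if frequent testing lets some fraction of infectious people isolate before they transmit, the effective transmission rate falls roughly in proportion.

```python
# Crude model with made-up numbers: carriers who isolate after a
# positive rapid test are assumed to cause no onward infections.
def effective_r(r0: float, isolated_fraction: float) -> float:
    return r0 * (1.0 - isolated_fraction)

# With a baseline transmission rate of 2.5, isolating 70% of
# infectious people yields 0.75 -- below the critical threshold of 1.
print(effective_r(r0=2.5, isolated_fraction=0.7))
```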
Those tests were developed. In December 2020, Harvard assistant professor of epidemiology and immunology Michael Mina wrote:
The good news is that SARS-CoV-2, the virus that causes Covid-19, can effectively be detected using simple, paper-strip antigen tests that are inexpensive, relatively easy to manufacture and report results within minutes. Every single American household could have a pack of 20 tests within weeks if the U.S. government invested a mere $1 billion to build capacity to produce them. For reference, this is less than 0.1% of what this virus has cost us.
By using these simple, inexpensive, rapid tests once or twice a week, individuals can know their status even when they are asymptomatic, and therefore reduce transmission before the virus can spread.
So what happened? Take the example of Abbott Labs, which announced in August 2020 that the FDA had authorized its easy-to-use test, which could give people results in only fifteen minutes at a cost of $5 a test. Why weren’t we all using its tests in September? As Abbott explained:
Under FDA EUA, the BinaxNOW COVID-19 Ag Card is for use by healthcare professionals and can be used in point-of-care settings that are qualified to have the test performed and are operating under a CLIA (Clinical Laboratory Improvement Amendments) Certificate of Waiver, Certificate of Compliance, or Certificate of Accreditation. Within these settings, the test can be performed by doctors, nurses, school nurses, medical assistants and technicians, pharmacists, employer occupational health specialists, and more with minimal training and a patient prescription.
In other words, the FDA made it illegal for individuals to use the test on their own. They would, instead, have to make an appointment with a healthcare specialist, get a prescription for the test, and pay for the whole thing with insurance. That easy-to-use $5 test suddenly became far more expensive and inconvenient.
Why did the government make at-home testing impossible? Its main justification was a concern for testing accuracy. But this concern was misplaced. Testing accuracy is vital for doctors treating individual patients: we don’t want people poked, prodded, and dosed for something they don’t have. But at-home tests serve a completely different purpose. They aren’t about treatment but about reducing transmission. Any reasonably accurate test can serve that purpose, even if it gives more false results than a lab test performed by trained medical personnel.
“In other words,” writes economist John Cochrane, “the FDA says: ‘Yes, you can use a thermometer to screen people out and send them home. Yes, you can use a questionnaire to screen people out and send them home. No, you may not use a far more accurate $1 paper test for exactly the same purpose. And if you try, we'll ruin your company and send you to jail.’”
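A toy calculation (again with invented numbers, not clinical data) shows why speed and frequency can matter more than raw sensitivity when the goal is cutting transmission rather than diagnosing patients:

```python
# Toy comparison with invented numbers. Assumes transmission is spread
# evenly across a 7-day infectious window and that a positive result
# leads to immediate isolation.
def transmission_prevented(sensitivity: float,
                           infectious_days_left: float,
                           infectious_period: float = 7.0) -> float:
    """Rough fraction of onward transmission averted by one test."""
    return sensitivity * (infectious_days_left / infectious_period)

# A cheap rapid test: 80% sensitive, caught on day 1, 6 days left.
print(transmission_prevented(0.80, infectious_days_left=6.0))  # ~0.69
# A 99%-sensitive lab test whose results take 4 days to come back.
print(transmission_prevented(0.99, infectious_days_left=3.0))  # ~0.42
```

On these assumptions, the “less accurate” rapid test prevents more spread than the lab test, simply because its results arrive while they can still change behavior.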
But underneath the concern over testing accuracy was a deeper assumption: that individual decision-making cannot be trusted. That autonomous decisions by scientists, medical companies, and the man or woman on the street will lead to disaster—that the only thing we can trust is the wisdom of a handful of experts.
Yet their track record, which started out bad, remains bad to this day: the early testing failures have never been adequately resolved. The government maintained a monopoly as the buyer and distributor of tests, and as late as February 2021, tens of millions of fast-acting antigen tests shipped by the federal government were sitting on shelves, unused, about to expire. According to the Wall Street Journal, this was a result not only of accuracy concerns but of “logistical hurdles.”
Logistical hurdles? Amazon ships millions of packages a month. Distributing things to people who want them is a solved problem. Markets use prices to convey vital information about who values what while the profit motive encourages acting intelligently on that information. Had companies been free to sell the tests to those willing to pay for them, those tests wouldn’t be sitting unused and expiring on shelves.
The question to ask is: What would testing have looked like had we followed the same principles that led to the rapid creation of vaccines?
Anyone would have been free to create tests and anyone would have been free to buy and use tests. Businesses could have bought tests in bulk, keeping their doors open while minimizing the spread of the disease. Individuals could have used rapid tests to decide whether it was responsible to visit friends or go into the office.
Would there have been some fraudulent tests sold by hucksters? Maybe, though there is no reason why universal bans should be the solution to potential frauds.
Would there have been mistakes in how people used legitimate tests, or would some people have tested positive and nevertheless gone out and exposed others to COVID-19? Without a doubt. The freedom required for exploration and autonomous decision-making cannot forestall human error or recklessness.
But rule by experts doesn’t eliminate those errors. What it does is magnify the destruction when the experts get things wrong, and, more important, prevent the innovative problem solving and creative adaptation that emerges from decentralized decision-making. We lose out on the potential benefits of millions and billions of minds exercising ingenuity.
Ingenuism and the pandemic
Ingenuism sees human ingenuity as our most powerful problem-solving tool. Unlocking the ingenuity of billions of people by creating environments that encourage bottom-up decision-making is vital because it harnesses the intelligence not only of a powerful few but of every individual. It allows connection and exploration to find solutions, rather than letting a handful of experts dictate them. And it harnesses the knowledge and incentives provided by market prices rather than relying on rationing.
Here’s the lesson of the pandemic: where Ingenuism was allowed to flourish, we performed beyond anyone’s best hopes—where Ingenuism encountered obstacles or was replaced by experts acting as authorities rather than advisors, we performed below anyone’s reasonable expectations.
What this means is that progress is robust, but not automatic. The less space we have for invention, and the less we protect the process that turns inventions into innovations, the more inflexible and brittle our productive capacity becomes.
That’s no reason to despair, because the outcome is one we have the power to avoid. Whereas theorists like Robert Gordon view progress as a temporary illusion that came to us courtesy of “low hanging fruit,” we view progress as the result of an indelible human power: the power of human ingenuity.