In 1902, Rudyard Kipling published Just So Stories for Little Children. What started as bedtime stories for his daughter became one of his most enduring works.
Many of the Just So Stories take the form of explaining how certain animals came to be the way they are. Among other things, there are stories on how the elephant got his trunk, how the rhinoceros got his skin, and how the camel got his hump.
It’s this type of question that underlies many of the things that I’ve written on this site. How did some issue in medical training get to be this way? And almost never is the answer that we identified a problem, then sat down and tried to fix it in the best way that we could. Instead, most things in medical training are the way they are because of some mix of historical path dependence and financial incentives.
Want some examples?
Why do residents work such long hours?
Because the physicians who trained them did. Pretty simple, really.
Of course, if you trace this logic back to the beginning, you’ll find that modern residency training really dates to the program devised by Dr. William Stewart Halsted at the Johns Hopkins Hospital in the late 1800s.
Dr. Halsted was a towering figure in medicine and easily the most renowned surgeon of his era. If you wanted to train under Dr. Halsted, you had to submit to an intense and regimented training program with extensive obligations in both research and clinical care (a program that stood in stark contrast to the apprenticeship model in place elsewhere).
Furthermore, progression in Halsted’s program was not guaranteed. There was a pyramid-shaped hierarchy in which a large number of interns were winnowed down to a smaller number of junior residents and eventually a single chief resident. And the term of the internship itself was indefinite: you progressed when – and only when – Dr. Halsted agreed you were ready. Given the structure of the program, competition among the residents for Dr. Halsted’s favor and attention was intense. But for those who survived the gauntlet, professional opportunities were abundant – which ensured that there was no shortage of new doctors eager to endure this challenging environment.
And a challenging environment it was. Halsted’s work ethic was legendary. He could operate all day, round on a bustling hospital service, and then retire to his laboratory to churn out surgical research. At times, he could work for days on end with little to no sleep. Naturally, he expected his residents to keep up with him.
But Halsted had a secret.
While experimenting in his laboratory with local anesthetics as a young man, he’d become addicted to cocaine.
In retrospect, the standard to which Halsted’s trainees aspired – of indefatigable academic and clinical productivity – was actually just cocaine-fueled mania. And to some extent, that’s the standard to which resident physicians (and medical school faculty) have been held for the past 100 years.
Now, it is fair to point out that Halsted’s model of long work hours was also a function of its era. Laborers, farmhands, factory workers, and others all worked hours in the early 1900s that would be unheard of – even illegal – today.
But while nearly every other industry moved toward more humane working hours, medicine dragged its feet, with successive generations of physicians lionizing their mentors while selectively ignoring that the system they’d created was exploitative.
[T]his structure was self-serving for a surgeon hiding his addiction, as the clinical service did not depend on Halsted always being at his best in order to achieve the excellent surgical results he demanded. The pyramidal nature of the program also minimized Halsted’s need for day-to-day interaction with trainees, medical and surgical colleagues and patients, thus allowing him to hide his impairment. The energetic cadre of surgical trainees provided Halsted a motivated workforce that was anxious to please him and that he did not need to pay.

Wright JR, Schachar NS. Can J Surg 2020;63(1):E13. PubMed
Why do residents today have an 80-hour workweek?
Because that’s what seemed reasonable to a couple of guys who were pondering resident duty hours while sitting on the front porch.
Look, it’s a simple fact of biology that humans need sleep. For many decades, people with common sense have questioned whether it was really such a great idea to have young doctors work unlimited hours while making life-and-death decisions about other human beings.
But it was the Libby Zion case that really set things in motion.
Zion was a previously-healthy 19-year-old college student who was admitted to the New York Hospital – now Weill Cornell Medical Center – on March 4, 1984. She had fever, chills, and otalgia, and was admitted for IV rehydration with a working diagnosis of “viral syndrome.” But the residents who cared for her were overworked and poorly supervised – and by the following morning, she was dead.
Zion’s father – a well-connected writer and former federal prosecutor – sued the hospital, and in the reckoning that followed, Dr. Bertrand Bell took up the cause of resident supervision and duty hours reform. By the late 1980s, Dr. Bell had successfully convinced New York to limit resident work hours to 80 hours per week, and in 2003, the ACGME followed suit.
Then – as now – debates over resident work hours were emotionally-charged, with significant friction between those who see long hours as inhumane and unsafe and those who assert that shorter hours will lead to inadequate training and poorer patient care. So you might think that Bell’s 80-hour limit represented some carefully-determined, data-driven compromise at an inflection point that satisfied both parties.
It didn’t. The 80-hour limit emerged from some back-of-the-envelope arithmetic performed while Dr. Bell and one of his colleagues were shooting the breeze on his front porch.
Bell’s 80-hour workweek certainly represented an improvement over the 36-hour shifts and q2 call schedules that pre-dated it. But the limit itself was arbitrarily determined and not based on any evidence related to human physiology, the adequacy of resident training, or the logistics of providing patient care. (Keep this in mind the next time the resident duty hours debate comes up.)
Why do we have a Match?
Well… that’s a long story.
Why do we have residency training? Why isn’t medical school enough?
I’ve taken a deep dive on this one, too.
[TL;DR: it’s a result of the growth of hospitals (and their desire for a cheap workforce) as well as the growth of scientific knowledge, leading to specialization in medicine (and specialists’ desire for economic protection).]
Why is there a ‘cap’ on federal funding for residency training in the United States?
Because it was a convenient – and legally permissible – way to reduce the number of physicians when the powers-that-be feared that we were headed toward a physician surplus.
The history of physician workforce projections is a topic deserving of its own post. But suffice it to say that pundits have never seen the U.S. physician supply as adequate. We’re either training way too many doctors, or we’re training too few. There’s really no in between.
And the 1990s was definitely a time when the pendulum was swinging toward fears of a physician surplus. Remember, this was the managed care era, when there was a belief that HMOs would rule the world, and that the United States wouldn’t need so many physicians (especially specialists).
To prevent this glut of physicians, some think tanks recommended reducing the number of U.S. medical schools. Naturally, the AAMC went on the offensive. They pointed out – correctly – that simply reducing the number of American medical students wouldn’t curtail the physician supply, because many physicians go to medical school overseas. Instead, you needed to cut the number of residency positions.
But this was tricky.
There was no top-down mechanism to limit the number of residency positions. Whether to sponsor a residency position – or not – was entirely up to individual hospitals, and depended on the economics of doing so.
But here, there was a powerful lever. Since 1965, the federal government had subsidized the cost of training residents through the Medicare program. In fact, by 1993, Medicare paid hospitals an average of $54,000 per resident to cover educational costs, even though the average resident salary was only a little over $30,000 per year. If Uncle Sam was so eager to pick up the tab, why would any hospital not hire more residents?
And so, with the AAMC and others all lobbying for it, Congress passed the 1997 Balanced Budget Act, which – among other things – capped at 1997 levels the number of residents for which Medicare would provide direct graduate medical education subsidies.
The policy was a success… sort of.
It did result in a temporary decrease in residency program expansion, and American medical schools didn’t have to downsize. Of course, a few years later, when the AAMC decided that we should no longer fear a physician surplus but instead faced a looming physician shortage, they found it wasn’t so easy to get Congress to expand funding as it had been to cut it.
So now we’re left with a nonsensical legacy system that dispenses GME funding unevenly based on where residents were trained 25 years ago. The bulk of GME funding goes to hospitals in the northeast, even as population has shifted to the south and west.
My point here isn’t that certain states need more federal GME dollars – training residents is likely still a financial winner for hospitals even without any federal financial support. My point is that, like so many other things in medicine, we would never have designed such an illogical system on purpose. But by refusing to tear down and rebuild, we condemn ourselves to a world of add-ons and workarounds to deal with the same problems over and over.
Folks, we really don’t have to do things the way we’ve always done them. But if we choose to do them that way, we shouldn’t expect to get anything different than what we’ve already got.