In my recent post on bureaucratic autonomy, I noted how the CDC was too risk averse during the Covid crisis. As Michael Lewis explained, this tendency stretched all the way back to the Ford Administration, when the CDC director was blamed for alarmism regarding a possible swine flu epidemic. The competitive nature of U.S. politics means that the opposition party will always jump on any perceived mistake while failing to give credit for crises averted. This creates an asymmetry of risks in which mistakes are punished and successes go unrewarded, leading to excessive risk aversion.
The one part of the U.S. government that has successfully navigated this problem, at least in part, is the U.S. military. Junior officers in combat situations are frequently asked to make risky judgment calls, and they are evaluated based on the appropriateness of that judgment and not necessarily on its success or failure. This practice is known as “freedom to fail,” and it is one of the reasons that the U.S. military is one of the best in the world.
Freedom to fail did not always have this status. Of the service branches, the U.S. Army in particular reached a low point at the end of the Vietnam War, with its drug use, fragging of officers, and general problems with both discipline and performance. In one of the most remarkable rebounds in the history of bureaucracy, the Army studied its own dysfunction and set about remedying it. By the time of the 1991 Gulf War it had become a very different organization.
This reform involved studying the history of other successful militaries and emulating them. One of the most important precedents was the German doctrine of Auftragstaktik, referred to in the U.S. military as “mission orders” or “commander’s intent.” The German practice originated in the closing days of World War I and was put on full display in the invasions that overwhelmed Poland in 1939 and France in 1940.
Auftragstaktik dictated that senior commanders give only the most general types of orders, and that authority for carrying them out be delegated to the lowest possible command echelon. The reasoning was that those low-level officers were in direct contact with the enemy and had a situational awareness that their superiors lacked. This awareness would enable them to act quickly on fast-moving events, rather than waiting as decisions moved up and down a cumbersome chain of command. The panzer tactics used in the invasion of France exemplified this: panzer group leaders could improvise on the spot even when they were out of contact with higher headquarters for prolonged periods.
Auftragstaktik or mission orders were simply a military version of bureaucratic autonomy, incorporated into Army doctrine through documents like Field Manual FM 100-5 on combined arms operations.
The way it was practiced in the U.S. military shows why that autonomy always involves judgments about risk. Any time a hierarchical organization delegates authority to a lower command level, it takes the risk that the subordinate will screw up or make a bad decision. But delegation is absolutely necessary in battle. Generals who seek to micromanage and second-guess their subordinates are likely to fail, and many armies with rigid, centralized command structures, like the Egyptian army in 1967 or Saddam Hussein's military in 1991 and 2003, failed spectacularly. In the latter case, an officer's mistake in judgment could lead to his execution. By contrast, the U.S. Army had internalized mission orders by the time of the Gulf wars. American planners in 2003, for example, anticipated that conquering Baghdad would require at least a week of house-to-house fighting; instead, the commander on the spot executed a "Thunder Run" on his own authority and took the city in a single day.
Another army that internalized Auftragstaktik was the Israel Defense Forces. As a paratroop commander, the late Ariel Sharon undertook operations in the Mitla Pass during the 1956 Suez War against the wishes of his superiors. His decisions led to the deaths of 38 Israeli soldiers and were highly controversial at the time. In other militaries this would have ended his career, but "freedom to fail" was deeply embedded in the ethos of the IDF, and Sharon went on to many higher command positions.
This is not to say that the U.S. military does not suffer from the usual problems of bureaucratic rigidity and risk aversion. The peacetime military is notorious for imposing complex rules and procedures, and for stubbornly resisting innovation in everything from procurement to human resource management. Procurement in particular is subject not just to the voluminous Federal Acquisition Regulation, but to a host of other requirements imposed on the Department of Defense by Congress. This is what led to silly debacles like holiday cakes, ordered for the troops gathering in the Gulf in late 1990, not being delivered until after the U.S. victory the following year. But the military has nonetheless been able to carve out an area of autonomy for its officers when performance is both critical and readily measurable.
What has made this possible? One critical factor has to do with the wartime military’s relative isolation from politicians. In the United States, political leaders have contented themselves with setting broad strategic goals, but have shown restraint in interfering with the military’s execution of those goals. There are of course famous exceptions, like LBJ’s choosing of individual bombing targets during the Vietnam War. But unlike a procurement decision that will affect jobs in a home district, military tactics and strategy are both obscure and regarded as the province of professionals.
It also helps greatly that the military's personnel system is largely shielded from political intervention, until you get to the most senior command levels. Unlike in much of the civilian bureaucracy, we do not reserve a large number of officer positions for political appointees. The military has an up-or-out promotion system that ruthlessly weeds out poorly performing officers and forces the hierarchy to focus on competence. This shielding was of course not always the case in U.S. history. Abraham Lincoln had to deal with patronage demands at the outset of the Civil War, and part of the reason for the Union Army's poor performance in the war's early years was the President's perceived need to appoint politicians to command positions.
These military examples show why bureaucratic autonomy is critical to state effectiveness, and suggest why it is so hard to achieve. Politicians love nothing more than criticizing perceived failure on the part of their rivals. They are also severely tempted to move out of their lane when given the opportunity. Among the very many bad decisions made by former President Trump were his attempts to use the military against protesters in the summer of 2020, and his intervention on behalf of Navy SEAL Edward Gallagher against the wishes of the Navy hierarchy in November 2019. These decisions were not just bad in themselves; they set a precedent for future erosion of the guardrails protecting the military's autonomy.
Freedom to fail has been at the root of private-sector success. Silicon Valley likes to trumpet the fact that entrepreneurs are not punished for taking risks and failing, unlike other cultures in which bankruptcy means permanent shame and disgrace. Unfortunately, what works in the private sector is usually not applied to the public sector. American political culture has always been highly suspicious of executive discretion, and has erected complex safeguards to prevent bureaucrats in and out of the military from taking important decisions on their own authority. How this plays out in civilian agencies will be the subject of future posts.