Is War Necessary for Economic Growth? Detailed Analysis of Vernon W. Ruttan’s Book



Introduction

“Is War Necessary for Economic Development?” by Vernon W. Ruttan examines the vast and complex relationship between military procurement and technological development. The book argues that military and defense-related procurement has historically acted as a primary impetus for technological development across many sectors, and that these advances in turn profoundly shaped economic growth and industrial production in the United States. Drawing on leading historical examples and prominent technological breakthroughs, Ruttan offers a detailed account of technologies for which military needs provided the primary stimulus to invention and innovation before they diffused into the wider economy. Exploring this field is essential for understanding the relationship between military investment and economic development.

Chapter 1: War and Economic Growth

Historical Perspectives

War and preparations for war have been among the most influential shapers of economic institutions and among the most relentless driving forces of technological advance. Military requirements have, again and again in history, led to significant innovations. For instance, the method of producing interchangeable parts and the American armory system grew out of military needs; these innovations, in turn, provided the foundation for mass production techniques in civilian industries.

This issue — the connection between war and technological and economic development — has been at the heart of heated controversy among economic historians. Werner Sombart, writing in War and Capitalism in 1913, advanced the thesis that war and preparation for war were fundamental to the development of capitalism in Western Europe. John U. Nef later took a different view in War and Human Progress (1950), arguing that heightened wartime military demand chiefly capitalized on scientific and technological knowledge accumulated earlier, in peacetime.

Studies from the 1960s reached conflicting conclusions. Project HINDSIGHT, conducted by the Office of the Director of Defense Research and Engineering, concluded that most significant technological breakthroughs of the period were driven primarily by military demand. Other studies — Project TRACES by the Illinois Institute of Technology and independent research by the Battelle Memorial Institute — countered that earlier scientific breakthroughs without military motivation played a significant role in the technology process.

During the Cold War period, it was believed that the burden of defense-related R&D withdrew resources from commercial applications, slowing industrial innovation. Critics held that military research was drawing away scientific and technical capacity that could be devoted to civilian technological progress.

Rate and Direction of Technical Change

Military procurement has been a forceful means of hastening the rate and determining the direction of technical change. Large amounts of funding and specific military requirements channel innovation in particular directions, and history abounds with military needs that gave birth to technologies that later made a big commercial splash.

Induced Technical Change and Evolutionary Theory

The theory of induced technical change, developed most fully by Jacob Schmookler, holds that innovation responds positively to demand — including the demand created by military procurement. In his 1960s writings, Schmookler reported a very high correlation between investment in capital goods and the rate of technological change, suggesting that increased military investment would generate a corresponding increase in the rate of technological advance.
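Schmookler's correlation claim can be illustrated with a few lines of code. The series below are invented for the sake of the sketch (they are not Schmookler's data), using patent counts as a proxy for the rate of technical change, which was his own innovation measure:

```python
from statistics import mean

# Purely illustrative (invented) series: an index of capital-goods
# investment and patent counts as a proxy for the rate of technical
# change -- patents were Schmookler's own innovation measure.
investment = [100, 112, 125, 118, 140, 155, 150, 170]
patents    = [ 80,  90, 101,  97, 115, 128, 125, 141]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov  = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

r = pearson(investment, patents)
print(f"correlation = {r:.3f}")   # near 1 for these co-moving series
```

A coefficient near 1, as here, is the kind of co-movement Schmookler pointed to; it is of course consistent with, not proof of, demand-induced innovation.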

Evolutionary theory offers a complementary framework in which technological innovation is understood through firm behavior and market selection processes. The model emphasizes the role of routines and the search for improved techniques under economic incentives and competitive pressure. This view aligns with the observation that military procurement often provides strong incentives to search for better technologies to meet explicitly stated defense needs.

Path Dependence and Radical Technologies

Technological development is path-dependent: early innovations usually determine the trajectory that follows. Because of their scale and scope, military innovations can have long-lived effects on technological development. Nuclear technology, for example, was developed for military purposes yet has had massive implications for energy generation.

Path dependence can result in technological “lock-in,” in which technologies that gained early advantages become dominant even though they may not be the most efficient in the long run. The classic illustration is the QWERTY keyboard layout, which became the standard because of early adoption but, according to some, is not the most efficient layout possible.
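Lock-in through early advantage can be shown with a toy simulation in the spirit of W. Brian Arthur's increasing-returns models (this sketch and its numbers are invented, not taken from Ruttan's book): each new adopter chooses a technology with probability equal to its current market share, so small early leads compound.

```python
import random

def adoption_race(steps=10_000, seed=7):
    """Each new adopter picks technology A or B with probability equal to
    its current market share, so early random luck compounds: whichever
    option pulls ahead tends to stay ahead (lock-in)."""
    random.seed(seed)
    counts = {"A": 1, "B": 1}            # one initial adopter each
    for _ in range(steps):
        total = counts["A"] + counts["B"]
        pick = "A" if random.random() < counts["A"] / total else "B"
        counts[pick] += 1
    return counts

final = adoption_race()
share_a = final["A"] / (final["A"] + final["B"])
print(f"final share of A: {share_a:.2%}")   # varies with the seed
```

Different seeds lock in very different final shares even though A and B start identical — the QWERTY point in miniature: the winner is determined by early history, not intrinsic merit.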

Radical technological innovation often begins with the perception of an anomaly in existing systems — new or emerging scientific knowledge suggesting that current technologies will eventually fail or could be greatly improved upon. Military contexts often provide the impetus for such radical innovations because the high stakes and significant resources involved in defense spur the search for transformative technologies.

In sum, wars and military procurement have repeatedly paved the way for technological change, shaping the economic landscape and influencing the fate of new technologies. This chapter sets the stage for case studies of specific technologies and industries in which military demand was the primary driver of technological progress.

Chapter 2: Interchangeable Parts and Mass Production

Interchangeable Parts

The idea of interchangeable parts was born in the US armory system and revolutionized manufacturing: it made mass production possible and allowed complex machines to be assembled and repaired far more readily. Before interchangeable parts were introduced, manufacturing meant constructing each piece by hand — a labor-intensive, skill-dependent process. Firearms were perhaps the clearest example: artisans at their benches painstakingly crafted each part to fit a specific gun.

Interchangeable parts standardized components so that they could be fitted uniformly into any assembly of a given design. This radically simplified the production process and minimized the need for skilled hands. The American System of Manufacturing was first implemented at the national armories in Springfield, Massachusetts, and Harpers Ferry, Virginia, and became the precursor of further developments in mass production.

The innovations that made the armory system with interchangeable parts successful were in many ways the work of people like Roswell Lee and John H. Hall. Lee, who became acting superintendent of the Springfield Armory in 1815, instituted several technical and managerial reforms designed to streamline production. He hired Thomas Blanchard to design machines that could shape irregularly formed gun stocks, sharply reducing the need for skilled labor. Meanwhile at Harpers Ferry, Hall produced rifles whose parts were fully interchangeable, making them simple to assemble and maintain even in the field.

By the mid-19th century, armory practice and interchangeability were firmly in place across a wide range of industries. Improved communications and transportation networks, together with the cheap coal that powered the new machinery, catalyzed the dissemination of these practices.

Mass Production

As a concept, mass production owes much to military manufacturing, notably the system of interchangeable parts. The goal was no longer handcrafting but organizing production as efficiently as possible for the highest output.

The sewing machine industry was one of the first examples of mass production in the commercial sector. The Wheeler and Wilson Manufacturing Company, and after that, the Singer Manufacturing Company, adopted armory practices to produce sewing machines. This adoption began the mass production era, where standardized parts and assembly line techniques were applied to civilian products.

The most iconic commercial application of mass production ideals is the Ford Model T. Henry Ford’s obsession with simplicity in engineering and production excellence led to innovations that laid the groundwork for assembly line production and just-in-time parts delivery. Workers at the Highland Park Ford plant performed particular tasks at assigned stations; parts arrived in sequence for assembly and on time. This approach significantly reduced the time needed to produce a single vehicle, making the Model T affordable to much of the population.

Ford’s mass production rested on three principles:

  • The product moves through the shop in a planned, orderly sequence.
  • Parts move to the worker where they are needed.
  • Each operation is broken down into its elemental motions so that no motion is wasted.

In addition, fixtures and gauges positioned each part exactly in the assembly. This approach eliminated the need for “fitters” and allowed large quantities of standardized goods to be produced rapidly and cost-effectively.

The automobile industry was not the only pioneer of mass production principles. Bicycle manufacturing, in particular, acted as a transition agent between the American System and the age of mass production: significant technical innovations in the bicycle industry, including ball bearings, pneumatic tires, and sheet-steel stampings, set the stage for innovations that would become tremendously important in the automobile industry.

The development of national rail and telegraph systems in the 19th century, combined with a large domestic market, permitted the widespread adoption of mass production techniques in the early 20th century. The branded and packaged goods industries, along with makers of light machinery and standardized industrial machines, pioneered these techniques, which ultimately brought staggering economic growth and the country’s emergence as a leader in industrial manufacturing.

Chapter 3: Military and Commercial Aircraft

Early Developments

The military demands of World Wars I and II encouraged innovation in aircraft design, jet propulsion, and supersonic flight. Massive airmail subsidies, purchases of military aircraft, and funding of aeronautics research and development were the means by which the US government promoted the creation of a commercial aviation infrastructure. These efforts were very successful and resulted in rapid productivity growth in the air transport sector.

The aircraft industry was unique among manufacturing industries in having a dedicated government research institution — the National Advisory Committee for Aeronautics (NACA), established in 1915 — that provided research and technology support. NACA contributed much to aeronautical knowledge and supported the development of dual-use technology employed in both military and commercial aircraft.

Military needs had a profound impact on the development of aircraft. World War I brought about tremendous expansion in the aircraft industry through military purchases. Between April 1917 and November 1918, the US aircraft industry included three hundred firms and employed 175,000 workers who turned out nearly 13,000 aircraft and more than 41,000 engines. This production surge was underpinned by the formation of the Aircraft Production Board to organize production and distribute contracts.

In World War II, Boeing’s bombers were on the front lines. The B-29 incorporated many new design features developed from wind-tunnel data and was the only wartime airplane with a pressurized cabin. Boeing’s dominance in wartime military aircraft production laid the groundwork for its post-war dominance of the commercial aircraft market.

The NASA Era

In the post-war period, NASA led aeronautics research, making great strides in both military and commercial aircraft design. After the Soviet Union launched Sputnik I and II in the late 1950s, the United States absorbed NACA into a newly established organization, NASA, in 1958. NASA took over NACA’s facilities and functions along with space programs previously managed by the Department of Defense, including the Jet Propulsion Laboratory and the Army Ordnance Ballistic Missile Agency.

The absorption of NACA into NASA marked an essential step in the shift from in-house research to research by aerospace contractors. It also shifted a significant part of R&D effort away from aeronautics and toward space-oriented programs. The new agency nonetheless maintained considerable defense-related activity, conducting aeronautics research important for both military and civilian purposes.

Boeing’s Role

Success in producing military aircraft during wartime translated readily into dominance of the commercial aircraft market. Boeing used military contracts to test new concepts and fund development work that could feed into advanced commercial designs. The Boeing 247, for example, developed in the early 1930s from the B-9 bomber, established the configuration for future commercial airliners.

These wartime technological advances carried over to Boeing’s post-war commercial aircraft, from the Stratocruiser to the Boeing 707. The 707 became the first American commercial jet transport in the late 1950s and set the standard for subsequent jet airliner designs. Further innovation followed with the 727, 737, 747, 757, and 767, each advancing technology, efficiency, and performance.

Military procurement and commercial aircraft development are tightly intertwined throughout Boeing’s history. The company’s ability to use military contracts to develop technology, and then deploy those innovations strategically in commercial applications, kept it in the dominant position in the world aircraft market for decades.

Chapter 4: Nuclear Energy and Electric Power

Atoms for War and Peace

Nuclear technology born in warfare led to the peaceful application of generating electrical energy. Atomic energy was first harnessed for military purposes — above all, the development of nuclear weapons during World War II. The most far-reaching institutional innovation of the war years was the Manhattan Project, established to develop and produce the atomic bomb. The project initiated the shift from the public armory system toward the private contractor system in armament development, and it introduced the model of “big science” — mobilizing scientific resources for mission-oriented research and development.

On 2 December 1942, a team led by Enrico Fermi, working beneath the stands of the University of Chicago’s Stagg Field, demonstrated the first self-sustaining nuclear chain reaction. This event marked the beginning of an active role for US military and defense-related institutions in technology development for the electric power industry; the initial development of nuclear power for electricity generation was intimately connected with military needs and capabilities. The Atomic Energy Commission (AEC), established in 1946, inherited the facilities and workforce of the Manhattan Project and received a twin mandate to promote and regulate nuclear technology for both military and civilian purposes.

President Dwight D. Eisenhower’s December 1953 “Atoms for Peace” speech committed the United States to a more aggressive commercial nuclear power program. The 1954 Atomic Energy Act provided the statutory basis for private-sector nuclear technology development and for international cooperation in the peaceful applications of nuclear energy. It allowed private companies to construct and own nuclear power plants while the government retained ownership and control of the fuels. The AEC and utility companies jointly developed the first commercial nuclear power projects, among them the Shippingport Atomic Power Station.

Cost and Alternative Energy

The exorbitant cost of nuclear energy has increased interest in alternative energy sources. Nuclear power was supposed to deliver a future in which electricity would be “too cheap to meter.” This has yet to happen. Construction and operating costs rose sharply, particularly after the safety measures and regulatory requirements that followed the 1979 accident at Three Mile Island. Higher capital costs made electricity from nuclear-fueled plants more expensive than electricity from coal-burning plants.

The oil price shocks of the 1970s, coupled with mounting concerns over the environmental and health implications of both fossil fuels and nuclear technology, pushed the debate over energy futures to the forefront. Historically, US energy R&D had focused almost solely on atomic energy, with much smaller amounts allocated to coal, petroleum, and natural gas; renewable energy sources and conservation were largely ignored. Only in the late 1970s came a belated recognition that energy conservation and renewable energy could significantly reduce dependence on fossil fuels and nuclear power.

By the late 1990s, advances in reactor technology and accumulated operational experience gave new life to the idea of nuclear power as a contributor to long-term electricity supply. Interest in nuclear power also reflects its strong potential for reducing greenhouse gas emissions. Yet unresolved issues of safety, health, nuclear proliferation, and waste disposal remain central to whether nuclear power can make significant contributions to electricity supplies around the world.

Fusion energy is touted as a potential alternative to nuclear fission. The process has several advantages: its fuels are abundant and inexpensive, and it produces less radioactive waste. However, the high capital costs of fusion technology remain a massive barrier to commercial feasibility.

While nuclear energy has contributed in a big way to peaceful applications, high costs and unsettled problems have created interest in alternative energy sources. The future of nuclear power depends on how these challenges are addressed in the search for economically viable and environmentally sustainable energy solutions.

Chapter 5: The Computer Industry

Early Innovations

Early development of the first computers was significantly spurred by military need, particularly during World War II. The Army Ballistics Research Laboratory (BRL) needed a way to calculate artillery firing tables — computing the trajectory of artillery shells under various conditions required enormous computational effort. John W. Mauchly and J. Presper Eckert at the University of Pennsylvania’s Moore School of Electrical Engineering developed the Electronic Numerical Integrator and Computer (ENIAC) for this task. By 1946 they had built a machine that could compute more than a thousand times faster than any available electromechanical machine.

The US military’s cryptographic demands during the war also contributed significantly to computing capability. Breaking enemy codes and keeping friendly communications secret pushed the limits of computing machines, laying the foundation for much future development in the computer world.

Whirlwind Project and Semiconductors

Military projects continued to drive computing technology forward after the war. The Whirlwind project at MIT began as an effort to design a flight simulator for training bomber crews and rapidly evolved into a general-purpose real-time digital computer for the Air Force’s SAGE air defense system. The project enabled major advances in digital computing, including high-speed electronic digital processing and real-time data processing capability.

Another significant milestone connected to military-supported research is the creation of semiconductors. In the late 1940s, at Bell Laboratories, William Shockley, John Bardeen, and Walter Brattain developed the first transistor. The transistor improved the performance and reliability of electronic devices and led to the integrated circuit and, finally, the microprocessor — laying the groundwork for the modern computer industry.

Supercomputers and Software

The development of supercomputers and leading-edge software also had military roots. By the early 1950s, a clear dichotomy had emerged between computers designed for business use and those designed for scientific applications. Among the early customers for high-speed scientific computers were the AEC and the Department of Defense, which needed machines such as the IBM 701 and the Control Data Corporation CDC 6600 for weapons design and nuclear research.

Beyond its military significance, the SAGE project was one of the driving forces behind computer technology generally. It gave birth to such vital inventions as real-time data processing, computer-to-computer telecommunications, and man-machine interaction through keyboard terminals. All these features found their way into commercial computer systems and were deeply influential on the computer industry as a whole.

The software industry also benefited hugely from military procurement. Big military projects like SAGE placed enormous demands on software development, driving innovation in software engineering and programming practice. By the mid-1980s, defense-related agencies supported a significant fraction of research and development in computer science, in many cases funding advances with significant spillovers into civilian uses.

Chapter 6: Inventing the Internet

ARPANET

ARPANET, the Internet’s forerunner, was funded by the US Department of Defense to interlink research institutions. The Advanced Research Projects Agency (ARPA), an agency of the Department of Defense, sponsored the ARPANET project to develop a network linking computers at various research centers so that information and resources could be shared easily. The project was born both of researchers’ need for a better way to collaborate and of the military’s need for communication systems that could survive a nuclear attack.

Joseph Licklider, the first director of ARPA’s Information Processing Techniques Office (IPTO), was another critical figure, with vivid ideas about networking. His vision of “time-sharing” allowed several users at geographically dispersed locations to share a central computer over telecommunications lines, economizing on scarce and expensive computing resources.

In 1966, IPTO chief Robert Taylor hired Lawrence Roberts from MIT’s Lincoln Laboratory to manage the development of a sizeable multi-computer network. Roberts’s task was to connect time-sharing computers at eighteen separate academic, industrial, and government centers funded by ARPA.

Packet Switching and Institutional Innovation

Key technologies developed under military contracts gave the Internet its robust and scalable character. The most important was packet switching — a technique for data transmission in which messages are broken into smaller packets that are transmitted independently over the network. Paul Baran first proposed the method at RAND in the early 1960s while trying to design a communication system that could survive military attack.

Donald Davies independently developed an almost identical concept at the British National Physical Laboratory during the same period. The ARPANET implemented packet switching to send data quickly and reliably over its network. The approach — small, inexpensive switches and digital transmission — won ARPA’s attention because it promised a resilient and flexible military communication system.
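The core idea can be sketched in a few lines: a message is split into numbered packets, each packet may take its own route and arrive out of order, and the receiver reassembles the message by sequence number. This is a toy illustration only — the `Packet` class and chunk size are invented for the sketch, not drawn from the ARPANET's actual design:

```python
import random
from dataclasses import dataclass

@dataclass
class Packet:
    seq: int      # sequence number, used to reorder on arrival
    total: int    # total number of packets in the message
    payload: str  # fragment of the original message

def packetize(message: str, size: int = 8) -> list:
    """Split a message into fixed-size, independently routable packets."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [Packet(i, len(chunks), c) for i, c in enumerate(chunks)]

def reassemble(packets: list) -> str:
    """Packets may arrive in any order; restore order by sequence number."""
    return "".join(p.payload for p in sorted(packets, key=lambda p: p.seq))

msg = "Packets travel independently and may arrive out of order."
pkts = packetize(msg)
random.shuffle(pkts)          # simulate independent routes / arrival order
assert reassemble(pkts) == msg
```

Because no single route or switch is essential to delivery, losing part of the network degrades rather than destroys communication — exactly the survivability property that attracted military interest.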

An equally important milestone in the democratization of the technology was the 1972 International Conference on Computer Communication, which won over many skeptics of the commercial viability of packet switching. It showed that ARPANET had purposes beyond the military — running personal and business email, among other applications.

At the height of its activity in the mid-1970s, ARPA ran three independent packet-switching networks: ARPANET, PRNET (Packet Radio Network), and SATNET (Satellite Network), which together interconnected many military and research institutions. Because these networks were heterogeneous, linking them proved that internetworking across dissimilar networks could be done. Ultimately, these interconnected networks grew into what is now called the Internet.

The collaboration of Robert Kahn and Vinton Cerf led to the Transmission Control Protocol (TCP), which ensured a reliable, error-free flow of data between hosts on different networks. This protocol, together with the idea of gateways interconnecting different networks, became the core of the Internet’s architecture.
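The essence of that reliability guarantee is acknowledgment and retransmission: the sender keeps resending a segment until the receiver confirms it. The sketch below shows only this stop-and-wait idea with invented function names; real TCP adds sliding windows, sequence numbers, and checksums on top of it:

```python
import random

def send_reliable(data_chunks, loss_rate=0.3, seed=1):
    """Toy stop-and-wait reliability: resend each chunk until it gets
    through. Real TCP adds sliding windows, sequence numbers, and
    checksums; this shows only the acknowledge-and-retransmit idea."""
    random.seed(seed)
    delivered = []
    for chunk in data_chunks:
        while True:
            if random.random() >= loss_rate:   # packet survived the network
                delivered.append(chunk)        # receiver ACKs; move on
                break
            # no ACK arrived: sender times out and retransmits the chunk
    return delivered

segments = ["seg0", "seg1", "seg2", "seg3"]
assert send_reliable(segments) == segments     # all delivered, in order
```

Even with 30% simulated loss, every segment eventually arrives in order — the property that let TCP present unreliable, heterogeneous networks as a single dependable channel.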

Chapter 7: The Space Industries

Missiles, Satellites, and Rockets

The space industries developed out of military missile technology and satellite communications. The Soviet Union’s launch of Sputnik in 1957 propelled the US to make up lost ground in space technology and to push through successive milestones in missile and satellite development.

A German rocket team led by Wernher von Braun was critical to these successes. Brought to the United States under Operation Paperclip after World War II, von Braun and his team developed rockets such as the Redstone at the Redstone Arsenal in Huntsville, Alabama. These early efforts laid the foundation for both ICBMs and space exploration vehicles.

The late 1940s and early 1950s were marked by vigorous interservice rivalry, and by competition between military and civilian bodies over who should manage missile development and space exploration. This led, in 1958, to the creation of ARPA — the Advanced Research Projects Agency — as a central body for consolidating defense-related space R&D.

Commercialization and Earth-Observing Systems

The commercialization of space technologies such as communications and Earth-observation satellites originated in military programs. The first decisions to launch satellites answered military and strategic requirements — above all, surveillance of Soviet military capabilities. But early successes such as the TIROS weather-forecasting satellites drew attention to the prospective civilian applications of space technology.

From the launch of the first Landsat satellite in 1972, the Landsat program furnished invaluable data for environmental research and resource management. Yet throughout its otherwise successful operation it suffered budgetary and managerial problems, raising debates over ownership and operational responsibility.

The Global Positioning System illustrates how military technology can be translated into civilian applications. GPS was conceived to improve the navigation of military forces and deliver weapons with pinpoint accuracy. Today it underpins many diverse applications: aviation, shipping, and personal navigation among them.

Interest in private space launch services picked up during the late 1980s. Through several public statements, the Reagan Administration pushed for the privatization of launch activities, though such changes were easier said than done. By the early 2000s, NASA and the Air Force still managed space shuttle and military satellite launches, while private companies conducted all other types of launches. Large military and NASA investments in launch vehicle capability firmly seeded the space communications and Earth-observing industries.

Chapter 8: Is War Necessary?

Technological Maturity and Structural Change

As technologies mature, the locus of their development shifts from military to commercial applications. Many general-purpose technologies have experienced a burst of growth driven by military demand and funding, after which growth slows and commercial considerations come to dominate further development.

In the electric power industry, for example, early technological progress was substantially military-driven. With time and maturity, the technologies reoriented toward commercial applications. The same pattern appears in aerospace, nuclear power, and computing, where military-driven initial advances were followed by commercial exploitation.

The concept of technological maturity also implies that after rapid or explosive growth, a technology may stagnate unless new trajectories or advances open up further opportunities. This oscillation between innovation and maturity is evident in the shift from piston-propeller to jet propulsion in aircraft, catalyzed initially by military research and procurement in World War II and the Korean War.

Spin-Off and Dual Use

Many technologies developed for the military find dual-use applications in civilian life. The term “dual use” designates technologies that meet both military and commercial requirements, and history is full of military innovations that produced huge commercial spin-offs.

For instance, the semiconductor industry was driven by military demand in the early years, but the technology soon found extensive commercial applications. In the same way, GPS — developed for military navigation and targeting — has become indispensable in various civilian uses, from personal navigation to timing systems.

Efforts to institutionalize dual-use in procurement processes have become highly controversial. Some proponents of these proposals point to hundreds of millions of dollars in savings and advanced technologies already gained from adopting commercial acquisition practices. Critics respond that the overlap between commercial and military products and processes will be narrow and that specialized defense firms continue to play an essential role in effectively procuring weapons.

Consolidation and the Future

The consolidation of the defense and commercial industries suggests that general-purpose technologies will, for the foreseeable future, continue to share the “spin-off” benefits of innovation driven by military demand. In the 1990s, the US Department of Defense changed policy to permit mergers among defense contractors; the number of major contractors fell drastically, leaving the industry highly concentrated. The aim was to cut costs for the government and improve efficiency.

At the turn of the twenty-first century, the United States remained the world's leading producer of advanced defense systems, accounting for the lion's share of defense-related R&D spending among NATO countries and Japan. The overall size of defense procurement nonetheless declined in real terms from its Cold War peak, mirroring broader structural changes in the economy.

These structural changes raise important questions about the future of defense-related innovation. The role of military procurement in driving technological advancement may also change as the US economy and industrial base evolve. The trend toward integrating military and commercial technologies may increase, but defense-related R&D will likely remain central to developing new general-purpose technologies.

Analysis

The book gives deep insight into the complex relationship between military procurement, technological innovation, and economic development. Through careful study, Ruttan traces historical cases in which military demand triggered technological change that later spilled over into civilian industries and ultimately contributed to growth.

Ruttan supports his argument with rich historical examples. The invention of interchangeable parts in the US armory system, for instance, provided a basis for the development of mass production techniques (Ruttan, 2006). Nuclear technology, also initially developed for military use, found significant civilian application in energy generation, though with disappointing economic results owing to high costs and safety concerns (Ruttan, 2006). The book also places military procurement at the center of the early development of computers and the Internet, both cornerstones of modern economic infrastructure.

Limitations

Nonetheless, even Ruttan's impressive work has its limitations. The most significant is that it rests almost exclusively on historical examples drawn from a single country, the United States. This narrows its coverage of the contexts and outcomes of military-driven technological innovation elsewhere. Ruttan's focus on positive spillovers also means that the opportunity costs and potential adverse effects of military procurement on civilian technological development and resource allocation may be underestimated.

The book also does not dig deeply into the socio-political dynamics that shape military procurement decisions. Political lobbying, the influence of defense contractors, and strategic geopolitical considerations all weigh heavily on military R&D investment, yet Ruttan largely passes over them.

Future Research Suggestions

Future research should examine the interaction between military procurement and technological innovation across different national contexts. Comparative studies could reveal how the impact of military R&D on economic growth varies across political, economic, and cultural settings.

The socio-political dimensions of military procurement also merit further study. Understanding how political processes, defense contractor lobbying, and geopolitical strategy shape military R&D investment would deepen our understanding of the relationship between military procurement and technological innovation.

Further research might also address the long-term sustainability of military-driven technological innovation. Studying the ecological and social effects of technologies initially developed for military purposes would illuminate their broader implications for sustainable development.

Criticism

For all its meticulous research, Ruttan's work strikes a tone that comes across as rather deterministic regarding the relationship between military procurement and technological innovation. Some critics, such as Sivard, have charged Ruttan with overstating the positive impacts of military R&D while underplaying its negative consequences, including the militarization of science and technology, the diversion of resources from civilian R&D, and the ethical implications of innovation driven by military ends.

Moreover, some scholars fault Ruttan's framework for disregarding alternative mechanisms of technological innovation that operate in the absence of military procurement. His account of technological progress gives comparatively little weight to civilian demand, private-sector entrepreneurship, and international collaboration as drivers of innovation.

Conclusion

The book offers extensive evidence that military procurement has profoundly influenced technological innovation and economic development. It illustrates how military demands have shaped a range of industries, from interchangeable parts and mass production techniques to modern computing and the Internet. Ruttan shows that many of these technologies, though initially developed for military use, found critical civilian applications that generated considerable commercial spin-offs and broader economic benefits.

It is also common for these technologies to migrate from the military to the commercial domain as they mature, further underscoring the dynamic interplay between defense-related R&D and economic growth. Dual-use technologies and the increasing consolidation of the defense and commercial industries point toward a future in which general-purpose technologies will continue to benefit from military-driven innovation. In sum, Ruttan's work systematically underlines the persistent role of military procurement in shaping technological trajectories and spurring economic progress, offering lessons for policymakers and scholars interested in the relationship between military investment and economic development.

Nevertheless, future research should address the limitations of Ruttan's work by broadening its perspective beyond the United States, examining socio-political dimensions, and confronting the long-term sustainability of a military-driven path of technological innovation. Attending to these aspects would yield a more comprehensive view of the intricate relationship between military procurement and economic growth, and a more nuanced set of insights for policymaking and scholarship.