SPEED OF THE KILL CHAIN:


SENSOR-TO-SHOOTER LATENCY FROM IRONBOTTOM SOUND TO JADC2

Admiral Lee's most decisive innovation at Guadalcanal was not the radar itself — it was the architecture that connected radar to guns. From that 1942 rewiring of a battleship's communications to today's contested debates over Joint All-Domain Command and Control, a single engineering principle has governed naval combat: the side that moves data from sensor to shooter faster than the threat timeline wins. That margin is now measured in seconds, and the physics of hypersonic weapons and autonomous swarms has made human-serial kill chains structurally obsolete.

By Stephen L. Pendergast, Senior Engineer Scientist

I. THE ARCHITECTURE, NOT THE DEVICE

There is a version of the Guadalcanal radar story that focuses almost entirely on the SG radar itself — its S-band frequency, its PPI display, its 40,000-yard surface detection range, its Raytheon magnetron. That version is incomplete in a way that matters. South Dakota carried the same SG radar as Washington. South Dakota lost her radar to an electrical casualty and was savaged by Japanese gunfire. Washington, with the same hardware, destroyed a Japanese battleship in seven minutes without taking a meaningful hit.

The difference was architecture. Not the sensor. Not even, primarily, the admiral's personal technical fluency, though that mattered. The difference was the communications and data-routing structure Lee and Captain Glenn Davis had built around the sensor before the engagement began — the structure that determined how fast, how accurately, and to how many simultaneous consumers radar-derived targeting data could flow.

Lee abolished the serial chain. In the traditional arrangement, the radar plot officer spoke to a talker, who relayed to the gunnery officer, who relayed to the plotting room, who relayed to the gun director trainers. Each relay introduced a mean transmission latency of perhaps five to ten seconds and a nonzero probability of transcription error or misunderstanding. In a four-link serial chain, those probabilities compounded. Lee replaced this with a single direct-voice circuit on which the plot officer spoke simultaneously to every consumer — gunnery officer, plotting room, all directors — in parallel. The chain became a broadcast. Latency collapsed. Error probability fell. Washington's guns were on solution and firing before Kirishima's lookouts had resolved the shape in the darkness.
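The arithmetic of that collapse can be sketched with a toy model. The per-relay figures below are illustrative assumptions (the five-to-ten-second relay delay from the text, plus a notional per-relay error probability), not measured 1942 values:

```python
# Back-of-envelope model of serial relay vs. parallel (broadcast) data distribution.
# All figures are illustrative assumptions, not measured 1942 values.

def serial_chain(n_links, latency_per_link=7.5, p_error_per_link=0.05):
    """Total latency and cumulative error probability for a serial relay chain."""
    total_latency = n_links * latency_per_link            # delays add at each relay
    p_any_error = 1 - (1 - p_error_per_link) ** n_links   # error chances compound
    return total_latency, p_any_error

def broadcast(latency=7.5, p_error=0.05):
    """One speaker heard by all consumers at once: one 'link' regardless of consumer count."""
    return latency, p_error

# plot officer -> talker -> gunnery officer -> plotting room -> director trainers
s_lat, s_err = serial_chain(4)
b_lat, b_err = broadcast()      # Lee's single direct-voice circuit

print(f"serial:    {s_lat:.1f} s latency, {s_err:.1%} chance of at least one relay error")
print(f"broadcast: {b_lat:.1f} s latency, {b_err:.1%} chance of at least one relay error")
```

With these assumed figures the four-link chain quadruples the latency and nearly quadruples the chance of a garbled relay; the broadcast pays each cost exactly once, however many consumers listen.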

That architectural insight — parallel data distribution to simultaneous consumers, with human decision authority inserted only where necessary and nowhere else — is the organizing principle of every C2 system the Navy has built in the eighty years since. It is the principle AEGIS was designed to embody. It is the principle the Cooperative Engagement Capability was built to extend across multiple platforms. It is the principle JADC2 is attempting to scale to joint-force level. And it is the principle that hypersonic weapons and autonomous swarm tactics are now stress-testing to destruction.

II. THE PHYSICS OF LATENCY: WHY SECONDS ARE NOT EQUAL

To understand why sensor-to-shooter latency is the central organizing problem of contemporary naval C2, it is necessary to be precise about the relationship between threat speed, detection range, and engagement timeline. These are not policy variables. They are physics, and physics does not negotiate.

In November 1942, the fastest threat platforms Washington faced in a night battle were Japanese destroyers making perhaps 35 knots — roughly 60 feet per second. At 8,400 yards detection range, Washington had approximately eight minutes before a destroyer could close to torpedo range. Eight minutes is an enormous latency budget. A serial kill chain consuming 90 seconds from first radar contact to first salvo still leaves more than six minutes of engagement opportunity. The architecture Lee built was decisive not because it was necessary to beat the clock — it wasn't, against surface ships — but because it was necessary to beat the Japanese to first salvo at night, before visual detection was possible. Speed of the kill chain was the margin of surprise, not the margin of survival.

The threat calculus began changing during the Korean War and accelerated through the Cold War as Soviet naval doctrine centered on long-range anti-ship cruise missile salvos. A subsonic cruise missile at Mach 0.85 covers roughly 950 feet per second. At a detection range of 20 nautical miles — optimistic for a sea-skimming missile against a surface radar — the time of flight is approximately 130 seconds. The serial human kill chain that served adequately against surface ships in 1942 begins to look marginal against this timeline. This drove the AEGIS Combat System, which replaced the human-serial architecture with automated engagement sequencing: the Weapon Control System can engage a threat autonomously through a pre-authorized doctrine, with the commanding officer's authority embedded in the engagement rules rather than in a real-time decision loop.

AEGIS was, in this sense, a direct institutional descendant of Lee's 1942 rewiring — the same principle, extended further toward the machine end of the human-machine decision spectrum. The commanding officer retained authority by setting the doctrine; the system executed within that doctrine at speeds no human decision loop could match.

The Hypersonic Compression

Modern hypersonic glide vehicles — China's DF-17, Russia's Avangard, the emerging class of regional hypersonic weapons now proliferating across multiple threat states — operate at sustained speeds between Mach 8 and Mach 20 or beyond. A vehicle at Mach 15 covers roughly 15,000 feet per second, or approximately 2.8 miles per second. Against a detection range of 200 nautical miles — the outer edge of what space-based and over-the-horizon sensors might provide with cueing — total flight time is roughly 80 seconds. Against a detection range of 50 nautical miles — more representative of organic ship-based radar in a contested electromagnetic environment — flight time is approximately 20 seconds.

Twenty seconds. That is the entire latency budget from first detection to required intercept, assuming a perfect sensor, a perfect communications link, and an interceptor already in a ready posture with a valid targeting solution. Any human decision node consuming more than a few seconds of that budget begins to foreclose engagement options. A serial kill chain of the 1942 pattern — even a fast one — is not merely suboptimal against this timeline. It is non-functional.
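The flight-time arithmetic in this section is a single division of detection range by threat speed. A minimal sketch using the speeds quoted above (nominal figures, ignoring glide-vehicle deceleration and non-straight-line trajectories):

```python
# Time of flight from detection range and threat speed.
# Nominal figures from the text: 1 nmi = 6,076 ft; speeds are approximate
# and ignore deceleration and trajectory curvature.
FT_PER_NMI = 6076.0

def flight_time_s(detection_range_nmi: float, speed_ftps: float) -> float:
    return detection_range_nmi * FT_PER_NMI / speed_ftps

scenarios = [
    ("Mach 0.85 cruise missile, 20 nmi detection",   20,   950.0),
    ("Mach 15 glide vehicle, 200 nmi cued detection", 200, 15000.0),
    ("Mach 15 glide vehicle, 50 nmi organic detection", 50, 15000.0),
]
for label, rng, speed in scenarios:
    print(f"{label}: ~{flight_time_s(rng, speed):.0f} s")
```

The subsonic row reproduces the article's roughly 130-second case; the hypersonic rows show how the same division collapses the budget by an order of magnitude.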

The problem compounds further in a saturation scenario. A coordinated attack combining hypersonic glide vehicles, supersonic cruise missiles, and large numbers of attritable UAS on multiple simultaneous axes does not simply compress individual engagement timelines — it multiplies the number of simultaneous fire-control solutions the combat system must generate and execute. An AEGIS baseline capable of handling dozens of simultaneous engagements in the Cold War anti-air warfare scenario faces a qualitatively different challenge when the threat combines hypersonic, supersonic, and subsonic vectors arriving from different quadrants, some maneuvering, some operating in electronic silence, some designed specifically to saturate the engagement radar's tracking capacity.

 

Table 1: Threat Timeline Compression — Detection to Engagement Window by Era

Era / System | Threat Speed | Detection-to-Fire Window | Latency Budget
1942  Washington / SG + Mk3/4 FC | Surface ships, ~30 kts | Several minutes | ~60–120 seconds tolerable
1983  AEGIS / SPY-1 + Mk 41 | Subsonic/supersonic missiles, Mach 0.8–2.5 | 60–120 seconds | ~10–30 seconds tolerable
2000s  CEC / Link 16 / NIFC-CA | Supersonic/early hypersonic, Mach 2–5 | 30–90 seconds | ~5–15 seconds tolerable
2020s+  JADC2 / HGV / saturation UAS | Hypersonic, Mach 8–20+; simultaneous multi-axis UAS swarms | 10–60 seconds or less | ~1–5 seconds tolerable; machine-speed mandatory

Note: Windows assume nominal detection ranges and terminal intercept geometry. Actual margins are narrower in contested EM environments.

III. ANATOMY OF WASHINGTON'S KILL CHAIN — AND WHAT IT TEACHES

It is instructive to reconstruct Washington's sensor-to-shooter chain on the night of 14–15 November 1942 with engineering precision, because the architecture Lee built was not accidental — it was a deliberate response to a specific latency problem, and its structure maps directly onto the problems JADC2 is trying to solve at joint-force scale.

 

Table 2: Washington's Kill Chain Architecture — Node-by-Node Latency Analysis

Kill Chain Node | 1942 Washington Architecture | Latency Inserted
Detection | SG radar operator observes PPI contact | ~0 sec (continuous sweep)
Processing | Plot officer interprets PPI; direct headset to all consumers simultaneously | ~2–5 sec (Lee's redesigned architecture)
Decision | Admiral Lee on bridge; pre-briefed ROE, no relay required | ~5–10 sec (pre-authorized engagement criteria)
Targeting | Fire-control radar (Mk3/4) cued directly by SG bearing/range; solution computed | ~15–30 sec
Engagement | Main battery fired on radar solution; no visual acquisition required | ~5 sec (salvo interval)
TOTAL (Lee's parallel architecture) | ~27–50 seconds detection-to-first-salvo | Decisive: Kirishima engaged before Japanese lookouts detected Washington visually

Sources: Hornfischer, Neptune's Inferno; USNI Proceedings, September 1967; Naval History Forum (kbismarck.org).
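Read as a pipeline, the table's total is an interval sum over the nodes. A minimal sketch (the per-node ranges mirror Table 2's approximations and are estimates, not measurements):

```python
# Sum per-node latency ranges (low, high) in seconds for a kill chain.
# Figures mirror the approximate ranges in Table 2; they are estimates, not measurements.
CHAIN_1942 = {
    "detection":  (0, 0),    # continuous radar sweep
    "processing": (2, 5),    # plot officer, parallel broadcast
    "decision":   (5, 10),   # pre-authorized ROE, no relay
    "targeting":  (15, 30),  # fire-control radar cued by SG, solution computed
    "engagement": (5, 5),    # salvo interval
}

def total_window(chain):
    """Lower and upper bound on detection-to-first-salvo time."""
    lo = sum(l for l, _ in chain.values())
    hi = sum(h for _, h in chain.values())
    return lo, hi

print("detection-to-first-salvo: ~%d-%d s" % total_window(CHAIN_1942))
```

Swapping any node's range for a serial-relay equivalent immediately shows where the budget is spent, which is the analytical habit the rest of this article applies to JADC2.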

Several features of this architecture deserve emphasis because they recur as design principles in every subsequent C2 system.

Pre-authorized engagement criteria. Lee did not make a fresh decision to engage when the SG painted Kirishima. The decision to engage any Japanese surface combatant entering the defined area had been made before the battle, in the form of rules of engagement. The commanding admiral's authority was embedded in the pre-battle doctrine, not exercised in real time. This is identical in logic to the AEGIS doctrine-based engagement authorization and to the current debate over human-machine teaming in autonomous weapons: decision authority can be exercised prospectively, by setting rules, rather than reactively, by participating in each engagement loop.

Parallel rather than serial data distribution. The plot officer's simultaneous broadcast to all consumers eliminated the compounding latency of a relay chain. In network engineering terms, Lee replaced a token-passing serial bus with a broadcast parallel bus. The data rate of the individual link was unchanged; the architecture change eliminated the queuing delay at each relay node. JADC2's 'any sensor, any shooter' concept is the same architectural principle extended across domains and Services.

Sensor cueing of fire-control radar. Washington did not rely solely on the Mk3/4 fire-control radar to find Kirishima. The SG surface-search radar provided initial bearing and range, cueing the fire-control radar to the correct sector. This sensor-cueing architecture — wide-area search sensor providing targeting data to narrow-beam precision fire-control sensor — is the operational logic of Cooperative Engagement Capability, of offboard cueing of ship's AEGIS systems by E-2D Advanced Hawkeye, and of the current NIFC-CA (Naval Integrated Fire Control–Counter Air) architecture. Lee's 1942 implementation was the template.

Information isolation as a vulnerability. South Dakota's electrical casualty did not merely disable her weapons — it severed her from the network. She went dark on sensors and communications simultaneously, turning her from a networked combatant into an isolated hull. The lesson is that network resilience — the ability to maintain data flow under degradation — is as important as peak network performance. A C2 architecture that collapses when any single node fails is not operationally adequate. This principle drives current investment in mesh networking, low-probability-of-intercept data links, and satellite-independent navigation and timing.

IV. FROM CIC TO AEGIS TO CEC: THE INSTITUTIONAL EVOLUTION

The Combat Information Center concept, formalized in 1943 as a direct response to the command-and-control failures of the Guadalcanal surface actions, was the first institutional codification of Lee's architectural insight. The CIC centralized all sensor inputs — radar, sonar, communications, visual reports — into a single compartment staffed by specialists whose sole function was to maintain a common tactical picture and route that picture to consumers in real time. It replaced the distributed, platform-specific sensor-reading arrangements that had prevailed — and failed — in the early Guadalcanal engagements.

The CIC was a human solution to a human-speed problem. As long as the threat moved at surface ship or subsonic aircraft speeds, humans in a well-designed CIC could maintain a picture accurate enough to support engagement decisions within the available timeline. The early Cold War threat — Soviet medium bombers and early cruise missiles — began to strain this model but did not break it. AEGIS broke it deliberately, replacing the human engagement decision loop with an automated doctrine engine capable of processing multiple simultaneous tracks and executing engagement sequences at machine speed.

The Cooperative Engagement Capability (CEC), introduced in the 1990s and progressively refined, extended AEGIS logic across multiple platforms. CEC creates a composite track by fusing the sensor data of every CEC-equipped ship and aircraft in the formation into a single shared picture, updated continuously, with sub-second latency. A missile detected and tracked by a destroyer on the formation's outer screen is immediately available as a targeting-quality track to an AEGIS cruiser ten miles away, enabling the cruiser to engage a threat that its own sensors have not yet detected. This is offboard cueing at machine speed — the SG-to-Mk3 sensor cueing principle Lee demonstrated in 1942, extended to a multi-platform networked force.
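CEC's actual fusion algorithms are not public; a generic inverse-variance combination of two independent estimates illustrates why a composite track can be better than either contributor's:

```python
# Generic illustration of multi-sensor track fusion (NOT CEC's actual algorithm):
# combine two independent estimates of the same quantity by inverse-variance weighting.

def fuse(x1: float, var1: float, x2: float, var2: float):
    """Minimum-variance combination of two independent, unbiased estimates."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return x, var

# Notional example: outer-screen destroyer with good geometry (50 m sigma)
# and a cruiser with poor geometry (400 m sigma) estimating the same range.
x, var = fuse(10_000.0, 50.0 ** 2, 10_300.0, 400.0 ** 2)
# The fused estimate leans toward the better sensor, and its variance is
# lower than either contributor's alone.
assert var < 50.0 ** 2
```

The operational point is the last assertion: the formation's shared track is sharper than the best single platform's, which is what makes engage-on-remote credible.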

Naval Integrated Fire Control–Counter Air (NIFC-CA) extended the concept further, incorporating the E-2D Advanced Hawkeye airborne radar as an elevated sensor node capable of detecting sea-skimming threats below the radar horizon of surface ships, and passing engagement-quality tracks to AEGIS ships for over-the-horizon intercept. A surface combatant can now engage a threat it has never detected with its own sensors, guided entirely by offboard track data — the logical endpoint of the sensor-cueing architecture Lee employed at Guadalcanal.

“JADC2 is not a program. It is an architectural aspiration: the extension of Lee's 1942 parallel broadcast principle to every sensor and shooter across all domains and all Services, at speeds the threat demands.”

V. JADC2: THE RIGHT ANSWER TO THE WRONG INSTITUTIONAL QUESTION

Joint All-Domain Command and Control is the current programmatic expression of the sensor-to-shooter latency imperative. Its stated goal — any sensor, any shooter, any domain — is architecturally correct. The concept recognizes that the threat environment the United States will face in a near-peer conflict does not respect Service boundaries: Chinese and Russian integrated air defense systems, hypersonic weapons, and anti-access/area-denial architectures are joint problems that require joint solutions at speeds human serial chains cannot provide.

The implementation difficulties, however, are not primarily technical. They are institutional — and they are, once again, recognizably analogous to the institutional problems that nearly strangled radar development in the 1930s and nearly prevented its effective employment in 1942.

The Proprietary Format Problem

Each military Service has developed its C2 systems on different data architectures, different communications protocols, and different security frameworks over decades of parallel development. Army TITAN, Air Force ABMS, Navy NIFC-CA, and Marine Corps MCOP are each technically capable systems. They do not speak to each other natively. Bridging them requires translation layers — gateways, format converters, protocol adapters — and each translation layer is a latency insertion point and a potential failure node. The problem is structurally identical to the incompatible radar data formats and communications equipment that prevented effective inter-ship data sharing in the early Guadalcanal engagements.

The technical solution — a common data fabric, standardized application programming interfaces, open architecture standards mandated across all Service programs of record — has been identified and is being pursued under the Combined Joint All-Domain Command and Control (CJADC2) framework. The implementation timeline, however, is measured in years and is subject to the same proprietary vendor interests, Service budget priorities, and acquisition bureaucracy that slowed radar standardization in the early 1940s. The difference is that in the 1940s, the timeline for the threat was set by a Japanese naval schedule. Today it is set by Chinese and Russian modernization programs whose pace is not subject to American budget cycles.
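In engineering terms, a common data fabric reduces to something mundane: one agreed message shape that every producer emits natively, so no gateway conversion sits in the path. A purely notional sketch (every field name here is invented for illustration and drawn from no program of record):

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Track:
    # Notional common track message; all field names are hypothetical.
    track_id: str
    source: str          # producing sensor/system, e.g. "shipboard-radar"
    t_epoch: float       # time of measurement, seconds
    lat_deg: float
    lon_deg: float
    alt_m: float
    speed_mps: float
    heading_deg: float
    quality: str         # e.g. "targeting" vs. "surveillance"

def to_wire(track: Track) -> str:
    """Serialize once, consume everywhere: no per-consumer format converter."""
    return json.dumps(asdict(track))

t = Track("T-0001", "shipboard-radar", 0.0, 13.5, 158.2, 20000.0, 4500.0, 270.0, "targeting")
msg = to_wire(t)
assert json.loads(msg)["quality"] == "targeting"
```

The hard part is not the schema; it is the acquisition-policy mandate that every Service program of record emit it natively rather than through a latency-inserting gateway.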

The Classification Barrier Problem

Perhaps the most technically vexing obstacle to JADC2 implementation is the incompatibility between classification levels. Sensor data from national technical means — satellites, signals intelligence, overhead reconnaissance — is typically classified at levels that prevent its automatic distribution to tactical networks. The result is that the most capable sensors in the joint force are systematically isolated from the kill chains that need their data most. A hypersonic vehicle tracked by space-based sensors cannot automatically cue a ship's AEGIS system if the sensor data lives on a network the ship cannot access at machine speed.

Cross-domain solutions — systems that automatically sanitize and downgrade sensor data for distribution to lower-classification networks — exist and are being developed. They introduce their own latency. More fundamentally, the security architecture that prevents automatic cross-domain data flow was built for a world in which the threat to classified information was primarily human exploitation — a spy reading a document. In a machine-speed kill chain, the relevant threat is not a spy reading a document; it is an adversary who has penetrated the data fabric and can inject false tracks or deny access at the moment of peak operational need. The security architecture appropriate for that threat environment is different from the one appropriate for Cold War counterintelligence, and building it without sacrificing the speed the kill chain requires is an unsolved engineering problem.

The Human Authority Problem

The deepest tension in JADC2 is not technical. It is doctrinal and legal. International humanitarian law requires that lethal force decisions be made by a human being who can exercise meaningful judgment about the target. The Law of Armed Conflict's principles of distinction, proportionality, and precaution are not satisfied by a pre-programmed engagement doctrine alone — they require human accountability for each lethal act. This is not a bureaucratic requirement; it is a foundational principle of the laws of war that the United States has consistently upheld and that the Navy's Judge Advocate General corps correctly identifies as a hard constraint on autonomous engagement.

The tension is that meaningful human judgment exercised in real time is incompatible with the engagement timelines that hypersonic threats impose. A Mach 15 glide vehicle does not allow 30 seconds for a legal review. The resolution — the same resolution Lee reached in 1942 — is to move human decision authority earlier in the kill chain, into the doctrine-setting process, rather than later, into the real-time engagement loop. The commanding officer authorizes a category of engagements in advance; the system executes within that authorization autonomously when the threat matches the criteria. Human authority is exercised, but prospectively rather than reactively.
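This prospective-authority pattern reduces to data plus a matcher: the commander signs the criteria in advance, and at run time the system only checks membership. A conceptual sketch, with every category and threshold invented for illustration:

```python
# Conceptual sketch of doctrine-based pre-authorization: human authority is
# exercised when the doctrine is written and signed, not per engagement.
# All categories and thresholds below are invented for illustration.

DOCTRINE = [
    # (threat category, minimum speed, maximum time-to-impact) pre-authorized by the CO
    {"category": "ASCM", "min_speed_mps": 200.0,  "max_tti_s": 120.0},
    {"category": "HGV",  "min_speed_mps": 2500.0, "max_tti_s": 60.0},
]

def engagement_authorized(category: str, speed_mps: float, tti_s: float) -> bool:
    """True iff the track matches a pre-authorized rule; anything else escalates to a human."""
    return any(
        rule["category"] == category
        and speed_mps >= rule["min_speed_mps"]
        and tti_s <= rule["max_tti_s"]
        for rule in DOCTRINE
    )

assert engagement_authorized("HGV", 4500.0, 20.0)        # inside the envelope: engage
assert not engagement_authorized("HGV", 4500.0, 300.0)   # outside: defer to a human decision
```

The accountability lives in the DOCTRINE table, which a human wrote, reviewed, and can be held to; the run-time check consumes microseconds, not the seconds a real-time authorization loop would.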

This architecture is legally and operationally defensible. It is also not yet fully institutionalized. The rules of engagement frameworks, training requirements, legal review processes, and accountability mechanisms appropriate for a doctrine-based autonomous engagement capability operating at hypersonic threat timelines have not been comprehensively developed. They are being developed, in scattered programs and policy offices across the joint force. The pace of that development is not synchronized with the pace of the threat.

 

THE LATENCY CRISIS: WHERE THE CHAIN BREAKS TODAY

CLASSIFICATION BARRIERS: Space-based sensor tracks cannot auto-cue tactical AEGIS networks across security domain boundaries at machine speed. Each cross-domain solution introduces 2–15 seconds of processing latency — potentially decisive against hypersonic threats.

FORMAT INCOMPATIBILITY: Army, Navy, Air Force, and Marine C2 systems use different data schemas. Gateway translation between ABMS and NIFC-CA, for example, introduces queuing latency and constitutes a single-point-of-failure node in the joint kill chain.

ROE LATENCY: Rules of engagement requiring real-time human authorization for each engagement are structurally incompatible with sub-20-second threat timelines. Doctrine-based pre-authorization frameworks exist but are not uniformly implemented across the joint force.

TRAINING GAP: Fleet exercises have not consistently stress-tested JADC2-concept kill chains against realistic hypersonic and swarm threat timelines. Operators and commanders lack the empirical experience needed to optimize doctrine-based engagement rules before a conflict reveals their inadequacy.

RESILIENCE DEFICIT: Current JADC2 architecture retains single-point dependencies on satellite communications and GPS timing. Adversary ASAT and GPS jamming capabilities can degrade or sever the data fabric at the moment of peak operational demand — the contemporary equivalent of South Dakota's electrical casualty.

 

VI. THE MACHINE'S ROLE: AUTONOMY, SPEED, AND THE DECISION BOUNDARY

The trajectory from Lee's 1942 parallel voice circuit to JADC2's machine-speed data fabric describes a consistent directional movement: progressive transfer of data processing, track correlation, and engagement sequencing from humans to machines, with human authority retained at the doctrine-setting level rather than the execution level. This movement has been driven not by ideology but by physics — by the inexorable compression of threat timelines that has progressively foreclosed the option of human participation in each engagement loop.

The contemporary debate about artificial intelligence in the kill chain is, in this context, a continuation of a trajectory rather than a departure from one. The question is not whether machines will process sensor data faster than humans — they already do, in every operational C2 system the Navy fields. The question is where on the kill chain the machine's authority ends and the human's begins, and how that boundary is defined, trained, and maintained as the threat environment evolves.

The most productive current framework — reflected in DoD Directive 3000.09 on autonomous weapons systems and in the ongoing development of responsible AI frameworks — is that machines handle speed-of-light data processing, track correlation, and execution of pre-authorized doctrine, while humans retain authority over the doctrine itself: the definition of valid targets, the rules of engagement, the authorization of specific engagement envelopes. This is Lee's 1942 architecture expressed in contemporary terms. The admiral sets the doctrine. The system executes it. The admiral remains accountable.

Where this framework becomes operationally stressed is in scenarios where the pre-authorized doctrine encounters conditions it was not designed to handle — where the target identification is ambiguous, where the threat trajectory is anomalous, where the tactical situation has evolved in ways that the pre-battle ROE did not anticipate. In those scenarios, the machine's execution of its doctrine may produce outcomes that a human decision-maker, with full situational awareness, would not have authorized. This is not a hypothetical concern; it is the operational risk that every doctrine-based autonomous engagement system carries, from AEGIS to any future JADC2-enabled autonomous interceptor.

The mitigation is not to slow the machine — the timeline does not permit it. The mitigation is to invest heavily in the quality and specificity of the doctrine, in the training that builds commander judgment about when the scenario has departed from the doctrine's valid envelope, and in the after-action review processes that continuously update the doctrine based on operational experience. These are human activities that determine how well the machine performs. They are also, predictably, the activities most vulnerable to underfunding in a defense budget constrained by platform procurement and readiness costs.

VII. WHAT THE NAVY MUST DO — AND WHAT INDIVIDUALS MUST DRIVE

The structural argument of this article converges on a set of engineering and organizational requirements that are neither classified nor controversial — they are simply under-resourced and under-prioritized relative to the threat timeline they must address.

Collapse the Classification Barrier

The most urgent technical requirement is a scalable, low-latency cross-domain solution that allows space-based and signals intelligence sensor data to automatically cue tactical engagement networks without human-mediated downgrading. This is an engineering problem with a known solution architecture — the difficulty is funding, security accreditation timelines, and the institutional resistance of intelligence community equities that are accustomed to controlling data distribution. It requires a Vannevar Bush-equivalent champion who can subordinate those equities to the operational requirement.

Mandate Common Data Standards

The CJADC2 data fabric will not exist at operationally useful latency until all Service C2 programs of record are required, by contract and by acquisition policy, to implement common application programming interfaces and data schemas. This is an acquisition policy decision, not an engineering decision. It requires sustained political will at the USD(A&S) level to override Service-specific procurement preferences. The history of interoperability mandates in defense acquisition is not encouraging, but the alternative — a joint force whose C2 systems cannot exchange targeting data at machine speed — is a force that will lose the first engagement of a near-peer conflict.

Build the Doctrine Before the Crisis

The rules of engagement and pre-authorization frameworks for doctrine-based autonomous engagement in a hypersonic threat environment must be developed, exercised, legally reviewed, and operationally tested before a conflict begins. This is the contemporary equivalent of Lee redesigning Washington's fire-control communications before the battle rather than during it. The institutional tendency to defer doctrinal development until a platform is fielded and a threat is imminent is precisely the tendency that produced Callaghan's fatal confusion on the first night of the Naval Battle of Guadalcanal. It will produce a contemporary equivalent at a moment and place not yet known.

Invest in Resilience, Not Just Performance

South Dakota's lesson is that a network-dependent force must be survivable when the network degrades. The JADC2 architecture must include graceful degradation modes — procedures and pre-delegated authorities that allow individual ships, aircraft, and ground units to execute effective combat operations when satellite communications are jammed, GPS timing is spoofed, and the data fabric is disrupted. Distributed Maritime Operations and Expeditionary Advanced Base Operations represent doctrinal frameworks for this problem. Their implementation in training and exercises against realistic electronic warfare environments remains insufficient.

Find and Protect the Technical Champions

Every structural requirement identified above will be executed or not executed by individual officers and civilian engineers who are technically fluent in both the operational problem and the engineering solution, and who are willing to absorb the institutional friction of being ahead of the consensus. The Navy's talent management system was not designed to identify, develop, and protect this population. It was designed to produce competent generalist warfare officers and functional specialist civilians on tracks optimized for platform-centric operational experience. The officer who has spent three years understanding CEC architecture, JADC2 data fabric design, and hypersonic engagement timelines is not easily accommodated in a promotion system that values time-at-sea and command tours above specialist technical depth.

This is the same problem the Navy had with Willis Lee, who was an anomaly — a line officer with deep technical knowledge who happened to be in the right place at the right time with enough seniority to act on what he knew. The contemporary force cannot afford to rely on anomalies. It needs a systematic approach to developing and employing the Lee-equivalents for unmanned systems, AI-enabled C2, and directed energy — not as a separate technical track divorced from operational authority, but as a recognized pathway to command for officers who combine technical depth with operational judgment.

VIII. CONCLUSION: THE UNCHANGING PRINCIPLE

From the moment Robert Page tracked an aircraft over the Potomac River at one mile in December 1934 to the moment Washington's fire-control computers solved a targeting solution on Kirishima at 8,400 yards in November 1942, eight years elapsed. From the cavity magnetron to the SG radar's first operational installation was eighteen months. From the SG's first installation to its decisive employment at Guadalcanal was seven months. The technology moved at the speed of individual initiative. The institution moved at the speed of institutional consensus. The gap between them was closed, barely, in time for the battle that mattered.

The contemporary gap between JADC2's architectural aspiration and its operational reality is measured in years, not months. The threats that will stress-test it — Chinese hypersonic weapons, Russian integrated electronic warfare, autonomous swarm tactics — are maturing on their own schedule. The compression of sensor-to-shooter timelines to the point of machine-speed necessity is not a future scenario. It is a present engineering requirement that the institution is addressing at an institutional pace.

The physics will not wait. A Mach 15 vehicle does not slow down because the cross-domain solution is still in accreditation review. A UAS swarm does not pause because the Service C2 formats are still incompatible. The engagement window that a hypersonic threat provides is fixed by thermodynamics and geometry, not by program schedules.

Willis Lee understood this logic intuitively in 1942, before the formal vocabulary of sensor-to-shooter latency had been invented. He understood that the speed at which information moved from detection to decision to effect was the decisive variable in night combat, and he organized everything he could control — the communications wiring, the training drills, the pre-battle briefings, the ROE — to minimize that latency before the battle began. The principle he applied is identical to the principle JADC2 must embody. The institutional challenge of applying it is identical to the challenge NRL, the Rad Lab, and the Bureau of Ships faced in 1940–1942.

The difference is that the margin for error has compressed along with the threat timeline. In 1942, the Navy had several engagements at Guadalcanal to learn the radar lesson before Lee demonstrated it definitively. In a near-peer conflict opening with hypersonic strikes and saturation UAS attacks, there may not be a second engagement in which to apply what the first engagement taught. The doctrine, the architecture, the training, and the individual champions who understand all three must be in place before the shooting starts. The time available to put them there is finite, and it is passing.

 

Stephen L. Pendergast is a Senior Engineer Scientist with more than 20 years of experience in radar systems engineering, signal processing, and aerospace defense applications. He holds an MS in Electrical Engineering from MIT and a BS from the University of Maryland. He is a Senior Life Member of IEEE and has taught technical courses at UCSD Extension. This article is the third in a series on radar, command architecture, and the adoption of transformative technology in naval warfare.

 

Sources and Formal Citations

Chicago author-date format. URLs verified February 2026.

1.  Hornfischer, James D. Neptune's Inferno: The U.S. Navy at Guadalcanal. New York: Bantam, 2011.

2.  Roskill, Captain S.W., RN. "Shipborne Radar." Proceedings 93, no. 9 (September 1967). https://www.usni.org/magazines/proceedings/1967/september/shipborne-radar

3.  Friedman, Norman. Naval Radar. Annapolis: Naval Institute Press, 1981.

4.  Friedman, Norman. Network-Centric Warfare: How Navies Learned to Fight Smarter Through Three World Wars. Annapolis: Naval Institute Press, 2009.

5.  Friedman, Norman. The Naval Institute Guide to World Naval Weapon Systems. Annapolis: Naval Institute Press, 1991/92.

6.  Pacific War Online Encyclopedia. "SG Surface Search Radar." http://pwencycl.kgbudge.com/S/g/SG_surface_search_radar.htm

7.  Pacific War Online Encyclopedia. "Radar." http://pwencycl.kgbudge.com/R/a/Radar.htm

8.  NavWeaps.com. "Radar Equipment of the United States of America." http://www.navweaps.com/Weapons/WNUS_Radar_WWII.php

9.  Naval History Forum / kbismarck.org. "USS Washington Radars." https://kbismarck.org/forum/viewtopic.php?t=2237

10.  Wikipedia. "Cooperative Engagement Capability." https://en.wikipedia.org/wiki/Cooperative_Engagement_Capability

11.  Wikipedia. "Naval Integrated Fire Control–Counter Air." https://en.wikipedia.org/wiki/Naval_Integrated_Fire_Control%E2%80%93Counter_Air

12.  Wikipedia. "AN/SPY-1." https://en.wikipedia.org/wiki/AN/SPY-1

13.  Wikipedia. "Joint All-Domain Command and Control." https://en.wikipedia.org/wiki/Joint_all-domain_command_and_control

14.  U.S. Department of Defense. DoD Directive 3000.09: Autonomous Weapons Systems. Washington, DC: DoD, 2023. https://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodd/300009p.pdf

15.  U.S. Department of Defense. Summary of the 2022 National Defense Strategy. Washington, DC: DoD, 2022. https://media.defense.gov/2022/Oct/27/2003103845/-1/-1/1/2022-NATIONAL-DEFENSE-STRATEGY-NPR-MDR.PDF

16.  Defense Innovation Unit. Autonomous Systems Roadmap FY2023–2028. Washington, DC: DIU, 2023. https://www.diu.mil

17.  Congressional Research Service. Navy Laser, Railgun, and Hypervelocity Projectile: Background and Issues for Congress. Washington, DC: CRS, 2023. https://crsreports.congress.gov

18.  Congressional Research Service. Hypersonic Weapons: Background and Issues for Congress. Washington, DC: CRS, 2024. https://crsreports.congress.gov

19.  Scharre, Paul. Army of None: Autonomous Weapons and the Future of War. New York: Norton, 2018.

20.  Allison, David K. New Eye for the Navy: Origin of Radar at the Naval Research Laboratory. NRL Report 8466. Washington, DC: Naval Research Laboratory / GPO, 1981. https://apps.dtic.mil/sti/tr/pdf/ADA110586.pdf

21.  HyperWar: US Navy. "Capabilities and Limitations of Shipborne Radar." COMINCH P-08. https://www.ibiblio.org/hyperwar/USN/ref/RADONEA/COMINCH-P-08-03.html

22.  Rosen, Stephen Peter. Winning the Next War: Innovation and the Modern Military. Ithaca: Cornell University Press, 1991.

23.  Krepinevich, Andrew F. The Military-Technical Revolution: A Preliminary Assessment. Washington, DC: CSBA, 2002.

24.  Pendergast, Stephen L. "The Radar Edge: Technology, Leadership, and the Night Battle off Guadalcanal." Proceedings, February 2026. [companion article, this issue]

25.  Pendergast, Stephen L. "The Adoption Problem: Why the Next War Will Also Be Won by Individuals, Not Institutions." Proceedings, February 2026. [companion sidebar, this issue]
