As technological change accelerates, defense organizations worldwide stand on the brink of a transformation as profound as the introduction of the aircraft carrier or the nuclear submarine. At the heart of this transformation is Artificial Intelligence (AI). But unlike past innovations that changed what we built, AI is changing how we think, decide, and operate. This blog post synthesizes insights from historical military innovation, current defense strategies, and the thought leadership of Ethan Mollick (Co-Intelligence) and Arvind Narayanan (AI Snake Oil), along with aligned themes from Pursuit of Thought, to propose a future-proof framework for AI integration.
From Steel to Silicon: A History of Transformation
Military history is defined by disruptive shifts. The dreadnought revolution redefined firepower. Carrier warfare extended the reach of the fleet. Nuclear propulsion enabled submerged endurance. Today, AI presents a cognitive revolution.
But like all prior transformations, success depends not just on adopting technology, but on changing culture, doctrine, and organizational DNA. AI is not merely a tool to bolt onto existing systems; it is a partner in co-intelligence.
This idea parallels the themes in “The Infinite Game”, where long-term thinking trumps short-term victories. Integrating AI demands a similar commitment to sustained learning, adaptation, and continuous innovation.
The Four AI Battlegrounds
Paul Scharre, in Four Battlegrounds, outlines the four critical pillars of national AI power: Data, Compute, Talent, and Institutions. This framework is not only applicable at the national level but is essential to defense organizations seeking to integrate AI.
- Data: AI’s effectiveness is contingent on access to high-quality, well-governed data. Aligning with VAULTIS principles—Visible, Accessible, Understandable, Linked, Trustworthy, Interoperable, and Secure—is essential to becoming AI-ready.
- Compute: As nuclear propulsion once did, compute enables new operational concepts. Edge computing on platforms, integrated tactical AI systems, and hybrid cloud infrastructures must become as routine as radar or satellite comms.
- Talent: Just as carrier aviation demanded new skill sets, AI demands new cognitive operators. Defense organizations should cultivate AI literacy across all roles, integrating human-machine teaming exercises throughout training and operations. This echoes the insights in “AI and the Human Element”, which emphasizes preparing leaders for AI-augmented environments.
- Institutions: No strategy survives contact with bureaucracy. As highlighted in existing AI strategies and Scharre’s analysis, acquisition reform, cultural agility, and clear governance are prerequisites for successful AI adoption. Proactive reform is well-articulated in “Upstream and Reset”, which champions fixing systems before they fail.
Co-Intelligence, Not Artificial Intelligence
Ethan Mollick argues that AI is not a replacement for human decision-makers but an amplifier of their judgment—a tool for co-intelligence. This means designing workflows in which human intuition is augmented by machine capability. Tactical planning, intelligence analysis, and logistics can all benefit from this pairing.
Importantly, Mollick emphasizes experimenting in public and building trust through transparency. Defense institutions should adopt this ethos, moving away from closed-door development and embracing structured piloting, iterative testing, and cross-functional collaboration with technologists, operators, and policymakers.
This approach mirrors the mindset behind “Build the Life You Want”, where experimentation and intentional design of systems drive meaningful change.
Beware the Snake Oil
However, not all AI solutions are created equal. Arvind Narayanan warns against “AI snake oil”—systems that appear intelligent but offer little real value or are fundamentally unreliable. Facial recognition, predictive policing, and opaque black-box models pose particular risks.
This caution translates into a commitment to responsible AI. Every system must be validated, interpretable, and aligned with operational requirements. AI must not become a crutch that erodes human judgment or ethical norms. Optimization must never come at the expense of security and accountability.
Further discussion on these concerns can be found in “Autonomous Weapons”, which explores both promise and peril.
Historical Rhymes and Future Imperatives
Each major military innovation was initially met with skepticism, friction, and institutional resistance. The aircraft carrier was once derided as a weak substitute for battleships. Submarines were seen as dishonorable. AI will face similar cultural hurdles.
Defense organizations must treat AI not as a project, but as a strategic posture. That means:
- Investing in data infrastructure as a warfighting asset
- Empowering personnel to become AI-augmented operators
- Reforming acquisition to support agile development and fielding
- Embedding ethics and reliability into every AI application
- Scaling only what is proven, transparent, and tactically relevant
The Path Forward
AI will not win wars on its own, but it can help us make better decisions faster and with greater clarity. It can help prevent wars through better deterrence and smarter diplomacy. It can reshape logistics, strategy, and readiness.
But only if we treat it as more than a technology.
It must become a trusted partner in the fight.
Defense organizations have the opportunity to lead—not just in hardware, but in cognitive warfare. The battlespace is vast, the threats are dynamic, and the information domain is contested. By embracing co-intelligence, avoiding snake oil, and learning from the past—and by applying the systemic thinking found in Pursuit of Thought—we can chart a course toward true strategic dominance in the age of AI.