AI in War: How Algorithms are Changing Command, Control, and Chaos

May 5, 2025


In the fog of war, having clear and accurate information has always been incredibly valuable. For centuries, military leaders have had to make tough choices without knowing all the facts, often relying on guesswork and unclear reports. But now, things are changing in a big way—artificial intelligence is starting to transform how wars are fought. It’s not just about machines or self-driving weapons; it’s about how AI is changing the way decisions are made, how orders are given, how information is shared, and how intelligence is gathered—basically, the brain and nerves of today’s military operations.

The Evolution of Military Decision-Making

The way military leaders make decisions has evolved through distinct stages: from the commander's trained intuition (the Napoleonic coup d'œil), to the rigid, rule-based staff systems of the industrial age, to today's high-tech command centres awash in data. At every stage, there has been a push and pull between trusting human instinct and relying on technology.

But AI is bringing a whole new kind of change. Unlike earlier tools that just helped humans do their jobs better, AI is starting to act more like a thinking partner—and sometimes even a competitor—when it comes to making decisions. This raises tough questions about who should really be in charge, especially in life-and-death situations.

Many military officers are worried AI might weaken the role of human judgment in key decisions. At the same time, they also admit that today’s fast and complex battles are too much for the human brain to handle alone. This creates a real dilemma for how to bring AI into military leadership without losing what makes human decision-making valuable.

The OODA Loop in the Age of Algorithms

Colonel John Boyd’s OODA loop—Observe, Orient, Decide, Act—has long been the go-to way to understand how military decisions are made. But now, artificial intelligence is changing how each part of that process works:

Observe: Modern militaries gather massive amounts of data during operations—sometimes dozens of terabytes in a single day. That’s way more than any human team could handle. AI helps by quickly sorting through all this information and turning it into useful insights that people can act on.

Orient: AI, especially machine learning, is great at spotting patterns in data that humans might miss. In some military tests, AI has already made a big difference by cutting down the workload for tasks like image analysis, so human analysts can focus on bigger-picture thinking.

Decide: In most cases, humans still have the final say in life-or-death decisions. But AI plays a big role in shaping the choices that commanders see. Studies show that AI systems often narrow down the list of options before humans even make a call, which raises concerns about hidden biases or decisions being steered by the algorithm.

Act: The gap between deciding and acting keeps shrinking. In areas like missile defence and electronic warfare, engagements now unfold faster than humans can react, so these systems rely on automated responses that trigger in milliseconds or even microseconds.
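To make the compressed loop concrete, here is a deliberately simplified sketch of how an automated observe-orient-decide-act cycle might be wired together in software. Every name and number in it (the thresholds, the sensor labels, the Track class) is invented for illustration rather than drawn from any fielded system, but it shows the design choice that matters: high-confidence threats get an automated response, while ambiguous ones are routed to a human.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A hypothetical sensor contact with a fused threat score."""
    sensor: str
    threat_score: float  # 0.0 (benign) to 1.0 (hostile)

ENGAGE_THRESHOLD = 0.9   # illustrative cutoff, not a real doctrinal value
HUMAN_REVIEW_BAND = 0.6  # ambiguous tracks are routed to an operator

def observe(raw_feeds):
    """Observe: turn raw sensor feeds into candidate tracks."""
    return [Track(sensor=s, threat_score=score) for s, score in raw_feeds]

def orient(tracks):
    """Orient: rank tracks so scarce attention goes to the worst first."""
    return sorted(tracks, key=lambda t: t.threat_score, reverse=True)

def decide(track):
    """Decide: auto-respond only above a high-confidence threshold;
    everything ambiguous is escalated to a human."""
    if track.threat_score >= ENGAGE_THRESHOLD:
        return "auto_countermeasure"
    if track.threat_score >= HUMAN_REVIEW_BAND:
        return "escalate_to_human"
    return "monitor"

def act(track, decision):
    """Act: in a real point-defence system this branch runs in
    fractions of a second; here it just reports the outcome."""
    print(f"{track.sensor}: score={track.threat_score:.2f} -> {decision}")

if __name__ == "__main__":
    feeds = [("radar", 0.95), ("ir_camera", 0.72), ("acoustic", 0.20)]
    for track in orient(observe(feeds)):
        act(track, decide(track))
```

The two thresholds are where the human-machine debate lives: raise the automation bar and the loop slows down; lower it and the machine acts alone on more calls.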


Speeding up the OODA loop gives armies a clear edge in battle, but it also brings risks. When both sides are using AI, things can move so fast that misunderstandings or unintended attacks could happen before anyone has time to react or stop them.

Large Language Models in Intelligence, Surveillance, and Reconnaissance (ISR)

Bringing Large Language Models (LLMs) into military intelligence is one of the biggest shifts since reconnaissance satellites first entered service. Unlike older AI tools built for one narrow job, LLMs are flexible and can handle a wide range of intelligence tasks.

Bringing Together Different Types of Intelligence: In the past, different kinds of intelligence—like human sources (HUMINT), intercepted communications (SIGINT), satellite images (IMINT), and open-source information from the public internet (OSINT)—were often kept separate. That made it hard to get the full picture. LLMs are good at pulling all this information together, breaking down these silos so analysts can connect the dots across different sources. Some intelligence agencies are already building systems where LLMs help analysts search across all this data quickly, making it easier to answer complex questions and make faster decisions.
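As a toy illustration of that cross-silo idea: if reports from every INT stream are projected into a common representation, a single analyst query can rank them all together. The bag-of-words scoring below is a crude stand-in for the vector embeddings a real LLM-backed system would use, and the reports themselves are invented.

```python
from collections import Counter
import math

# Invented reports from notionally separate INT silos.
REPORTS = [
    ("HUMINT", "source reports convoy of trucks moving toward the border crossing"),
    ("SIGINT", "intercepted chatter mentions fuel resupply for vehicles near border"),
    ("IMINT",  "satellite imagery shows vehicle column staged at northern depot"),
    ("OSINT",  "local social media posts describe unusual truck traffic overnight"),
]

def vectorize(text):
    """Crude bag-of-words stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def search(query, reports):
    """Rank reports from every silo against one analyst query."""
    q = vectorize(query)
    scored = [(cosine(q, vectorize(text)), silo, text) for silo, text in reports]
    return sorted(scored, reverse=True)

for score, silo, text in search("truck convoy near border", REPORTS):
    print(f"{score:.2f} [{silo}] {text}")
```

The point is not the scoring maths but the architecture: one query surface over sources that used to live in separate systems.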

Understanding Language and Culture: Modern wars often happen in regions with many languages and dialects. Military AI tools now exist that can translate these in near real-time—including low-resource languages often spoken in conflict zones. But they're not just translating word-for-word: these systems are getting better at picking up cultural meaning and subtle context, which makes communication and intelligence gathering more accurate and effective.

Spotting Lies and Fake Information: In today’s information wars, one of the most valuable uses of LLMs is catching fake news, propaganda, or deepfakes. Some military research shows that LLMs can recognise when information is likely fake or manipulated and even suggest ways to respond. But this is a double-edged sword. The same tools that can detect deception can also be used to create more convincing lies. This sets the stage for an “AI arms race” in which both sides are trying to outsmart each other using increasingly advanced language models—potentially affecting both military decisions and public opinion.
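In engineering terms, the detection task is usually framed as structured classification: the model receives a claim and returns a verdict plus a rationale. A minimal sketch of that scaffolding follows; `call_llm` is a hypothetical placeholder that returns a canned answer so the snippet runs standalone, where a real deployment would call an actual model endpoint.

```python
import json

PROMPT_TEMPLATE = """You are an information-integrity analyst.
Assess the claim below and answer in JSON with fields
"verdict" (one of: likely_authentic, likely_manipulated, uncertain)
and "rationale" (one sentence).

Claim: {claim}
"""

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real model endpoint; returns a
    canned response so this sketch is self-contained."""
    return json.dumps({
        "verdict": "uncertain",
        "rationale": "No corroborating sources are available in this toy example.",
    })

def assess_claim(claim: str) -> dict:
    """Frame deception detection as structured classification."""
    raw = call_llm(PROMPT_TEMPLATE.format(claim=claim))
    return json.loads(raw)

result = assess_claim("Video shows enemy forces withdrawing from the city.")
print(result["verdict"], "-", result["rationale"])
```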

Automated Battlefield Systems: Beyond the Killer Robot Debate

When people talk about military AI, they often focus on the idea of “killer robots” or fully autonomous weapons. While that’s a real concern, it overlooks a much bigger and already active world of battlefield automation that’s quietly reshaping modern warfare.

Drone Swarms and Group Tactics: Unlike single drones that are remotely controlled, drone swarms work as a team. Each drone follows simple rules, but together they make complex group decisions without relying on a single control point. Advanced militaries have developed swarms that can carry out coordinated missions—like surrounding targets or adapting to battlefield changes—without much human help. What’s groundbreaking here is how these drones learn and react as a group, behaving more like a living organism than a machine.
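The "simple rules, complex group behaviour" idea is essentially the classic flocking model from artificial-life research: each agent steers using only local rules (cohesion, separation, alignment) applied to nearby neighbours, with no central controller. Here is a bare-bones 2-D sketch with made-up constants; it illustrates the principle, not any fielded swarm logic.

```python
import random

NEIGHBOR_RADIUS = 5.0     # how far a drone can "see" its neighbours
SEPARATION_RADIUS = 1.0   # minimum comfortable distance

class Drone:
    def __init__(self):
        self.x, self.y = random.uniform(0, 20), random.uniform(0, 20)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

    def step(self, swarm):
        """Steer using only local neighbours: cohesion, alignment, separation."""
        neighbors = [d for d in swarm if d is not self and
                     (d.x - self.x) ** 2 + (d.y - self.y) ** 2 < NEIGHBOR_RADIUS ** 2]
        if neighbors:
            n = len(neighbors)
            # Cohesion: drift toward the local centre of mass.
            self.vx += 0.01 * (sum(d.x for d in neighbors) / n - self.x)
            self.vy += 0.01 * (sum(d.y for d in neighbors) / n - self.y)
            # Alignment: match the average neighbour heading.
            self.vx += 0.05 * (sum(d.vx for d in neighbors) / n - self.vx)
            self.vy += 0.05 * (sum(d.vy for d in neighbors) / n - self.vy)
            # Separation: push away from neighbours that are too close.
            for d in neighbors:
                if (d.x - self.x) ** 2 + (d.y - self.y) ** 2 < SEPARATION_RADIUS ** 2:
                    self.vx -= 0.05 * (d.x - self.x)
                    self.vy -= 0.05 * (d.y - self.y)
        self.x += self.vx
        self.y += self.vy

swarm = [Drone() for _ in range(10)]
for _ in range(100):              # no central controller in this loop:
    for drone in swarm:           # each drone acts only on what it sees locally
        drone.step(swarm)
print(f"final spread in x: {max(d.x for d in swarm) - min(d.x for d in swarm):.1f}")
```

Knock one agent out and the rest keep flocking, which is exactly the resilience property that makes swarms attractive militarily.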

Smarter Electronic Warfare: Controlling the airwaves—radio signals, radar, and so on—has become just as important as controlling land or air. AI-powered systems can now spot and react to enemy signals almost instantly, far faster than any human could. Instead of relying on a list of known threats, these systems learn on the fly and adapt to new kinds of electronic attacks, making them much more flexible in fast-changing combat situations.
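One way to picture the difference between a static threat library and a system that "learns on the fly" is an online classifier whose stored signal fingerprints drift with each new observation. The sketch below uses a toy nearest-centroid scheme with invented frequency and pulse-width values; real EW systems are far more sophisticated, but the update step captures the adaptation idea.

```python
# Toy contrast between a static threat library and an online learner that
# updates its signal "fingerprints" as new emissions arrive. All frequency
# (GHz) and pulse-width (µs) values are invented for illustration.

THREAT_LIBRARY = {
    "radar_A": (9.4, 1.0),
    "comms_B": (2.4, 20.0),
}

class OnlineClassifier:
    """Nearest-centroid classifier whose centroids drift with new data."""

    def __init__(self):
        self.centroids = dict(THREAT_LIBRARY)

    def classify(self, freq, pw):
        """Label a signal by its nearest stored fingerprint."""
        name, _ = min(self.centroids.items(),
                      key=lambda kv: (kv[1][0] - freq) ** 2 + (kv[1][1] - pw) ** 2)
        return name

    def update(self, name, freq, pw, lr=0.2):
        """Nudge the stored fingerprint toward the newly observed signal,
        so a slowly shifting emitter keeps being recognised."""
        cf, cpw = self.centroids[name]
        self.centroids[name] = (cf + lr * (freq - cf), cpw + lr * (pw - cpw))

clf = OnlineClassifier()
# An emitter slowly drifting away from its catalogued parameters:
for freq, pw in [(9.5, 1.1), (9.6, 1.2), (9.7, 1.3)]:
    name = clf.classify(freq, pw)
    clf.update(name, freq, pw)

print(clf.centroids["radar_A"])  # has tracked the drift a static library would miss
```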

Smarter Maintenance and Supply Chains: Some of the most powerful uses of military AI are behind the scenes. For example, AI is being used to predict when aircraft parts are likely to fail, so repairs can be done before something breaks—saving time, money, and lives. In logistics, machine learning is helping armies better manage supplies and avoid shortages or waste. These tools might not look as dramatic as armed robots, but they could be even more important for keeping forces ready and winning future wars.
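Mechanically, predictive maintenance boils down to mapping sensor readings to a failure probability and raising a work order above some threshold. Below is a minimal sketch built on a hand-set logistic model; the feature names, weights, and fleet readings are all invented for illustration, whereas a real system would learn its weights from maintenance records.

```python
import math

# Invented weights; a real system would fit these to maintenance history.
WEIGHTS = {"vibration_rms": 2.1, "oil_temp_delta": 1.4, "hours_since_overhaul": 0.003}
BIAS = -6.0
MAINTENANCE_THRESHOLD = 0.5

def failure_probability(features):
    """Logistic model: map sensor readings to a failure risk in [0, 1]."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Invented per-aircraft sensor summaries.
fleet = {
    "tail_101": {"vibration_rms": 0.8, "oil_temp_delta": 1.2, "hours_since_overhaul": 300},
    "tail_102": {"vibration_rms": 2.4, "oil_temp_delta": 2.0, "hours_since_overhaul": 900},
}

for tail, features in fleet.items():
    p = failure_probability(features)
    action = "schedule maintenance" if p >= MAINTENANCE_THRESHOLD else "keep flying"
    print(f"{tail}: failure risk {p:.2f} -> {action}")
```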

The Cognitive Battlefield: Human-Machine Teaming

As AI systems become smarter and more involved in military operations, the relationship between people and machines is getting more complicated. Military experts talk about “human-machine teaming,” but that idea brings up serious questions about who’s really in charge, who’s responsible when something goes wrong, and how much we can trust the technology.

Mental Strain and Over-Reliance on AI: Military studies show that AI can take a lot of mental pressure off operators—like those flying drones—by handling routine tasks so humans can focus on big-picture decisions. But there's a downside: the more operators depend on AI, the more their own skills can fade, a problem researchers call automation complacency and skill atrophy. When operators are suddenly asked to work without AI, their performance can drop. This raises concerns about keeping essential human abilities sharp in an increasingly automated force.

Clear Explanations Build Trust: For AI to be useful in real military decisions, it's not enough for it to be accurate—it also has to explain how it reached its conclusions. DARPA's Explainable AI (XAI) programme in the U.S. worked on building systems that can lay out their reasoning in ways humans can follow. Tests showed that military users were much more likely to trust these explainable systems than ones that just gave answers with no reasoning, even when both were equally accurate. In combat settings, clarity and transparency matter as much as raw technical performance.

Moral Risks and Emotional Detachment: One of the most worrying issues isn’t technical—it’s ethical. There’s a growing concern that people may be more willing to approve harmful actions, like using lethal force, when those decisions are backed by AI. This could lead to what’s called “moral injury,” where soldiers feel guilt or emotional damage from actions that go against their personal values. AI may act as a kind of buffer, making it easier for people to go along with decisions they wouldn’t make on their own. The real danger isn’t machines going rogue—it’s humans handing over their moral judgement to machines. That’s the core ethical challenge facing the future of AI in warfare.

Strategic Stability and the AI Security Dilemma

AI isn’t just changing how wars are fought on the battlefield—it’s also starting to reshape how powerful countries think about war and peace, especially when it comes to avoiding large-scale conflict or nuclear escalation.

Different Views, Dangerous Assumptions: Countries don’t all see AI the same way. Some believe it gives them a big edge in warfare, while others worry that it’s unreliable or could make conflicts spiral out of control. Research comparing how major powers view military AI shows big differences in how much they trust AI, how much human control they think is needed, and how likely they think AI could accidentally trigger a bigger conflict. These misunderstandings can be dangerous—if one country thinks it’s safe to rely on AI and another sees that as reckless, tensions can rise fast and lead to misjudgments.

AI and the Nuclear Question: One of the most serious concerns is how AI is being used around nuclear weapons—not to launch them directly, but to support the systems that warn of incoming attacks, assess threats, and manage communications. These are critical parts of nuclear defence. Some studies show that adding AI to early warning systems can speed up how fast leaders need to respond in a crisis. But faster decisions mean more room for mistakes, especially if the warning is false or misunderstood. This could increase the risk of a nuclear launch based on bad or rushed information.

Hard to Control, Harder to Monitor: In the past, arms control agreements worked because it was easy to count things like missiles and warheads. But AI is different—it's software, and the same code can serve many purposes, civilian and military alike. That makes it nearly impossible to track or limit AI in the same way. International talks, such as the UN discussions under the Convention on Certain Conventional Weapons on lethal autonomous weapons, have been deadlocked for years. The core problem is that AI can't be boxed in or easily inspected, so creating reliable rules for its military use remains a major challenge.

The Road Ahead: Governance in the Age of Military AI

As AI becomes a bigger part of military operations, figuring out how to manage and regulate its use is becoming more urgent. Several possible approaches are being tried, but each one comes with major difficulties:

Updating the Rules of War: One common idea is to apply existing international laws—like those that govern how wars should be fought—to AI systems. The International Committee of the Red Cross (ICRC), for example, holds that the principles of distinction (protecting civilians), proportionality, and precaution still apply even when decisions are shaped by algorithms. The problem is that it's hard to turn those human-centred rules into clear technical requirements for machines. Studies show that many military AI projects don't actually build these legal constraints into their designs, highlighting a serious disconnect between legal expectations and engineering practice.

Setting Technical Safety Standards: Some experts suggest we focus less on laws and more on setting technical safety rules. Organisations like the IEEE have created guidelines for building ethical AI systems, including those used in military contexts. The U.S. Department of Defense also adopted five ethical principles for AI in 2020 (responsible, equitable, traceable, reliable, and governable), meant to ensure its AI systems can be trusted. But while these frameworks exist, they're often too general, and how they're applied in real projects varies a lot from one programme to another.

International Talks and Trust-Building: Given the risks that AI might accidentally cause a war to escalate, some experts are pushing for countries to talk directly about these technologies. The first formal AI-related military dialogue between the U.S. and China happened in 2024, but it was mostly just an opening step. These kinds of talks are tough—partly because AI is complicated, but also because countries don’t want to share details about their systems. That secrecy makes it hard to build trust or avoid misunderstandings.

Military AI and the Global South: Asymmetric Challenges and Opportunities

While powerful militaries around the world are rushing to build advanced AI systems for war, countries that are still developing—like Bangladesh, Kenya, or Bolivia—face a different reality. These nations often don’t have the money, technology, or skilled experts needed to compete in this race. However, they still have to deal with the fact that AI is changing how global security works.

The Unequal AI Playing Field

The gap between rich and poor countries in military AI is growing fast. Most of the world’s military AI development is happening in a small number of countries like the US, China, Russia, Israel, and the UK. These countries have the resources, labs, and experts to keep advancing rapidly, while many others are being left behind.

For countries in the Global South, this leads to several challenges:

Not Enough Money: Many poorer countries spend very little on their military compared to global powers. For example, Bangladesh’s total military budget is only a tiny fraction of what the US or China spends just on new military technology.

Weak Tech Infrastructure: Building and using military AI requires strong digital systems like fast internet, secure cloud storage, and reliable communications. Many developing nations still lack these basics, which makes AI adoption harder.

Shortage of Skilled People: Developing AI for defence needs experts in both tech and military strategy. But many developing countries don’t have enough trained people in these fields—and often, their best talent moves abroad for better opportunities.

What Can Developing Countries Do?

Even with limited resources, developing nations have some smart options to prepare for the AI age:

Focus on Specific Needs: Instead of trying to match the big powers across the board, smaller countries can focus on areas most important to them. Vietnam, for example, has put its limited AI efforts into keeping watch over its disputed sea areas—and it has made real progress.

Invest in Defensive AI First: Using AI for defence—like protecting against cyberattacks—is often cheaper than developing offensive weapons. Malaysia has shown how even modest investments in AI-powered cybersecurity can make a big difference in national safety.

Team Up with Neighbours: One country alone might not have the resources to build powerful AI systems, but working together as a region can help. ASEAN, for instance, is exploring ways for Southeast Asian countries to collaborate on AI security. South Asian nations, including Bangladesh, could benefit from similar teamwork.

Partner with Tech-Savvy Countries: Strategic partnerships with more advanced countries can help close the gap. India’s AI deal with Japan is one example—focused on learning from each other without becoming overly dependent.

Use Civilian AI for Military Purposes: A lot of military AI technology is built on tools from the civilian world—like speech recognition or computer vision. By growing their local tech sectors, developing countries can eventually use those tools for defence too. Rwanda has done this by investing in its tech industry, which is now helping with defence upgrades.

Unexpected Advantages for Latecomers

Ironically, being late to adopt military AI might actually be an advantage in some ways. Countries that haven't yet rolled out large-scale systems can avoid the mistakes made by early adopters—like using AI without enough ethical safeguards. In fact, some countries like Costa Rica and Botswana already have clearer ethical rules for military AI than many rich countries do.

Also, not being tied down by old military systems lets developing nations build fresh, more flexible AI solutions. They can include protections for human rights right from the start, while advanced militaries often have to struggle to add those in later.

The Importance of Global Inclusion

Finally, one of the most important things for developing nations is to push for fair global rules around military AI. Right now, most international talks are dominated by big powers, while the voices of the Global South are often left out—even though they have the most to lose if things go wrong.

Truly fair global rules wouldn’t just focus on banning killer robots—they’d also deal with technology sharing, helping weaker nations build capacity, and stopping a dangerous arms race in AI that could make global inequalities even worse.

Conclusion: The Human Element in Algorithmic War

As we go through this major shift in military technology, one key idea stands out: we must keep real human control over decisions about using force. That doesn’t mean we have to reject AI in the military or hold on to old-fashioned ways of making decisions. Instead, it means building systems where humans and machines work together—each doing what they’re best at, while making up for each other’s weaknesses.

The militaries that do best in the future will be the ones that see AI not as something that replaces human judgment, but as something that helps improve it. AI can give commanders better information, but it’s still up to humans to make the final calls—bringing their values, experience, and responsibility to the table. Computers can crunch numbers, but only people can lead.

At the end of the day, the big questions about military AI aren’t just about technology—they’re about philosophy. We need to think hard about what leadership means, whether it’s right to let machines make life-and-death decisions, and how to keep the human side of warfare in focus, even as new tech changes how battles are fought.

Geopolits Research Desk
