
AI in warfare and autonomous weapons

  • Writer: Yatin Taneja
  • Mar 9
  • 11 min read

The integration of advanced artificial intelligence into military command, control, and weapon systems enables machines to identify, prioritize, and engage targets with minimal or no human input, fundamentally altering the character of modern conflict by shifting the burden of rapid decision-making from human operators to algorithmic processes capable of processing vast streams of data in real time. Lethal autonomous weapons systems (LAWS) operate on real-time sensor data and sophisticated algorithmic decision models, reducing response times from minutes to milliseconds, a drastic compression of the engagement window that renders traditional human reaction cycles too slow for contemporary high-tempo operations where speed determines survival and victory. Decision-making thus shifts from the pace of human cognition to the pace of computational throughput, altering the tempo and escalation dynamics of conflict by creating an environment where machines can initiate and conclude engagements before a human commander comprehends the tactical situation. LAWS can select and engage targets without meaningful human control, operating across a spectrum of autonomy with distinct configurations: human-in-the-loop, where a human operator must approve each engagement before execution; human-on-the-loop, where a human monitors and may override system decisions during operation; and human-out-of-the-loop, where the system operates independently with no real-time human oversight. Algorithmic escalation refers to the rapid, machine-driven increase in force levels caused by feedback loops in perception and response, where an aggressive posture by one autonomous system triggers an immediate and disproportionate counter-response by opposing systems, potentially leading to uncontrollable spirals of violence that exceed the intent of any single operator.
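The three human-oversight configurations described above can be made concrete with a small sketch. The mode names and the gating logic are illustrative, not drawn from any fielded system:

```python
from enum import Enum, auto

class AutonomyMode(Enum):
    HUMAN_IN_THE_LOOP = auto()      # operator must approve each engagement
    HUMAN_ON_THE_LOOP = auto()      # system acts; operator may override
    HUMAN_OUT_OF_THE_LOOP = auto()  # no real-time human oversight

def may_engage(mode, operator_approved, operator_vetoed):
    """Gate an engagement decision on the configured autonomy mode."""
    if mode is AutonomyMode.HUMAN_IN_THE_LOOP:
        return operator_approved      # nothing happens without explicit approval
    if mode is AutonomyMode.HUMAN_ON_THE_LOOP:
        return not operator_vetoed    # proceeds unless a human intervenes
    return True                       # fully autonomous
```

The sketch makes the escalation concern visible: each step down the list removes one place where a human decision can stop the loop.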
The core function of these systems is target discrimination: distinguishing combatants from non-combatants using visual, thermal, acoustic, and behavioral signatures through a complex process of sensor fusion that aggregates data from disparate sources into a coherent tactical picture.
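One classical building block of the sensor fusion mentioned above is inverse-variance weighting: combining independent estimates of the same quantity so that more reliable sensors count for more. This is a minimal sketch; real systems use Kalman filters and far richer models:

```python
def fuse_estimates(estimates):
    """Inverse-variance weighted fusion: combine independent sensor
    readings of the same quantity, each given as (value, variance),
    into one estimate whose variance is lower than any single input."""
    inv_vars = [1.0 / var for _, var in estimates]
    total = sum(inv_vars)
    value = sum(v / var for v, var in estimates) / total
    return value, 1.0 / total
```

For example, fusing a precise radar range with a noisier optical range yields an estimate pulled toward the radar value, with a smaller variance than either input alone.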



A second function involves engagement authorization, determining whether to initiate a kinetic or non-kinetic response based on rules of engagement encoded in software that translate legal and ethical mandates into executable logic constraints. The third function is mission adaptation, adjusting tactics in real time based on environmental changes, enemy countermeasures, or collateral risk assessments, allowing the system to adapt to evolving battlefields without requiring external intervention. All these functions depend heavily on sensor fusion, predictive analytics, and closed-loop feedback between perception, planning, and action modules, creating a continuous cycle of observation and adjustment that mimics biological cognition yet operates at electronic speeds. Target recognition subsystems employ convolutional neural networks and object detection algorithms to classify entities in dynamic environments where visual obstructions, camouflage, or decoys might confuse simpler detection methods. Threat assessment engines evaluate intent, capability, and proximity using probabilistic models and historical engagement data to assign a risk score to each detected entity, prioritizing targets that pose the greatest immediate danger to the system or its protected assets. Fire control modules calculate trajectory, timing, and weapon selection while factoring in weather, terrain, and friendly force positions to maximize probability of kill with the least expenditure of munitions. Communication layers relay status updates to central command or swarms of coordinated units via secure, low-latency networks that maintain integrity even in contested electromagnetic environments. Fail-safe mechanisms include geofencing, mission abort triggers, and hardware kill switches, though the reliability of these safeguards varies by design and the specific operational context in which they are deployed.
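The threat-assessment idea above, scoring each track on intent, capability, and proximity and then prioritizing, can be sketched as a weighted combination. The weights and the 5 km normalization range are purely illustrative assumptions, not values from any real engine:

```python
def risk_score(intent, capability, proximity_m, max_range_m=5000.0):
    """Combine intent and capability estimates (each in [0, 1]) with
    proximity into a single [0, 1] priority score. Weights illustrative."""
    proximity = max(0.0, 1.0 - proximity_m / max_range_m)
    return 0.4 * intent + 0.35 * capability + 0.25 * proximity

def prioritize(tracks):
    """Sort detected (intent, capability, proximity_m) tracks by
    descending risk, so the most dangerous is engaged or reported first."""
    return sorted(tracks, key=lambda t: risk_score(*t), reverse=True)
```

A nearer track with the same estimated intent and capability scores higher and therefore sorts first.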


During the 2000s, militaries deployed semi-autonomous drones with human-guided targeting capabilities that laid the groundwork for more independent systems by demonstrating the value of unmanned platforms for reconnaissance and strike missions without risking pilot lives. The 2010s saw the development of autonomous swarming drones and AI-enabled surveillance platforms by major global powers seeking to gain a tactical advantage through coordinated multi-domain operations. International bodies began formal discussions on LAWS regulation in 2016 as the technology matured and the prospect of fully independent combatants became a tangible reality rather than a theoretical exercise. In 2020, the first documented use of an autonomous drone swarm occurred in Libya, where units reportedly targeted retreating forces without direct human command, marking a significant milestone in the practical application of these technologies in active conflict zones. Defense organizations issued updated AI ethics principles in 2023 emphasizing human responsibility for lethal decisions, attempting to establish boundaries for autonomy even as technical capabilities continued to advance. Fully manual systems were rejected due to slow response times and high personnel costs in high-tempo conflicts where the volume of sensor data exceeds human processing capacity and the speed of engagement renders centralized command structures obsolete. Remote-piloted systems faced bandwidth limitations and vulnerability to signal interception or spoofing, prompting a move toward greater onboard autonomy to reduce dependence on fragile communication links. Rule-based expert systems without machine learning were discarded for their inability to adapt to novel tactics or camouflage used by adaptive adversaries who quickly learn to exploit rigid logic patterns. 
Centralized AI command nodes were deemed too vulnerable to single-point failures, prompting a shift to decentralized swarm architectures where collective intelligence emerges from local interactions rather than top-down directives.
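The claim that collective behavior can emerge from local interactions alone can be illustrated with a toy decentralized alignment step, in the spirit of boids-style swarm models. Each unit sees only its listed neighbors, yet repeated updates drive the whole swarm toward a common heading; the function and data layout are hypothetical:

```python
def align_step(headings, neighbors):
    """One synchronous update: each unit averages its own heading with
    those of its local neighbors. No unit ever sees the global state,
    yet repeated steps produce swarm-wide consensus."""
    return [
        (headings[i] + sum(headings[j] for j in neighbors[i]))
        / (1 + len(neighbors[i]))
        for i in range(len(headings))
    ]
```

Because there is no central node, losing a unit only requires editing the neighbor lists of the units adjacent to it; there is no single point whose failure stops the computation.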


Unmanned surface vessels, like the Sea Hunter, conduct autonomous patrols with human oversight for maritime domain awareness, demonstrating the viability of long-duration autonomous missions in domains where communication delays are significant and operational endurance is critical. Quadcopters, such as the Kargu-2 deployed in Libya, demonstrated the ability to track and engage human targets using facial recognition and onboard processing capabilities that allow them to operate independently of a pilot. Ground robots, like the Uran-9 tested in conflict zones, showed limited autonomy and highlighted integration challenges with existing command structures, revealing the difficulty of fitting autonomous ground platforms into legacy combined arms formations designed around human-operated vehicles. Performance benchmarks focus on target identification accuracy, which exceeds ninety percent in controlled tests, and mission completion time under electronic warfare conditions where jamming and interference attempt to disrupt the system's sensors and communication links. Dominant architectures use modular, open-standard frameworks that allow third-party algorithm integration, ensuring interoperability between different systems and enabling rapid upgrades without replacing entire hardware platforms. Emerging doctrines emphasize intelligentized warfare, focusing on AI-driven command systems and swarm coordination as force multipliers that enable smaller forces to achieve effects previously requiring massive troop formations. Open-source AI models are increasingly adapted for military targeting, lowering entry barriers for smaller nations or non-state actors who lack the resources to develop proprietary machine learning systems from scratch. Edge computing enables onboard inference, reducing reliance on cloud connectivity and improving operational resilience against attacks on communication infrastructure or denial-of-service efforts targeting network links.


Physical constraints impose hard limits on the capabilities of autonomous systems, primarily power requirements for onboard computation, which limit the endurance of small autonomous platforms that must balance energy consumption between propulsion and data processing. Sensor accuracy degrades in cluttered, obscured, or electronically jammed environments, increasing false positive rates that could lead to unintended engagements or fratricide if the discrimination algorithms lack sufficient robustness or training data. Economics favor mass-produced, low-cost drones over high-end systems, enabling attrition-based warfare strategies that seek to overwhelm expensive defenses with quantities of expendable autonomous units rather than relying on the survivability of individual platforms. Training data scarcity for rare combat scenarios reduces model generalization, necessitating synthetic data generation in large deployments to expose the algorithms to a wider variety of edge cases than is available in real-world historical records. Latency in satellite or mesh networks can disrupt real-time coordination in distributed autonomous systems, forcing reliance on edge computing, where decisions are made locally on the device rather than in the cloud. Heat dissipation limits onboard processing power in compact drones, capping model complexity and forcing designers to optimize algorithms for efficiency rather than raw performance to fit within thermal envelopes. Battery energy density restricts flight time, forcing trade-offs between compute load and mission duration that require careful balancing of sensor usage against propulsion requirements to maximize operational utility. Technical workarounds include federated learning, model pruning, and hybrid analog-digital chips that improve efficiency by distributing training tasks or reducing the size of neural networks without significant loss of accuracy.
Swarm tactics offset individual unit limitations by distributing computation and sensing across many platforms, allowing the collective to achieve high levels of performance even if individual nodes are constrained by size or power availability.
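Of the workarounds listed above, model pruning is the simplest to illustrate. A minimal sketch of unstructured magnitude pruning, zeroing the smallest-magnitude fraction of a layer's weights so a model fits an edge device's thermal and memory budget (real toolchains operate on tensors and retrain afterward):

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights in a dense
    layer, given as a list of rows. `sparsity` is the fraction to drop."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(sparsity * len(flat))
    if k == 0:
        return [row[:] for row in weights]
    threshold = flat[k - 1]                 # k-th smallest magnitude
    return [[0.0 if abs(w) <= threshold else w for w in row]
            for row in weights]
```

Sparse weights compress well and skip multiplications entirely on hardware with sparsity support, which is exactly the trade the heat and battery constraints above demand.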



Modern warfare demands faster OODA loops than humans can sustain, especially against peer adversaries with comparable technology who can field their own autonomous systems capable of operating at machine speeds. Economic pressures drive militaries toward cost-effective, reusable autonomous platforms that reduce pilot risk and training overhead, while allowing for the rapid scaling of forces through mass production techniques borrowed from commercial manufacturing sectors. Societal expectations for reduced collateral damage incentivize precision targeting, which AI can theoretically enhance, provided discrimination is reliable enough to distinguish legitimate military targets from civilians or protected infrastructure with high confidence. Geopolitical competition accelerates adoption, as lagging states risk strategic disadvantage in deterrence and first-strike scenarios where the possession of superior autonomous capabilities could serve as a decisive deterrent against aggression. Leading nations in AI research focus on funding, testing infrastructure, and integration with legacy military platforms to ensure that theoretical advances in artificial intelligence translate into practical battlefield advantages. Some state-backed firms prioritize quantity and speed of deployment, producing thousands of low-cost autonomous drones annually to create saturation attack capabilities that overwhelm enemy defenses through sheer volume rather than technological sophistication of individual units. Nations with extensive urban combat experience excel in AI for dense environments, refining target discrimination using data from complex operations where distinguishing between threats and civilians presents unique technical challenges. Adversaries focus on electronic warfare counter-AI measures, developing jamming and spoofing tools to disrupt autonomous systems by flooding their sensors with noise or feeding them false data to induce malfunctions or errors in judgment.


International export controls attempt to restrict LAWS components, yet enforcement gaps remain due to the dual-use nature of many technologies such as computer vision chips and commercial drones that can be modified for military applications. Nations without domestic AI capacity rely on commercial off-the-shelf drones modified for military use, blurring civilian-military boundaries and making it difficult to apply traditional arms control frameworks to non-state actors or irregular forces. Strategic deterrence models are being rewritten to account for AI-enabled first strikes that could disable command centers before human response is possible, necessitating new approaches to maintaining stability under conditions of extreme uncertainty regarding adversary capabilities and intentions. Military alliances develop shared standards for ethical AI use in combat, though consensus remains elusive due to differing cultural attitudes toward automation and varying national security priorities among member states. Academic labs contribute foundational research in computer vision, reinforcement learning, and multi-agent systems that eventually filter down into military applications through partnerships and technology transfer agreements. Defense contractors partner with universities for applied research and development under government contracts to solve specific technical challenges related to autonomy in denied environments or target recognition under adverse conditions. The dual-use nature of AI means civilian advancements in autonomous vehicles directly inform military capabilities, as improvements in sensors and algorithms developed for self-driving cars are immediately applicable to unmanned ground vehicles or naval vessels. Classified programs limit peer review, reducing transparency in safety validation and bias testing, which raises concerns about the reliability of these systems when deployed in complex real-world scenarios where unexpected edge cases are inevitable.


Existing rules of engagement must be codified into machine-readable formats, requiring new legal-technical interfaces that bridge the gap between abstract legal principles and executable code logic governing autonomous behavior. Military communication infrastructure needs hardening against AI-driven cyberattacks and sensor spoofing that seek to deceive autonomous systems or corrupt their decision-making processes with malicious data injections. Training pipelines must evolve to include AI system monitoring, anomaly detection, and override procedures to ensure that human operators retain the ability to intervene effectively when systems behave unexpectedly or encounter situations outside their operational parameters. International humanitarian law requires reinterpretation to assign liability for autonomous weapon malfunctions or misidentifications, as current legal frameworks are predicated on human agency and may not adequately address accountability for actions taken by algorithms. Job displacement occurs in traditional roles like drone pilots, artillery crews, and reconnaissance analysts, as systems automate routine tasks that previously required direct human intervention or cognitive labor. New business models develop around AI audit services, red-teaming autonomous weapons, and compliance verification to ensure that systems adhere to ethical guidelines and perform reliably within specified safety margins. Insurance and liability markets begin pricing risk for algorithmic errors in combat scenarios, creating financial mechanisms to manage the potential fallout from unintended engagements or system failures that cause collateral damage. Private militaries and mercenary groups may adopt LAWS, bypassing state oversight mechanisms and raising the specter of automated warfare conducted by non-state actors with little regard for international norms or humanitarian considerations.
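The point above about codifying rules of engagement into machine-readable form can be sketched as executable constraints. Every field name and threshold here is hypothetical, invented for illustration; it is the structure, legal mandates expressed as checkable predicates, that matters:

```python
# Illustrative ROE encoded as data plus a predicate. Field names,
# coordinates, and the 5% collateral threshold are hypothetical.
ROE = {
    "require_positive_id": True,
    "max_collateral_probability": 0.05,
    "geofence": ((28.40, 76.80), (28.90, 77.40)),  # (lat, lon) corners
}

def engagement_permitted(target, roe=ROE):
    """Return True only if every encoded rule of engagement is satisfied."""
    (lat0, lon0), (lat1, lon1) = roe["geofence"]
    lat, lon = target["position"]
    inside = lat0 <= lat <= lat1 and lon0 <= lon <= lon1
    identified = target["positive_id"] or not roe["require_positive_id"]
    low_risk = (target["collateral_probability"]
                <= roe["max_collateral_probability"])
    return inside and identified and low_risk
```

The legal-technical interface problem is visible even in this toy: someone must decide what "positive identification" means as a boolean, and who is liable when the encoding is wrong.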


Traditional metrics like sortie rate and ammunition expenditure become less relevant as key performance indicators shift toward decision latency and system explainability scores that measure how quickly a system can act and how well it can justify its decisions to human supervisors. Mission success is now measured by adherence to ethical constraints such as civilian casualty probability thresholds rather than just objective completion, reflecting a shift toward precision warfare where minimizing harm is as important as achieving tactical goals. Trust calibration between human operators and AI systems requires quantifiable confidence intervals and uncertainty reporting so that operators understand the reliability of the system's assessments and know when to question its recommendations or intervene manually. Future systems will feature onboard continual learning, allowing them to update models during deployment without retraining from scratch based on new data gathered during operations. Cross-domain autonomy will integrate air, land, sea, and cyber operations under unified AI command structures that coordinate effects across multiple battlespaces simultaneously to achieve synergistic outcomes that exceed the sum of individual domain capabilities. Bio-inspired swarm intelligence will enable self-healing networks that reconfigure after losses by automatically redistributing tasks among surviving units to maintain mission capability even after significant attrition. Quantum-resistant encryption will protect AI decision pathways from future decryption threats posed by quantum computing capabilities that could otherwise compromise the security of communication links or decision logic stored onboard autonomous platforms.
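The trust-calibration requirement above, quantifiable confidence intervals and uncertainty reporting, can be sketched with a simple normal-approximation interval over repeated stochastic model outputs (for example, Monte Carlo dropout passes). This is one simple approach among many, not a prescription:

```python
import statistics

def confidence_report(samples, z=1.96):
    """Summarize repeated stochastic model confidence outputs as a mean
    plus an approximate 95% interval, so an operator sees not just a
    score but how uncertain the system is about that score."""
    mean = statistics.fmean(samples)
    half = z * statistics.stdev(samples) / len(samples) ** 0.5
    return {"mean": mean, "low": mean - half, "high": mean + half}
```

A wide interval around a high mean is exactly the signal that should prompt an operator to question the recommendation rather than rubber-stamp it.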



AI enhances sensor fusion across radar, lidar, RF, and optical inputs, creating unified battlefield pictures that provide a comprehensive view of the environment superior to what any single sensor modality could achieve alone, allowing systems to see through obscurants or detect stealthy targets by correlating faint signatures across multiple spectra. Integration with satellite constellations enables global tracking and targeting with minimal delay, providing persistent surveillance and strike capabilities that cover vast geographic areas without the gaps inherent in traditional manned platforms or limited-duration drone flights, ensuring that targets remain tracked even when they move between different theater coverage zones. Cyber-physical systems link digital targeting decisions to physical actuators like missiles, guns, and jammers in tight, closed loops that eliminate latency between perception and action in the physical domain, allowing for near-instantaneous response to detected threats within microseconds of classification. Human-machine teaming interfaces evolve to support supervisory control of multiple autonomous assets simultaneously using augmented reality displays and advanced input methods that allow a single operator to manage a swarm of units effectively by delegating high-level goals while retaining authority over escalation decisions. Current discourse overemphasizes autonomy as a binary switch, while most systems operate on spectrums of human involvement shaped by mission context and risk tolerance that dictate the appropriate level of machine independence for any given scenario, meaning that effective oversight requires a nuanced understanding of system capabilities rather than simple prohibition categories. 
The greater danger lies in predictable, scalable misuse by state or non-state actors using commercially available tools rather than rogue AI acting independently of human intent or direction, as widespread access to powerful algorithms lowers the barrier to entry for creating devastating weapons without requiring specialized scientific breakthroughs beyond what is already available in consumer technology markets.


Regulation should focus on use cases and outcomes, such as banning anti-personnel LAWS, rather than attempting to define autonomy abstractly in ways that might become obsolete as technology advances or create loopholes that bad actors can exploit by claiming their systems fall outside specific technical definitions while still posing unacceptable risks to civilian populations. A superintelligence would treat warfare as an optimization problem, with objectives defined by its training data and reward functions, seeking solutions that maximize the probability of achieving specified goals within the constraints of the physical environment and adversary actions, potentially identifying novel strategies that human planners would never conceive due to cognitive biases or limited processing capacity.


© 2027 Yatin Taneja

