
Media disinformation, AI-driven forgery, and violations of international humanitarian law in the digital age

By Roland Abi Najm – International consultant and expert in cybersecurity, artificial intelligence, and digital transformation

In every modern armed conflict, two wars are fought simultaneously. The first is waged with weapons on the ground. The second is fought with algorithms and digital content, and is often more dangerous than the first, because it does not target land but rather the truth itself. We are living in an era in which a fabricated video spreads faster than any verified report, a map bearing an incorrect name can ignite violence, and artificial intelligence can generate convincing fictitious atrocities in a matter of seconds—within reach of anyone holding a smartphone. These are not theoretical threats; they cost lives, destroy reputations, alter the course of wars, and systematically undermine a core principle of international humanitarian law: the obligation to distinguish between truth and propaganda, and between civilians and combatants.

Maps as Weapons: When Geography Lies

Maps have never been neutral. Whoever controls the name of a place controls the narrative of who belongs to it, who holds sovereignty over it, and who is the aggressor. In the digital age, this issue has taken on new and more dangerous dimensions. Today, maps are produced and updated in real time, satellite imagery is processed by algorithms, and collaborative platforms allow anyone to modify geographic databases—while falsified maps spread through the fog of war as quickly as real intelligence.

The deliberate alteration of place names, the erasure of original names and their replacement with names imposed by an occupying power, and the renaming of villages to obscure their historical identity are not merely symbolic acts. They constitute a form of cultural violence that strips communities of their identity and their legal rights to their land. The issue goes beyond names: systematic interference with global navigation systems (GPS) has been observed in multiple conflict zones, misleading ships, aircraft, and humanitarian convoys. Cases of coordinated manipulation of open-source mapping platforms during conflicts have also been documented, involving the deletion of villages, the alteration of administrative boundaries, and the renaming of roads. Furthermore, generative AI tools have made it possible to produce highly realistic fake satellite images depicting non-existent military infrastructure or buildings portrayed as destroyed despite never having been bombed.

Under international humanitarian law, the principle of distinction obliges parties to a conflict to differentiate between military objectives and civilian objects. When falsified maps depict civilian infrastructure as military targets, or conceal military installations within civilian areas, they directly violate this obligation and put lives at risk.

The Rumor Machine: Why Critical Thinking Stops When Bombs Begin

There is a well-documented psychological phenomenon that operates in times of crisis: cognitive overload combined with fear leads to a near-total suspension of critical thinking. When fear prevails and reliable information becomes scarce, verification turns into a luxury that most people feel they cannot afford. The result is catastrophic: rumors spread, unverified claims go viral, screenshots taken out of context become breaking news, and videos from years ago in other countries are repackaged as current events. And when a correction is issued—if it is issued at all—it reaches only a small fraction of those who saw the falsehood.
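Recycled footage of the kind described above, old videos repackaged as current events, is routinely exposed with perceptual hashing, the technique behind reverse image search. The sketch below, written in Python with the Pillow library, implements the common "difference hash." It is a generic illustration under helper names of our own choosing (`dhash`, `hamming`), not the workflow of any specific fact-checking organization.

```python
# Minimal perceptual-hash sketch (difference hash) using Pillow.
# Visually similar frames (re-uploaded, resized, recompressed copies)
# produce hashes that differ in only a few bits.
from PIL import Image

def dhash(image, hash_size=8):
    """Hash an image by comparing each grayscale pixel to its right neighbor."""
    # Shrink to (hash_size + 1) x hash_size so each row yields
    # hash_size left/right comparisons.
    gray = image.convert("L").resize((hash_size + 1, hash_size))
    px = list(gray.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = px[row * (hash_size + 1) + col]
            right = px[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | int(left > right)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes (0 = identical)."""
    return bin(a ^ b).count("1")
```

In practice, a verifier would hash a frame from the viral clip and compare it against hashes of archived footage; a small Hamming distance (a handful of bits out of 64) indicates the same image despite re-uploads, resizing, or recompression.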

One of the most alarming features of the modern information ecosystem is the phenomenon of the democratization of false expertise. Every conflict today unleashes a flood of amateur analysts and self-proclaimed geopolitical commentators who present themselves as experts without training, sources, or accountability. They explain battlefield dynamics they have never studied, interpret satellite imagery without possessing the technical vocabulary, and deliver confident judgments about international humanitarian law obligations they have never encountered. This is not merely a nuisance: in a conflict where a mistake in identifying a military unit or strike location can trigger retaliatory attacks or diplomatic crises, the proliferation of unqualified commentary becomes a real danger.

The problem is structural at its core. Social media platforms reward engagement, not accuracy. A bold and sensational claim from an anonymous account generates more interaction than a cautious statement from a verified expert. The algorithm does not distinguish between a seasoned conflict analyst and someone who has watched a few videos online. In wartime, rumors tend to follow a predictable pattern: they are first published by an anonymous source or a fake account, then amplified by followers through unverified commentary, then picked up by traditional media under pressure to break news—turning the claim into a cited reference used by politicians, commentators, and even official statements. The correction comes late and reaches only a few.

International humanitarian law does not yet provide a comprehensive framework for digital information warfare, but the principle of taking feasible precautions before any attack and verifying the reliability of information implicitly extends to the information environment. Beyond the legal dimension, there is an equally important ethical and civic responsibility: if you are not sure about the accuracy of what you read, do not share it. In war, a rumor can kill.

The Deepfake Crisis: When What We See Lies

Throughout human history, the visual has been synonymous with truth. Images were evidence, and video footage was proof. Today, that assumption has collapsed. Generative artificial intelligence has produced what are now known as deepfakes—synthetic content that even trained experts struggle to distinguish from reality. Faces can now be seamlessly swapped onto bodies with perfect lip synchronization, voices can be cloned from just three seconds of audio, and entire scenes can be fabricated from a written prompt. More concerning still, the tools required to do all this are no longer restricted to states or advanced institutions; they are free, widely accessible, and within reach of anyone with a smartphone and an internet connection.

This is what makes the crisis exceptional. Only a few years ago, producing convincing deepfake content required a studio, expensive software, and specialized expertise. Today, it can be done in minutes on a personal computer. The technological barrier has collapsed, opening the door to non-state actors and resource-limited groups alike. In the context of armed conflicts, the risks are manifold: AI can generate highly realistic images and videos of massacres, chemical attacks, or civilian casualties that never occurred—designed to provoke international condemnation or retaliatory strikes based on fabricated events. It can also put words into the mouths of leaders and politicians—false surrender orders, nonexistent ceasefires, or fabricated incitement speeches. Equally dangerous, such fabrication provides a tool for those seeking to dismiss genuine evidence of violations of international humanitarian law as AI-generated, offering a veil of denial that undermines accountability mechanisms.

The issue is further compounded by what can be described as the detection gap. Deepfake detection tools do exist; they analyze pixel patterns, compression data, and biometric indicators. However, they lag behind generation capabilities and require expertise that most media institutions do not possess. Moreover, the challenge is not purely technical: in the midst of conflict, when footage appears to show an atrocity, the pressure to react immediately is immense, and the delay required for forensic analysis does not align with the speed of the social media news cycle. As a result, even digital forensics experts and intelligence analysts are sometimes unable to reliably distinguish between synthetic and real content. This is not a future risk—it is the present reality.
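The compression-based analysis mentioned above can be illustrated concretely. The sketch below, in Python with the Pillow library, implements error level analysis (ELA), one of the simplest forensic heuristics of this family: a JPEG is re-saved at a known quality and subtracted from the original, so regions that were spliced in or regenerated often recompress differently and stand out. The function name is ours, this is not a tool attributed to any institution named in this article, and ELA alone can suggest, but never prove, manipulation.

```python
# Minimal Error Level Analysis (ELA) sketch using Pillow.
# ELA highlights regions whose JPEG compression history differs from
# the rest of the image. It is a heuristic only: it does not prove
# forgery and performs poorly on fully synthetic images.
import io
from PIL import Image, ImageChops

def error_level_analysis(source, quality=90):
    """Re-save a JPEG at a known quality and amplify the difference.

    `source` may be a file path or a file-like object. Edited or
    pasted regions often recompress differently, so they appear
    brighter in the returned difference image.
    """
    original = Image.open(source).convert("RGB")

    # Recompress in memory at a fixed, known quality.
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)

    # Pixel-wise absolute difference between the two versions.
    diff = ImageChops.difference(original, resaved)

    # Stretch the (usually faint) differences to the full 0-255 range.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    scale = 255.0 / max_diff
    return diff.point(lambda v: min(255, int(v * scale)))
```

An analyst would inspect the returned image visually: roughly uniform noise suggests a single compression history, while sharply brighter patches warrant closer forensic review, ideally with the additional biometric and metadata checks the paragraph above describes.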

This crisis extends to the very foundations of international humanitarian law. International investigations into violations—whether conducted by the International Committee of the Red Cross, UN Special Rapporteurs, or international courts—depend fundamentally on the credibility of evidence. If images and videos can no longer be presumed authentic, the entire system for documenting violations enters a zone of doubt. Accused parties will challenge every piece of visual evidence as potentially fabricated, and the historical record of conflicts—serving not only judicial processes but also humanity’s collective memory—will become permanently tainted by suspicion.

What Must Be Done?

It is not enough to identify the crisis without understanding the response it requires. At the level of legal frameworks, the conventions of international humanitarian law need an explicit update addressing information warfare, AI-generated content, and the manipulation of digital maps. Deliberate disinformation campaigns during armed conflicts should be classified as a form of unlawful warfare when they foreseeably lead to violations. Deepfakes used to fabricate evidence of atrocities, deny real violations, or incite violence should also be criminalized, with mandatory disclosure requirements imposed on digital platforms for synthetic content.

At the level of digital platforms, social media companies and mapping platforms can no longer continue to treat themselves as neutral infrastructure during conflicts. They must adopt instant detection tools for synthetic content with mandatory labeling, establish emergency protocols for cooperation with accredited fact-checkers and humanitarian organizations, and enforce strict standards on map edits in conflict zones. In the media sphere, journalistic institutions must return to essential verification standards that should precede any publication of content coming from war zones, and they must build partnerships with digital forensics specialists. Ultimately, however, institutional solutions alone cannot be relied upon. The information ecosystem is made up of billions of individual decisions. Every person who shares an unverified claim, or republishes a deepfake without scrutiny, contributes to the erosion of truth. The most powerful tool against disinformation is a single pause before pressing the share button.

Truth Is a Value Worth Protecting

International humanitarian law is built upon essential distinctions: between combatants and civilians, between military objectives and protected sites, and between lawful warfare and prohibited conduct. Each of these distinctions depends on the availability of accurate information. The triple threat we have examined—map manipulation, viral disinformation, and AI-driven deepfakes—not only complicates the information environment but threatens the very possibility of maintaining these distinctions. When maps lie, rumors spread faster than facts, and the artificial becomes indistinguishable from the real, the foundations upon which this law stands begin to collapse.

This is not a call for pessimism, but a call for urgency. The National Human Rights Commission in Lebanon and similar institutions play a pivotal role: documenting violations, challenging false narratives, demanding accountability, and insisting that truth remains a value worth protecting. Because when we lose our grip on what is real, we do not merely lose a news cycle—we lose the ability to hold power accountable. And when power is not held accountable in times of war, civilians pay the price.

 


NHRCLB
https://nhrclb.org
An independent national institution established under Law No. 62/2016, which includes a National Preventive Mechanism against torture (the Committee for the Prevention of Torture), in accordance with the provisions of Law No. 12/2008 (ratifying the Optional Protocol to the Convention against Torture).