The UAE's top prosecutor has ordered the arrest and urgent prosecution of 10 individuals accused of spreading fabricated and misleading video content on social media platforms during a period of heightened regional tension. Attorney General Dr Hamad Saif Al Shamsi announced that the defendants — who hold passports from nine different countries — face criminal charges carrying a minimum prison sentence of one year and fines of at least Dh100,000 (approximately $27,200). The case represents one of the most significant enforcement actions against AI-generated misinformation in the Middle East and sends a clear message about the consequences of using technology to spread fear and undermine public stability.
What Happened
According to the Attorney General's office, the 10 defendants published two distinct categories of problematic content on social media platforms. The first category involved real footage of air defence systems intercepting incoming threats — material that, while genuine, was classified as sensitive because it revealed details about military operations and could compromise national security.
The second and more troubling category involved entirely fabricated videos generated using artificial intelligence. These AI-produced clips were designed to look realistic and depicted scenes that never actually occurred — explosions at prominent UAE landmarks, missile strikes on civilian infrastructure, large-scale fires across different areas of the country, and the destruction of military facilities. None of these events happened, but the videos were crafted with sufficient sophistication to be convincing to untrained viewers.
Some of the content was particularly manipulative. Investigators found that certain defendants had created videos that exploited children's emotions, using footage designed to falsely suggest that children were in immediate danger from security threats. Others repurposed genuine footage of incidents that occurred in other countries and falsely presented it as showing events inside the UAE, deliberately misleading audiences about the situation on the ground.
Who Was Arrested
The 10 defendants come from a wide range of nationalities, reflecting the UAE's diverse expatriate population. According to official reports, those arrested include:
- Two Indian nationals
- One Egyptian national
- One Filipino national
- One Vietnamese national
- One Pakistani national
- One Iranian national
- One Bangladeshi national
- One Cameroonian national
- One Nepalese national
The multinational nature of the arrests underscores that this is not an issue confined to any single community. The Attorney General's office made clear that enforcement applies equally to all residents and visitors regardless of nationality — anyone who uses social media to spread fabricated or misleading content that threatens national security will face the full weight of the law.
The Legal Framework
The charges against the defendants are grounded in the UAE's cybercrime legislation, which has been progressively strengthened in recent years to address the growing threat of digital misinformation. The relevant provisions criminalise the deliberate dissemination of false or misleading information that threatens public security, spreads fear among the population, or undermines social stability.
The penalties are substantial. Conviction carries a minimum prison sentence of one year and a minimum fine of Dh100,000. The word "minimum" is significant — judges have discretion to impose longer sentences and larger fines depending on the severity of the offence, the reach of the content, and the intent behind its publication. For cases involving AI-generated content specifically designed to deceive, the penalties could be considerably higher.
The defendants have been referred to urgent trial, meaning the courts will prioritise their cases rather than allowing them to proceed through the normal judicial timeline. This expedited process reflects the seriousness with which UAE authorities view the offences and their desire to establish a strong deterrent effect quickly.
The AI Dimension
What makes this case particularly significant is the role of artificial intelligence in creating the fabricated content. Unlike traditional misinformation — which typically involves misrepresenting real footage or making false textual claims — AI-generated video content represents a qualitative leap in the sophistication of digital deception.
Modern AI video generation tools can produce footage that is increasingly difficult to distinguish from genuine recordings. Scenes can be fabricated from scratch, existing footage can be altered to show events that never occurred, and realistic audio can be synthesised to accompany false visuals. For social media users, who typically encounter content in rapid-scroll environments with little time for critical evaluation, AI-generated fake videos pose a serious challenge to distinguishing fact from fiction.
The UAE case demonstrates that governments are beginning to treat AI-generated misinformation as a distinct category of threat — one that requires specific legal responses and enhanced investigative capabilities. The Attorney General's office specifically identified the use of artificial intelligence as an aggravating factor in the case, suggesting that the technology used to create the deception is being weighed alongside the intent behind it.
Why This Matters for UAE Residents
For the millions of expatriates and citizens living in the UAE, this case carries important practical implications. The message from authorities is unambiguous: sharing unverified content on social media during a period of heightened security carries real legal risks. This applies not only to people who create fabricated content, but potentially to those who share it onwards — even if they did not create it themselves.
Key guidelines that residents should follow:
- Do not share unverified footage of military operations, interceptions, or security incidents — even if you believe the footage is genuine
- Rely on official sources for information about the security situation. The UAE government provides regular updates through official channels, the national news agency WAM, and verified social media accounts
- Be sceptical of dramatic content that appears on social media, particularly videos showing explosions, fires, or attacks on landmarks. AI can now produce highly convincing fake footage
- Do not forward content from unverified sources to friends, family, or groups — forwarding misleading content can itself constitute an offence
- Report suspicious content to platform administrators and, where appropriate, to local authorities
The Broader Context: AI Misinformation Globally
The UAE's action against AI-generated fake videos is part of a growing global trend. Governments around the world are grappling with the challenge of deepfake technology and its potential to distort public discourse, manipulate elections, damage reputations, and — as this case demonstrates — threaten national security during periods of conflict.
The challenge is not just legal but technical. Detecting AI-generated content requires sophisticated analysis tools, and in many cases the generation technology is advancing faster than detection capabilities. The UAE has invested heavily in its digital forensics capabilities, and the arrests suggest that investigators were able to identify AI-generated content and trace it back to the individuals responsible — a non-trivial technical achievement.
Several other jurisdictions have introduced or are developing legislation specifically targeting AI-generated misinformation. The European Union's AI Act includes provisions around transparency and labelling of AI-generated content. China has enacted regulations requiring AI-generated content to be clearly labelled. And in the United States, several states have passed or are considering laws addressing deepfake content, particularly in the context of elections.
What distinguishes the UAE's approach is the speed and severity of the enforcement. Rather than relying solely on platform-level moderation or voluntary compliance, the UAE has chosen to prosecute individuals directly, using existing cybercrime laws to address a new form of threat. The minimum penalties — one year in prison and Dh100,000 — are designed to ensure that the consequences of spreading AI-generated misinformation are severe enough to deter future offenders.
Official Response
In his statement announcing the arrests, Attorney General Dr Hamad Saif Al Shamsi was direct about the government's position:
"The authorities will not tolerate attempts to exploit cyberspace or modern technologies to spread fabricated or misleading information that affects national security or disturbs public order. Those who seek to undermine stability through the dissemination of false content will face the full consequences of the law."
Dr Hamad Saif Al Shamsi, UAE Attorney General
The statement emphasised that the enforcement action was not about restricting legitimate expression or preventing people from sharing factual information. Rather, it was targeted specifically at individuals who deliberately created or distributed content designed to deceive — content that had the potential to cause public panic, undermine confidence in the country's security apparatus, and destabilise social order during an already challenging period.
Implications for Social Media Platforms
The case also raises questions about the responsibility of social media platforms in detecting and preventing the spread of AI-generated misinformation. While the UAE has chosen to focus its enforcement on individual users, the platforms themselves face growing pressure to develop better tools for identifying and flagging AI-generated content before it goes viral.
Most major platforms have policies prohibiting the sharing of manipulated media, but enforcement of these policies remains inconsistent. Content on social media can reach millions of users within hours, so even a brief period of circulation can cause significant harm before a video is identified as fake and removed.
The UAE's aggressive enforcement approach may push platforms to invest more in AI detection tools and to respond more quickly to reports of fabricated content in the UAE and the broader Gulf region. For platforms operating in a market where the legal consequences of hosting harmful content are severe, the business case for better content moderation becomes much stronger.
What Comes Next
The 10 defendants are now awaiting urgent trial. The outcomes of their cases will set important precedents for how the UAE courts handle AI-generated misinformation going forward. The sentences imposed — whether at the minimum level or significantly above it — will signal how seriously the judiciary views this new category of digital offence.
For the broader UAE population, the message is clear. The country's legal system is prepared to act swiftly against those who use technology to spread fear and misinformation. In a region navigating complex security challenges, the integrity of public information is not just a matter of principle — it is a matter of national security. And the UAE has shown that it is willing to enforce that principle with real consequences.