Israeli Court Reveals Names of Brothers Accused of Selling AI-Generated Fake Intelligence to Iran

Israel’s High Court of Justice has lifted a gag order, allowing the names of two brothers accused of spying for Iran to be made public.

The accused are Meir Nahum, 24, from Beitar Illit, and Yosef Nahum, 28, from the Beit Shemesh/Modi’in Illit area. They were charged last month in the Jerusalem District Court with serious offences, including contact with a foreign agent and sharing information with an enemy.


How the Case Began

According to prosecutors, the case started in August 2025 when one of the brothers was contacted by an Iranian operative through the messaging app Telegram.

The contact reportedly offered money in exchange for sensitive information. Authorities say the brothers continued communicating with the operative and received more than $30,000 in cryptocurrency.


Use of AI to Create Fake Intelligence

Investigators allege that the brothers used popular AI tools such as ChatGPT, Grok, and Gemini to create false but realistic-looking intelligence reports.

These included:

  • Fake military documents with logos of elite units like Unit 8200
  • Made-up coordinates of Israeli targets
  • False claims about a planned Israeli attack on Iran
  • Reports pretending to come from an intelligence officer

In one particularly serious allegation, the fabricated information is said to have led to the arrest of an innocent person in Iran.


Court Decision to Lift Gag Order

The case was initially kept secret under a gag order. However, Justice Alex Stein approved a request by media outlets, including Walla, to make the names public.

The judge said that the reasons for keeping the identities hidden were not strong enough to continue the ban.

One of the brothers is currently in custody, and the case is still ongoing. Neither brother has been convicted, and the charges remain allegations at this stage.


Bigger Concerns About AI and Security

This case highlights a growing concern: AI tools can now be used to create very convincing fake information.

Key concerns include:

  • Easy misuse of AI: Anyone can generate realistic documents with little effort
  • Real-world harm: Fake information can lead to serious consequences
  • Security risks: Even false intelligence can create confusion or damage

The case comes at a time of ongoing tensions between Israel and Iran, where both sides remain highly alert to intelligence threats.


Final Thoughts

This incident shows how powerful AI tools can be misused, even in sensitive areas like national security. While the brothers allegedly tried to profit by selling fake intelligence, the case also raises serious questions about how governments handle AI-driven deception.
