Irmak Akcan, Zeynep Katre Oran, and Elif Gultekin Karahacioglu
10 May 2026
- Reports and expert assessments point to use of Palantir systems in Gaza, Lebanon, and attacks targeting Iran
US-based data analytics company Palantir Technologies has expanded its role in defense and military intelligence through artificial intelligence-powered systems used by the Israeli military, with reports and expert assessments indicating the technologies have been deployed in Gaza, Lebanon, and operations linked to Iran.
Palantir has increasingly integrated AI-driven intelligence tools into military operations through platforms such as Gotham, the Artificial Intelligence Platform (AIP), Foundry, and Skykit, which combine large-scale data analysis, operational AI systems, and portable field intelligence capabilities.
Last year the company signed a $10 billion agreement with the US military and has become a key partner in the Pentagon’s Maven program, which uses AI for target identification and battlefield analysis. Palantir is also involved in the US Army’s TITAN intelligence ground station project and its Army Vantage data platform, both aimed at accelerating AI-assisted military decision-making.
Palantir co-founders Alex Karp and Peter Thiel held a board meeting in Tel Aviv in January 2024, where they met Israeli President Isaac Herzog and reached an agreement with Israeli defense officials on strategic cooperation with the Israeli army.
The company later announced it would provide AI-based technological support focused on targeting and data analysis for “war-related missions,” though it did not disclose details of the agreement or the systems involved. During his visit to Tel Aviv, Karp said demand from Israel for the company’s services had increased after Oct. 7, 2023 -- referring to the day Israel began its genocidal campaign on Gaza -- adding that Palantir had begun supplying products different from those previously offered to Israel.
Reports link Palantir systems to Gaza, Lebanon, and Iran operations
In a 2024 statement, the Business and Human Rights Resource Centre, an international NGO, said Palantir’s technologies were directly used in Israeli attacks on Gaza. Palantir denied the allegations, saying its activities in Israel predated Oct. 7, 2023, and were consistent with support provided to US allies globally.
In his book The Philosopher in the Valley: Alex Karp, Palantir and the Rise of the Surveillance State, journalist Michael Steinberger said Israeli operations targeting senior Hezbollah figures in Lebanon in 2024 benefited from Palantir technologies. The book also said the company’s systems were used in what Israel called “Operation Grim Beeper,” in which exploding pagers wounded hundreds of Hezbollah members.
According to a report by The Washington Post, the Pentagon also used Palantir’s Maven Smart System integrated with Anthropic’s Claude AI model while planning attacks targeting Iran. The report said the system identified and mapped potential targets using advanced AI analysis.
Former Microsoft employee alleges AI systems enable lethal targeting
Speaking to Anadolu in April 2025 during protests against Microsoft’s cooperation with Israel at the company’s 50th anniversary event, former Microsoft employee Ibtihal Aboussad said Palantir’s technologies played a critical role in Israeli military operations.
“Palantir is essentially weaponizing artificial intelligence and also weaponizing data analysis to make deadly decisions,” Aboussad said, alleging that Israeli authorities collect data from social media platforms, messaging apps, phone calls, and location services in Gaza to support targeting operations.
She also warned that military officials could try to evade accountability by attributing attack decisions to technology, saying another dangerous aspect of Palantir’s systems is that they function as “a shield protecting Israel from legal accountability.”
Aboussad said technological infrastructure provided by Palantir was used in Israeli-developed AI systems such as Lavender and Where’s Daddy?, which have reportedly been widely employed by the Israeli military for target identification in Gaza.
“I hesitate to even call it software, because it's clearly designed with the surveillance and warfare and killing purpose,” Aboussad said, adding that the Israeli military also used the systems in detention operations and raids carried out in the occupied West Bank.
Experts warn AI could increase risks and blur accountability
Laura Bruun, an AI governance expert at the Stockholm International Peace Research Institute, said the integration of AI into conflict environments is rapidly transforming warfare in terms of speed and scale, particularly in targeting processes.
She warned that the use of AI as a mass surveillance tool carries serious human rights and privacy risks and further blurs the line between civilian and military technologies.
Bruun said the use of AI in conflict zones creates risks distinct from human-driven errors, adding that existing research indicates the integration of such technologies into warfare could increase the likelihood of mistakes and unintended risks.
She said states bear responsibility for errors caused by AI in warfare, particularly in targeting processes, adding that governments could be held accountable if they fail to prevent malfunctions within such systems.
Bruun also stressed that it is extremely difficult to determine whether problems occurring in conflict zones are directly caused by AI systems, noting that the abstract nature of the technology makes identifying the source of errors more complicated.
“As of now, it's still not really established what a state practically has to do to use AI in a lawful and responsible way,” she said.
*Writing by Seyit Kurt