ICE is using AI from Palantir and OpenAI for immigration enforcement as critics warn of surveillance overreach
ICE is quietly building one of the most advanced AI stacks in federal law enforcement, and the latest disclosure confirms it. According to an update to the Department of Homeland Security’s 2025 AI Use Case Inventory, U.S. Immigration and Customs Enforcement is using artificial intelligence tools from Palantir Technologies and OpenAI across enforcement, data analysis, and internal operations.
The expansion signals a deeper shift toward automated decision systems inside immigration policing, setting off fresh warnings from privacy and civil liberties groups.
“ICE Says It Uses AI From Palantir, OpenAI,” The Information reported on Thursday.
How ICE Is Using AI From Palantir and OpenAI to Scale Immigration Enforcement, Alarming Privacy Advocates
Palantir sits at the center of ICE’s AI infrastructure. The company has worked with the agency since 2011, and its software now touches multiple enforcement workflows. One of the newest systems, called “AI Enhanced ICE Tip Processing,” went live in May 2025. It uses generative AI to summarize public tip submissions, translate messages written in other languages, and generate short “bottom line up front” summaries that help agents decide which tips require immediate attention. DHS records say the system relies on commercially available large language models trained on public data, with no additional training on ICE-specific records.
Another Palantir-built system plays a far more consequential role. The Enhanced Lead Identification & Targeting for Enforcement platform, known as ELITE, pulls data from multiple government sources to identify and map potential targets for deportation. The system creates individual profiles, assigns confidence scores to addresses, and highlights geographic clusters for enforcement activity.
Palantir’s Role in ICE’s AI Ecosystem and Immigration Enforcement
DHS documents confirm that ELITE draws from multiple sources, including Medicaid records maintained by the Department of Health and Human Services. That capability builds on Palantir’s broader contract work for ICE, including a $30 million deal awarded in 2025 to develop “ImmigrationOS,” a platform intended to combine passport data, Social Security records, IRS information, and license plate reader data into a single operational system.
“The Department of Homeland Security is actively working on 200-plus artificial intelligence use cases, a nearly 37% increase compared to July 2025, according to its latest AI inventory posted Wednesday. Immigration and Customs Enforcement is a driving force behind the growth,” FedScoop reported.
Palantir’s role extends beyond field enforcement. ICE is using generative AI tools supplied by the company to assist with software development, database queries, and system diagnostics inside its Investigative Case Management System, a customized version of Palantir’s Gotham platform. DHS filings again describe the models as off-the-shelf systems rather than agency-trained AI.
Per FedScoop, “ICE added 25 AI use cases since its disclosure last summer, including to process tips, review mobile device data relevant to investigations, confirm identities of individuals via biometric data and detect intentional misidentification. Of the newly added uses at ICE, three are products from Palantir, which has been a notable — and at times controversial — technology partner for the U.S. government under the Trump administration.”
OpenAI’s Contribution to ICE Operations and the Agency’s Broader AI Expansion
OpenAI appears in the inventory through a separate application. ICE began using GPT-4 in January 2026 for an AI-assisted resume-screening tool to score job applicants and speed up hiring decisions. The system falls under DHS’s “high-impact” category and remains subject to internal testing, monitoring, and compliance reviews. The inventory does not specify whether OpenAI models are used in Palantir’s enforcement tools, though Palantir’s Artificial Intelligence Platform supports multiple model providers, including OpenAI.
“The inventory does report ICE’s use of an ‘AI-Assisted Resume Screening Tool.’ That use case began earlier this month and leverages OpenAI’s GPT-4 to review resumes and apply scores to candidates. Like Mobile Fortify, the tool is labeled as high-impact and is in the process of pre-deployment testing, an impact assessment, an independent review and monitoring protocol development,” FedScoop wrote.
The disclosure arrives as DHS reports a sharp increase in AI adoption across the department. The 2025 inventory lists more than 200 active AI use cases, up nearly 37 percent from the prior update. ICE alone added 25 new systems since mid-2025. Beyond Palantir and OpenAI, the agency relies on biometric tools such as Mobile Fortify for facial and fingerprint matching and has worked with vendors like Clearview AI for facial recognition.
“In May [2025], Immigration and Customs Enforcement (ICE) reported using 23 active AI software programs for immigration enforcement. By July, four of those had become ‘inactive,’ and one—Email Analytics for Investigative Data—was moved back to ‘implementation and assessment’ phase for reconfiguration under a new system. At face value, this looks like a pullback in AI programs. However, recent reports show there is more going on. Many similar features are part of larger AI software platforms that generate automated decisions that are harder to oversee,” The American Immigration Council reported.
Privacy Concerns and Ethical Debates
That expansion has reignited a long-running debate about surveillance and consent. Civil liberties advocates argue that pulling health or benefits data into immigration enforcement crosses a line. The Electronic Frontier Foundation has warned that using information collected for healthcare or social services to support deportation operations risks repeating surveillance practices that followed the September 11 attacks. Critics say people never agreed to have that data used in enforcement decisions, raising concerns about trust in public institutions.
“Palantir/ICE connections draw fire as questions raised about tool tracking Medicaid data to find people to arrest,” Fortune reported.
Tensions have also surfaced within the tech industry. Palantir employees have previously questioned the company’s involvement in immigration enforcement, and AI leaders across the sector have voiced concerns about large-scale data systems that blur the line between targeting serious criminals and sweeping population-level monitoring. Some executives have called for clearer limits on how government agencies deploy AI systems that combine personal data across databases.
ICE argues that the technology improves efficiency, shortens response times, and helps agents focus on higher-priority cases. The agency’s inventory filings emphasize that many tools rely on commercial models rather than custom-trained systems. The documents still leave gaps around how data flows between agencies, how long records are retained, and how automated outputs shape enforcement decisions.
Implications for Immigration Enforcement
In the summer of 2025, the Electronic Frontier Foundation (EFF) said it asked a federal judge to block the federal government from using Medicaid data to identify and deport immigrants. The group said it had also warned about the risks of the Trump administration consolidating vast amounts of government data into a single searchable, AI-driven system with support from Palantir, a company it described as having a poor track record on privacy and human rights. EFF said recent disclosures now show those concerns have materialized.
“We also warned about the danger of the Trump administration consolidating all of the government’s information into a single searchable, AI-driven interface with help from Palantir, a company that has a shaky-at-best record on privacy and human rights. Now we have the first evidence that our concerns have become reality,” EFF noted.
“Palantir is working on a tool for Immigration and Customs Enforcement (ICE) that populates a map with potential deportation targets, brings up a dossier on each person, and provides a ‘confidence score’ on the person’s current address,” 404 Media reported. “ICE is using it to find locations where lots of people it might detain could be based.”
Closing
The expanding relationship between ICE, Palantir, and OpenAI shows how quickly AI has shifted from pilot projects to routine use inside federal agencies. That shift is forcing renewed scrutiny over where limits should be drawn as automated systems intersect with immigration enforcement, personal data, and civil rights.
ICE has framed its growing use of AI as a way to accelerate case processing and improve how tips and leads are evaluated. At the same time, limited visibility into how large language models are trained and how data moves across government systems has fueled concern, especially given the scale and sensitivity of the information involved. With DHS expected to update its AI inventory each year, civil liberties groups and policy experts are calling for closer examination through public debate, legal challenges, and congressional oversight.
As AI-driven tools take on a larger role in enforcement decisions, questions of accountability move to the forefront. The choices made now will shape how data, automation, and human judgment interact in federal operations, with lasting consequences for individuals and communities alike.

