According to Fast Company, U.S. Customs and Border Protection (CBP) has established a comprehensive framework for the “strategic use of artificial intelligence” through an internal directive obtained via a public records request. The document explicitly bans agency officials from using AI for unlawful surveillance and prohibits the technology from serving as the “sole basis” for law enforcement actions or from being used to target or discriminate against individuals. The framework includes detailed procedures for introducing AI tools, requires a “rigorous review and approval process,” and outlines special approvals needed for deploying “high-risk” AI applications. However, sources indicate the rules contain several workarounds that could enable misuse, a concern heightened by the militarization of the border and increasingly violent deportation practices. The framework represents a critical step toward AI governance, but it raises fundamental questions about implementation and enforcement.
The Gap Between Policy and Practice
The most significant challenge facing CBP’s AI directive isn’t the policy language itself but the implementation gap that has historically plagued government technology oversight. We’ve seen this pattern before with surveillance technologies such as facial recognition, where strong policies on paper failed to prevent real-world abuses. The directive’s prohibition on using AI as the “sole basis” for enforcement actions creates a dangerous loophole: in practice, human agents can easily treat AI recommendations as decisive while maintaining the fiction of independent judgment. This “AI-assisted” enforcement model has already proven problematic in predictive policing, where human discretion becomes a rubber stamp for algorithmic outputs.
The Border Exception Problem
Border environments have historically operated under different legal and operational standards than domestic law enforcement, creating what civil liberties advocates call “constitution-free zones.” Within 100 miles of a U.S. border, CBP officers exercise extraordinary authority with reduced oversight. The concern is that CBP’s AI framework, despite its safeguards, will be interpreted through this existing permissive culture. The document’s workarounds could effectively create an AI exception to established privacy protections, particularly given the agency’s track record of expanding surveillance capabilities at ports of entry without adequate public debate.
Enforcement and Accountability Voids
The critical unanswered question in CBP’s framework is enforcement. Without independent oversight, transparent auditing, and meaningful consequences for violations, even the strongest policy language becomes meaningless. Historical precedent from other government technology deployments suggests that internal review processes often fail to catch systemic abuses until they become public scandals. The framework mentions handling reports of “prohibited” applications but provides no details about whistleblower protections, external review, or public accountability. This creates a classic fox-guarding-the-henhouse scenario in which the same agency developing and deploying AI systems is responsible for policing their use.
The Generative AI Wildcard
The document’s warning about generative AI suggests CBP recognizes the unique risks of large language models but provides little concrete guidance for managing them. Generative AI systems pose particular dangers in law enforcement contexts: they “hallucinate” false information, reproduce biases in their training data, and operate as black boxes. If CBP personnel use these systems for intelligence analysis, threat assessment, or even translation, the potential for catastrophic errors multiplies. And because generative AI is evolving rapidly, any static policy document will struggle to keep pace with emerging risks, requiring continuous oversight that most government agencies lack the technical capacity to provide.
Broader Implications for Government AI
CBP’s framework is a test case for how federal agencies will approach AI governance in the wake of the White House’s Blueprint for an AI Bill of Rights and recent executive orders on AI. The tension between operational flexibility and meaningful safeguards reflects a broader struggle across government. If CBP’s workarounds become standard practice, they could set a dangerous precedent for other agencies seeking to deploy AI in high-stakes environments. Conversely, if the framework proves effective at preventing abuses while enabling legitimate uses, it could become a model for responsible government AI deployment. The outcome will depend largely on whether Congress, the courts, and civil society can maintain sufficient pressure for transparent implementation.
