Law Reform: Senate Bill 1638 Will Cut Off Foreign AI From Federal Infrastructure
Foreign-built AI may soon be denied US public contracts. Find out how Senate Bill 1638 could affect foreign AI developers and suppliers.
A new Senate bill in the United States is aiming to keep foreign-linked AI out of government contracts. It may impact how public agencies buy and use artificial intelligence. Whether you build, sell, or rely on AI tools, this one matters. From national security to software compliance, here is what you need to know about Senate Bill 1638.
🇺🇸 United States of America: New Bill Wants to Keep Foreign AI Out of American Public Contracts
A new bill was recently introduced in the United States Senate: Senate Bill 1638, formally titled the Protection Against Foreign Adversarial Artificial Intelligence Act of 2025. It seeks to stop any artificial intelligence system linked to a "foreign adversary" from being used in federal procurement.
This is not the first time lawmakers have tried to draw a line around who gets to supply software to the federal government, but this proposal makes it clear that the target is artificial intelligence.
The bill wants to stop certain foreign entities from offering their AI systems to public U.S. agencies.
The concern is not just competition. It is about national security, data privacy, and controlling who gets access to the digital tools that sit deep inside state infrastructure.
This is not about consumer apps. This is about AI systems that operate in energy, transport, healthcare, and administrative systems that handle sensitive or confidential data.
What The Bill Says 📘
The bill proposes an amendment to Chapter 47 of Title 41 of the United States Code, the section that governs public procurement.
The primary clause is clear. It blocks any entity that is owned by, controlled by, or subject to the jurisdiction or direction of a "foreign adversary" from providing artificial intelligence systems to the federal government.
This means any attempt to sell, lease, license, or offer such AI tools to government departments, directly or through subcontractors, would be illegal if the bill becomes law.
The proposed legislation does not only prevent direct contracts with foreign-linked firms. It also bars such technology from being incorporated through any backdoor channel, such as a third-party contractor or vendor.
The term "foreign adversary" is not a vague phrase pulled from headlines. It has a defined legal meaning under existing U.S. law.
Specifically, the bill references the definition established by the Secretary of Commerce, currently used for restrictions under the Information and Communications Technology and Services (ICTS) regulations. According to the most recent determinations, countries currently listed as foreign adversaries include China, Iran, North Korea, Cuba, Russia, and Venezuela.
However, the list is flexible. The Secretary of Commerce has the authority to add other nations if and when they are deemed to pose risks to the United States. This dynamic element is what gives the bill its extended reach. It creates a system that can grow and adapt over time. Companies would not just have to watch their codebase but also stay alert to changes in foreign policy.
The consequences for non-compliance are not outlined in detail in the bill itself, but under current procurement frameworks, a violation could lead to suspension or debarment from federal contracts. It could also spark audits and investigations into companies suspected of skirting the restriction. For many suppliers, this is reason enough to proceed with caution.
The bill does not spell out what counts as an artificial intelligence system in technical terms. Instead, it refers readers to Section 238(g)(1) of the Homeland Security Act of 2002. That reference brings in an expansive definition.
According to that section, artificial intelligence systems include any software that performs tasks that would normally require human intelligence.
This covers machine learning, pattern recognition, natural language processing, and any tools that can learn from data to make decisions.
That broad definition raises a challenge for enforcement. What if a piece of software includes a small AI component? What if the AI system was trained on a dataset that originated in a foreign adversary's jurisdiction? Would using an open-source model developed partly by researchers in China violate the rule? These are the types of grey areas that federal agencies and contractors will need to think about.
Another complication is the global nature of software development. AI systems are rarely built in isolation by a single developer in one country. They rely on global supply chains, often combining open-source libraries, pre-trained models, third-party datasets, and cross-border talent.
The bill, while aiming to close potential security gaps, may also create compliance bottlenecks for firms that use components whose origin is hard to trace.
The bill applies not just to final products but also to any embedded or dependent components. A U.S. company might develop an AI system that functions on its own, but if its model architecture or training tools were licensed from or built with input from an entity under foreign adversary control, then the whole system could be tainted under the proposed law.
This puts a heavy burden on contractors. They may need to document and certify every stage of their development pipeline, including where training data came from and who contributed to the software’s code.
In practice, that means new forms of documentation, internal reviews, and possibly even audits. This might also require firms to conduct background checks on development partners or restrict collaboration with foreign nationals in certain projects.
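To make the documentation burden concrete, the reviews described above can be pictured as an origin check over a component inventory. The sketch below is purely illustrative: the manifest fields, the `Component` record, and the `audit` function are assumptions for the example, not anything the bill or any compliance standard prescribes. The country codes reflect the ICTS determinations mentioned earlier.

```python
from dataclasses import dataclass

# Hypothetical flagged-country set (ISO 3166-1 alpha-2 codes) mirroring the
# current ICTS determinations: China, Iran, North Korea, Cuba, Russia, Venezuela.
FLAGGED_COUNTRIES = {"CN", "IR", "KP", "CU", "RU", "VE"}

@dataclass
class Component:
    name: str            # library, pre-trained model, or dataset
    origin_country: str  # country of the supplying entity (alpha-2 code)
    provenance_doc: str  # reference to the certification record on file

def audit(components: list[Component],
          flagged: set[str] = FLAGGED_COUNTRIES) -> list[str]:
    """Return names of components whose supplier sits in a flagged country.

    The flagged set is a parameter rather than a constant baked into the
    check, because the Secretary of Commerce can add countries over time
    and a compliance process would need to re-run against the updated list.
    """
    return [c.name for c in components if c.origin_country in flagged]

# A toy development pipeline with three components of mixed origin.
pipeline = [
    Component("tokenizer-lib", "US", "cert-001"),
    Component("pretrained-base-model", "CN", "cert-002"),
    Component("training-dataset", "DE", "cert-003"),
]
print(audit(pipeline))  # → ['pretrained-base-model']
```

In reality, of course, the hard part is not the check itself but filling in the `origin_country` field honestly for every dependency, which is exactly the traceability problem the previous paragraphs describe.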
If passed, the law will not automatically rewrite existing contracts, but it could affect renewals or amendments. Agencies may begin to request warranties or certifications that products are free from foreign adversary influence. Vendors that cannot provide this assurance may find themselves excluded from future bids.
For now, the bill sits in committee, but the political climate in Washington leans in favor of this kind of restriction. Public trust in foreign technology is low. The AI boom has stirred new anxieties about control, influence, and vulnerability.
This bill attempts to contain those fears through a legal firewall around procurement. Whether it achieves that goal remains to be seen. What is not in doubt is that it shows a growing resolve to decide who gets to build the AI systems that power the machinery of U.S. government.
This bill is not just about politics or trade; it is about the digital tools running public services. Governments, and not only the U.S., are now questioning whether their AI is safe from foreign influence. The U.S. bill proactively controls which countries can supply technology inside its institutions. And when that control extends to AI, it forces everyone, even vendors who never sell to government, to think carefully about the origins of their code.