
Last month, the White House launched its AI Action Plan at an event titled “Winning the AI Race.” The plan lays out a U.S. roadmap for establishing global dominance, reminiscent of earlier races in space exploration and nuclear weapons.
This race, however, has a less obvious endpoint than putting a man on the moon. Indeed, the point at which a winner will be declared is unclear. It is unlikely to be marked by the development of a single breakthrough AI tool that becomes widely used across society. Instead, success will rest on building the robust, secure infrastructure that enables many AI innovations to flourish while ensuring trust in the technology through the quality of its outputs and the responsibility with which it is applied.
While many nations are advancing in this race, the United States views China as its principal competitor. With both nations releasing increasingly capable new models, solutions, and robotics, public trust in AI has diverged sharply between China and the U.S. A recent United Nations Development Program survey measuring confidence that AI systems are designed in society’s best interest found trust levels at 83 percent in China compared with just 38 percent in the U.S.
Experts are far more optimistic about AI than the general public: Pew Research found that 56 percent of AI experts, compared with just 17 percent of U.S. adults, believe AI will have a very positive impact on the United States over the next 20 years. China’s high-trust environment could enable it to close the AI adoption gap more quickly and take the lead in global AI. Ultimately, the AI race will be won or lost by the country that succeeds in building genuine public trust in the technology.
The health sector’s uptake of technology remains uneven. Some partnerships, like that of NewYork-Presbyterian and Columbia, show the extraordinary transformation within reach. They developed an AI tool called EchoNext that identified structural heart disease from electrocardiogram readings more accurately than cardiologists, flagging more than 7,500 high-risk patients in eight months. Meanwhile, at least 70 percent of healthcare providers still exchange medical information by fax, according to Steve Posnack, deputy assistant coordinator in the Office of the National Coordinator for Health Information Technology. Many still cannot access a complete patient record digitally. For modernization to truly take hold, and for adoption of AI to be systematic and widespread, we need greater confidence in its safety and effectiveness.
The precedent for achieving that goal exists. In 1968, the U.S. National Highway Traffic Safety Administration (NHTSA) mandated seatbelts and introduced new crash safety standards. From 1968 through 2019, NHTSA’s safety standards prevented more than 860,000 deaths on the nation’s roads, 49 million nonfatal injuries, and damage to 65 million vehicles. Automakers now compete on safety features (see Volvo’s long-running seatbelt marketing). Similarly, in the 1950s, air travel was perceived as dangerous, with a fatal accident rate of roughly 5 deaths per 100,000 flight hours. After the Federal Aviation Administration introduced stringent pilot training and mandatory black box monitoring, and aircraft design advanced, that rate dropped to 0 deaths per 100,000 flight hours for scheduled service flights, and the U.S. National Airspace System now handles more than 45,000 commercial flights a day.

Other sectors have shown that policy change to prioritize safety can and does restore trust and confidence in the marketplace, which in turn fuels adoption, drives competition, and accelerates growth. As U.S. Sen. Mike Rounds (R-SD) said at the opening of a July 30 Senate Banking insurance subcommittee hearing, “we need regulatory frameworks that both support innovation and protect consumers.” Hard policy levers, in the form of regulation, legislation, and funding incentives, can support innovation in healthcare by investing in three essential building blocks:
1. Privacy-preserving data systems that let information stay local (e.g., within a health system) so that developers can securely access the right data;
2. Continuous monitoring to track safety, bias, and performance so that failures or degradation can be detected and corrected in real time; and
3. Public benchmarks and registries so AI tools can be tracked, compared, reproduced, and improved to raise quality and lower cost.
These investments would make it easier for innovators to develop safe, effective tools and for health systems to adopt them quickly, ensuring that people in rural, tribal, and urban communities alike can benefit from AI.
Building a clear, strong national infrastructure for AI in healthcare is not just about technological progress; it is about building trust, closing care gaps, and guaranteeing that every patient has access to tools that can improve care. As Senator Rounds noted, the AI Action Plan “recognizes the urgency to accelerate innovation, incentives to build AI infrastructure and lead in international diplomacy and security, all while acknowledging the need to address potential risks and promote trustworthy AI.”
The “move fast and break things” ethos coined by Mark Zuckerberg, which animates AI development in Silicon Valley today, is diametrically opposed to healthcare’s credo of “do no harm.” In healthcare, therefore, AI moves at the speed of trust. If the United States is to maintain global leadership, we need to radically prioritize transparency and trust, because only then will we see meaningful adoption, better performance, and real-world impact for patients.
Lucy Orr-Ewing serves as Chief of Staff and Head of Policy for the Coalition for Health AI (CHAI), a nonprofit coalition dedicated to ensuring widespread development and deployment of Responsible AI in healthcare.




