
The Trump administration on July 23 released what it calls its AI Action Plan, dubbed Winning the Race. The plan emphasizes accelerating AI innovation through deregulation – pledging to revise federal guidelines for AI safety, supporting the development of open models, and ensuring access to large-scale computing power for startups and research.
While the health tech industry repeated its call for states to align on AI regulations, the administration warned in the action plan that federal funding decisions will consider how states handle AI regulation. One expert says he expects a new surge of litigation, while National Nurses United and other groups have signed on to develop a People's AI Action Plan in response.
Faster uptake with ‘high stakes’ consideration
The plan is a result of President Donald Trump's executive order on AI, "Removing Barriers to American Leadership in Artificial Intelligence," signed Jan. 23. That document proposed that AI could bring about a "renaissance," including breakthroughs in medicine.
As Trump took the stage at the AI Summit in Washington, D.C., last week to promote the plan, he complimented the audience of tech leaders, calling them "the brain power, the greatest power of them all." The self-proclaimed "deal junkie" described collective efforts to fuel AI dominance as "historic action to reassert the future which belongs to America."
In the new AI Action Plan, the administration pledged to roll back some of the rules established by the Biden administration aimed at the safe and secure development of AI in various sectors, including healthcare.
"Many of America’s most critical sectors, such as healthcare, are especially slow to adopt due to a variety of factors, including distrust or lack of understanding of the technology, a complex regulatory landscape, and a lack of clear governance and risk mitigation standards," the administration said.
Regulatory sandboxes will be established to facilitate rapid testing of AI tools. Domain-specific efforts will be launched to develop national standards for AI systems.
"A coordinated federal effort would be beneficial in establishing a dynamic, 'try-first' culture for AI across American industry."
The plan references understanding AI systems for safe application in "high-stakes environments," calling rigorous evaluations essential for assessing AI system performance and reliability. Thus, it directs federal agencies to draft guidelines to conduct AI system evaluations and organize hackathons to test AI systems for vulnerabilities and effectiveness.
The main thrust of the AI Action Plan is to fuel innovation by removing bureaucratic obstacles, building a robust AI infrastructure and energy sources to power them, and establishing American AI as the global standard.
Putting states on the defensive
The EHR Association commended the federal commitment "to advancing safe, effective and innovative AI." The organization of health technology vendors also reiterated its "call for a uniform, risk-based regulatory model" in a statement shared with Healthcare IT News.
"Fragmented state mandates risk slowing innovation and complicating compliance, which could deter innovation and adoption," said Leigh Burchell of Altera Digital Health, chair of the EHR Association's executive committee. "We look forward to collaborating with regulatory agencies and impacted stakeholders to determine the best path forward in achieving the goals of the AI Action Plan."
While the plan orders the federal government to seek input from businesses to identify and eliminate regulatory barriers, it also will consider states' AI regulatory climates in funding decisions.
"The first question to be answered will be the level of transparency to determine [Centers for Medicare & Medicaid Services'] level of compliance or non-compliance with state standards in the use of AI, which will be the first wave of litigation," Stephen Bittinger, partner at law firm Polsinelli, said by email on Monday.
"The second will be based on the separation of powers with Medicaid programs."
Bittinger said he sees additional turbulence on the horizon for providers with the implementation of Trump's AI Action Plan.
"Providers will need to navigate evolving compliance hurdles to properly adopt and implement AI, which has implications for reimbursement disputes, HIPAA and cybersecurity, and more," he said.
"It will come down to who can fight fire with fire – AI versus AI," he said Monday by email. "As a result, there will be a lot of litigation."
Evolving role for NIST
Recommended policy actions would position the National Institute of Standards and Technology at the forefront of the development and testing of AI, including "AI testbeds for piloting AI systems in secure, real-world settings," specifically tailored to healthcare and a few other sectors.
Trump also signed an executive order last week to "prevent woke AI in federal government," while the AI Action Plan calls for AI systems designed to protect free speech and reflect "American values."
"The U.S. government will deal only with AI that pursues truth, fairness and strict impartiality," Trump said in his order to prevent 'woke AI.'
"The American people do not want woke Marxist lunacy in the AI models," he said from the podium at the opening of the AI Summit.
Specifically, the AI Action Plan directs the Department of Commerce, which contains NIST, to revise AI risk management frameworks to eliminate ideological biases and establish federal procurement guidelines to prioritize contracts with developers who ensure objectivity in AI systems.
The plan also orders the federal government to revise federal guidelines for AI safety by removing references to diversity, equity and inclusion, climate change and misinformation.
The plan prioritizes investment in AI skill development through education and workforce funding, in foundational technologies for robotics and drones, and in incentives for researchers to curate high-quality datasets for AI training.
Security measures include establishing minimum data quality standards for scientific fields, creating secure computing environments to facilitate controlled access to federal data, and other measures for defense.
The lengthy plan also addresses growing threats, such as deepfakes, charging NIST to develop formal guidelines for evaluating deepfake evidence and promising to issue guidance to agencies on adopting deepfake standards in legal adjudications.
Adding more federal AI officers
The Trump AI Action Plan follows previous administrative actions to rescind prior AI regulations. On Jan. 21, Trump revoked former President Joe Biden's 2023 executive order on responsible AI development, which required the Department of Health and Human Services to establish an AI safety program with an AI Task Force charged to develop policies and frameworks for "responsible deployment and use of AI and AI-enabled technologies in the health and human services sector."
Last year, HHS restructured its AI, cybersecurity and IT functions and created a chief AI officer role to set AI policy and strategy for the whole department, naming Dr. Meghan Dierks, former Komodo Health chief data officer, to the post in December. She was replaced by acting Chief AI Officer Peter Bowman-Davis, a Yale University student and former engineering fellow at the venture capital firm Andreessen Horowitz, according to an April story in PoliticoPro, which Dierks referenced in a series of social media posts announcing her departure and stressing the ongoing importance of the role.
Meanwhile, the Trump administration released AI guidance that preserved some policies from the Biden era, including the requirement to identify chief AI officers and their interagency council and establish a specialized oversight process for certain applications.
"In healthcare contexts, the medically relevant functions of medical devices; patient diagnosis, risk assessment or treatment; the allocation of care in the context of public insurance or the control of health-insurance costs and underwriting" are considered high-impact use cases, according to that memo.
Trump's new AI Action Plan calls for more individuals to oversee the adoption of AI at the federal level. The plan is to establish a council of chief artificial intelligence officers to coordinate AI adoption across agencies and mandate employees' access to AI tools. It also calls to establish an AI Information Sharing and Analysis Center – AI-ISAC – for threat intelligence sharing.
Call for a People's AI Action Plan
National Nurses United, which previously released guiding principles for healthcare AI development and has voiced concerns over patient acuity AI resulting in "inappropriate nurse-to-patient ratios and unpredictable scheduling," joined the AI Now Institute's new call for a People's AI Action Plan as a counter to Trump's.
This initiative aims for relief "from the tech monopolies who repeatedly sacrifice the interests of everyday people for their own profits" and "delivers on public well-being, shared prosperity, a sustainable future and security for all."
The signatory website, organized by the policy think tank AI Now Institute, displays a lengthy list of organizations that have already signed, indicating they believe such a plan is needed in the face of the Trump administration's plans. The organizations include the American Association of People with Disabilities, Consumer Federation of America, Robert F. Kennedy Human Rights, TechEquity and dozens of others.
The Institute said in the online call for the People's AI Action Plan, "We can't let Big Tech and Big Oil lobbyists write the rules for AI and our economy at the expense of our freedom and equality, workers and families' well-being, even the air we breathe and the water we drink – all of which are affected by the unrestrained and unaccountable roll-out of AI."
Andrea Fox is senior editor of Healthcare IT News.
Email: afox@himss.org
Healthcare IT News is a HIMSS Media publication.