Even as the use of artificial intelligence (AI) in health care has been on the rise for years, federal policymakers are currently grappling with how to address the intersection of health care delivery, innovation, and AI regulation.
As AI becomes more integrated into health care operations and clinical decision-making, policymakers face the challenge of establishing frameworks that promote innovation while addressing concerns around bias, transparency, and accountability.
By understanding the current federal landscape and the actions already taken, we can get a clearer picture of where policymakers and stakeholders are headed and the implications for the health care system.
Federal Policy Actions
Recent federal involvement in AI policy is primarily driven by policymakers’ interest in:
- addressing concerns about regulatory gaps for AI use;
- establishing accountability measures for organizations that use AI; and
- improving public trust in AI systems to encourage responsible AI use.
In Congress, the Senate HELP Committee recently held a hearing on AI’s potential to support patients, workers, children, and families. The hearing showcased both support for the positive impacts of AI in various work sectors and concerns about the need for proper oversight.
Various bills have also been introduced, including:
- S.2997, the Right to Override Act, by Sen. Markey (D-MA)
- H.R.5045, the Health AI Act, by Rep. Lieu (D-CA-36)
- H.R.238, the Healthy Technology Act of 2025, by Rep. Schweikert (R-AZ-01)
These bills represent three different types of legislation related to AI and health care: those that seek to mitigate AI's negative effects, those that seek to better understand the technology, and those that seek to use it to improve the health care system.
The Trump administration has also been active in the AI policy space, through its America’s AI Action Plan, which was published in July 2025. This plan encourages the development and use of new AI systems in health care.
Specifically, the Centers for Medicare & Medicaid Services (CMS) has proposed the WISeR Model, developed to review Medicare payments for certain services. Many stakeholders have concerns about the model, including the American Hospital Association (AHA), which recognizes the benefits of AI but worries that, without proper guardrails and human oversight, the algorithm may ignore patient-specific care details.
Implications of Federal Actions
These actions represent early attempts at governing AI in health care and highlight policymakers' desire to use AI to improve health care while mitigating potential adverse impacts. But how do these actions affect different stakeholders within health care?
Let's start by looking at providers. We are already seeing more AI platforms designed to increase efficiency for providers, but new regulations could require training modules and newly formed committees to monitor AI use in these settings, possibly raising barriers to entry. Additionally, as AI becomes more common in care decisions, providers are raising concerns that regulations will not adequately protect their ability to exercise clinical judgment.
Drug manufacturers are seeing success with AI modeling to increase efficiency in the drug development process and could expand its use to other aspects of manufacturing. The FDA has established the Center for Drug Evaluation and Research (CDER) AI Council to oversee these developments; further guidance and action could either bolster or limit AI use in the manufacturing process.
The AI Council has not finalized any guidance, as there are open questions about how AI should be defined and how the credibility and risk of different AI platforms can be established. Drug manufacturers have been supportive of AI use at the FDA so far; however, there are concerns within the industry that the technology could produce incomplete or incorrect summaries, leading to slowdowns in the approval process.
Patients, meanwhile, remain skeptical about what AI in health care means for them. In a recent study published in JAMA, almost two-thirds of people surveyed reported distrust in their health care system's use of AI and concern that AI could harm them.
This finding indicates a need for lawmakers to implement policies that build patients' confidence and trust so they feel comfortable with ongoing AI use. That could require a change in approach from the Trump administration, which has generally stressed protecting AI innovation from overregulation.
Ongoing Policy Issues
While there has been movement regarding policies related to AI and health care, there are still some issues that will need to be addressed in the future.
- Equity and Bias: Some stakeholders, like the AHA, have expressed concerns that AI tools could be biased or discriminate against patients, leading to substandard care.
- Health Care Workforce: Both the AHA and the American Medical Association (AMA) have been supportive of expanded AI use to help address provider burnout, so long as it enhances providers’ work rather than replacing it.
- Oversight: The Biotechnology Innovation Organization (BIO) has previously stated that it supports an incremental approach to oversight and regulation to allow sufficient time for adaptation. The AMA emphasizes that providers must be included in oversight conversations to ensure the technology supports care.
What’s Next
AI in health care is no longer just a possibility; policymakers are actively grappling with its deployment for providers and payers. Decisions made now will shape how AI is used and perceived, affecting access, quality, and cost of care for patients for years to come.