By La’Tise M. Tangherlini
Once confined to science fiction, artificial intelligence is now a real and growing presence in medicine. Hospitals and clinics across the United States have started deploying AI-driven tools for tasks ranging from interpreting medical images and predicting patient risks to automating appointment scheduling and recordkeeping.
As of January 2025, the U.S. Food and Drug Administration (FDA) has authorized more than 1,000 AI-enabled medical devices for marketing in the United States, reflecting the rapid growth of this technology in the health care sector. In parallel, providers are experimenting with generative AI systems such as medical chatbots and “ambient” AI scribes that draft clinical notes from doctor–patient conversations. An estimated 28 percent of physician practices are using some form of AI-driven documentation assistant, according to a 2024 Medical Group Management Association poll.
These innovations promise faster diagnoses, streamlined workflows, and cost savings. At the same time, they raise significant legal, regulatory, and ethical concerns regarding patient privacy, safety, bias, and accountability. As policymakers and courts grapple with how existing health laws apply to AI, they must also identify gaps in those laws and work to ensure the responsible use of these technologies.
AI in Clinical and Administrative Settings
On the clinical side, machine-learning algorithms, used predominantly in radiology, assist physicians by analyzing medical images, monitoring patient vital signs, and suggesting diagnoses or treatment plans based on patient data. For example, hospitals are increasingly using AI sepsis prediction models that scan electronic health records (EHRs) to alert staff to patients at risk of septic shock. Similarly, oncology decision-support systems recommend cancer treatments, robotic systems with AI-enhanced vision aid surgeons, and primary care doctors may use an AI symptom-checker tool as a second opinion in complex cases.
Beyond direct patient care, administrative and operational uses of AI are also proliferating. Health insurers employ AI in claims processing and utilization management, which has already led to lawsuits over alleged wrongful denials. Hospitals deploy AI tools to optimize staff scheduling, manage supply chains, and detect billing anomalies or fraud. The most widespread early adoption of AI in the exam room is in clinical documentation.
“Ambient” AI scribe systems listen to conversations during patient visits and automatically generate encounter notes for the EHR. Microsoft’s Nuance DAX system, Amazon Web Services’ HealthScribe, and others are becoming increasingly integrated into health systems. Studies indicate these AI scribes can save physicians significant time on paperwork and reduce burnout. The U.S. Department of Veterans Affairs has even initiated trials of AI scribe technology in its medical centers. At the same time, clinicians are testing generative AI for tasks such as drafting patient-facing educational materials, summarizing medical literature, and triaging patient messages.
Gaps in Federal Regulation
Given well-known issues with AI hallucinations and the need to ensure that patient-specific advice aligns with the proper standard of care, the legal and medical communities are voicing concern about AI tools. A 2024 peer-reviewed study published in BMC Medical Ethics highlighted six key areas of concern: patient privacy, informed consent, algorithmic bias, medical liability, institutional oversight, and the impact on workforce roles.
Not surprisingly, the rise of AI in medicine has exposed significant gaps in federal privacy and health laws, particularly the limitations of the Health Insurance Portability and Accountability Act (HIPAA). Enacted in 1996, HIPAA was designed to safeguard protected health information (PHI) handled by “covered entities” such as health care providers and insurers. However, many modern AI-driven health tools fall outside the scope of HIPAA. Direct-to-consumer health apps, wearables, telehealth platforms, and chatbots not operated by a covered entity may collect and share highly sensitive health data without being bound by HIPAA obligations. Patients often assume their information is protected, but these tools may sell or disclose it to third parties without their consent.
Even within HIPAA’s framework, the use of AI stretches the boundaries of the “treatment, payment, and health care operations” exceptions, which enable the sharing of PHI without explicit consent. Hospitals may justify feeding patient data into AI algorithms under these exceptions, leaving patients with little transparency or choice. Compounding the problem is the difficulty of truly anonymizing health data in an AI context. Sophisticated reidentification methods can match deidentified records to real individuals, undermining the legal distinction between PHI and “anonymous” data.
Federal regulators have acknowledged these challenges, but comprehensive reform has stalled. The Federal Trade Commission has stepped in by utilizing its consumer protection authority, as seen in its 2023 enforcement action against GoodRx for violation of the Health Breach Notification Rule. However, its jurisdiction is limited to breaches of personal health records and deceptive practices. In short, HIPAA has proven inadequate for the AI era, leaving critical gaps in patient consent, transparency, and data protection.
Patchwork of State Legislation and Enforcement
In the absence of comprehensive federal regulation, states have begun crafting their own rules governing AI in health care. California’s Assembly Bill 3030, which took effect in early 2025, requires providers to disclose to patients when generative AI is used in clinical communications, reinforcing transparency in patient care. In August, the California Privacy Protection Agency adopted automated decision-making regulations that, once in effect, will impose disclosure and oversight duties on entities using AI in health contexts.
Colorado went further in 2024, classifying most health-related AI systems as “high-risk” under Senate Bill 24-205, which requires deployers of such systems to conduct impact assessments, make consumer-facing disclosures, and notify Colorado residents of their ability to opt out. Texas’s 2025 Responsible Artificial Intelligence Governance Act prohibits AI systems intended to discriminate, violate constitutional rights, or manipulate behavior in harmful ways.
Meanwhile, other states have focused on specific medical contexts. For example, Arizona requires physician oversight of AI-driven review and denial of claims for medical necessity and prior authorization requests; Maryland and Nebraska have passed similar protections for health insurance utilization review. Utah has adopted strict disclosure rules for AI mental health chatbots, while Illinois has banned AI apps from simulating licensed therapists.
State attorneys general are also using consumer protection laws to investigate health AI companies. For example, the Texas attorney general is probing whether AI chatbot platforms misled consumers about mental health services. The result is a fragmented yet quickly developing landscape in which transparency, human oversight, and bias mitigation dominate state-level policy. For lawyers, these rules create a patchwork of compliance challenges and signal a likely wave of enforcement.
How Courts Are Confronting AI
As AI becomes more integrated into health care delivery, legal practitioners can anticipate an increase in litigation over its use. Courts are now grappling with how to apply established doctrines, such as medical malpractice, product liability, and bad-faith insurance practices, to novel contexts involving AI. For example, in Estate of Lokken v. UnitedHealth Group, Medicare Advantage beneficiaries alleged that the insurer used AI algorithms to deny or shorten post-acute care, claiming breach of contract and breach of the implied covenant of good faith and fair dealing. The court allowed key claims to proceed, signaling a judicial willingness to scrutinize AI-driven coverage determinations. Similar lawsuits have been filed against Cigna and Humana, alleging that those insurers relied on flawed algorithms to deny claims.
How medical malpractice claims involving AI will develop remains unclear. Plaintiffs will likely argue that reliance on a faulty algorithm breaches the duty of care. The Federation of State Medical Boards has clarified that physicians retain responsibility even when they use AI, reinforcing that liability will rest with clinicians rather than shifting entirely to software vendors.
Product liability claims are also on the horizon. If FDA-cleared AI devices malfunction, manufacturers may face strict liability, though they may invoke the learned intermediary doctrine. Discovery disputes over proprietary algorithms and bias audits are also emerging, raising concerns about algorithmic transparency and accountability. Finally, civil rights litigation is possible if biased algorithms disproportionately deny care to protected groups, in violation of statutes such as Title VI of the Civil Rights Act.
Ethical Challenges: Bias, Autonomy, and Equity
Ethical debates mirror the legal ones, asking whether AI in medicine aligns with core principles of fairness, autonomy, and justice. Algorithmic bias is one of the most serious risks. A 2019 article published in Science found that a risk-prediction algorithm widely used by major hospitals underestimated the health care needs of Black patients compared to white patients with similar conditions. In a 2023 study published in JAMA Health Forum, 42 respondents from clinical professional societies, universities, government agencies, and health organizations identified 18 algorithms then in use with the potential for bias.
Ethical frameworks from the American Medical Association (AMA) and the National Institute of Standards and Technology emphasize the importance of fairness audits, diverse training datasets, and ongoing monitoring to mitigate bias. Ultimately, equity necessitates that AI tools be validated across diverse populations and deployed in resource-constrained settings, not just in elite medical centers. Without such efforts, AI may increase disparities in health care.
Many AI systems operate behind the scenes, potentially compromising patient autonomy by making it difficult for patients to know when an algorithm has shaped their diagnosis or treatment. California’s disclosure law, cited previously, is one attempt to address this issue, but most states lack equivalent protections. A 2024 paper published in The American Journal of Bioethics, “Patient Consent and the Right to Notice and Explanation of AI Systems Used in Health Care,” argues that patients should have the right to refuse AI-mediated care when possible and should be informed, in plain language, about how their health care providers are using AI in their treatment.
As the use of AI in health care grows, so do concerns about whether physicians remain the final decision-makers and whether overreliance on AI invites “automation bias,” in which clinicians defer to an algorithm’s output even when it conflicts with their clinical judgment. Both the AMA and the Federation of State Medical Boards have called for physician training on the use of AI, emphasizing that AI should augment, not replace, professional expertise.
Compliance and Risk Mitigation for Health Lawyers
For attorneys advising health care clients, the challenge is navigating a new legal frontier in which innovation outpaces regulation. Internal AI oversight committees can be a helpful resource to vet new tools, assess compliance, and monitor outcomes. Legal professionals working in the compliance space must account for state-specific requirements such as California’s disclosure rules and Colorado’s impact assessments. A good rule of practice is to treat all health data with HIPAA-level protections, even when HIPAA does not technically apply to the data in question.
Legal professionals should also ensure that agreements with AI vendors include provisions addressing data ownership, indemnification, bias auditing, and security obligations, promoting transparency and accountability. Even where state law does not require it, insurers and providers should confirm that liability insurance covers AI-related claims, that updated informed consent forms disclose the use of AI, and that clinicians receive training on how to explain AI involvement to patients.
Additionally, legal professionals may help clients integrate continuous auditing and bias mitigation into clinical governance. Records of these governance programs should be maintained, consistent with HIPAA and state regulations, to demonstrate compliance in the event of litigation or a regulatory investigation.
The integration of AI into health care delivery and management brings the promise of efficiency and personalization, but it also raises complex legal and ethical challenges. HIPAA, in conjunction with other federal laws, provides incomplete guidance, leaving gaps that states are filling with their own regulations. Courts have begun to address AI-driven harms through existing doctrines, while ethical debates highlight ongoing risks of bias and inequity.
For attorneys, the task is to help clients navigate uncertainty by implementing strong governance, compliance, and communication practices. The legal profession can ensure that innovation proceeds responsibly, that patient rights are protected, and that trust remains at the heart of the healing professions. With careful oversight, AI can be a powerful ally in medicine without undermining the core protections of patient privacy.
La’Tise M. Tangherlini is a D.C.-based attorney and former general counsel with expertise in government contracts, health law, and technology policy. A Cornell Law School graduate and Army veteran, she has advised start-ups, defense contractors, and health care innovators on legal strategy, compliance, and ethics. Tangherlini is the founder of Law & Lattes, PLLC, a legal platform that delivers concise, digestible guidance on emerging legal issues at the intersection of law, policy, and innovation.