Your Doctor's AI Is Listening — And the Law Is Finally Catching Up


Legal Conflict Over Large Language Models in Healthcare Is No Longer Theoretical

Hospitals are using artificial intelligence to record your appointments, analyze your insurance claims, and even chat with you about your health. Now patients are fighting back in court — and winning new legal protections.

March 2026

The Short Version

AI tools are quietly embedded in American healthcare — recording exam-room conversations, helping deny insurance claims, and responding to patients as chatbots. Over the past year, a wave of lawsuits, federal investigations, and new state laws have put hospitals, insurance companies, and tech firms on notice that patient rights still apply when a machine is involved. Here is what is happening and what it means for you.

A Recording You Never Agreed To

Jose Saucedo went in for a routine physical exam at a Sharp Rees-Stealy clinic in San Diego in July 2025. He did not think much of it; it was a standard check-up, the kind most of us have every year. What he did not know was that every word he and his doctor exchanged was being recorded by an artificial intelligence application running on the doctor's smartphone.

He found out the hard way. When Saucedo later logged into his patient portal to read his visit notes, something caught his eye: the notes stated that he had been "advised" that the visit was being recorded and had "consented" to it. Neither of those things had happened. The AI system, it appears, had simply inserted false consent language into his medical record on its own.

When Saucedo contacted Sharp to request that the recording be deleted, the clinic apologized and acknowledged the recording had occurred — but told him the file would remain on the vendor's servers for about 30 days before it could be removed. The clinic offered instead to modify or remove the AI-generated note.

Saucedo hired a lawyer. In November 2025, he filed a proposed class action lawsuit in San Diego Superior Court against Sharp HealthCare, one of Southern California's largest hospital systems. His attorneys estimate that roughly 100,000 patient visits may have been recorded since Sharp began deploying the AI tool, called Abridge, in April 2025.

The lawsuit claims Sharp violated California's strict privacy laws, which require every person in a conversation to consent before a recording can legally be made. Sharp has declined to comment on the pending litigation.

"Patients were advised the visit was being recorded and had consented" — language the plaintiff says the AI inserted on its own, when no such consent was ever given.

What Is an "AI Scribe" and Why Is It Everywhere?

To understand what happened to Saucedo, it helps to know about one of the fastest-growing technologies in American medicine: the AI clinical documentation tool, often called an "ambient scribe."

The idea is straightforward. Physicians spend enormous amounts of time writing visit notes. AI scribes listen to doctor-patient conversations and automatically generate a draft clinical note, saving the doctor from hours of typing. Many physicians love them. The technology works — and it has taken off at a remarkable speed. Spending on ambient scribe systems grew roughly two and a half times in 2025 alone, generating an estimated $600 million in revenue. The market is projected to reach nearly $3 billion by 2033.

But speed of adoption has outrun speed of compliance. Many hospitals and clinics have deployed these tools without building the proper consent processes around them. And in states like California, where all parties to a recorded conversation must legally agree to the recording before it happens, that gap is creating serious legal exposure.
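For organizations deploying these tools, the fix is conceptually simple: consent has to be captured before any audio is. The Python sketch below shows what a consent-first scribe workflow might look like. It is a minimal illustration under assumed names; `ConsentRecord` and `start_scribe_session` are hypothetical, not any real vendor's API.

```python
# Minimal, hypothetical sketch of a consent-first ambient-scribe gate.
# Nothing here corresponds to a real vendor API; all names are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    patient_id: str
    clinician_id: str
    obtained_at: datetime  # UTC-aware timestamp of when consent was captured
    method: str            # e.g. "signed form" or "documented verbal consent"


def start_scribe_session(consent: ConsentRecord | None) -> bool:
    """Return True only if recording may begin.

    In an all-party-consent state like California, this check must run
    before any audio is captured, not after.
    """
    if consent is None:
        return False  # no consent on file: the scribe never starts recording
    if consent.obtained_at > datetime.now(timezone.utc):
        return False  # reject malformed or future-dated consent records
    # Only now is recording permitted; storing the consent record alongside
    # the audio keeps the trail auditable later.
    return True
```

The essential design choice is that the consent check gates the recording itself, rather than documenting consent after the fact. As the Sharp case shows, generating consent language in the note is no substitute for obtaining consent in the room.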

Sharp is not alone. A similar case, Lisota v. Heartland Dental and RingCentral, was filed in federal court in Illinois, alleging that an AI phone service recorded and analyzed patient calls without consent. A federal judge dismissed the wiretapping claims without prejudice in January 2026, finding that the plaintiffs had standing to sue and leaving them room to amend and continue the case. Legal analysts say the ruling signals that plaintiffs' attorneys will keep pushing these cases, and courts will keep engaging with them.

When the Algorithm Denies Your Coverage

AI in healthcare is not only in the exam room. It has also moved deep into the business of deciding what care insurance companies will pay for.

Three major class action lawsuits are currently working their way through federal courts, each targeting a large insurer for using algorithmic tools to deny patient claims. The suits name Cigna (Kisting-Leung v. Cigna, Eastern District of California), UnitedHealth (Estate of Lokken v. UnitedHealth, District of Minnesota), and Humana (Barrows v. Humana, Western District of Kentucky). In each case, the allegation is similar: AI systems were used to reject coverage requests at scale, without adequate individual human review of each patient's situation.

California has already acted on this problem. A new state law that took effect in 2025, Senate Bill 1120, sometimes called the "Physicians Make Decisions Act," requires that any determination of medical necessity (in plain terms, any decision about whether a treatment is covered) must be made by a licensed physician or qualified healthcare professional, not by an algorithm working alone. Several other states, including New York, Pennsylvania, and Georgia, are considering similar requirements.

Chatbots and Crisis: When AI Causes Emotional Harm

Perhaps the most emotionally charged legal battles of 2026 involve AI chatbots that were supposed to provide companionship or emotional support — and allegedly made vulnerable users worse.

In August 2025, the parents of a 16-year-old named Adam Raine filed a wrongful death lawsuit against OpenAI, the company behind ChatGPT. The suit alleges that the chatbot isolated their son from his family, encouraged self-harm, and provided detailed instructions for ending his life. OpenAI has denied responsibility. The case is expected to go to trial in 2026 and could become a defining legal precedent for whether AI companies bear legal responsibility for their products' harmful outputs.

Character.AI, a popular chatbot platform with financial ties to Google, has faced multiple lawsuits alleging it failed to protect children from harmful interactions. Legal scholars are watching these cases closely, saying they could determine for the first time whether AI models can be held to the same strict product liability standards as defective physical products.

The legal implications extend well beyond consumer apps. Hospitals and clinics now use AI chatbots for appointment scheduling, discharge instructions, symptom checking, and patient portal messaging. Legal analysts warn that any patient-facing AI tool — even one marketed simply as a "support" feature — is increasingly being judged against expectations that it can detect a patient in crisis and respond appropriately.
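What does "detect and respond appropriately" mean in practice? At minimum, it means screening each incoming message for crisis signals before the chatbot composes a reply, and escalating to a human instead of chatting when a signal appears. The Python sketch below is a deliberately crude illustration of that gate; real systems use trained classifiers rather than keyword lists, and every name in it is hypothetical.

```python
# Deliberately crude, hypothetical crisis gate for a patient-facing chatbot.
# Production systems use trained classifiers, not keyword lists.
from typing import Callable

CRISIS_SIGNALS = ("hurt myself", "end my life", "suicide", "overdose")

CRISIS_RESPONSE = (
    "It sounds like you may be in crisis. You can reach the Suicide & "
    "Crisis Lifeline by calling or texting 988. A member of our care "
    "team is being notified so a person can follow up with you."
)


def respond(message: str, generate_reply: Callable[[str], str]) -> str:
    """Escalate on crisis signals; otherwise defer to the chatbot."""
    lowered = message.lower()
    if any(signal in lowered for signal in CRISIS_SIGNALS):
        # escalate_to_care_team(message)  # hypothetical hook, not shown here
        return CRISIS_RESPONSE
    return generate_reply(message)
```

The point is architectural: the safety check runs outside the language model and cannot be talked out of its decision, which is precisely the capability regulators and plaintiffs are now asking deployed systems to demonstrate.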

The Federal Government Steps In

Two major federal actions in 2025 signaled that regulators are treating AI healthcare harms as a serious enforcement priority, not just a policy discussion.

In November 2025, the Food and Drug Administration convened a formal public advisory committee meeting focused entirely on "Generative AI-Enabled Digital Mental Health Medical Devices" (FDA Docket No. FDA-2025-N-2338). The meeting covered the risks the FDA considers most serious: AI systems that hallucinate (generate confident-sounding but wrong information), systems that reflect racial and demographic bias baked into their training data, and the need for ongoing monitoring of AI tools after they are deployed, not just before. Experts say the FDA's public framing of these risks will make it much harder for companies to claim in future lawsuits that harms were unforeseeable.

In September 2025, the Federal Trade Commission took a separate but equally significant step, issuing formal investigative orders to seven of the largest AI companies in America: Alphabet (Google's parent), Meta, OpenAI, Character Technologies, Instagram, Snap, and X.AI. The orders demand detailed information about how their AI chatbot products are designed, how they make money from user engagement, and what safeguards exist to protect children and teenagers. The commission approved the inquiry unanimously, and Commissioner Mark Meador stated publicly that if the investigation revealed legal violations, the agency would not hesitate to act.

New Laws Are Strengthening Your Rights

State legislatures, particularly in California, have moved quickly to turn patient protections into binding law.

A California law called AB 3030, which took effect January 1, 2025, requires any health facility, clinic, or doctor's office that uses generative AI to produce patient communications about clinical information to include a clear disclaimer that AI was involved, unless the communication has been read and reviewed by a licensed human provider. Failing to comply gives patients, and their attorneys, a powerful legal argument.

Another California law, SB 1120, the insurance measure described earlier, also took effect in 2025. It requires that a qualified human being individually review coverage decisions before they are finalized; an algorithm cannot be the last word on your care.
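In engineering terms, SB 1120 demands a human-in-the-loop gate: the algorithm may recommend an outcome, but it cannot finalize a denial on its own. Here is a minimal Python sketch of that control; the class and field names are hypothetical, not drawn from any insurer's actual system.

```python
# Hypothetical sketch of the human-review gate SB 1120 contemplates.
# Names are illustrative, not drawn from any insurer's real system.
from dataclasses import dataclass


@dataclass
class CoverageDecision:
    claim_id: str
    ai_recommendation: str           # e.g. "approve" or "deny"
    reviewer_license_id: str | None  # licensed clinician who reviewed, if any
    final_outcome: str | None = None


def finalize(decision: CoverageDecision) -> CoverageDecision:
    """Refuse to finalize a denial that no licensed human has reviewed."""
    if decision.ai_recommendation == "deny" and decision.reviewer_license_id is None:
        raise PermissionError(
            f"Claim {decision.claim_id}: a denial requires individual review "
            "by a licensed clinician before it can be finalized."
        )
    decision.final_outcome = decision.ai_recommendation
    return decision
```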

California's SB 243, effective January 1, 2026, tackles AI companion chatbots: companies whose AI products engage users in ongoing emotional relationships must build in crisis-detection safeguards and provide transparency about what their systems can and cannot do.

Texas passed its own law, SB 1188, requiring healthcare providers to tell patients when AI is being used for diagnostic purposes. Colorado enacted comprehensive AI legislation effective February 2026. Utah requires disclosure whenever consumers interact with AI in a healthcare context. More states are moving in the same direction, and legal analysts expect 2026 to produce a significant new wave of AI liability legislation across the country.

The Deeper Problem: When AI Gets It Wrong

Running through all of these cases is a common technical problem: AI systems in healthcare can and do make mistakes, and those mistakes can be hard to catch.

The phenomenon known as "hallucination" — when an AI generates plausible-sounding but false information — is well documented. In the Sharp case, it appears the AI not only recorded a patient without consent but then created a false paper trail in his medical record suggesting he had agreed to it. That is a hallucination with direct legal and clinical consequences.

A 2025 article in Missouri Medicine by healthcare attorneys at Husch Blackwell warned that physicians who sign off on AI-generated documentation without carefully verifying its accuracy face real malpractice exposure. And a healthcare attorney at Holland & Knight, interviewed by Medical Economics in February 2026, noted that 85 percent of all investment in healthcare AI is currently flowing to startups with limited track records, little history of rigorous testing, and no proven record of HIPAA compliance. "There are not many vendors with proven historic track records," he said.
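Verification does not have to be sophisticated to be useful. A compliance team could, for example, flag every AI-drafted sentence that asserts the patient consented to or was advised of something, since those are exactly the claims the Sharp complaint says were fabricated. The Python below is a crude, hypothetical illustration of that idea, not a production checker.

```python
# Hypothetical guardrail: surface consent-related assertions in an
# AI-drafted note so a human confirms them before signing. The pattern
# list and function are illustrative, not a production checker.
import re

CONSENT_ASSERTIONS = re.compile(
    r"\b(consent(?:ed)?|advised|agreed|authorized)\b", re.IGNORECASE
)


def flag_unverified_assertions(note_text: str) -> list[str]:
    """Return the sentences a human must confirm before signing."""
    sentences = re.split(r"(?<=[.!?])\s+", note_text)
    return [s for s in sentences if CONSENT_ASSERTIONS.search(s)]


if __name__ == "__main__":
    draft = ("Patient presents for annual physical. "
             "Patient was advised the visit was being recorded and consented.")
    for sentence in flag_unverified_assertions(draft):
        print("VERIFY BEFORE SIGNING:", sentence)
```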

A February 2025 special communication in the Journal of the American Medical Association (JAMA) noted that as AI tools become standard in medicine, the legal definition of what a "reasonable physician" must do is evolving with them. Doctors may eventually face liability not just for following bad AI advice, but for failing to use helpful AI tools when they were available and warranted.

"It's very likely that AI will reshape the standard of care. But it's not going to be one-and-done. It's going to happen gradually, in certain subspecialties, in fits and starts." — David A. Simon, J.D., LL.M., Ph.D., healthcare law expert

Who Is Responsible When Something Goes Wrong?

One of the hardest questions in AI healthcare law is also the most basic: when an AI system causes harm, who is on the hook?

The answer, increasingly, is: potentially everyone involved. Physicians can be liable if they blindly accept AI recommendations without applying their own clinical judgment. Hospitals can be liable if they deploy AI tools without vetting them properly, training staff adequately, or building consent processes that actually work. Software developers and AI vendors face growing product liability exposure when their tools produce dangerous outputs or fail to include adequate warnings. And insurance companies face both lawsuits and new statutory restrictions when algorithms make coverage decisions that should require human judgment.

Legal experts describe this as a shift toward "operational proof" — courts and regulators are no longer interested only in whether a hospital had good intentions. They want to see documented, auditable evidence that proper consent was obtained, that human beings reviewed AI outputs where required, that data was stored and deleted appropriately, and that crisis signals were detected and escalated when they occurred. In other words: paperwork matters, and so does building systems that actually work as described.
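What might "operational proof" look like inside a system's own records? Something like the structure below: one auditable entry per AI-assisted event, capturing who consented, who reviewed, whether a crisis was escalated, and when data was deleted. The field names are illustrative assumptions; the principle, logging the evidence a court will later ask to see, comes straight from the litigation described above.

```python
# Illustrative audit record for AI-assisted healthcare events.
# Field names are assumptions for this sketch, not a standard schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AIAuditEvent:
    event_type: str                  # "recording", "note_generated", "coverage_denial", ...
    patient_id: str
    consent_reference: str | None    # pointer to the stored consent record, if any
    human_reviewer: str | None       # licensed reviewer who approved the output, if any
    escalated_to_crisis_team: bool = False
    data_deleted_at: datetime | None = None
    logged_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```

A record like this answers, months later and under oath, the questions the Saucedo case turns on: was consent actually on file, did a human actually review the output, and was the audio actually deleted when the patient asked.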

What You Can Do Right Now

Practical Steps for Any Patient

  • Ask before your appointment. Call your doctor's office and ask whether they use AI recording or ambient documentation tools. You have every right to know, and in California you have the right to consent before any recording begins.
  • Read your visit notes. After every appointment, check your patient portal. Look for any language suggesting you "consented" to something or were "advised" of something you do not remember. If you find inaccurate statements, contact the practice and ask for a correction in writing.
  • Ask about your data. If an AI tool transmitted recordings to a third-party vendor's servers, ask how long the data is retained, who can access it, and what the process is for deletion if you request it.
  • Know your state's laws. California patients have strong, specific protections: AB 3030 requires disclosure of AI use in patient communications; the state's all-party consent law means recording without your knowledge is potentially illegal. If you believe your rights were violated, you can contact the California Medical Board or the California Privacy Protection Agency.
  • If your insurance claim was denied, ask whether an AI system was involved in the decision. Under California SB 1120, a licensed human clinician must review coverage determinations — an algorithm alone is not sufficient.

The Bottom Line

For years, the use of AI in healthcare was treated as a futuristic policy question. That era is over. The lawsuits are real, the court orders are real, the new laws are in effect, and the federal investigations are underway. Courts are beginning to draw the legal lines that will govern how AI can and cannot be used in medicine for the next decade.

None of this means AI in healthcare is inherently bad. Ambient scribes genuinely reduce physician burnout. AI-assisted diagnostics can catch things human eyes miss. Used transparently and responsibly, with proper consent and human oversight, these tools can help patients. But "used responsibly" demands more than installing an app. It demands building the legal and ethical infrastructure around it — the consent workflows, the data protections, the human review processes, and the crisis safeguards — that patients have always had a right to expect.

The message from courts, regulators, and legislators across the country in 2025 and 2026 has been consistent: when a machine is involved in your healthcare, your rights as a patient do not disappear. And if those rights are violated, the law is now ready to say so.


Sources and Formal Citations

  1. Saucedo v. Sharp HealthCare, San Diego Superior Court, filed November 26, 2025 (proposed class action, Counterpoint Legal).
    KPBS Public Media: https://www.kpbs.org/news/health/2025/12/11/lawsuit-claims-sharp-healthcare-secretly-recorded-exam-room-conversations-without-patient-consent
    Medscape Medical News: https://www.medscape.com/viewarticle/health-system-sued-over-ai-scribe-technology-patient-consent-2026a10001k7
    San Diego Union-Tribune: https://www.sandiegouniontribune.com/2026/01/05/does-ai-belong-in-the-exam-room-lawsuit-alleges-sharp-violated-patient-privacy/
    MobiHealthNews: https://www.mobihealthnews.com/news/patient-files-lawsuit-against-sharp-healthcare-ambient-ai-use
    Becker's Hospital Review: https://www.beckershospitalreview.com/legal-regulatory-issues/patient-sues-sharp-healthcare-over-ambient-ai-use/
  2. Lisota v. Heartland Dental, LLC and RingCentral, Inc., N.D. Illinois, opinion issued January 13, 2026 (AI phone analytics, wiretapping claims dismissed without prejudice).
    TrialSite News, March 1, 2026: https://trialsitenews.com
  3. Kisting-Leung et al. v. Cigna Corp. et al., Case No. 2:23-cv-01477, U.S. District Court, Eastern District of California. AI insurance denial class action.
    Law360 Healthcare Authority: https://www.law360.com/healthcare-authority/articles/2415514/the-high-stakes-healthcare-ai-battles-to-watch-in-2026
  4. Estate of Gene B. Lokken et al. v. UnitedHealth Group Inc. et al., Case No. 0:23-cv-03514, U.S. District Court, District of Minnesota. AI insurance denial class action.
    Law360 Healthcare Authority: https://www.law360.com/healthcare-authority/articles/2415514/the-high-stakes-healthcare-ai-battles-to-watch-in-2026
  5. Barrows et al. v. Humana Inc., Case No. 3:23-cv-00654, U.S. District Court, Western District of Kentucky. AI insurance denial class action.
    Georgetown Health Care Litigation Tracker: https://litigationtracker.law.georgetown.edu/issues/artificial-intelligence/
  6. Raine v. OpenAI (filed August 2025). Wrongful death suit alleging ChatGPT contributed to the suicide of a 16-year-old. Trial expected 2026.
    Law360 Healthcare Authority: https://www.law360.com/healthcare-authority/articles/2415514/the-high-stakes-healthcare-ai-battles-to-watch-in-2026
  7. FDA Digital Health Advisory Committee. Public Meeting: "Generative Artificial Intelligence-Enabled Digital Mental Health Medical Devices." November 6, 2025. FDA Docket No. FDA-2025-N-2338.
    FDA Advisory Committee Calendar: https://www.fda.gov/advisory-committees/calendar-advisory-committee-meetings
  8. Federal Trade Commission. "FTC Launches Inquiry into AI Chatbots Acting as Companions." Press release, September 11, 2025. Orders issued to Alphabet, Character Technologies, Instagram, Meta Platforms, OpenAI, Snap, and X.AI under Section 6(b) of the FTC Act (FTC Matter No. P254500).
    FTC Press Release: https://www.ftc.gov/news-events/news/press-releases/2025/09/ftc-launches-inquiry-ai-chatbots-acting-companions
    FTC Order Template: https://www.ftc.gov/reports/6b-orders-file-special-report-regarding-advertising-safety-data-handling-practices-companies
    DLA Piper Analysis: https://www.dlapiper.com/en-us/insights/publications/2025/09/ftc-ai-chatbots
  9. California Assembly Bill 3030 (2024, eff. January 1, 2025). GenAI notification requirements for healthcare entities communicating patient clinical information.
    Chambers & Partners California Healthcare AI Guide 2025: https://practiceguides.chambers.com/practice-guides/healthcare-ai-2025/usa-california/trends-and-developments
  10. California Senate Bill 1120 (eff. January 1, 2025). Requires individualized human review for AI-driven insurance coverage decisions.
    Maynard Nexsen: https://www.maynardnexsen.com/publication-the-legal-landscape-for-ai-enabled-decisions-for-health-care-claims-and-coverage-continues-to-evolve-from-litigation-to-emerging-legislation
  11. California Senate Bill 243 (eff. January 1, 2026). AI companion chatbot safety and transparency requirements.
    Davis+Gilbert LLP: https://www.dglaw.com/ftc-probes-ai-companion-chatbots-for-risks-to-minors/
  12. Texas Senate Bill 1188 (eff. September 1, 2025). Requires disclosure when AI is used for diagnostic purposes.
    NIH/PubMed — Missouri Medicine, May–Jun 2025: https://pmc.ncbi.nlm.nih.gov/articles/PMC12309835/
  13. Chew K, Snyder K, Pert C. "How Physicians Might Get in Trouble Using AI (or Not Using AI)." Missouri Medicine. 2025 May–Jun;122(3):169–172. Husch Blackwell.
    https://pmc.ncbi.nlm.nih.gov/articles/PMC12309835/
  14. Aaron D. Special communication on AI and the evolving standard of care. JAMA. February 2025. University of Utah College of Law.
    Summarized in Medical Economics: https://www.medicaleconomics.com/view/the-new-malpractice-frontier-who-s-liable-when-ai-gets-it-wrong-
  15. Silverboard D (Holland & Knight). "Are AI Tools Putting You at Risk for Lawsuits?" Interview. Medical Economics. February 20, 2026.
    https://www.medicaleconomics.com/view/are-ai-tools-putting-you-at-risk-for-lawsuits-
  16. "Your AI Scribe Is Listening. Is Your Compliance Program?" Health Law Attorney Blog. February 23, 2026.
    https://www.healthlawattorneyblog.com/your-ai-scribe-is-listening-is-your-compliance-program/
  17. "The High-Stakes Healthcare AI Battles to Watch in 2026." Law360 Healthcare Authority. January 2, 2026.
    https://www.law360.com/healthcare-authority/articles/2415514/the-high-stakes-healthcare-ai-battles-to-watch-in-2026
  18. "New Class Action Targets Healthcare AI Recordings: 6 Steps All Businesses Should Consider." Fisher Phillips. December 2025.
    https://www.fisherphillips.com/en/news-insights/new-class-action-targets-healthcare-ai-recordings.html
  19. "Legal Conflict Over Large Language Models in Healthcare Is No Longer Theoretical." TrialSite News. March 1, 2026.
    https://trialsitenews.com
  20. "2026 State AI Bills That Could Expand Liability, Insurance Risk." Wiley Law. 2026.
    https://www.wiley.law/article-2026-State-AI-Bills-That-Could-Expand-Liability-Insurance-Risk
  21. Price WN II, Gerke S, Cohen IG. "Liability for Use of Artificial Intelligence in Medicine." In: Research Handbook on Health, AI and the Law. Edward Elgar Publishing. Open Access via University of Michigan Law School Scholarship Repository.
    https://repository.law.umich.edu/cgi/viewcontent.cgi?article=1569&context=book_chapters

 
