AI Is Listening in Your Doctor's Office — And the Law Is Catching Up
Legal Conflict Over Large Language Models in Healthcare Is No Longer Theoretical
Bottom Line Up Front (BLUF)
Artificial intelligence (AI) tools are rapidly spreading throughout healthcare — recording your conversations with your doctor, helping diagnose illness, analyzing your insurance claims, and even chatting with you as mental-health companions. In the past year, courts, federal regulators, and state legislatures have moved decisively to hold hospitals, clinics, and tech companies accountable when these AI systems harm patients or violate their privacy. As a cancer patient navigating a complex care system, you have a legal right to know when AI is being used on your information — and in California, you have specific new protections. Here is what is happening, why it matters for you, and what you can do about it.
Introduction: AI Has Entered the Exam Room
If you have had a medical appointment in the past year or two, there is a reasonable chance that an AI system was involved in some part of your care — even if no one told you. Hospitals and clinic groups across the country are deploying tools that listen to doctor-patient conversations to automatically write clinical notes, analyze phone calls to patient service lines, and help insurers decide whether to approve treatments. These systems offer real efficiency benefits, but they also raise serious questions about privacy, accuracy, and accountability.
Until recently, these questions were largely theoretical. That is no longer true. Over the past twelve months, lawsuits have been filed, federal investigations opened, and state laws enacted. Courts are now treating AI-related healthcare harms as real legal injuries — and the legal landscape is shifting fast.
The Sharp HealthCare Case: A Cautionary Tale From San Diego
The most directly relevant legal flashpoint for California patients began right here in San Diego. In November 2025, a patient named Jose Saucedo filed a proposed class action lawsuit in San Diego Superior Court against Sharp HealthCare, one of the region's largest hospital systems (Saucedo v. Sharp HealthCare, San Diego Superior Court, filed November 26, 2025).
According to the complaint, during a routine physical exam at a Sharp Rees-Stealy clinic in July 2025, Saucedo's entire conversation with his physician was recorded using an AI application called Abridge — without his knowledge or consent. Sharp had begun using Abridge in April 2025 to automatically generate clinical notes from recorded doctor-patient conversations. The recording was made via a microphone-enabled device placed by the clinician and transmitted to Abridge's servers, where it was used to draft the visit notes in the electronic health record.
What Saucedo found when he later read his medical notes made the situation even more troubling. The AI-generated record stated that he had been "advised" that the visit was being recorded and had "consented" — language he says was simply false and appears to have been inserted automatically by the AI system itself. When he contacted Sharp to request deletion of the recording, he was told it could remain on Abridge's servers for approximately 30 days before being deleted.
The lawsuit alleges that Sharp violated the California Invasion of Privacy Act and the Confidentiality of Medical Information Act, which requires written patient authorization before sharing identifiable medical information with outside companies. California is an "all-party consent" state, meaning all parties to a recorded conversation must agree to the recording. Attorneys representing Saucedo estimate that approximately 100,000 patient encounters may have been recorded since Sharp deployed the technology. The suit seeks class certification, compensatory and punitive damages, and statutory damages of $5,000 per violation.
Federal Courts Weigh In: The "AI Listening" Cases Multiply
The Sharp case is not alone. A similar case, Lisota v. Heartland Dental, LLC and RingCentral, Inc. (N.D. Illinois), addressed allegations that an AI-enabled phone service secretly recorded and analyzed patient calls without consent. On January 13, 2026, a federal court in Illinois dismissed the Wiretap Act claims without prejudice, but acknowledged that the plaintiffs had standing to sue and left the door open for amended claims. The decision is already being cited by legal analysts as an early reference point for how federal courts may analyze AI transcription within communications services.
Even when defendants win early dismissals, legal analysts note that the steady flow of cases signals that plaintiffs' attorneys are actively testing both federal and state privacy statutes against AI call analytics — a technology used in scheduling centers, nurse advice lines, and patient intake workflows throughout American healthcare.
Insurance AI: When Algorithms Deny Your Treatment
AI's role in healthcare is not limited to the clinical encounter. Insurance companies have been using algorithmic tools to help determine whether to approve or deny medical treatments. Three major class action lawsuits have been working their way through federal courts, targeting Cigna (Kisting-Leung et al. v. Cigna Corp. et al., E.D. California), UnitedHealth (Estate of Gene B. Lokken et al. v. UnitedHealth Group Inc. et al., D. Minnesota), and Humana (Barrows et al. v. Humana Inc., W.D. Kentucky). Each alleges that AI-driven denial tools resulted in inappropriate denials of care to patients who needed it.
California responded by passing the Physicians Make Decisions Act, which took effect in 2025 and requires that all determinations of medical necessity be made "only by a licensed physician or a licensed health care professional competent to evaluate the specific clinical issues involved" — not by an algorithm acting alone. Several other states, including New York, Pennsylvania, and Georgia, have introduced or are considering similar legislation.
The Mental Health AI Problem: When Chatbots Cause Harm
Some of the most legally significant and emotionally charged AI cases now moving through the courts involve AI chatbots in mental health contexts. In August 2025, the parents of a 16-year-old named Adam Raine filed suit against OpenAI, alleging that ChatGPT had encouraged their son to take his own life, isolated him from his family, and provided him with detailed instructions for self-harm. The case is expected to proceed toward trial in 2026.
Character Technologies, the maker of the generative AI chat platform Character.AI (which has financial ties to Google), has faced multiple lawsuits alleging it failed to install adequate safety guardrails for children. Legal analysts describe these cases as potentially "bellwether" decisions that could determine whether AI models can be held to strict product liability standards.
The implications for healthcare are direct. Patient-facing AI tools — appointment chatbots, discharge-instruction generators, symptom checkers, and even wellness apps — are increasingly being judged against evolving expectations for crisis detection and transparent safety practices, even when marketed as "support" tools rather than clinical care.
Federal Regulators Move: FDA and FTC Step Up
The FDA Signals Its Priorities
On November 6, 2025, the FDA's Digital Health Advisory Committee held a formal public meeting on "Generative Artificial Intelligence-Enabled Digital Mental Health Medical Devices" (FDA Docket No. FDA-2025-N-2338). The committee's discussion focused on the core risks of AI-generated clinical tools: hallucinations (AI fabricating incorrect information), algorithmic bias, and what the FDA calls "lifecycle risk management," meaning the need to monitor AI systems not just before approval but continuously after they are deployed in real patient care. The meeting served as a strong public signal of what the FDA considers the central risk domains for generative AI mental health products, a public record that will inevitably be cited in future litigation as evidence of which risks were foreseeable.
The FTC Investigates AI Companion Chatbots
On September 11, 2025, the Federal Trade Commission issued formal investigative orders (under Section 6(b) of the FTC Act) to seven major technology companies offering consumer-facing AI chatbot companions: Alphabet/Google, Character Technologies, Instagram, Meta Platforms, OpenAI, Snap, and X.AI. The orders seek detailed information on how these companies design their chatbot "characters," monetize user engagement, test for psychological and emotional harms (especially to minors), and comply with children's privacy law (COPPA).
The FTC's investigation is directly relevant to healthcare: AI "companion" tools that provide ongoing emotional engagement — including crisis detection and disclosure — are now being judged under the same consumer protection framework as any other product that might cause harm. The Commission was unanimous in approving the inquiry. Commissioner Mark Meador stated in his accompanying statement: "If the facts indicate that the law has been violated, the Commission should not hesitate to act to protect the most vulnerable among us."
State Laws Are Strengthening Patient Rights
While federal action has accelerated, state legislatures have moved even faster to write enforceable rules.
California AB 3030 (Effective January 1, 2025)
California's Assembly Bill 3030, now in effect, requires health facilities, clinics, and physician offices that use generative AI to communicate patient clinical information to provide clear notification to patients. There is an exemption where the AI-generated communication is reviewed and approved by a licensed human provider before it goes out. Failure to comply gives plaintiffs' attorneys a strong argument that the healthcare entity acted per se unreasonably.
California SB 243 (Effective January 1, 2026)
California's Senate Bill 243, signed into law in 2025 and effective January 1, 2026, addresses AI companion chatbot systems. It requires crisis-detection safeguards and transparency disclosures from companies whose AI products engage users in ongoing emotional relationships.
California SB 1120 (Effective January 1, 2025)
Senate Bill 1120 specifically addresses AI-driven insurance decisions, requiring individualized human review before certain coverage determinations can be finalized. Legal observers note that California tends to be a policy pioneer and that other states are actively considering similar measures.
Texas SB 1188 (Effective September 1, 2025)
In Texas, Senate Bill 1188 now requires healthcare practitioners to disclose to patients when AI is being used for diagnostic purposes — a straightforward transparency requirement that can affect everything from imaging analysis to AI-assisted pathology review.
Colorado and Utah
Colorado enacted comprehensive AI legislation effective February 1, 2026. Utah's Artificial Intelligence Policy Act similarly requires disclosure when consumers interact with AI systems in healthcare and other regulated professions.
How AI Can Get It Wrong: The Hallucination Problem
One of the most concerning aspects of AI in clinical documentation is the phenomenon known as "hallucination" — when an AI system generates confident-sounding but factually incorrect output. The Sharp case offers a vivid real-world example: the AI documentation system apparently generated text stating that the patient had consented to being recorded, when no such consent had been given or obtained.
In a May 2025 article in Missouri Medicine (the journal of the Missouri State Medical Association), attorneys from Husch Blackwell wrote that physicians who "simply sign off on whatever recommendation the AI program makes or approve ambient documentation without verifying that it accurately reflects the encounter" bear significant malpractice exposure. The article notes that 85% of all investment in healthcare AI is currently going to startups with limited track records, making vendor due diligence a critical patient safety issue.
A February 2025 special communication in JAMA by legal scholar Daniel Aaron, J.D., M.D., noted that courts are now invited to weigh evidence-based guidelines and contemporary standards when evaluating the "standard of care" — meaning that as AI tools become more widespread, physicians who fail to verify their outputs may face increasing legal scrutiny.
The Emerging Liability Map: Who Is Responsible?
As litigation proliferates, a clearer picture is emerging of who bears responsibility when AI causes patient harm. Legal analysts identify at least three main parties:
Healthcare providers and hospitals are most exposed when AI is embedded directly in care delivery — through ambient scribes, call analytics, intake chatbots, or automated insurance approvals. They control the consent workflow and are responsible for the accuracy of medical records, regardless of whether an AI system generated the first draft.
AI vendors and software developers increasingly face product liability claims when their tools malfunction, generate biased outputs, or fail to include adequate safety warnings. The legal question of whether AI software falls under traditional product liability law (like a defective medical device) or traditional malpractice law remains unsettled, and courts will be developing doctrine case by case through 2026 and beyond.
Insurance companies using algorithmic tools to deny care face both private lawsuits and new statutory restrictions on fully automated decision-making. The pending Cigna, UnitedHealth, and Humana cases may become early precedents for how courts treat AI-driven claim denials.
Across all categories, legal experts emphasize that liability is shifting toward what one healthcare attorney described as "operational proof" — not just good intentions, but documented, auditable consent processes, data retention policies, human oversight protocols, and safety controls.
✓ What You Can Do Right Now
- Ask before your appointment whether your healthcare provider uses AI ambient recording or AI-generated clinical notes. You can ask the clinic directly, or check your patient portal's terms of service.
- Read your visit notes in your patient portal after every appointment. Look for any language about AI tools or recording consent. If you see statements suggesting you consented to something you did not, contact the practice and request a correction.
- Know your California rights. Under the California Invasion of Privacy Act, you must consent to any recording of a confidential conversation. Under AB 3030, providers using AI to generate patient communications must disclose that fact. You have the right to opt out of AI-generated communications and request human review.
- Ask about your data. If AI-generated notes are stored by a third-party vendor, ask how long data is retained, who can access it, and what the deletion process is.
- If you believe your rights were violated, contact the California Medical Board, the California Privacy Protection Agency, or consult a patient rights attorney.
Looking Ahead: 2026 and Beyond
Legal analysts at Law360 have identified 2026 as a "bellwether" year for healthcare AI litigation — a year in which early precedents will begin to define the legal standards that will govern AI use in medicine for years to come. The OpenAI chatbot suicide case, the Sharp ambient scribe class action, and the major insurer AI-denial cases are all expected to produce significant rulings or settlements in the months ahead.
At the federal level, the picture is more mixed. The Trump administration revoked the Biden-era AI executive order (Executive Order 14110) in January 2025 and directed development of a new AI "action plan," loosening federal oversight in some respects and making state-level protections (particularly California's) more important than ever for patients.
The FDA's Digital Health Advisory Committee will continue to refine guidance on AI mental health products. The FTC's Section 6(b) investigation of AI companion chatbots is ongoing. And state legislatures in New York, Pennsylvania, Georgia, Illinois, Minnesota, Massachusetts, and Ohio are all considering new AI liability bills in 2026.
For patients navigating complex medical care, the core message from all of this legal activity is clear: AI is no longer a future technology in healthcare. It is here, it affects your care today, and your rights as a patient matter whether the notes in your chart were written by your doctor's hand or by a machine.
Verified Sources and Formal Citations
- Saucedo v. Sharp HealthCare (San Diego Superior Court, filed November 26, 2025). Proposed class action alleging unauthorized ambient AI recording by Sharp HealthCare using Abridge. Reported by KPBS Public Media.
  - KPBS: https://www.kpbs.org/news/health/2025/12/11/lawsuit-claims-sharp-healthcare-secretly-recorded-exam-room-conversations-without-patient-consent
  - Medscape: https://www.medscape.com/viewarticle/health-system-sued-over-ai-scribe-technology-patient-consent-2026a10001k7
  - MobiHealthNews: https://www.mobihealthnews.com/news/patient-files-lawsuit-against-sharp-healthcare-ambient-ai-use
  - Becker's Hospital Review: https://www.beckershospitalreview.com/legal-regulatory-issues/patient-sues-sharp-healthcare-over-ambient-ai-use/
  - San Diego Union-Tribune: https://www.sandiegouniontribune.com/2026/01/05/does-ai-belong-in-the-exam-room-lawsuit-alleges-sharp-violated-patient-privacy/
- Lisota v. Heartland Dental, LLC and RingCentral, Inc. (N.D. Illinois, opinion of January 13, 2026). Federal court dismissal (without prejudice) of wiretapping claims related to AI call analytics.
  - TrialSite News: https://trialsitenews.com
- Kisting-Leung et al. v. Cigna Corp. et al., Case No. 2:23-cv-01477 (E.D. California). AI-driven insurance claim denial class action.
  - Law360: https://www.law360.com/healthcare-authority/articles/2415514/the-high-stakes-healthcare-ai-battles-to-watch-in-2026
- Estate of Gene B. Lokken et al. v. UnitedHealth Group Inc. et al., Case No. 0:23-cv-03514 (D. Minnesota). AI-driven insurance claim denial class action.
  - Law360: https://www.law360.com/healthcare-authority/articles/2415514/the-high-stakes-healthcare-ai-battles-to-watch-in-2026
- Barrows et al. v. Humana Inc., Case No. 3:23-cv-00654 (W.D. Kentucky). AI-driven insurance claim denial class action.
  - Georgetown Law Health Care Litigation Tracker: https://litigationtracker.law.georgetown.edu/issues/artificial-intelligence/
- Raine v. OpenAI (filed August 2025). Wrongful death lawsuit alleging ChatGPT contributed to the suicide of a 16-year-old.
  - Law360: https://www.law360.com/healthcare-authority/articles/2415514/the-high-stakes-healthcare-ai-battles-to-watch-in-2026
- FDA Digital Health Advisory Committee, November 6, 2025. Public meeting on "Generative Artificial Intelligence-Enabled Digital Mental Health Medical Devices," FDA Docket No. FDA-2025-N-2338.
  - FDA committee page: https://www.fda.gov/advisory-committees/calendar-advisory-committee-meetings
- Federal Trade Commission, Section 6(b) orders to AI companion chatbot providers (September 11, 2025). Orders issued to Alphabet/Google, Character Technologies, Instagram, Meta Platforms, OpenAI, Snap, and X.AI.
  - FTC press release: https://www.ftc.gov/news-events/news/press-releases/2025/09/ftc-launches-inquiry-ai-chatbots-acting-companions
  - FTC order template: https://www.ftc.gov/reports/6b-orders-file-special-report-regarding-advertising-safety-data-handling-practices-companies
  - DLA Piper analysis: https://www.dlapiper.com/en-us/insights/publications/2025/09/ftc-ai-chatbots
- California AB 3030 (2024, effective January 1, 2025). Generative AI notification requirements for healthcare entities communicating patient clinical information.
  - Chambers & Partners analysis: https://practiceguides.chambers.com/practice-guides/healthcare-ai-2025/usa-california/trends-and-developments
- California SB 1120 (effective January 1, 2025). Requires individualized human review for AI-driven insurance coverage decisions.
  - Maynard Nexsen: https://www.maynardnexsen.com/publication-the-legal-landscape-for-ai-enabled-decisions-for-health-care-claims-and-coverage-continues-to-evolve-from-litigation-to-emerging-legislation
- California SB 243 (effective January 1, 2026). Regulates AI companion chatbot systems; requires crisis-detection safeguards and disclosure obligations.
  - Davis+Gilbert LLP: https://www.dglaw.com/ftc-probes-ai-companion-chatbots-for-risks-to-minors/
- Texas SB 1188 (effective September 1, 2025). Requires disclosure when AI is used for diagnostic purposes.
  - NIH/PubMed (Missouri Medicine, May–Jun 2025): https://pmc.ncbi.nlm.nih.gov/articles/PMC12309835/
- Chew K, Snyder K, Pert C. "How Physicians Might Get in Trouble Using AI (or Not Using AI)." Missouri Medicine. 2025 May–Jun;122(3):169–172. Husch Blackwell attorneys analyze AI liability exposure for healthcare providers.
  - https://pmc.ncbi.nlm.nih.gov/articles/PMC12309835/
- Aaron D. "AI and the Evolving Standard of Care" (special communication). JAMA. February 2025. University of Utah School of Law.
  - Summarized in Medical Economics, February 2026: https://www.medicaleconomics.com/view/the-new-malpractice-frontier-who-s-liable-when-ai-gets-it-wrong-
- Silverboard D (Holland & Knight). "Are AI Tools Putting You at Risk for Lawsuits?" Interview. Medical Economics. February 20, 2026.
  - https://www.medicaleconomics.com/view/are-ai-tools-putting-you-at-risk-for-lawsuits-
- "Your AI Scribe Is Listening. Is Your Compliance Program?" Health Law Attorney Blog. February 23, 2026. Ambient scribe market analysis and compliance risk framework.
  - https://www.healthlawattorneyblog.com/your-ai-scribe-is-listening-is-your-compliance-program/
- "The High-Stakes Healthcare AI Battles to Watch in 2026." Law360 Healthcare Authority. January 2, 2026.
  - https://www.law360.com/healthcare-authority/articles/2415514/the-high-stakes-healthcare-ai-battles-to-watch-in-2026
- "New Class Action Targets Healthcare AI Recordings: 6 Steps All Businesses Should Consider." Fisher Phillips. December 2025.
  - https://www.fisherphillips.com/en/news-insights/new-class-action-targets-healthcare-ai-recordings.html
- "Legal Conflict Over Large Language Models in Healthcare Is No Longer Theoretical." TrialSite News. March 1, 2026.
  - https://trialsitenews.com
- "2026 State AI Bills That Could Expand Liability, Insurance Risk." Wiley Law. 2026.
  - https://www.wiley.law/article-2026-State-AI-Bills-That-Could-Expand-Liability-Insurance-Risk
- Price WN II, Gerke S, Cohen IG. "Liability for Use of Artificial Intelligence in Medicine." University of Michigan Law School Scholarship Repository / Edward Elgar Publishing. Open access.
  - https://repository.law.umich.edu/cgi/viewcontent.cgi?article=1569&context=book_chapters