- Most HIPAA incidents with AI are configuration failures or unsigned BAAs, not sophisticated attacks.
- “HIPAA-compliant vendor” is not a property of the vendor. It’s a property of your configuration of that vendor on your plan.
- Twenty checks across five categories. If you fail one, you’ve found the gap before a patient or a regulator does.
Most HIPAA incidents with AI aren't sophisticated attacks. They're the receptionist who pasted a patient’s chart into ChatGPT to summarize it faster. The dental office that set up a slick Google Forms intake page without realizing it isn't HIPAA-eligible. The billing vendor your practice hired who quietly enabled an AI “suggestion” feature without a signed BAA.
Those aren't hypotheticals. They're the three most common HIPAA-adjacent mistakes we see in Springfield practices every year.
And the stakes are rising. In 2024, 725 healthcare data breaches exposed roughly 185 million records — the worst year on record, and a 10% jump over 2023.[1] Locally, the Ozarks felt it directly: in 2023, a CoxHealth vendor was breached through a third-party file-transfer tool, exposing data on more than 200,000 Ozarks patients.[2] Most of those breaches began somewhere very mundane — a vendor relationship that wasn't scoped tightly enough.
[Chart: Healthcare records exposed per year (U.S.), breaches of 500+ records reported to the HHS Office for Civil Rights]
This checklist exists to keep your practice out of that pile. Twenty questions across five categories. Ten minutes to work through. If you hit any “no,” you've found the gap before a patient or a regulator does. It's the same list we walk through with every new healthcare client before a single byte of PHI moves.
1. Vendor & Contracts
Before a single byte of PHI touches a tool, the paperwork has to be real. “They told me on a sales call it's HIPAA compliant” is not a control.
- Signed BAA on file with every AI vendor in the workflow — not just the primary platform. If your intake form hits Zapier before reaching your EHR, Zapier also needs a BAA. (The CoxHealth vendor breach landed exactly here — an upstream file-transfer tool nobody had mapped.[2])
- BAA explicitly names the AI features you're using. Some vendor BAAs cover “core services” but exclude AI add-ons that were bolted on later. Check the scope language.
- Subcontractor disclosure. If your vendor pipes data through OpenAI, Anthropic, or any foundation-model host, that host also needs a BAA. Ask for the list.
- Data processing location is confirmed in writing. Some vendors default to processing regions that aren't covered under your compliance posture. Get it in the contract.
2. Data Handling
The AI industry moves fast. Retention and training policies change without loud announcements. Lock down what happens to PHI at rest, in transit, and after processing.
- Encryption in transit (TLS 1.2+) on every call into and out of the AI service, including webhooks back to your systems.
- Encryption at rest anywhere PHI is stored — including intake logs, transcripts, temporary file buckets, and any staging database.
- Zero training on your data. The setting is explicit in the admin console or in the BAA. “We probably don't” is not acceptable.
- Retention policy documented. How long does the vendor keep PHI? Where does it live? How do you trigger deletion? You want a written SLA.
The question isn’t whether your AI tool is “HIPAA compliant.” It’s whether a specific workflow, on a specific plan, with specific settings, meets the Security Rule. Compliance is configuration.
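The “TLS 1.2+” item above is enforceable in code, not just in contract. A minimal Python sketch using the standard library — the endpoint and HTTP client are whatever your stack actually uses:

```python
import ssl

# A client-side floor for the checklist's "TLS 1.2+" requirement.
# Any vendor endpoint that can't negotiate at least TLS 1.2 will fail
# the handshake outright instead of silently downgrading.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Pass this context to every outbound call that carries PHI, e.g.:
#   urllib.request.urlopen(url, context=ctx)
```

`create_default_context()` already enables certificate verification and hostname checking; the explicit `minimum_version` line is what turns the checklist item into a guarantee.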
3. Access Controls & Audit
HIPAA's Security Rule requires access control and audit logging (45 CFR § 164.312).[3] AI doesn't get a pass.
- Unique user accounts for every person who can see AI-processed PHI. No shared logins — not for the front desk, not for billing.
- Role-based access. A receptionist viewing AI-drafted appointment reminders doesn't need to see the AI's full chart summary. Limit by role.
- Audit log covers AI actions. Every AI-generated record has a who/what/when trail. If the vendor can't show you a log, they can't satisfy the Security Rule.
- Access revocation tested. When a staff member leaves, do you have a one-step way to kill their AI access across every connected tool? Run the drill once a year.
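The role-based access and audit-trail items above fit in a few lines of logic. A sketch in Python — the role names and permissions are illustrative, not a prescribed scheme:

```python
from datetime import datetime, timezone

# Hypothetical role map; replace with your practice's actual roles.
ROLE_PERMISSIONS = {
    "front_desk": {"view_reminders"},
    "billing":    {"view_reminders", "view_claims"},
    "clinician":  {"view_reminders", "view_claims", "view_chart_summary"},
}

audit_log = []

def access_ai_output(user, role, action):
    """Allow the action only if the role permits it, and record a
    who/what/when entry either way -- denials are evidence too."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.append({
        "who": user,
        "what": action,
        "when": datetime.now(timezone.utc).isoformat(),
        "allowed": allowed,
    })
    return allowed

# A receptionist can see AI-drafted reminders but not chart summaries:
access_ai_output("jdoe", "front_desk", "view_reminders")      # True
access_ai_output("jdoe", "front_desk", "view_chart_summary")  # False
```

Whether this lives in your own glue code or in the vendor's admin console, the test is the same: every decision, including the denials, lands in a retrievable log.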
4. Patient-Facing Output
The highest-risk surface is anything the AI says back to the patient — text reminders, intake follow-ups, chat replies. One wrong auto-generated detail and you're not just non-compliant; you've harmed a patient.
- Human review on any clinical output. AI can draft, a licensed person signs off. No exceptions on anything touching diagnosis, medication, or treatment.
- Messaging channel is HIPAA-eligible. Default SMS through a consumer gateway is usually not. Confirm your messaging provider (Twilio with BAA, TigerConnect, etc.) is in scope.
- Consent captured for automated communication. Patients have opted in to AI-generated reminders or chat replies, and the consent record is retrievable.
- Minimum necessary in every message. “Your appointment is tomorrow at 2pm” — not “Your appointment with Dr. Smith to follow up on your diabetes panel is tomorrow at 2pm.”
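The minimum-necessary rule can be screened automatically before any AI draft goes out. A sketch — the red-flag terms here are hypothetical examples; tune them to your own specialty and templates:

```python
# Hypothetical red-flag terms that signal more than a reminder needs.
MORE_THAN_MINIMUM = ("dr.", "diagnosis", "medication", "results", "panel")

def is_minimum_necessary(message: str) -> bool:
    """True if the draft sticks to logistics (date, time, location)
    and avoids clinical detail a reminder doesn't need."""
    lowered = message.lower()
    return not any(term in lowered for term in MORE_THAN_MINIMUM)

is_minimum_necessary("Your appointment is tomorrow at 2pm.")  # True
is_minimum_necessary(
    "Your appointment with Dr. Smith to follow up on your "
    "diabetes panel is tomorrow at 2pm.")                     # False
```

A keyword screen like this is a backstop, not a substitute for human review of clinical output; its job is to catch the obvious leaks before a person ever has to.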
5. Documentation & Incident Response
When something goes wrong, the question isn't “did it happen,” it's “can you show you had a plan.” Improvising without one is what turns a small incident into a full breach notification.
- Written AI policy covering approved tools, prohibited tools (including consumer ChatGPT / Gemini / Copilot), and what staff do when they're unsure.
- Staff training documented with dates, attendees, and content. “We told them at the staff meeting” doesn't defend against an audit.
- Breach response playbook that names who calls the compliance officer, who notifies the vendor, and who drafts patient notifications within the 60-day window.
- Quarterly review scheduled. AI vendors change their defaults. Retention settings get silently updated. A 30-minute quarterly check keeps you aligned.
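The quarterly review above is easiest when you wrote the expected settings down at go-live and diff against them each quarter. A sketch — the setting names and baseline values are hypothetical placeholders for whatever your vendor's admin console exposes:

```python
# Hypothetical baseline for one vendor, recorded once at go-live.
EXPECTED = {
    "train_on_customer_data": False,
    "retention_days": 30,
    "processing_region": "us",
}

def drift_report(current: dict) -> list[str]:
    """One finding per setting that no longer matches the baseline."""
    return [
        f"{key}: expected {want!r}, found {current.get(key)!r}"
        for key, want in EXPECTED.items()
        if current.get(key) != want
    ]

# A vendor silently changed retention from 30 to 365 days:
drift_report({
    "train_on_customer_data": False,
    "retention_days": 365,
    "processing_region": "us",
})  # → ["retention_days: expected 30, found 365"]
```

An empty report means the quarterly check passed; any finding goes straight onto the fix / swap / accept-in-writing decision described below.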
Consumer vs. BAA-Covered Tools — At a Glance
The most expensive mistakes we see are consumer tools being used for “just this one thing.” The differences look small in the UI and enormous in the contract.
| Feature | Consumer ChatGPT / Gemini | Azure OpenAI / AWS Bedrock (with BAA) |
|---|---|---|
| BAA available | No | Yes, with enterprise plan |
| Data used for training | Often yes, unless opted out | No, contractually prohibited |
| Audit logs | Limited | Full, retrievable via API |
| Enterprise SSO / access control | No | Yes |
| Acceptable for PHI | Never | Yes, when configured properly |
How to Use This Checklist
Walk it end-to-end for every AI tool your practice currently uses — not just the flashy ones. Include your EHR's new AI features, your intake platform, your scheduling reminders, your billing vendor's “smart” add-ons. Each tool gets its own pass.
If you hit a “no,” you have three options: fix the configuration, swap the tool, or formally accept the risk in writing with your compliance officer. Doing nothing is not an option.
If working through this surfaces more questions than it answers — which is common — book a free 30-minute call. We'll walk through your specific stack and tell you, plainly, which controls are live and which need work. For the broader context on how Springfield practices stand up HIPAA-eligible AI in the first place, see Healthcare AI Without the HIPAA Headache, or run the Front Desk Recovery Scorecard to see what HIPAA-safe automation would be worth to your practice specifically.
Frequently Asked Questions
**Can I use ChatGPT with patient information?**

Consumer ChatGPT is not HIPAA-eligible. OpenAI does not sign a BAA for the consumer product. ChatGPT Enterprise and API access with a signed BAA can be used with PHI when configured correctly — but these are different products on different contracts.
**Does signing a BAA make an AI tool HIPAA compliant on its own?**

No. A BAA is a contractual commitment to handle PHI according to HIPAA. Technical controls, configuration, access policies, and staff training all still have to be in place. A BAA with a misconfigured product is still a breach waiting to happen.
**How often should we re-run this checklist?**

At least quarterly for each production AI tool, and immediately when: (a) a vendor announces new AI features, (b) a staff member with admin access leaves, (c) you change EHR or practice management software, or (d) you receive a breach notification from any vendor in your stack.
**What's the most commonly missed item?**

Subcontractor BAAs. Your primary AI vendor has a BAA — good — but routes processing through a foundation-model host that doesn't, or through a file-transfer tool that does. The CoxHealth/ITx incident followed exactly this pattern.[2] Always ask for the full subcontractor list.
**Do we need a lawyer to review every BAA?**

For the first one, yes — your healthcare counsel or a HIPAA-focused compliance firm can review against your existing agreements. After that, the terms tend to repeat; your compliance officer can handle subsequent ones with the original as template. Budget for an hour of legal review the first time you adopt a new AI category.
References

1. HIPAA Journal, “2024 Healthcare Data Breach Report.” hipaajournal.com/2024-healthcare-data-breach-report
2. KY3, “On Your Side: CoxHealth data breach impacts potentially thousands of patients.” June 21, 2023. ky3.com/2023/06/21/coxhealth-data-breach
3. HHS.gov, HIPAA Security Rule — Technical Safeguards (45 CFR § 164.312). hhs.gov/hipaa/for-professionals/security/laws-regulations
4. HHS.gov press release, “HHS Office for Civil Rights Imposes a $1,500,000 Civil Money Penalty Against Warby Parker.” hhs.gov/press-room/penalty-against-warby-parker
Want a Second Set of Eyes on Your AI Stack?
We'll review every AI touchpoint in your practice — EHR, intake, scheduling, billing — and give you a clear pass/fail against this checklist. Springfield practices only.