HIPAA + Privacy Act Dual Compliance for Claude Deployments
Architect HIPAA + Privacy Act compliant Claude deployments. Mandatory controls, data flows, and non-negotiable decisions for health-tech vendors.
Table of Contents
- Why Dual Compliance Matters for Claude Deployments
- HIPAA Fundamentals: Privacy Rule, Security Rule, and Breach Notification
- Australian Privacy Act and APPs: The Parallel Framework
- The Architectural Conflict: Where HIPAA and Privacy Act Diverge
- Data Classification and Segmentation Strategy
- Securing Claude Deployments: Non-Negotiable Controls
- Business Associate Agreements and Vendor Management
- De-Identification and Minimum Necessary Standards
- Audit Logging, Monitoring, and Incident Response
- Practical Implementation Roadmap
- Common Pitfalls and How to Avoid Them
- Next Steps: Building Your Compliance Program
Why Dual Compliance Matters for Claude Deployments
If you’re building health-tech software that serves both US and Australian customers, you’re already living in a compliance grey zone. Add Claude—or any large language model—to the mix, and that grey zone becomes a minefield.
Here’s the problem: HIPAA and Australia’s Privacy Act (specifically the Australian Privacy Principles, or APPs) were written in different eras, for different regulatory philosophies, and with fundamentally different assumptions about data sovereignty and third-party processing. When you deploy Claude as part of your product architecture, you’re not just adopting a language model; you’re inheriting two parallel compliance obligations that don’t always align.
The stakes are concrete. A single breach of protected health information (PHI) under HIPAA can trigger civil penalties of up to roughly $1.5 million per violation category per year (the caps are adjusted annually for inflation). Serious or repeated breaches of Australian health information under the Privacy Act can attract penalties of up to the greater of AUD 50 million, three times the value of any benefit obtained, or 30% of adjusted turnover. And that’s before civil litigation, regulatory investigation costs, and the operational shutdown that follows a major incident.
But compliance isn’t just about avoiding fines. It’s about building customer trust. Health-tech customers—whether they’re hospitals, clinics, or individual practitioners—need to know that their patient data is handled with rigour. When you can demonstrate dual compliance, you’re not just ticking boxes; you’re signalling that you’ve thought through the hard architectural problems.
Claude deployments introduce specific compliance challenges because LLMs process, store, and potentially learn from data in ways that traditional software doesn’t. Understanding those challenges—and the architectural decisions that resolve them—is the focus of this guide.
HIPAA Fundamentals: Privacy Rule, Security Rule, and Breach Notification
HIPAA, enacted in 1996 and significantly expanded by the HITECH Act (2009) and the 2013 Omnibus Rule, establishes three core rules that govern how health-tech vendors must handle protected health information.
The Privacy Rule
The Privacy Rule defines what PHI is, who can access it, and under what conditions. PHI includes any health information that can be linked to an individual: names, medical record numbers, dates of service, diagnoses, treatment plans, and any other data that can reasonably be re-linked to a person, even after apparent anonymisation.
Under the Privacy Rule, you must:
- Limit access to the minimum necessary. If a Claude deployment only needs patient names and diagnoses to generate a clinical summary, it shouldn’t have access to billing records or insurance information. This principle, called “minimum necessary,” is non-negotiable.
- Obtain explicit authorisation for uses beyond treatment, payment, and operations. If you’re using Claude to train models on patient data, you need explicit consent—not just a checkbox in a terms-of-service agreement.
- Maintain a Notice of Privacy Practices that clearly explains what you do with PHI and how individuals can exercise their rights.
As outlined in detailed HIPAA compliance guidance for AI systems, the Privacy Rule applies even if you’re using a third-party AI vendor. You remain responsible for ensuring that Claude—and Anthropic, as your vendor—complies with minimum necessary standards.
The Security Rule
The Security Rule is where most Claude deployments actually fail. It mandates technical, administrative, and physical safeguards for electronic PHI (ePHI).
The Security Rule requires:
- Access controls. Multi-factor authentication (MFA), role-based access control (RBAC), and audit trails. If a Claude deployment accepts patient data, every access must be logged, time-stamped, and attributable to a specific user or system.
- Encryption in transit and at rest. Data flowing to Claude’s API must use TLS 1.2 or higher. Data stored in your systems must be encrypted with AES-256 or equivalent. Encryption keys must be managed separately from encrypted data.
- Audit controls. You must maintain comprehensive logs of who accessed what, when, and why. These logs must be retained for at least six years and must be tamper-evident.
- Incident response procedures. A documented plan for detecting, investigating, and responding to security incidents involving PHI.
A critical question: Does Claude’s API itself meet HIPAA Security Rule requirements? The answer is conditional. Anthropic does not automatically treat Claude API inputs as PHI unless you’ve negotiated a Business Associate Agreement (BAA). Without a BAA, sending PHI to Claude’s API is a breach, full stop. With a BAA, Anthropic commits to specific security controls and audit obligations. We’ll cover BAAs in depth later.
Breach Notification Rule
If PHI is accessed, acquired, used, or disclosed in a manner not permitted by HIPAA, you must notify affected individuals, the US Department of Health and Human Services (HHS), and potentially the media. Notification must occur within 60 days of discovery of the breach. The cost of notification—legal fees, credit monitoring, regulatory fines—often exceeds the direct cost of the breach itself.
Claude deployments increase breach risk because:
- Data flows to a third-party service. Even with a BAA, your PHI is transmitted to Anthropic’s infrastructure.
- Model training and fine-tuning. If you’re fine-tuning Claude on patient data, that data enters Anthropic’s systems in ways that standard API calls don’t.
- Prompt injection attacks. An attacker could craft a prompt designed to extract PHI from Claude’s context window, or to cause the model to output PHI in unexpected ways.
As privacy officers have noted in comprehensive HIPAA guidance for AI in healthcare, the risk isn’t theoretical. It requires architectural safeguards from day one.
Australian Privacy Act and APPs: The Parallel Framework
Australia’s Privacy Act 1988, substantially strengthened by penalty amendments in 2022, establishes 13 Australian Privacy Principles (APPs) that govern how organisations handle personal information, including health information.
Unlike HIPAA, which is prescriptive and heavily focused on technical controls, the Privacy Act is principles-based and outcome-focused. This creates both flexibility and ambiguity.
Key APPs for Health-Tech Vendors
APP 1: Open and Transparent Management of Personal Information
You must have a clear, accessible privacy policy that explains what personal information you collect, why, how you use it, and who you disclose it to. For Claude deployments, this means explicitly stating that patient data may be processed by Anthropic and other third parties.
APP 5: Notification of Collection
You must notify individuals about the collection of personal information, unless it’s unreasonable to do so. For health information collected via Claude-powered tools, notification is almost always necessary.
APP 6: Use and Disclosure
You can only use or disclose personal information for the primary purpose for which it was collected, or a directly related secondary purpose, unless the individual consents. This is similar to HIPAA’s minimum necessary principle but phrased differently. Under APP 6, using patient data to train a general-purpose model without explicit consent is likely a breach.
APP 11: Security of Personal Information
You must take reasonable steps to protect personal information from misuse, loss, unauthorised access, modification, or disclosure. “Reasonable steps” is deliberately vague—it’s a principles-based standard, not a checklist. However, the Office of the Australian Information Commissioner (OAIC) has issued guidance suggesting that reasonable steps include encryption, access controls, and audit logging.
APP 13: Correction of Personal Information
Individuals have the right to request correction of their personal information. If a Claude deployment generates inaccurate clinical summaries or diagnoses, individuals must be able to request correction, and you must have a process to investigate and remediate.
Penalties and Mandatory Breach Notification
The Privacy Act’s definition of “health information” is broader in scope than HIPAA’s PHI: it covers information about an individual’s health status, disability, and the health services they have received, and the Act separately protects genetic and biometric information as sensitive information.
Critically, Australia also operates a mandatory breach notification regime: the Notifiable Data Breaches (NDB) scheme, introduced by 2017 amendments and in force since February 2018. If a data breach involving personal information is likely to result in serious harm, you must notify affected individuals and the OAIC. The 2022 amendments then sharply increased the maximum penalties for serious or repeated privacy breaches.
The Architectural Conflict: Where HIPAA and Privacy Act Diverge
Now we get to the hard part. HIPAA and the Privacy Act don’t always say the same thing, and when you’re deploying Claude across both jurisdictions, you can’t pick and choose.
Consent and Purpose Limitation
HIPAA’s approach: HIPAA assumes that healthcare providers collect PHI for treatment, payment, and operations. Uses beyond those purposes require explicit authorisation. However, HIPAA allows for some flexibility around research and public health activities.
Privacy Act’s approach: The Privacy Act is stricter. APP 6 (Use and Disclosure) requires that personal information be used only for the primary purpose or a directly related secondary purpose. The bar for “directly related” is high. Using patient data to train a Claude model, even if it improves clinical decision support, likely requires explicit consent.
The conflict: If you’re using Claude to analyse patient data for quality improvement purposes, HIPAA might permit this under the “operations” exception. But the Privacy Act might require explicit consent. Your safest approach: always obtain explicit consent, even if HIPAA doesn’t strictly require it.
Data Localisation and Sovereignty
HIPAA’s approach: HIPAA doesn’t require data to remain in the United States. However, it does require that any third party processing PHI (including cloud providers and AI vendors) comply with HIPAA Security Rule standards. In practice, HIPAA assumes that you’ll use US-based vendors with US-based infrastructure.
Privacy Act’s approach: The Privacy Act doesn’t prohibit sending data overseas. However, APP 8 (Cross-border Disclosure of Personal Information) requires you to take reasonable steps to ensure that an overseas recipient does not breach the APPs, and in most cases you remain accountable for what that recipient does with the data. APP 1 additionally requires your privacy policy to state whether you are likely to disclose personal information to overseas recipients and, where practicable, which countries.
The conflict: Claude’s API is hosted in the United States. When you send patient data to Claude, that data crosses the border. HIPAA is fine with this (as long as you have a BAA). The Privacy Act requires APP 1 transparency and APP 8 accountability, which you can address through encryption, contractual controls, and clear disclosure. But you must be explicit with patients: their data will be sent to the US and processed by Anthropic.
Breach Notification Thresholds
HIPAA’s approach: Under the 2013 Omnibus Rule, any impermissible acquisition, access, use, or disclosure of PHI is presumed to be a reportable breach unless a documented risk assessment demonstrates a low probability that the PHI was compromised. In practice this is a low bar: a breach affecting even a small number of individuals can trigger notification requirements.
Privacy Act’s approach: Notification is required for “serious” data breaches. The Privacy Act defines this as a breach that’s likely to result in serious harm. This is a higher bar than HIPAA. A minor breach affecting a few individuals might not meet the “serious harm” threshold under the Privacy Act but would still trigger HIPAA notification.
The conflict: You must notify under whichever standard is stricter. If you have both US and Australian customers, you must assume HIPAA’s lower threshold applies. This means notifying more broadly than the Privacy Act would require.
Data Classification and Segmentation Strategy
The first architectural decision you must make is: which data will Claude actually process?
This is where many health-tech vendors make their first mistake. They assume that because Claude can process any text, they should feed it all available patient data. This is backwards. You should feed Claude the minimum necessary data to accomplish your specific use case.
Classifying Data by Sensitivity
Start by classifying all patient data into tiers:
Tier 1: Core Clinical Data
- Patient demographics (name, date of birth, medical record number)
- Chief complaints and presenting symptoms
- Current diagnoses
- Current medications
- Vital signs
This is the minimum necessary for most Claude use cases (clinical documentation, treatment recommendations, patient summaries).
Tier 2: Extended Clinical Data
- Detailed lab results and imaging reports
- Pathology and histology findings
- Specialist consultation notes
- Surgical reports
Include this tier only if your specific use case requires it. For example, if Claude is generating differential diagnoses, extended clinical data is necessary. If Claude is just summarising current medications, it’s not.
Tier 3: Sensitive Metadata
- Insurance information and billing codes
- Mental health diagnoses and treatment notes
- Substance abuse treatment records
- HIV/AIDS status
- Genetic information
- Reproductive health information
Do not send Tier 3 data to Claude unless absolutely necessary. These categories carry heightened sensitivity under both HIPAA and the Privacy Act. If you must process them, use additional safeguards (see below).
Tier 4: Non-Clinical Personal Information
- Contact details (phone, email, address)
- Employment information
- Emergency contact details
- Social security numbers or tax file numbers
Never send Tier 4 data to Claude unless it’s directly necessary for the use case. For most clinical applications, it isn’t.
Implementing Data Segmentation
Once you’ve classified your data, implement segmentation at the application level:
- Create separate data pipelines for each tier. Tier 1 data flows to Claude. Tier 2 data is available to Claude only if explicitly enabled for a specific use case. Tier 3 and Tier 4 data never flow to Claude.
- Use database views and access controls to enforce segmentation. A Claude integration module should only have access to the specific data columns and rows it needs. If your system has a “get patient record” function, create a separate “get patient record for Claude” function that strips out unnecessary fields.
- Document the rationale for including each data element. Why does Claude need the patient’s name? (For context and personalisation.) Why does it need the date of birth? (For age-appropriate recommendations.) Why doesn’t it need insurance information? (Not necessary for clinical decision support.) This documentation will be critical during compliance audits.
- Implement field-level encryption for Tier 2 and Tier 3 data. Even if this data is included in Claude prompts, encrypt it in your database so that a database breach doesn’t automatically expose sensitive information.
As highlighted in HIPAA compliance guidance for AI in digital health, the minimum necessary standard is foundational. You’re not being paranoid by excluding unnecessary data; you’re meeting a legal requirement.
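The tiered filter described above can be sketched as a simple field-stripping function. This is a minimal illustration: the TIER_FIELDS map and every field name in it are hypothetical and would need to match your actual schema.

```python
# Illustrative tier map: which record fields belong to each tier.
# All field names here are hypothetical placeholders.
TIER_FIELDS = {
    1: {"name", "date_of_birth", "mrn", "chief_complaint",
        "diagnoses", "medications", "vital_signs"},
    2: {"lab_results", "imaging_reports", "pathology", "consult_notes"},
    3: {"insurance", "billing_codes", "mental_health_notes", "genetic_info"},
    4: {"phone", "email", "address", "ssn", "tfn"},
}

def get_patient_record_for_claude(record: dict, include_tier2: bool = False) -> dict:
    """Return only the fields a Claude prompt is allowed to see.

    Tier 1 always flows; Tier 2 only when explicitly enabled for the
    use case; Tiers 3 and 4 are never included.
    """
    allowed = set(TIER_FIELDS[1])
    if include_tier2:
        allowed |= TIER_FIELDS[2]
    return {k: v for k, v in record.items() if k in allowed}
```

Because the filter is allow-list based, a newly added database column is excluded by default until someone consciously assigns it a tier, which is the safer failure mode.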
Securing Claude Deployments: Non-Negotiable Controls
Once you’ve decided what data Claude will process, you need to secure the pipeline. These controls are not optional. They’re mandated by HIPAA’s Security Rule and required by the Privacy Act’s APP 11.
Network Security and Encryption
Transport Layer Security (TLS)
All communication between your application and Claude’s API must use TLS 1.2 or higher. This is non-negotiable. TLS 1.0 and 1.1 are deprecated and must not be used.
Implementation:
- Configure your HTTP client library to enforce TLS 1.2 minimum.
- Validate SSL certificates and implement certificate pinning if possible.
- Use strong cipher suites (no RC4, no DES, no MD5).
- Test your TLS configuration using tools like SSL Labs.
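The TLS floor can be enforced in code rather than left to client defaults. A minimal sketch using Python’s standard-library ssl module (the endpoint URL in the comment is illustrative):

```python
import ssl

# Build a context that refuses anything below TLS 1.2 and verifies the
# server certificate against the system trust store.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# create_default_context() already enables these; shown to be explicit.
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED

# Pass the context to whatever HTTP client you use, e.g.:
# urllib.request.urlopen("https://api.anthropic.com/v1/messages", context=context)
```

Pinning the minimum version in code means a misconfigured proxy or downgraded server negotiation fails loudly instead of silently falling back to TLS 1.0/1.1.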
Data Encryption at Rest
Any PHI stored in your systems (whether it’s prompts sent to Claude, responses received from Claude, or cached data) must be encrypted at rest using AES-256 or equivalent.
Implementation:
- Use your cloud provider’s native encryption (AWS KMS, Azure Key Vault, Google Cloud KMS).
- Store encryption keys separately from encrypted data. Never hardcode keys in application code.
- Implement key rotation policies. Keys should be rotated at least annually.
- For sensitive data (Tier 2 and Tier 3), consider field-level encryption in addition to database-level encryption.
Authentication and Access Control
Multi-Factor Authentication (MFA)
Every user who can access the Claude integration, or who can view data that Claude has processed, must authenticate using MFA. This includes developers, clinicians, administrators, and support staff.
Implementation:
- Require MFA for all user accounts, not just privileged accounts.
- Support multiple MFA methods (TOTP, hardware tokens, push notifications).
- Enforce MFA for API access. If your Claude integration uses API keys, those keys should be managed through a secrets management system (HashiCorp Vault, AWS Secrets Manager) and rotated regularly.
- Implement conditional MFA: require additional authentication for high-risk actions (accessing patient records, downloading data exports, changing system configuration).
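The TOTP codes used by most authenticator apps are small enough to sketch with the standard library (RFC 6238: HMAC-SHA1 over a 30-second counter). In production you would use a vetted MFA library or an identity provider rather than rolling your own; this sketch is only to make the mechanism concrete.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at=None, digits: int = 6, period: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((at if at is not None else time.time()) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify_totp(secret_b32: str, submitted: str, window: int = 1) -> bool:
    """Accept codes from the current step plus/minus `window` steps of drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, at=now + step * 30), submitted)
        for step in range(-window, window + 1)
    )
```

The drift window matters operationally: clinicians’ phones are rarely perfectly synchronised, and a window of one step balances usability against replay exposure.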
Role-Based Access Control (RBAC)
Not all users should have access to all data or all functions. Implement granular RBAC:
- Clinician role: Can view patient records and request Claude-generated summaries for their own patients.
- Administrator role: Can manage user accounts and system configuration but cannot view patient data.
- Data analyst role: Can access de-identified or aggregated data for quality improvement purposes.
- Support role: Can access logs and system status but cannot view patient data unless explicitly investigating a specific incident.
Implementation:
- Define roles based on job function, not seniority.
- Implement the principle of least privilege: each role should have the minimum permissions necessary.
- Review and audit role assignments quarterly.
- Disable access immediately when a user leaves the organisation.
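Deny-by-default RBAC can be as simple as an explicit role-to-permission map. A sketch with hypothetical role and permission names mirroring the roles above:

```python
# Hypothetical role -> permission map; names are illustrative.
ROLE_PERMISSIONS = {
    "clinician": {"view_own_patients", "request_claude_summary"},
    "administrator": {"manage_users", "manage_config"},
    "data_analyst": {"view_deidentified_data"},
    "support": {"view_logs", "view_system_status"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: unknown roles and unlisted permissions both fail."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

Note that the administrator role deliberately has no patient-data permission, matching the separation described above: managing the system and viewing PHI are distinct privileges.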
As detailed in security best practices for HIPAA-compliant AI, MFA and RBAC are foundational. You cannot meet HIPAA or Privacy Act requirements without them.
Audit Logging and Monitoring
Comprehensive Audit Trails
Every interaction with patient data must be logged. This includes:
- Who accessed the data (user ID, IP address, timestamp)
- What data was accessed (patient ID, specific fields, date range)
- How the data was accessed (via web interface, API, batch process)
- What action was taken (viewed, exported, analysed by Claude, deleted)
- The outcome (success or failure)
Implementation:
- Use your application’s native logging framework or a dedicated audit logging service (e.g., Splunk, Datadog, ELK Stack).
- Log at the application layer (which user accessed which data) and the infrastructure layer (which server processed the request).
- Ensure logs are immutable. Once written, logs should not be modifiable or deletable.
- Retain logs for at least six years, as required by HIPAA.
- Encrypt logs in transit and at rest.
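A structured audit record covering the who/what/how/action/outcome fields listed above might look like the following sketch (field names are illustrative; your logging pipeline would route these records to your centralised, immutable store):

```python
import json
import logging
from datetime import datetime, timezone

audit_logger = logging.getLogger("audit")

def log_phi_access(user_id, source_ip, patient_id, fields, channel,
                   action, outcome):
    """Emit one structured audit record: who, what, how, action, outcome."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "source_ip": source_ip,
        "patient_id": patient_id,
        "fields_accessed": sorted(fields),
        "channel": channel,     # e.g. "web", "api", "batch"
        "action": action,       # e.g. "viewed", "exported", "sent_to_claude"
        "outcome": outcome,     # "success" or "failure"
    }
    audit_logger.info(json.dumps(record, sort_keys=True))
    return record
```

Emitting one JSON object per event, with sorted keys, keeps the logs machine-searchable and makes later integrity checks (checksums over the serialised record) straightforward.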
Real-Time Monitoring and Alerting
Don’t just log data access; actively monitor for suspicious activity:
- Unusual access patterns: A clinician accessing patient records outside their normal clinic hours, or accessing records for patients they don’t treat.
- Bulk data exports: Someone downloading large amounts of patient data.
- Failed authentication attempts: Multiple failed login attempts from the same IP address.
- Configuration changes: Someone disabling audit logging or modifying access controls.
Implementation:
- Set up alerts for suspicious activities. These should trigger immediate investigation.
- Use machine learning-based anomaly detection if possible (your cloud provider likely offers this).
- Establish an incident response team and a clear escalation path.
- Test your alerting system regularly. Run simulated breach scenarios to ensure your team responds correctly.
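The first two access anomalies above lend themselves to simple rule-based checks, which real deployments then layer statistical detection on top of. A sketch, with hypothetical function and threshold names:

```python
from datetime import datetime

def flag_suspicious(access_time: datetime, patient_id: str,
                    treated_patients: set, clinic_hours=(7, 19)) -> list:
    """Rule-based checks mirroring the access anomalies above.

    Returns a list of alert reasons; an empty list means no rule fired.
    """
    reasons = []
    # Access outside normal clinic hours (hypothetical 07:00-19:00 window).
    if not clinic_hours[0] <= access_time.hour < clinic_hours[1]:
        reasons.append("outside_clinic_hours")
    # Access to a patient the user does not treat.
    if patient_id not in treated_patients:
        reasons.append("patient_not_treated_by_user")
    return reasons
```

Each returned reason would feed your alerting pipeline and, per the escalation path above, trigger immediate investigation rather than silent logging.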
API Key Management
If your Claude integration uses API keys (which it likely does), treat these keys like passwords. A compromised API key is equivalent to a compromised database.
Implementation:
- Never hardcode API keys in source code. Use environment variables or a secrets management system.
- Rotate API keys regularly (at least annually, and immediately if compromised).
- Use separate API keys for different environments (development, staging, production).
- Implement API key scoping if Anthropic supports it. Each key should have the minimum necessary permissions.
- Monitor API key usage. If a key is used in an unexpected way (e.g., from an unexpected IP address), investigate immediately.
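A minimal sketch of loading the key from the environment instead of source code. ANTHROPIC_API_KEY is the conventional variable name; in practice your secrets manager would populate it at deploy time rather than a developer exporting it by hand:

```python
import os

def load_claude_api_key() -> str:
    """Read the API key from the environment, never from source code.

    Failing fast at startup is deliberate: a missing key should stop the
    service, not fall back to some hardcoded default.
    """
    key = os.environ.get("ANTHROPIC_API_KEY")
    if not key:
        raise RuntimeError(
            "ANTHROPIC_API_KEY is not set; fetch it from your secrets "
            "manager (e.g. Vault, AWS Secrets Manager) at startup."
        )
    return key
```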
Business Associate Agreements and Vendor Management
Here’s the legal reality: you cannot send PHI to Claude’s API without a Business Associate Agreement (BAA) with Anthropic.
This is not negotiable under HIPAA. If you send PHI to Claude without a BAA, you’re in breach, regardless of what security controls you have in place.
What a BAA Is and Why It Matters
A Business Associate Agreement is a contract between a HIPAA-covered entity (or a business associate) and a vendor (the “business associate”). The BAA specifies:
- What the vendor can do with PHI. The vendor can use PHI only as necessary to provide the contracted service, and only for purposes permitted by HIPAA.
- How the vendor must safeguard PHI. The vendor must implement the same security controls required of HIPAA-covered entities (or equivalent controls).
- What happens if there’s a breach. The vendor must notify the covered entity of any breach, and the covered entity remains liable to notify individuals.
- What happens when the contract ends. The vendor must return or destroy all PHI, or securely dispose of it.
Without a BAA, you have no contractual assurance that Anthropic will protect your PHI. You’re relying on Anthropic’s general terms of service, which are not designed for healthcare.
Negotiating a BAA with Anthropic
As of early 2025, Anthropic does offer BAAs for customers processing PHI via Claude’s API. However, the process requires explicit negotiation. You cannot assume that a standard API account includes BAA coverage.
Steps:
- Contact Anthropic’s enterprise sales team and explicitly state that you need to process PHI and require a BAA.
- Provide your organisation’s BAA template. Most healthcare organisations have a standard BAA template. Anthropic will likely counter with their own template. Be prepared to negotiate.
- Key terms to negotiate:
  - Scope of services. Clearly define what Anthropic will do with your data (e.g., “process prompts containing PHI to generate clinical summaries”).
  - Data location. Confirm that data will be processed in the United States (or specify if you have other requirements). This matters for Privacy Act compliance (see below).
  - Retention and deletion. Confirm that Anthropic will not retain PHI longer than necessary to provide the service, and will securely delete it upon request.
  - Audit rights. Confirm that you have the right to audit Anthropic’s security controls and request evidence of compliance.
  - Subcontractors. Confirm that Anthropic will not use subcontractors to process PHI without your explicit consent, and that any subcontractors must also sign BAAs.
  - Breach notification. Confirm that Anthropic will notify you of any suspected breaches within a specific timeframe (typically 24-48 hours).
  - Termination and transition. Confirm that Anthropic will securely return or destroy all PHI upon termination, and will cooperate with transition to another vendor.
- Legal review. Have your legal team review the BAA before signing. BAAs are contracts, and the terms matter.
Vendor Risk Assessment
Before signing a BAA, you should conduct a vendor risk assessment. This is required by HIPAA’s Security Rule and is a best practice under the Privacy Act.
Your assessment should cover:
- Security controls. Does Anthropic implement MFA, encryption, audit logging, and other required controls?
- Compliance certifications. Does Anthropic have SOC 2 Type II certification? ISO 27001 certification? These provide independent verification of security controls.
- Incident response. Does Anthropic have a documented incident response plan? What’s their track record for responding to security incidents?
- Financial stability. Is Anthropic financially stable? If Anthropic went out of business, would your data be secure?
- Regulatory history. Has Anthropic been investigated or sanctioned by regulators for security or privacy violations?
As discussed in AI risk management for HIPAA compliance, vendor risk assessment is not a one-time exercise. You should reassess your vendors annually and whenever there’s a significant change (new product, merger, security incident).
Multiple Vendors and Subcontractors
Your Claude deployment might involve multiple vendors:
- Anthropic (Claude API)
- Your cloud provider (AWS, Azure, Google Cloud)
- Your EHR or practice management system vendor
- Your logging and monitoring vendor (Splunk, Datadog, etc.)
- Your security vendor (for vulnerability scanning, penetration testing, etc.)
Each of these vendors might have access to PHI, either directly or indirectly. Each one needs a BAA (or equivalent contract) with you. And you need to ensure that each vendor’s subcontractors also have appropriate agreements in place.
Implement vendor management:
- Maintain a vendor inventory. Document every vendor that has access to PHI or to systems that process PHI.
- Track BAAs. For each vendor, document the BAA status, expiration date, and any outstanding compliance items.
- Conduct periodic assessments. At least annually, reassess each vendor’s security controls and compliance status.
- Establish exit criteria. If a vendor fails to meet compliance requirements, have a plan to transition to another vendor or bring the function in-house.
De-Identification and Minimum Necessary Standards
Here’s a question that sounds simple but isn’t: Can you use Claude on de-identified patient data without a BAA?
The answer is: it depends. If the data is truly de-identified according to HIPAA’s de-identification standard, then technically it’s no longer PHI, and HIPAA doesn’t apply. But there are significant caveats.
HIPAA’s De-Identification Standard
HIPAA defines two ways to de-identify data:
Expert Determination Method
A qualified statistician or other expert must determine, using statistical methods, that the risk of re-identification is very small. This is complex and requires expertise. Most organisations don’t use this method.
Safe Harbor Method
You remove 18 specific identifiers:
- Names
- Geographic subdivisions (cities, counties, postcodes) smaller than state level
- Dates (except year) related to the individual
- Telephone numbers
- Fax numbers
- Email addresses
- Social security numbers
- Medical record numbers
- Health insurance beneficiary numbers
- Account numbers
- Certificate or license numbers
- Vehicle identifiers and serial numbers
- Device identifiers and serial numbers
- Web URLs
- Internet protocol (IP) addresses
- Biometric identifiers (fingerprints, facial recognition data)
- Full-face photographic images and any comparable images
- Any other unique identifying number, characteristic, or code
If you remove all 18 identifiers, the data is considered de-identified under HIPAA.
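Pattern-based redaction can catch some of these identifiers mechanically, but regexes alone are not a complete Safe Harbor implementation; treat the following as a starting point only. The patterns shown are illustrative and deliberately incomplete:

```python
import re

# Illustrative patterns for a handful of the 18 Safe Harbor identifiers.
# Real de-identification also needs names, MRNs, free-text dates, etc.,
# which pattern matching alone cannot reliably find.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"(?<!\w)(?:\+?61|0)\d[\d\s-]{7,}\d"),
    "DATE": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
    "IP": re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

A production pipeline would combine this with dictionary lookups against your patient registry and human review of samples, because the hard identifiers (names, quasi-identifiers) have no reliable regex.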
The Problem: Re-Identification Risk
Here’s where it gets tricky. Even if you remove all 18 identifiers, it might still be possible to re-identify individuals by combining the de-identified data with other publicly available information.
For example, consider a dataset of de-identified patient records with diagnoses, medications, and admission dates. If you know that a specific person was admitted to a specific hospital on a specific date (information that might be public or semi-public), you could potentially link that person to their de-identified record.
Claude, being a language model, is particularly good at inferring information. If you feed Claude a de-identified clinical note that mentions “a 45-year-old male with a rare genetic condition who works as a software engineer in Sydney,” Claude might be able to infer the patient’s identity or generate information that could be used to re-identify them.
Moreover, if Claude is fine-tuned on de-identified data, that fine-tuning process might inadvertently encode identifying information.
Practical De-Identification Strategy
If you want to use Claude on de-identified data without a BAA, follow these steps:
- Apply Safe Harbor de-identification to your source data. Remove all 18 identifiers.
- Assess re-identification risk. Even after Safe Harbor de-identification, could an attacker re-identify individuals? If there’s significant risk, apply additional de-identification techniques (aggregation, generalisation, noise addition).
- Document the de-identification process. Keep records of what was removed, how it was removed, and why you believe the data is sufficiently de-identified.
- Limit Claude’s context. Don’t feed Claude entire patient records. Feed it specific clinical summaries or aggregated data.
- Avoid fine-tuning. Fine-tuning Claude on de-identified data increases re-identification risk. Stick to API calls with de-identified prompts.
- Monitor Claude’s outputs. If Claude generates outputs that could re-identify individuals, review and redact them before displaying to users.
But here’s the honest assessment: de-identification is hard, and it’s easy to get wrong. Unless you have strong statistical expertise and a clear use case, it’s safer to assume your data is PHI and to obtain a BAA. The BAA is your legal protection if something goes wrong.
Minimum Necessary and Claude Prompts
Even with a BAA, the minimum necessary principle applies. You should not send unnecessary PHI to Claude.
Example: You want Claude to generate a clinical summary for a patient. You might be tempted to send:
Patient: John Smith
DOB: 1965-03-15
MRN: 12345
Diagnosis: Type 2 diabetes, hypertension, hyperlipidemia
Current medications: Metformin 1000mg BID, Lisinopril 10mg daily, Atorvastatin 20mg daily
Last HbA1c: 7.2% (3 months ago)
Last BP: 138/88 (2 weeks ago)
But do you really need the patient’s name, date of birth, and MRN? For generating a clinical summary, probably not. A more minimal prompt would be:
Summary request for patient MRN 12345:
Diagnosis: Type 2 diabetes, hypertension, hyperlipidemia
Current medications: Metformin 1000mg BID, Lisinopril 10mg daily, Atorvastatin 20mg daily
Last HbA1c: 7.2% (3 months ago)
Last BP: 138/88 (2 weeks ago)
Please generate a brief clinical summary.
Or even more minimal:
Diagnosis: Type 2 diabetes, hypertension, hyperlipidemia
Current medications: Metformin 1000mg BID, Lisinopril 10mg daily, Atorvastatin 20mg daily
Last HbA1c: 7.2%, Last BP: 138/88
Generate clinical summary.
You can always add the patient identifier (name, MRN) to the response after Claude generates it, without sending it to Claude in the first place.
This is not just compliance theatre. Reducing the amount of PHI sent to Claude reduces breach risk and limits the potential harm if Claude’s API is compromised.
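As a sketch of this pattern, here is one way to strip direct identifiers before building the prompt and re-attach the MRN locally after the model responds. The field names are illustrative, not from any real EHR schema:

```python
# Direct identifiers we never include in the prompt sent to Claude.
IDENTIFIER_FIELDS = {"name", "dob", "mrn", "address", "phone"}

def build_minimal_prompt(record: dict) -> tuple[str, str]:
    """Return (prompt, mrn) -- the MRN stays local for re-attachment."""
    clinical = {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}
    lines = [f"{k}: {v}" for k, v in clinical.items()]
    prompt = "\n".join(lines) + "\nGenerate clinical summary."
    return prompt, record["mrn"]

def attach_identifier(summary: str, mrn: str) -> str:
    """Re-attach the patient identifier locally, after the model call."""
    return f"Summary for patient MRN {mrn}:\n{summary}"

record = {
    "name": "John Smith", "dob": "1965-03-15", "mrn": "12345",
    "diagnosis": "Type 2 diabetes, hypertension, hyperlipidemia",
    "medications": "Metformin 1000mg BID, Lisinopril 10mg daily, Atorvastatin 20mg daily",
    "last_hba1c": "7.2%", "last_bp": "138/88",
}
prompt, mrn = build_minimal_prompt(record)
assert "John Smith" not in prompt and "12345" not in prompt
```

The key design point is that the identifier never enters the outbound payload at all, so no downstream failure (logging, caching, a compromised API path) can leak it.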
Audit Logging, Monitoring, and Incident Response
You’ve implemented security controls. Now you need to verify that they’re working, detect when they fail, and respond quickly when something goes wrong.
Comprehensive Audit Logging
Your audit logs should capture:
Application Layer
- User authentication (login, logout, MFA challenges)
- Data access (which user accessed which patient record, when, why)
- Claude API calls (what was sent, when, by whom)
- Claude API responses (what was received, when)
- Data exports and reports generated
- System configuration changes
Infrastructure Layer
- Network traffic (source IP, destination IP, port, protocol)
- Database queries and modifications
- File system access
- API gateway logs
- Load balancer logs
Security Layer
- Firewall rules and changes
- Encryption key access and rotation
- Vulnerability scans and results
- Security patches applied
Implementation:
- Centralise logging. Use a centralised logging service (Splunk, ELK Stack, Datadog, CloudWatch) so that all logs are in one place and searchable.
- Ensure log immutability. Once logs are written, they should not be modifiable or deletable. Use append-only storage or write logs to immutable storage (e.g., AWS S3 with Object Lock).
- Encrypt logs. Logs contain sensitive information (who accessed what data). Encrypt logs in transit and at rest.
- Retain logs for 6+ years. HIPAA requires retention of records for at least six years. This includes logs.
- Test log integrity. Periodically verify that logs haven't been tampered with. Use checksums or digital signatures.
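A minimal illustration of the checksum idea: a hash-chained log where each entry commits to the previous entry's digest, so any after-the-fact edit breaks the chain. This is a sketch of the technique, not a replacement for append-only storage:

```python
import hashlib
import json

def append_entry(log: list[dict], event: dict) -> None:
    """Append an event whose hash covers both the event and the prior hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every digest; any tampering shows up as a mismatch."""
    prev = "0" * 64
    for entry in log:
        body = {"event": entry["event"], "prev": entry["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, {"user": "clinician-1", "action": "view_record", "mrn": "12345"})
append_entry(log, {"user": "clinician-1", "action": "claude_api_call"})
assert verify_chain(log)
log[0]["event"]["action"] = "something_else"  # tampering breaks the chain
assert not verify_chain(log)
```

In production you would sign the chain head with a key the application cannot access, and store the log itself in immutable storage.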
Real-Time Monitoring and Alerting
Logging is forensics. Monitoring is prevention. You should actively monitor for:
Access Anomalies
- A user accessing patient records outside their normal hours or location
- A user accessing records for patients they don’t treat
- A user accessing unusually large amounts of data
- A user accessing data via an unusual method (e.g., direct database query instead of web interface)
API Anomalies
- Unusual volume of Claude API calls
- Claude API calls from unexpected IP addresses
- Claude API calls with unusual payloads (e.g., very large prompts, unusual data patterns)
- Failed Claude API calls with suspicious error messages
Infrastructure Anomalies
- Database connection from unexpected IP address
- Unusual database queries (e.g., SELECT * FROM all tables)
- Firewall rule changes
- Encryption key access
- Failed authentication attempts
Implementation:
- Define baselines. What's normal for your system? How many Claude API calls per day? What's the typical data volume? What are normal user access patterns?
- Set up alerts. Use your monitoring tool to alert on deviations from baseline. Alerts should be specific (not generic "something happened" alerts) and actionable.
- Establish escalation paths. Who gets alerted first? Who escalates to management? Who contacts legal? Who contacts the OAIC or HHS if there's a breach? Document this.
- Test alerting. Run simulated breach scenarios. Verify that your alerts fire correctly and that your team responds appropriately.
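The baseline-and-alert steps above can be sketched as a simple volume check. The 3-sigma threshold and seven-day minimum window are assumptions to tune per system, not prescriptions:

```python
from statistics import mean, stdev

def volume_alert(daily_counts: list[int], today: int, sigma: float = 3.0) -> bool:
    """Return True if today's Claude API call volume is anomalous vs. baseline."""
    if len(daily_counts) < 7:        # not enough history for a meaningful baseline
        return False
    mu, sd = mean(daily_counts), stdev(daily_counts)
    if sd == 0:                      # perfectly flat history: any change is notable
        return today != mu
    return abs(today - mu) > sigma * sd

baseline = [1040, 980, 1010, 1100, 995, 1025, 1060]   # calls/day, last week
assert volume_alert(baseline, 5000)       # spike -- fire an alert
assert not volume_alert(baseline, 1030)   # within normal variation
```

The same shape works for data volume per call, per-user access counts, or failed-authentication rates; what matters is that the baseline is measured, not guessed.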
Incident Response Plan
Despite your best efforts, incidents will happen. You need a documented plan for responding to them.
Your incident response plan should cover:
- Detection and reporting. How do you detect a potential breach? Who can report a suspected incident? What's the reporting mechanism?
- Initial response. Upon detection of a potential breach:
  - Isolate affected systems (take them offline if necessary)
  - Preserve evidence (don't delete logs, don't modify affected data)
  - Notify your incident response team
  - Notify your legal team and your cyber insurance provider
- Investigation.
  - Determine the scope of the breach (what data was affected, how many individuals)
  - Determine the cause (was it a technical vulnerability, a configuration error, human error, or a deliberate attack?)
  - Determine the timeline (when did it start, when was it discovered, when was it stopped?)
  - Preserve evidence for potential litigation or regulatory investigation
- Notification. If the breach meets the threshold for notification:
  - Notify affected individuals within 60 days (HIPAA) or as soon as practicable (Privacy Act)
  - Notify the HHS Office for Civil Rights (HIPAA)
  - Notify the OAIC (Privacy Act)
  - Notify prominent media outlets if the breach affects more than 500 residents of a state or jurisdiction (a HIPAA requirement)
- Remediation.
  - Fix the underlying vulnerability or misconfiguration
  - Implement additional controls to prevent recurrence
  - Monitor for signs of ongoing compromise
  - Consider offering credit monitoring or other remediation services to affected individuals
- Post-incident review. After the incident is resolved:
  - Conduct a thorough post-mortem
  - Identify what went wrong and why
  - Implement process improvements to prevent recurrence
  - Update your incident response plan based on lessons learned
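Notification deadlines are easy to miss under pressure, so it helps to compute them the moment a breach is discovered. As a sketch: the 60-day figure is HIPAA's individual-notice window, and the 30-day figure is the assessment window under the Privacy Act's Notifiable Data Breaches scheme:

```python
from datetime import date, timedelta

def notification_deadlines(discovered: date) -> dict[str, date]:
    """Compute regulatory clocks that start at breach discovery."""
    return {
        # HIPAA: notify affected individuals within 60 days of discovery
        "hipaa_individual_notice": discovered + timedelta(days=60),
        # Privacy Act NDB scheme: complete the breach assessment within 30 days
        "privacy_act_assessment": discovered + timedelta(days=30),
    }

deadlines = notification_deadlines(date(2025, 1, 10))
assert deadlines["hipaa_individual_notice"] == date(2025, 3, 11)
```

Feed these dates into your ticketing or alerting system so the clock is visible to everyone on the response team, not just legal.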
As outlined in AI risk management for HIPAA compliance, incident response is not optional. It’s a required component of your security program.
Practical Implementation Roadmap
Now that you understand the requirements, how do you actually build a compliant Claude deployment? Here’s a phased roadmap.
Phase 1: Assessment and Planning (Weeks 1-4)
Week 1-2: Compliance Assessment
- Document all patient data your system will process
- Classify data by sensitivity (Tier 1-4, as described earlier)
- Identify all systems that touch patient data
- Identify all third-party vendors
- Review existing security controls
- Identify gaps between current state and HIPAA/Privacy Act requirements
Week 3-4: Vendor Assessment and BAA Negotiation
- Contact Anthropic to discuss BAA requirements
- Assess Anthropic’s security controls and compliance certifications
- Obtain Anthropic’s BAA template
- Engage legal team to negotiate BAA terms
- Assess other vendors (cloud provider, logging service, etc.) and ensure they have BAAs in place
Deliverables:
- Compliance gap analysis
- Data classification matrix
- Vendor risk assessment
- Draft BAA with Anthropic
Phase 2: Architecture and Design (Weeks 5-8)
Week 5: Architecture Design
- Design data pipeline from source system to Claude
- Design data segmentation (which data goes to Claude, which doesn’t)
- Design encryption strategy (in transit, at rest, key management)
- Design authentication and access control (MFA, RBAC)
- Design audit logging (what to log, where to store logs, how to protect them)
Week 6-7: Security Control Design
- Design network security (firewalls, VPCs, API gateways)
- Design API key management
- Design incident response procedures
- Design monitoring and alerting
- Design backup and disaster recovery
Week 8: Documentation
- Document all architectural decisions and the rationale behind them
- Create security control matrices
- Create data flow diagrams
- Create system diagrams
Deliverables:
- Architecture design document
- Security control matrix
- Data flow diagrams
- System diagrams
Phase 3: Implementation (Weeks 9-16)
Week 9-10: Infrastructure Setup
- Set up cloud infrastructure (VPC, subnets, security groups)
- Set up encryption (TLS certificates, KMS keys, database encryption)
- Set up authentication (identity provider, MFA)
- Set up logging and monitoring (log aggregation, alerting)
Week 11-12: Application Development
- Implement data pipeline
- Implement Claude API integration
- Implement audit logging
- Implement access controls
- Implement error handling and rate limiting
Week 13-14: Security Testing
- Vulnerability scanning
- Penetration testing
- Code review
- Security configuration review
Week 15-16: User Acceptance Testing
- Test with clinicians or other end users
- Test with sample patient data
- Test incident response procedures
- Test backup and disaster recovery
Deliverables:
- Deployed infrastructure
- Implemented application
- Vulnerability assessment report
- Penetration test report
- Test results
Phase 4: Compliance Verification (Weeks 17-20)
Week 17-18: Internal Audit
- Verify that all security controls are implemented as designed
- Review logs and monitoring
- Review access controls
- Review BAAs and vendor contracts
- Conduct tabletop incident response exercise
Week 19: External Audit (Optional but Recommended)
- Engage external auditor to conduct SOC 2 Type II audit
- Engage external auditor to conduct HIPAA risk assessment
- Engage external auditor to conduct Privacy Act assessment
Week 20: Remediation and Sign-Off
- Address any findings from internal or external audit
- Update documentation
- Obtain sign-off from compliance officer, legal team, and security team
- Obtain sign-off from Anthropic (BAA execution)
Deliverables:
- Internal audit report
- External audit report (if conducted)
- Remediation plan
- Compliance sign-off
Phase 5: Ongoing Compliance (Ongoing)
Monthly:
- Review access logs for anomalies
- Review security alerts
- Update vulnerability assessments
- Monitor vendor compliance
Quarterly:
- Conduct access control review
- Conduct configuration review
- Conduct incident response drill
- Assess new threats and vulnerabilities
Annually:
- Conduct full compliance assessment
- Conduct vendor risk assessment
- Conduct penetration testing
- Conduct SOC 2 audit
- Review and update security policies
Common Pitfalls and How to Avoid Them
Based on our experience working with health-tech vendors, here are the most common mistakes—and how to avoid them.
Pitfall 1: Assuming Claude’s API Is Automatically HIPAA-Compliant
The mistake: “Claude is a popular AI service. It must be HIPAA-compliant.”
The reality: Claude’s API is not automatically HIPAA-compliant. You need a BAA with Anthropic. Without a BAA, sending PHI to Claude is an impermissible disclosure under HIPAA, regardless of what security controls you have in place.
How to avoid it:
- Explicitly contact Anthropic and request a BAA before sending any PHI
- Have legal review the BAA before signing
- Document the BAA in your vendor management system
- Don’t assume that a standard API account includes BAA coverage
Pitfall 2: Over-Relying on De-Identification
The mistake: “We de-identified the data, so we don’t need a BAA.”
The reality: De-identification is hard, and re-identification is possible. Even if you follow HIPAA’s Safe Harbor method, sophisticated attackers might be able to re-identify individuals. Moreover, Claude’s ability to infer information increases re-identification risk.
How to avoid it:
- Assume your data is PHI unless you have strong statistical evidence that it’s not
- Conduct a thorough re-identification risk assessment
- When in doubt, obtain a BAA
- Don’t fine-tune Claude on de-identified data
Pitfall 3: Insufficient Data Segmentation
The mistake: “We’re sending all patient data to Claude because it’s easier.”
The reality: Sending unnecessary data to Claude increases breach risk and violates the minimum necessary principle. If a breach occurs, the harm is proportional to the amount of data exposed.
How to avoid it:
- Classify data by sensitivity
- Implement data segmentation at the application level
- Document the rationale for including each data element
- Regularly audit what data is being sent to Claude
- Remove unnecessary fields from prompts
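One way to enforce segmentation at the application layer is an explicit allowlist of fields approved for outbound Claude prompts, with the rationale documented alongside each field. The field names here are illustrative:

```python
# Fields approved for outbound prompts, each with its documented rationale.
APPROVED_FIELDS = {
    "diagnosis":   "needed for clinical summary",
    "medications": "needed for clinical summary",
    "last_hba1c":  "needed for trend context",
}

def check_outbound(payload: dict) -> None:
    """Raise before anything unapproved leaves for the Claude API."""
    unapproved = set(payload) - set(APPROVED_FIELDS)
    if unapproved:
        raise ValueError(f"Blocked unapproved fields: {sorted(unapproved)}")

check_outbound({"diagnosis": "T2DM", "medications": "Metformin"})  # passes
try:
    check_outbound({"diagnosis": "T2DM", "name": "John Smith"})
except ValueError as exc:
    print(exc)   # the patient name never reaches the API client
```

Because the allowlist is declarative, auditing "what data is being sent to Claude" reduces to reviewing one dictionary instead of tracing every call site.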
Pitfall 4: Weak Access Controls
The mistake: “We’re using usernames and passwords, so access control is fine.”
The reality: Passwords alone are weak. Users reuse passwords, write them down, and share them. The HIPAA Security Rule’s authentication standard and the Privacy Act’s reasonable-steps obligation both point firmly to MFA in practice.
How to avoid it:
- Implement MFA for all users, not just privileged users
- Implement RBAC so that users only have access to data they need
- Disable access immediately when users leave
- Regularly review access logs for anomalies
- Conduct quarterly access control reviews
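A minimal RBAC sketch: roles map to permission sets, and every data access is checked (and would be audit-logged) before it proceeds. Role and permission names are illustrative:

```python
# Least privilege: each role gets only the permissions its job requires.
ROLE_PERMISSIONS = {
    "clinician": {"read_record", "write_note", "request_summary"},
    "billing":   {"read_billing"},
    "admin":     {"manage_users"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: unknown roles and unlisted permissions both fail."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("clinician", "read_record")
assert not is_allowed("billing", "read_record")   # billing never sees clinical data
```

Note that even "admin" cannot read patient records here; administrative power and clinical access are separate concerns, which is exactly what a quarterly access review should confirm.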
Pitfall 5: Inadequate Audit Logging
The mistake: “We have some logs, but we’re not actively monitoring them.”
The reality: Logs are only useful if you review them. If you don’t actively monitor for suspicious activity, you might not detect a breach until weeks or months later. By then, significant damage might have occurred.
How to avoid it:
- Implement comprehensive audit logging
- Centralise logs in a searchable system
- Set up real-time alerting for suspicious activities
- Regularly review logs (at least weekly)
- Conduct monthly log reviews to identify trends
- Retain logs for at least six years
Pitfall 6: Inadequate Incident Response Planning
The mistake: “We’ll figure out what to do if there’s a breach.”
The reality: A breach is stressful, and you won’t think clearly under pressure. If you don’t have a plan, you’ll make mistakes. You might miss the 60-day notification deadline. You might fail to preserve evidence. You might inadvertently destroy logs. Each mistake increases your liability.
How to avoid it:
- Document your incident response plan before you need it
- Conduct tabletop exercises at least quarterly
- Ensure that your team knows their roles and responsibilities
- Establish clear escalation paths
- Maintain contact information for legal, cyber insurance, and regulatory bodies
- Test your incident response plan regularly
Pitfall 7: Vendor Lock-In
The mistake: “We’re using Claude, so we’re locked in to Anthropic.”
The reality: If Anthropic breaches your BAA, or if your relationship deteriorates, you need to be able to switch to another vendor. If you’ve architected your system so that Claude is tightly integrated, switching will be painful and expensive.
How to avoid it:
- Design your system so that Claude is pluggable. You should be able to swap Claude for another LLM (GPT-4, Gemini, LLaMA) with minimal code changes.
- Use abstraction layers. Don’t call Claude’s API directly from your application code. Use a wrapper that abstracts the LLM.
- Maintain vendor-agnostic data formats. Don’t design your data pipeline around Claude’s specific API.
- Regularly assess alternative vendors.
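The abstraction layer can be sketched like this: application code depends on a small interface, and each vendor client implements it. The client classes and method bodies here are placeholders, not real SDK calls:

```python
from typing import Protocol

class SummaryModel(Protocol):
    """The only surface application code is allowed to depend on."""
    def summarise(self, prompt: str) -> str: ...

class ClaudeClient:
    def summarise(self, prompt: str) -> str:
        # would call the Anthropic API here (behind the BAA)
        return f"[claude] summary of {len(prompt)} chars"

class FallbackClient:
    def summarise(self, prompt: str) -> str:
        # drop-in replacement if you ever need to switch vendors
        return f"[fallback] summary of {len(prompt)} chars"

def generate_summary(model: SummaryModel, prompt: str) -> str:
    return model.summarise(prompt)   # no vendor-specific code here

print(generate_summary(ClaudeClient(), "Diagnosis: T2DM"))
```

Swapping vendors then means writing one new client class and changing one construction site, rather than hunting vendor-specific calls through the codebase.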
Pitfall 8: Ignoring Privacy Act Divergence
The mistake: “We’re HIPAA-compliant, so we’re Privacy Act-compliant.”
The reality: HIPAA and the Privacy Act don’t always say the same thing. Being HIPAA-compliant doesn’t automatically make you Privacy Act-compliant.
How to avoid it:
- Conduct separate compliance assessments for HIPAA and the Privacy Act
- Identify areas where they diverge
- Implement controls that satisfy the stricter requirement
- Consult with Australian legal experts who understand the Privacy Act
- When in doubt, be more restrictive (e.g., obtain explicit consent even if HIPAA doesn’t strictly require it)
Next Steps: Building Your Compliance Program
If you’re building a health-tech product that serves both US and Australian customers, dual HIPAA + Privacy Act compliance is not optional. It’s foundational.
But compliance is not a one-time project. It’s an ongoing program. Here’s how to get started:
Immediate Actions (This Week)
- Audit your current Claude usage. If you’re already using Claude, document what data is being sent, how it’s being sent, and whether you have a BAA in place.
- Contact Anthropic. If you’re planning to use Claude with PHI, reach out to Anthropic’s enterprise sales team and request a BAA discussion.
- Engage legal. Bring your legal team into the conversation. They need to understand your compliance obligations and review any contracts with vendors.
- Assess your current controls. What security controls do you already have? Where are the gaps?
Short-Term Actions (This Month)
- Conduct a compliance assessment. Document your current compliance status relative to HIPAA and the Privacy Act. Identify gaps.
- Classify your data. Determine what data is PHI, what data is sensitive, and what data can be shared more freely.
- Negotiate your BAA. Work with Anthropic to finalise a BAA that protects your interests.
- Design your architecture. Based on your data classification and compliance requirements, design your Claude deployment architecture.
Medium-Term Actions (This Quarter)
- Implement security controls. Build the infrastructure and application code to enforce your architecture.
- Test and verify. Conduct security testing and compliance verification.
- Document everything. Create comprehensive documentation of your architecture, controls, and compliance program.
- Train your team. Ensure that everyone who touches patient data understands their compliance obligations.
Long-Term Actions (Ongoing)
- Monitor and audit. Continuously monitor your systems for security incidents and compliance violations.
- Assess and improve. Regularly assess your controls and look for opportunities to improve.
- Stay informed. Keep up with regulatory changes. Both HIPAA and the Privacy Act are evolving. New guidance is published regularly.
- Engage experts. Consider engaging external experts (compliance consultants, security auditors, legal advisors) to help you navigate the complexity.
If you’re building health-tech in Sydney or Australia more broadly, you’re in a unique position. You have the opportunity to build products that serve global markets while meeting the highest compliance standards. That’s a competitive advantage.
At PADISO, we partner with health-tech founders and operators to navigate exactly these challenges. We’ve helped teams implement AI automation for healthcare diagnostic tools and patient care, design compliant data pipelines, and pass SOC 2 and ISO 27001 audits. If you’re building health-tech and you need fractional CTO leadership, architectural guidance, or co-build support, let’s talk.
The complexity of dual compliance is real. But it’s solvable. Start with clear requirements, design for compliance from the beginning, and maintain rigorous controls. Your patients—and your regulators—will thank you.
Appendix: Regulatory References and Further Reading
For deeper dives into specific topics, consult these resources:
HIPAA Guidance:
- HHS Office for Civil Rights HIPAA Resources
- HHS Guidance Regarding Methods for De-identification of Protected Health Information
Privacy Act Guidance:
- Office of the Australian Information Commissioner (OAIC) Privacy Act Resources
- OAIC Guidance on Australian Privacy Principles
AI-Specific Compliance:
- HIPAA Compliance for AI in Digital Health
- AI Risk Management for HIPAA Privacy Rule Compliance
- HIPAA Compliant AI Development & Security Guidelines
Vendor Management and BAAs:
- AI & HIPAA: What It Means and How to Automate Compliance
- AI and HIPAA Compliance: The Risks and How to Reduce Your Exposures
For health-tech teams in Australia looking to build compliant AI systems, PADISO offers AI agency consultation Sydney and AI automation agency Sydney services. We specialise in helping founders navigate compliance, design secure architectures, and ship products that meet regulatory standards. We also provide platform engineering and design services for health-tech vendors modernising their stacks with AI and automation.
Whether you’re a seed-stage founder or an established health-tech operator, we’re here to help you build compliant, secure, and scalable Claude deployments. Let’s talk about your specific requirements.