
EMR Integration via MCP: Cerner, Epic, and Best Practice Patterns

Master EMR integration via MCP servers. Learn Cerner, Epic, and Best Practice patterns for SMART-on-FHIR, auth, audit logging, and AU compliance.

The PADISO Team · 2026-04-17


Table of Contents

  1. What is EMR Integration via MCP?
  2. Understanding MCP Servers and SMART-on-FHIR
  3. Cerner Integration Architecture
  4. Epic Integration Architecture
  5. Best Practice Patterns for AU Healthcare
  6. Authentication and Authorisation Frameworks
  7. Audit Logging and Compliance
  8. Australian-Specific Integration Patterns
  9. Real-World Implementation Challenges
  10. Building Secure, Audit-Ready EMR Integrations
  11. Next Steps and Getting Started

What is EMR Integration via MCP?

Electronic Medical Record (EMR) integration via Model Context Protocol (MCP) servers represents a paradigm shift in how healthcare organisations expose clinical data to AI agents and external applications. Rather than building custom point-to-point integrations for each new tool, MCP servers act as a standardised gateway—exposing EMR data through a well-defined protocol that Claude agents and other AI systems can safely consume.

The core value proposition is straightforward: reduce integration friction, standardise security controls, and ship AI-powered clinical workflows in weeks instead of months. Instead of your engineering team reverse-engineering Cerner or Epic APIs for each new use case, you define MCP server resources once, audit them once, and reuse them across multiple AI agents and applications.

For healthcare organisations in Australia and beyond, this approach aligns perfectly with modern compliance frameworks. When you expose EMR data through MCP servers with proper authentication, audit logging, and FHIR compliance, you’re building toward SOC 2 Type II and ISO 27001 readiness—critical for health systems managing patient data under Privacy Act 1988 and state-based health privacy legislation.

PADISO has worked with health tech founders and enterprise operators to architect these integrations at scale. The pattern is consistent: define your MCP server contract, implement SMART-on-FHIR authentication, log every access, and let Claude agents operate within guardrails. The result is faster AI deployment, better audit trails, and reduced regulatory friction.

Understanding MCP Servers and SMART-on-FHIR

MCP (Model Context Protocol) servers are lightweight, protocol-compliant applications that expose resources and tools via a standardised interface. In the EMR context, an MCP server sits between your Cerner or Epic system and your AI agents, translating clinical data into a format that large language models can safely understand and act upon.

How MCP Servers Work

An MCP server exposes three core primitives:

  • Resources: Data objects the server exposes for reading (e.g., patient records, lab results, medications); write operations go through tools.
  • Tools: Callable functions that agents can invoke (e.g., “fetch patient labs for the last 30 days”, “create a new note”).
  • Prompts: Pre-defined templates that guide agent behaviour (e.g., “summarise this patient’s medication history”).

When Claude or another AI agent interacts with an MCP server, it negotiates capabilities at connection time, receives a manifest of available tools and resources, and then makes requests within those constraints. The server handles authentication, validates scopes, logs access, and returns data.

For EMR integration, this means your MCP server becomes the authoritative point of control for:

  • Who can access which patient data (identity and role-based access control).
  • What data they can read or write (resource-level permissions).
  • When they access it (audit trail with timestamps).
  • Why they accessed it (purpose codes, clinical workflow context).

SMART-on-FHIR: The Standards Foundation

SMART-on-FHIR is an open standard that layers OAuth 2.0 and OpenID Connect on top of FHIR APIs. It’s the lingua franca for healthcare app integration, and both Cerner and Epic support it natively.

When you build an MCP server that wraps Cerner or Epic, you’re typically:

  1. Obtaining SMART-on-FHIR credentials from the EHR vendor (client ID, client secret, token endpoint).
  2. Implementing OAuth 2.0 flows (typically client credentials or authorisation code flow) to request access tokens.
  3. Calling FHIR APIs with those tokens to fetch or write data.
  4. Translating FHIR resources into MCP-compatible formats.
  5. Enforcing scope-based access control (e.g., “patient/Patient.read” means read-only access to patient demographics).

The FHIR Specification from HL7 International defines the data models and RESTful patterns. The FHIR Foundation provides implementation guides and community support. Most modern healthcare integrations—whether in the US, Australia, or Europe—assume FHIR as the baseline interoperability layer.

When you’re designing an MCP server for Cerner or Epic, you’re essentially building a FHIR-native adapter that adds a layer of AI-safe guardrails on top.

Cerner Integration Architecture

Cerner (now part of Oracle Health) is one of the two dominant EHR platforms in English-speaking markets. Its interoperability strategy centres on Oracle Health's interoperability solutions, which expose clinical data via FHIR APIs and HL7 v2 feeds.

Cerner FHIR API Overview

Cerner’s FHIR APIs are available in two deployment models:

  • Cerner Code (cloud-hosted): Fully managed, modern FHIR implementation with built-in OAuth 2.0 and SMART-on-FHIR support.
  • On-Premise Cerner: Requires additional middleware (Cerner Millennium Integration Engine) to expose FHIR endpoints.

For MCP integration, the cloud-hosted Cerner Code environment is significantly simpler. You register your application, obtain OAuth credentials, and call FHIR endpoints directly.

Authentication Flow for Cerner

The typical OAuth 2.0 client credentials flow for Cerner looks like this:

1. Your MCP server requests an access token from Cerner's token endpoint.
2. Cerner validates your client credentials and returns a JWT access token (typically valid for 1 hour).
3. Your MCP server includes the token in the Authorization header of FHIR API calls.
4. Cerner validates the token, checks scopes, and returns patient data (or denies access).
5. Your MCP server translates the FHIR response and exposes it to Claude.

Key Cerner-specific considerations:

  • Scopes: Cerner uses SMART-on-FHIR scopes like patient/Patient.read, patient/Observation.read, patient/MedicationStatement.read. Define these carefully—overly broad scopes increase audit risk.
  • Patient Context: Cerner requires you to specify the patient ID in your FHIR queries (e.g., /Patient/{id}/Observation). This is enforced at the API level.
  • Rate Limiting: Cerner enforces rate limits (typically 100 requests per minute for standard integrations). Your MCP server should implement caching and backoff logic.
  • Token Refresh: Tokens expire after 1 hour. The client credentials flow does not issue refresh tokens, so your server must request a fresh access token before expiry or on each session.
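
Client-side backoff against these rate limits can be sketched as follows; the retry counts and delays are assumed values, not Cerner-documented parameters.

```python
import random
import time

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Retry a FHIR call with jittered exponential backoff on HTTP 429/503.

    `request_fn` is a stand-in for the real API call; it returns (status, body).
    """
    for attempt in range(max_retries):
        status, body = request_fn()
        if status not in (429, 503):
            return status, body
        # Wait base_delay, 2*base_delay, 4*base_delay, ... plus jitter.
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
    return request_fn()  # final attempt after the last wait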

Cerner Data Models and MCP Resource Mapping

When designing your MCP server for Cerner, you’ll typically expose resources like:

  • Patient: Demographics, contact info, identifiers.
  • Observation: Lab results, vital signs, assessment scores.
  • MedicationStatement: Current and historical medications.
  • Condition: Active and resolved diagnoses.
  • Encounter: Visits, admissions, clinic appointments.
  • DocumentReference: Clinical notes, reports, imaging reports.

Each resource maps to a FHIR endpoint in Cerner’s API. Your MCP server wraps each endpoint with:

  • Input validation: Ensure patient IDs and date ranges are valid.
  • Scope checking: Verify the requesting agent has permission to read that resource type.
  • Data transformation: Convert FHIR JSON to a format Claude understands (usually simplified JSON or markdown).
  • Audit logging: Record who accessed what, when, and why.
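
A minimal sketch of one such wrapper for the Observation endpoint, with a `fetch_fhir` callable standing in for the real Cerner client (the ID format and simplified output shape are assumptions):

```python
import re
from datetime import datetime, timezone

AUDIT_LOG = []  # in production: an append-only, access-controlled store

def read_observations(agent_scopes, patient_id, fetch_fhir):
    """Validate input, check scope, transform FHIR, and audit the access."""
    # 1. Input validation (the patient ID format here is an assumption).
    if not re.fullmatch(r"[A-Za-z0-9\-]+", patient_id):
        raise ValueError("invalid patient id")
    # 2. Scope check before touching the EMR.
    if "patient/Observation.read" not in agent_scopes:
        raise PermissionError("missing scope patient/Observation.read")
    # 3. Fetch the FHIR Bundle and flatten it for the agent.
    bundle = fetch_fhir(patient_id)
    simplified = [
        {"code": entry["resource"]["code"]["text"],
         "value": entry["resource"].get("valueQuantity", {}).get("value")}
        for entry in bundle.get("entry", [])
    ]
    # 4. Audit the successful access.
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "resource_type": "Observation",
        "patient_id": patient_id,
        "action": "read",
        "result": "success",
    })
    return simplified
```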

Epic Integration Architecture

Epic is the other dominant EHR vendor, with a strong presence in large health systems. Epic’s interoperability approach is documented at Epic Interoperability, with a focus on FHIR APIs, HL7 v2, and custom integration tools.

Epic FHIR API and SMART-on-FHIR

Epic’s FHIR APIs are available through:

  • Epic’s App Orchard: A marketplace for third-party apps that integrate with Epic via SMART-on-FHIR.
  • Direct FHIR API access: For enterprise integrations, you can request direct FHIR API credentials.
  • Care Everywhere: Epic’s proprietary data exchange network (less relevant for MCP integration, but worth noting).

For MCP server integration, you’ll typically:

  1. Register your application in Epic’s developer portal and request FHIR API credentials.
  2. Implement SMART-on-FHIR OAuth 2.0 (Epic supports both client credentials and authorisation code flows).
  3. Call Epic’s FHIR endpoints to fetch patient data.
  4. Map Epic-specific extensions to standard FHIR (Epic sometimes uses custom extensions for fields not in the base FHIR spec).

Authentication Flow for Epic

Epic’s OAuth 2.0 flow is similar to Cerner’s but with some Epic-specific nuances:

1. Your MCP server requests an access token from Epic's OAuth server.
2. Epic validates credentials and returns a JWT access token (typically valid for 1 hour).
3. Your MCP server calls Epic FHIR endpoints with the token.
4. Epic enforces patient context (you must specify patient ID in queries).
5. Epic returns FHIR resources, sometimes with Epic-specific extensions.

Key Epic-specific considerations:

  • Patient ID Format: Epic uses internal patient IDs (MRNs or system identifiers). You need to map between your internal patient identifiers and Epic’s.
  • Scopes: Epic supports SMART-on-FHIR scopes but also has custom Epic-specific scopes. Clarify which scopes your integration needs.
  • Rate Limiting: Epic typically enforces 10–100 requests per minute depending on your contract tier.
  • Delay in Data Availability: Epic’s FHIR APIs sometimes lag behind real-time data by a few minutes. Design your MCP server to handle eventual consistency.
  • Extensions: Epic adds custom extensions to FHIR resources (e.g., http://open.epic.com/fhir/StructureDefinition/...). Parse these carefully in your MCP server.
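
A small helper for reading extensions defensively, so the server degrades gracefully when one is absent. The extension URL used in testing it is invented for illustration.

```python
def get_extension(resource, url, default=None):
    """Return the value of a FHIR extension by URL, or `default` if absent.

    FHIR extensions carry their payload in a value[x] field (valueString,
    valueBoolean, ...), so we return the first such field we find.
    """
    for ext in resource.get("extension", []):
        if ext.get("url") == url:
            for key, value in ext.items():
                if key.startswith("value"):
                    return value
    return default
```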

Epic Data Models and MCP Resource Mapping

Similar to Cerner, you’ll expose Epic FHIR resources through your MCP server:

  • Patient: Demographics, identifiers, contact info.
  • Observation: Lab results, vital signs, assessment scores.
  • Medication: Active medications, medication history.
  • Condition: Diagnoses (active and resolved).
  • Encounter: Appointments, inpatient stays, ED visits.
  • DocumentReference: Clinical notes, discharge summaries, imaging reports.
  • Procedure: Surgical procedures, interventions.
  • AllergyIntolerance: Documented allergies and intolerances.

Epic’s FHIR implementation is comprehensive, and most standard FHIR resources are well-supported. The main challenge is handling Epic-specific extensions and ensuring your MCP server gracefully degrades if an extension is missing.

Best Practice Patterns for AU Healthcare

Integrating EMR systems in Australia requires understanding local compliance frameworks and healthcare workflows. Unlike the US, where HIPAA is the primary privacy regulator, Australian healthcare organisations must comply with the Privacy Act 1988, state-based health privacy laws, and increasingly, the Australian Information Security Manual (ISM) for government health systems.

Pattern 1: Layered Authentication with Role-Based Access Control

Instead of a single OAuth token for your entire MCP server, implement layered authentication:

  1. Service-to-Service Auth: Your MCP server authenticates to Cerner/Epic using OAuth 2.0 client credentials.
  2. User/Agent-to-MCP Auth: Claude agents and other clients authenticate to your MCP server using API keys or JWT tokens.
  3. Role-Based Access Control (RBAC): Your MCP server checks the requesting agent’s role (e.g., “clinician”, “admin”, “audit”) and enforces resource-level permissions.

For example:

  • A clinician agent can read patient observations but cannot write medications.
  • An administrative agent can read audit logs but cannot access patient data.
  • An audit agent can read all access logs but cannot modify them.

This pattern aligns with Privacy Act principles and makes audit-readiness simpler.

Pattern 2: Purpose-Based Data Filtering

In Australia, the Privacy Act requires that personal information be collected and used only for the primary purpose (or a related secondary purpose). When your MCP server exposes patient data to Claude agents, it should filter based on the stated purpose.

For example:

  • If Claude is being used for “discharge summary generation”, it should access only recent observations, medications, and conditions—not historical psychiatric notes or sensitive flags.
  • If Claude is being used for “medication reconciliation”, it should access medication lists but not lab results.

Implement this by:

  1. Tagging MCP tools with purpose: Each tool in your MCP server declares its intended use (e.g., purpose: "discharge_summary").
  2. Filtering resources by purpose: When Claude calls a tool, your server filters the returned data to match the declared purpose.
  3. Logging purpose: Always log the purpose along with access records for Privacy Act compliance.
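
The purpose filter described above can be sketched as follows; the purpose names and resource whitelists are example policy choices, not a standard.

```python
# Which FHIR resource types each declared purpose may touch.
# These whitelists are illustrative, not a standard.
PURPOSE_POLICY = {
    "discharge_summary": {"Observation", "MedicationStatement", "Condition"},
    "medication_reconciliation": {"MedicationStatement"},
}

def filter_by_purpose(purpose, resources):
    """Drop any resource whose type is not permitted for the declared purpose."""
    allowed = PURPOSE_POLICY.get(purpose, set())
    return [r for r in resources if r.get("resourceType") in allowed]
```

An unknown purpose maps to an empty whitelist, so the safe default is to return nothing.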

Pattern 3: Caching with Audit Trails

Calling Cerner or Epic APIs for every Claude query is expensive (in terms of latency and rate limits). Instead, implement a caching layer:

  1. Cache FHIR responses in a local database (Redis, PostgreSQL) with a TTL (time-to-live) of 5–30 minutes depending on data freshness requirements.
  2. Log cache hits and misses to distinguish between real-time API calls and cached data.
  3. Invalidate cache when data is written (e.g., when a clinician updates a medication).

This pattern improves performance and makes audit logs clearer (you can see which queries hit the live API vs. cache).
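
A minimal in-memory version of this cache (Redis or PostgreSQL would back it in production, and the default TTL here is an assumed value):

```python
import time

class TTLCache:
    """Minimal TTL cache for FHIR responses, reporting hit/miss for the audit log."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None, "miss"          # log as a live API call
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]
            return None, "expired"       # stale; refetch from the EMR
        return value, "hit"              # log as served-from-cache

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())

    def invalidate(self, key):
        self._store.pop(key, None)       # call this on writes
```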

Pattern 4: Structured Error Handling and Graceful Degradation

When Cerner or Epic is unavailable or returns errors, your MCP server should:

  1. Return meaningful error messages to Claude (e.g., “Patient data temporarily unavailable due to API timeout”).
  2. Log the error with context (which endpoint failed, error code, timestamp).
  3. Implement circuit breakers to avoid cascading failures (if Cerner is down, stop making requests for 30 seconds).
  4. Provide fallback data if available (e.g., cached data with a staleness warning).

This prevents Claude from hallucinating data or making incorrect clinical decisions due to integration failures.
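
The circuit breaker from step 3 can be sketched as follows; the failure threshold and cooldown values are illustrative.

```python
import time

class CircuitBreaker:
    """Stop calling the EMR for `cooldown` seconds after `threshold`
    consecutive failures, then allow a trial request."""

    def __init__(self, threshold=3, cooldown=30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def allow(self):
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.cooldown:
            self.opened_at = None        # half-open: let one request through
            self.failures = 0
            return True
        return False

    def record(self, success):
        if success:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
```

While the breaker is open, the MCP server should return the cached-data fallback with a staleness warning rather than an opaque error.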

Authentication and Authorisation Frameworks

Secure EMR integration depends on robust authentication and authorisation. Here’s how to implement these patterns in your MCP server.

OAuth 2.0 Client Credentials Flow

For service-to-service authentication (your MCP server to Cerner/Epic), use OAuth 2.0 client credentials:

POST /oauth/token HTTP/1.1
Host: cerner-or-epic-auth-server
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials
&client_id=YOUR_CLIENT_ID
&client_secret=YOUR_CLIENT_SECRET
&scope=patient/Patient.read patient/Observation.read

The response is a JWT access token:

{
  "access_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...",
  "token_type": "Bearer",
  "expires_in": 3600
}

Your MCP server should:

  1. Store credentials securely: Use environment variables or a secrets manager (e.g., AWS Secrets Manager, HashiCorp Vault).
  2. Implement token refresh logic: Before the token expires, request a new one.
  3. Handle token expiration gracefully: If a token expires mid-request, retry with a fresh token.
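
Steps 2 and 3 can be sketched as a small token manager; `request_token` stands in for the real OAuth call, and the refresh skew is an assumed value.

```python
import time

class TokenManager:
    """Cache a client-credentials access token and refresh it shortly
    before expiry so no request goes out with a dead token."""

    def __init__(self, request_token, skew=60):
        self._request_token = request_token  # returns {"access_token", "expires_in"}
        self._skew = skew                    # refresh this many seconds early
        self._token = None
        self._expires_at = 0.0

    def get(self):
        if self._token is None or time.monotonic() >= self._expires_at - self._skew:
            resp = self._request_token()
            self._token = resp["access_token"]
            self._expires_at = time.monotonic() + resp["expires_in"]
        return self._token
```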

SMART-on-FHIR Scopes

When requesting OAuth tokens, specify scopes that limit what data your MCP server can access. Common SMART-on-FHIR scopes include:

  • patient/Patient.read: Read patient demographics.
  • patient/Observation.read: Read observations (lab results, vital signs).
  • patient/MedicationStatement.read: Read medication history.
  • patient/Condition.read: Read diagnoses.
  • patient/Encounter.read: Read visit/appointment data.
  • patient/DocumentReference.read: Read clinical documents.

Never request overly broad scopes. If your MCP server only needs to read lab results, request only patient/Observation.read, not patient/*.read. This principle—least privilege—is fundamental to audit-readiness and Privacy Act compliance.

Role-Based Access Control (RBAC) in Your MCP Server

Once your MCP server has authenticated to Cerner/Epic, it needs to enforce RBAC for incoming requests from Claude agents. Implement this by:

  1. Defining roles: Create a role hierarchy (e.g., clinician, nurse, admin, audit).
  2. Mapping roles to resource permissions: Define which roles can read/write which resources.
  3. Checking roles on every request: Before returning data, verify the requesting agent’s role.

Example:

{
  "roles": {
    "clinician": {
      "can_read": ["Patient", "Observation", "MedicationStatement", "Condition"],
      "can_write": ["DocumentReference", "Note"]
    },
    "nurse": {
      "can_read": ["Patient", "Observation"],
      "can_write": []
    },
    "admin": {
      "can_read": ["*"],
      "can_write": ["*"]
    }
  }
}

When Claude requests patient observations, your MCP server checks: “Is the requesting agent a clinician or admin? If yes, return data. If no, return an error.”
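
That check, using the role map from the JSON example, might be implemented as:

```python
# Mirrors the role configuration shown above.
ROLES = {
    "clinician": {"can_read": ["Patient", "Observation", "MedicationStatement",
                               "Condition"],
                  "can_write": ["DocumentReference", "Note"]},
    "nurse": {"can_read": ["Patient", "Observation"], "can_write": []},
    "admin": {"can_read": ["*"], "can_write": ["*"]},
}

def is_allowed(role, action, resource_type):
    """Return True if `role` may perform `action` ('read'/'write') on the type.
    Unknown roles get no permissions at all."""
    permissions = ROLES.get(role, {}).get(f"can_{action}", [])
    return "*" in permissions or resource_type in permissions
```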

Audit Logging and Compliance

Audit logging is non-negotiable for healthcare integrations. Every access to patient data must be logged with sufficient detail for Privacy Act compliance and regulatory audits.

What to Log

For each EMR data access, log:

  1. Timestamp: When the access occurred (UTC, precise to milliseconds).
  2. User/Agent ID: Who accessed the data (Claude agent ID, clinician ID, etc.).
  3. Resource Type: What was accessed (Patient, Observation, MedicationStatement, etc.).
  4. Patient ID: Which patient’s data was accessed.
  5. Action: Read, write, delete, etc.
  6. Scope/Purpose: What the agent was authorised to do (e.g., “discharge_summary”).
  7. Result: Success or failure (and failure reason if applicable).
  8. Data Sensitivity: Was this sensitive data (e.g., psychiatric notes, HIV status)?

Example audit log entry:

{
  "timestamp": "2025-01-15T10:23:45.123Z",
  "agent_id": "claude-discharge-summary-v1",
  "resource_type": "Observation",
  "patient_id": "PAT-12345",
  "action": "read",
  "scope": "patient/Observation.read",
  "purpose": "discharge_summary",
  "result": "success",
  "data_sensitivity": "standard",
  "query_parameters": { "date_from": "2025-01-01", "date_to": "2025-01-15" }
}

Storing and Protecting Audit Logs

Audit logs are themselves sensitive and must be protected:

  1. Immutable storage: Use append-only logs (e.g., AWS CloudTrail, Azure Monitor, or a local append-only database).
  2. Encryption: Encrypt audit logs at rest and in transit.
  3. Access control: Only authorised staff (compliance, security, auditors) can read audit logs.
  4. Retention: Retain audit logs for at least 7 years (or per your organisation’s policy).
  5. Regular review: Periodically review logs for anomalies (e.g., unusual access patterns, failed authentication attempts).

Audit-Readiness via Vanta

For Australian health systems pursuing SOC 2 Type II or ISO 27001 compliance, tools like Vanta can automate audit log collection and evidence gathering. When you design your MCP server with structured audit logging, Vanta can ingest those logs and help demonstrate compliance.

Key audit-readiness principles:

  • Centralise logs: Send all MCP server logs to a central logging platform (e.g., ELK stack, Splunk, AWS CloudWatch).
  • Use standard formats: Log in JSON or structured format so Vanta and other tools can parse them.
  • Enable alerting: Set up alerts for suspicious access patterns (e.g., access outside business hours, access to sensitive data by non-clinicians).
  • Document controls: Write down your access control policies and how they’re enforced in code.

Australian-Specific Integration Patterns

Australia’s healthcare landscape has unique characteristics that affect EMR integration design.

Best Practice, MedicalDirector, and Genie Integrations

While Cerner and Epic dominate in large health systems, many Australian general practices and smaller health networks use:

  • Best Practice: A cloud-based practice management and EMR system widely used in Australian general practice.
  • MedicalDirector: Another Australian practice management system with EMR capabilities.
  • Genie: A cloud-based GP system increasingly popular in Australia.

These systems often have different APIs and interoperability models compared to Cerner/Epic. When building MCP servers for these systems:

  1. Check FHIR support: Best Practice and MedicalDirector have FHIR APIs, but coverage may be incomplete. Verify which resources are available.
  2. Use HL7 v2 as fallback: If FHIR is limited, you may need to parse HL7 v2 messages (older but still widely used in Australia).
  3. Handle Australian identifiers: Ensure your MCP server correctly maps Australian healthcare identifiers (HI numbers, local MRNs, etc.).
  4. Respect GP confidentiality: Australian GPs are often independent contractors, not employees. When integrating with general practice systems, respect data ownership and confidentiality agreements.

Privacy Act and State Health Privacy Laws

When designing MCP servers for Australian health organisations, consider:

  1. Privacy Act 1988 (Cth): Applies to most health organisations. Key principles include:

    • Collect and use personal information only for stated purposes.
    • Implement security safeguards proportionate to sensitivity.
    • Provide individuals with access to their data.
    • Allow individuals to correct inaccurate data.
  2. State health privacy laws: Some states (e.g., NSW, Victoria) have health-specific privacy legislation that may impose additional requirements.

  3. Australian Information Security Manual (ISM): For government health systems, the ISM provides mandatory security controls. If your MCP server integrates with a government health system, review ISM requirements.

Data Localisation and Residency

Some Australian health organisations require that patient data remain within Australia (for compliance or strategic reasons). When designing your MCP server:

  1. Host infrastructure in Australia: Use AWS Sydney, Azure Australia, or local data centres.
  2. Document data flows: Clearly document where data is processed and stored.
  3. Encrypt data in transit: Use TLS 1.3 for all API calls to/from Cerner, Epic, and other EMR systems.
  4. Avoid third-party processors: If you use third-party logging or monitoring services, ensure they have Australian data centres.

Real-World Implementation Challenges

Building production EMR integrations is complex. Here are common challenges and how to address them.

Challenge 1: Patient ID Mismatches

Problem: Your organisation uses internal patient IDs, but Cerner uses MRNs and Epic uses a different identifier system. When Claude needs to fetch patient data, which ID should it use?

Solution: Implement a patient ID mapping layer in your MCP server:

  1. Maintain a mapping table: Map internal patient IDs to Cerner MRNs and Epic IDs.
  2. Canonicalise on input: When Claude requests patient data, convert the internal ID to the appropriate EMR ID before calling the API.
  3. Cache mappings: Cache ID mappings to avoid repeated lookups.
  4. Log mismatches: If an ID cannot be mapped, log it and alert operators.
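
A minimal sketch of that mapping layer; the table contents are invented examples.

```python
# In production this would be a database table, cached per the pattern above.
ID_MAP = {
    "internal-001": {"cerner": "MRN-88231", "epic": "E4412"},
}

def to_emr_id(internal_id, emr):
    """Translate an internal patient ID to the target EMR's identifier.

    Raises KeyError when no mapping exists; production code would also
    log the mismatch and alert operators.
    """
    mapping = ID_MAP.get(internal_id)
    if mapping is None or emr not in mapping:
        raise KeyError(f"no {emr} mapping for {internal_id}")
    return mapping[emr]
```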

Challenge 2: FHIR Version Mismatches

Problem: Your Cerner environment exposes FHIR R4, but a given Epic deployment may still serve STU3 (or even DSTU2) endpoints. Your MCP server needs to handle multiple FHIR versions.

Solution:

  1. Detect FHIR version: When connecting to an EMR, detect which FHIR version it supports.
  2. Implement version-specific parsers: Write separate parsing logic for R4 and STU3 resources.
  3. Normalise to a canonical format: Convert both R4 and STU3 resources to a canonical internal format before exposing to Claude.
  4. Document version constraints: Make it clear to Claude which EMR systems support which operations.

Challenge 3: Real-Time vs. Eventual Consistency

Problem: Cerner and Epic don’t always have real-time data. A clinician updates a medication in Epic, but the FHIR API returns stale data for 2–5 minutes.

Solution:

  1. Accept eventual consistency: Design your MCP server and Claude workflows to tolerate stale data.
  2. Implement refresh strategies: If Claude needs fresh data, provide a tool to force a refresh (with appropriate rate limiting).
  3. Use timestamps: Always include data timestamps in responses so Claude knows how fresh the data is.
  4. Document SLAs: Clearly document expected data freshness (e.g., “Observation data is typically fresh within 5 minutes”).

Challenge 4: Scope Creep and Token Expiration

Problem: Your MCP server requests a token with broad scopes, but then you need to add a new EMR access pattern that requires additional scopes. Redeploying is slow.

Solution:

  1. Request all necessary scopes upfront: Work with Cerner/Epic to identify all scopes your integration might need, and request them all at once.
  2. Implement scope negotiation: Design your MCP server to request only the scopes it currently needs, and gracefully degrade if a scope is not available.
  3. Use refresh tokens where available: In the authorisation code flow, request refresh tokens so you can obtain new access tokens without re-authorising (the client credentials flow does not issue them).

Challenge 5: Hallucinated Data and Prompt Injection

Problem: Claude hallucinates patient data (e.g., invents lab results) or attackers inject prompts to extract data they shouldn’t have access to.

Solution: This is a broader AI safety concern. See Agentic AI Production Horror Stories for detailed patterns. For EMR integration specifically:

  1. Ground Claude in real data: Always provide Claude with actual data from the EMR; never ask it to infer or guess.
  2. Enforce tool constraints: Limit which tools Claude can call and what parameters it can provide.
  3. Validate tool inputs: Before calling an EMR API, validate all parameters (patient IDs, date ranges, etc.).
  4. Monitor for anomalies: Log all Claude requests and responses; alert on unusual patterns (e.g., requests for data outside normal scope).
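
Step 3's pre-flight validation for a "fetch patient labs" tool might look like the following sketch; the parameter names and ID format are illustrative.

```python
import re
from datetime import date

def validate_labs_request(params):
    """Reject malformed tool inputs before any EMR API is called."""
    pid = params.get("patient_id", "")
    if not re.fullmatch(r"[A-Za-z0-9\-]+", pid):
        raise ValueError("patient_id must be alphanumeric")
    # date.fromisoformat raises ValueError on anything that is not YYYY-MM-DD,
    # which also blocks injection attempts smuggled through date fields.
    start = date.fromisoformat(params["date_from"])
    end = date.fromisoformat(params["date_to"])
    if start > end:
        raise ValueError("date_from must not be after date_to")
    return {"patient_id": pid, "date_from": start, "date_to": end}
```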

Building Secure, Audit-Ready EMR Integrations

Here’s a step-by-step approach to building an MCP server for Cerner or Epic that’s secure and audit-ready.

Step 1: Define Your Data Model and Scopes

Before writing code, define:

  1. Which FHIR resources your MCP server will expose (Patient, Observation, Medication, etc.).
  2. Which operations each resource supports (read, write, search).
  3. Which scopes you need from Cerner/Epic (e.g., patient/Observation.read).
  4. Which roles will have access to each resource (clinician, nurse, admin, audit).

Document this in a design document or API specification. Share it with your EMR vendor for review.

Step 2: Implement OAuth 2.0 and Token Management

  1. Register your application with Cerner and/or Epic.
  2. Securely store credentials (client ID, client secret) in a secrets manager.
  3. Implement token request logic: Request tokens, handle refresh, and manage expiration.
  4. Add error handling: Gracefully handle token errors (expired, invalid, revoked).

Step 3: Implement FHIR API Calls

  1. Use a FHIR client library (e.g., HAPI FHIR for Java, fhir-py for Python) to simplify API calls.
  2. Implement resource-specific methods: For each resource type, implement methods to read, write, and search.
  3. Add input validation: Validate patient IDs, date ranges, and other parameters before calling APIs.
  4. Handle errors: Implement retry logic, circuit breakers, and graceful degradation.

Step 4: Build the MCP Server Interface

  1. Define MCP resources: Map FHIR resources to MCP resources (e.g., a “Patient Summary” resource that aggregates Patient, Observation, and Medication data).
  2. Define MCP tools: Create tools for common operations (“Get patient labs”, “Get medication list”, “Create clinical note”).
  3. Implement RBAC: Check roles before returning data or allowing writes.
  4. Add input/output formatting: Ensure Claude receives data in a format it can understand (JSON, markdown, plain text).

Step 5: Implement Comprehensive Audit Logging

  1. Log all data access: Every read, write, and search operation.
  2. Include context: User/agent ID, patient ID, purpose, timestamp, result.
  3. Protect logs: Store in an immutable, encrypted, access-controlled system.
  4. Enable alerting: Alert on suspicious access patterns.
  5. Integrate with compliance tools: Set up Vanta or similar to ingest logs for SOC 2/ISO 27001 evidence.

Step 6: Test and Validate

  1. Unit tests: Test FHIR API calls, token management, and RBAC logic.
  2. Integration tests: Test end-to-end flows (e.g., “Fetch patient labs and generate summary”).
  3. Security tests: Test authentication failures, invalid scopes, and unauthorised access attempts.
  4. Load tests: Ensure your MCP server can handle expected traffic and respects rate limits.
  5. Audit trail tests: Verify that audit logs are complete and accurate.

Step 7: Deploy and Monitor

  1. Use infrastructure-as-code: Deploy your MCP server using Terraform, CloudFormation, or similar.
  2. Enable monitoring: Set up dashboards for API latency, error rates, and audit log volume.
  3. Set up alerting: Alert on token errors, API failures, and suspicious access patterns.
  4. Plan for updates: Cerner and Epic regularly update their APIs. Plan quarterly reviews to stay current.

Next Steps and Getting Started

If you’re building an EMR integration via MCP, here’s what to do next:

1. Assess Your Current State

  • Which EMR systems are you integrating with? (Cerner, Epic, Best Practice, MedicalDirector, Genie?)
  • What data do you need to expose to AI agents?
  • Which roles need access to which data?
  • What are your compliance requirements? (Privacy Act, SOC 2, ISO 27001?)

2. Design Your MCP Server

  • Document your data model, scopes, and roles.
  • Sketch out your authentication and authorisation architecture.
  • Plan your audit logging strategy.
  • Identify Australian-specific considerations (data residency, state privacy laws, etc.).

3. Build and Test

  • Start with a single FHIR resource (e.g., Patient demographics).
  • Implement OAuth 2.0 and test token flows.
  • Build RBAC and test role-based access.
  • Add comprehensive audit logging.
  • Test end-to-end with Claude.

4. Deploy and Monitor

  • Deploy to a secure, auditable infrastructure.
  • Enable monitoring and alerting.
  • Conduct a security review before production.
  • Plan for ongoing maintenance and updates.

5. Iterate and Scale

  • Once the initial integration is stable, add more FHIR resources.
  • Expand to additional EMR systems if needed.
  • Optimise performance (caching, rate limiting, etc.).
  • Continuously improve audit trails and compliance posture.

Getting Help

Building production EMR integrations is complex. If you need guidance, consider:

  • PADISO’s CTO as a Service: Get fractional CTO leadership to architect your EMR integration and oversee implementation. See our AI & Agents Automation services for details on how we partner with health tech founders.
  • AI Strategy & Readiness: Before diving into MCP servers, conduct an AI Readiness assessment to ensure your organisation is ready for AI-driven EMR workflows.
  • Security Audit (SOC 2 / ISO 27001): Once your MCP server is built, conduct a security audit to ensure it’s audit-ready. We help organisations achieve SOC 2 and ISO 27001 compliance via Vanta.

For healthcare-specific AI automation, see our AI Automation for Healthcare guide, which covers diagnostic tools, patient care workflows, and best practices for health tech teams.

Key Takeaways

  1. MCP servers standardise EMR integration: Instead of building custom integrations for each use case, define MCP server resources once and reuse them.
  2. SMART-on-FHIR is the foundation: Both Cerner and Epic support FHIR APIs with OAuth 2.0. This is your starting point.
  3. Authentication and audit logging are non-negotiable: Every data access must be logged with full context for Privacy Act and regulatory compliance.
  4. Australian healthcare has unique requirements: Consider data residency, state privacy laws, and local EMR systems (Best Practice, MedicalDirector, Genie).
  5. Real-world integrations are complex: Plan for patient ID mismatches, FHIR version differences, eventual consistency, and AI safety challenges.
  6. Build for audit-readiness from day one: Design your MCP server with comprehensive logging, RBAC, and compliance in mind. This pays dividends when you pursue SOC 2 or ISO 27001 certification.

With a well-designed MCP server, you can ship AI-powered clinical workflows in weeks instead of months, while maintaining the security and compliance standards that healthcare demands.


Additional Resources

For healthcare-specific guidance, see AI Automation for Healthcare and AI and ML Integration: CTO Guide.