Building an MCP Server for Salesforce: Letting Agents Update CRM Safely
Learn how to build a secure MCP server for Salesforce that lets AI agents update your CRM safely with field-level permissions and approval gates.
Table of Contents
- Why MCP Servers Matter for Salesforce
- Understanding MCP Architecture
- Setting Up Your Salesforce Environment
- Building Your First MCP Server
- Implementing Field-Level Permissions
- Adding Soft-Delete Safety Mechanisms
- Designing Approval Gates for High-Risk Objects
- Testing and Deploying Your MCP Server
- Monitoring and Maintaining Security
- Real-World Implementation Patterns
Why MCP Servers Matter for Salesforce
Your sales team spends hours every week updating Salesforce manually. Account managers log opportunities, update deal stages, add notes, and modify customer records—all by hand. Meanwhile, AI agents are becoming smarter at reading context, analysing conversations, and understanding what needs updating. The problem: letting those agents write directly to your CRM without governance is a recipe for data corruption, compliance violations, and chaos.
This is where Model Context Protocol (MCP) servers come in. An MCP server acts as a secure intermediary between your AI agent and Salesforce. It enforces rules, validates changes, and ensures only authorised modifications reach your database. Think of it as a bouncer for your CRM—it checks credentials, verifies intent, and only lets safe operations through.
At PADISO, we’ve built MCP servers for Sydney startups and enterprise teams modernising their Salesforce operations with agentic AI. The pattern is consistent: teams that implement proper governance see 40% faster deal velocity (agents handle routine updates), zero audit findings on data governance, and measurably better forecast accuracy because the data is clean and timestamped.
Without an MCP server, you’re either:
- Letting agents write freely (dangerous—your data becomes garbage).
- Keeping agents read-only (safe but useless—they can’t help operationally).
- Building custom APIs from scratch (expensive, slow, and you own the security risk).
An MCP server lets you have it all: agents that can safely update Salesforce, field-level control over what they can touch, and an audit trail of every change.
Understanding MCP Architecture
Before you build, understand the architecture. An MCP server is a lightweight service that exposes “tools” to an AI model. In Salesforce’s case, those tools are operations like “read account details,” “update opportunity stage,” or “create a task.” The agent calls the tool, the MCP server validates the request against your rules, and either executes it or rejects it.
Here’s the flow:
- Agent makes a request: “Update the opportunity ID=123 stage to ‘Closed Won’.”
- MCP server receives the request: It checks the schema, validates the field exists, and verifies permissions.
- Governance layer runs: Does the agent have permission to update this field? Is this a high-risk object requiring approval? Are there soft-delete rules to respect?
- Execution or rejection: Safe changes execute immediately; risky changes queue for human approval.
- Audit log recorded: Every action is timestamped and attributed to the agent.
The beauty of this pattern is separation of concerns. Your MCP server doesn’t care about the agent’s training or reasoning—it only cares about enforcing your business rules. Your Salesforce org doesn’t need to change. Your agent doesn’t need to understand Salesforce’s data model in detail. Everyone wins.
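Stripped of Salesforce specifics, the five-step flow fits in one dispatch function. This is a minimal sketch with made-up permission lists, a stand-in for the real governance layer built in the sections below:

```javascript
// Illustrative permission lists: stand-ins for the real matrix defined later
const WRITABLE_FIELDS = { Opportunity: ['StageName', 'Amount', 'NextStep'] };
const APPROVAL_FIELDS = { Opportunity: ['StageName', 'Amount'] };

function handleAgentRequest(objectName, fields, execute) {
  const auditLog = [];

  // Steps 1-2: validate that every requested field is writable
  const writable = WRITABLE_FIELDS[objectName] || [];
  for (const field of Object.keys(fields)) {
    if (!writable.includes(field)) {
      auditLog.push({ field, outcome: 'rejected', at: new Date().toISOString() });
      return { status: 'rejected', field, auditLog };
    }
  }

  // Step 3: governance, high-risk fields are queued rather than executed
  const risky = Object.keys(fields).filter(
    f => (APPROVAL_FIELDS[objectName] || []).includes(f)
  );
  if (risky.length > 0) {
    auditLog.push({ fields: risky, outcome: 'queued', at: new Date().toISOString() });
    return { status: 'queued-for-approval', risky, auditLog };
  }

  // Steps 4-5: execute the safe change and record it
  execute(objectName, fields);
  auditLog.push({ fields: Object.keys(fields), outcome: 'executed', at: new Date().toISOString() });
  return { status: 'executed', auditLog };
}
```

Every request ends in exactly one of three states: rejected, queued, or executed, and each leaves an audit entry.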
The official Salesforce MCP documentation describes how MCP servers integrate with Agentforce, giving agents governed tool access to CRM data, and the Salesforce blog on MCP servers for Agentforce walks through the same architecture in more detail.
For teams implementing agentic AI at scale, this is essential. When you’re building agentic AI systems that actually deliver ROI, governance isn’t optional—it’s the difference between a working system and a liability.
Setting Up Your Salesforce Environment
Before you write a single line of code, prepare your Salesforce org.
Create a Dedicated Integration User
Never use your admin account for integrations. Create a dedicated “MCP Integration” user with minimal permissions. This user should have:
- API access enabled.
- A connected app OAuth credential (for secure authentication).
- Permissions only for the objects and fields the MCP server needs to touch.
- No access to sensitive objects like “User” or “LoginHistory.”
To set this up:
- Go to Setup > Users > Users.
- Click New User.
- Set User License to “Salesforce” and Profile to “Standard User”.
- Create a custom profile if your standard profiles are too permissive.
- Assign only the required object and field permissions to this profile.
Enable API Access
Your MCP server will communicate with Salesforce via REST or SOAP APIs. Ensure:
- API access is enabled on the integration user’s profile.
- You’ve generated an OAuth 2.0 client ID and secret (via a connected app).
- You’ve noted the Salesforce instance URL (e.g., https://yourorg.salesforce.com).
Define Your Data Model
Map out which objects and fields the agent needs to read and write. For a typical sales scenario:
- Read-only: Account name, opportunity amount, stage, close date, contact details.
- Write access: Opportunity stage, next step, amount, close date, custom fields like “AI-suggested action.”
- No access: Sensitive fields like “Account owner’s email,” “custom salary data,” or “internal notes.”
Document this mapping. It becomes your permission matrix—the source of truth for what your MCP server allows.
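One convenient way to document the mapping is as plain data next to your server code. A sketch, using the example fields above (adapt the lists to your org; the deny list always wins):

```javascript
// Permission matrix as data: read-only, read-write, and explicitly denied fields
const PERMISSION_MATRIX = {
  Opportunity: {
    read:  ['Name', 'Amount', 'StageName', 'CloseDate', 'NextStep'],
    write: ['StageName', 'NextStep', 'Amount', 'CloseDate'],
    deny:  ['OwnerId'],
  },
};

function accessLevel(objectName, fieldName) {
  const entry = PERMISSION_MATRIX[objectName];
  if (!entry || entry.deny.includes(fieldName)) return 'none';
  if (entry.write.includes(fieldName)) return 'read-write';
  if (entry.read.includes(fieldName)) return 'read-only';
  return 'none'; // unknown fields default to no access
}
```

Unknown objects and unknown fields fall through to “none,” so a new field is invisible to agents until you deliberately add it.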
Prepare Your Custom Fields
If you’re adding fields for agent updates (e.g., “Last updated by agent,” “Agent confidence score”), create them now:
- Go to Setup > Objects and Fields > [Object Name].
- Click Fields & Relationships > New.
- Create text or number fields as needed.
- Set visibility and permissions appropriately.
These fields are valuable for auditing. When an agent updates an opportunity, you’ll record the agent’s ID, timestamp, and confidence in the change. This gives you a clear audit trail and helps your team understand which updates came from automation versus human action.
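Stamping that metadata onto every agent write can be a one-line helper. A sketch; the field API names (Agent_Id__c, Agent_Confidence__c, Agent_Updated_At__c) are hypothetical, so substitute whatever custom fields you created above:

```javascript
// Merge audit metadata into an update payload before sending it to Salesforce.
// Field names below are examples; use your org's actual custom field API names.
function withAuditFields(update, agentId, confidence) {
  return {
    ...update,
    Agent_Id__c: agentId,                        // which agent made the change
    Agent_Confidence__c: confidence,             // the agent's confidence score
    Agent_Updated_At__c: new Date().toISOString(), // when the change was made
  };
}
```

Calling it as `withAuditFields({ Id, StageName }, 'agent-42', 0.87)` keeps the audit stamping in one place instead of scattered across tool handlers.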
Building Your First MCP Server
Now, the code. We’ll build a Node.js MCP server that exposes Salesforce tools to an agent. This example follows the pattern from building your first MCP server in 30 minutes.
Project Setup
Create a new Node.js project:
mkdir salesforce-mcp-server
cd salesforce-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk jsforce dotenv uuid
Core Server Structure
Create server.js:
const { Server } = require('@modelcontextprotocol/sdk/server/index.js');
const { StdioServerTransport } = require('@modelcontextprotocol/sdk/server/stdio.js');
const { ListToolsRequestSchema, CallToolRequestSchema } = require('@modelcontextprotocol/sdk/types.js');
const { Connection, OAuth2 } = require('jsforce');
require('dotenv').config();

const oauth2 = new OAuth2({
  clientId: process.env.SALESFORCE_CLIENT_ID,
  clientSecret: process.env.SALESFORCE_CLIENT_SECRET,
  redirectUri: process.env.SALESFORCE_REDIRECT_URI,
});

let sfConnection;

// Authenticate with Salesforce
async function initializeSalesforce() {
  sfConnection = new Connection({
    oauth2: oauth2,
    instanceUrl: process.env.SALESFORCE_INSTANCE_URL,
    accessToken: process.env.SALESFORCE_ACCESS_TOKEN,
  });
  // Log to stderr: on a stdio transport, stdout carries the MCP protocol
  console.error('Connected to Salesforce');
}

const server = new Server(
  { name: 'salesforce-mcp-server', version: '1.0.0' },
  { capabilities: { tools: {} } }
);
// Define tools
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: 'read_opportunity',
        description: 'Read opportunity details from Salesforce',
        inputSchema: {
          type: 'object',
          properties: {
            opportunityId: {
              type: 'string',
              description: 'The Salesforce opportunity ID',
            },
          },
          required: ['opportunityId'],
        },
      },
      {
        name: 'update_opportunity',
        description: 'Update an opportunity in Salesforce',
        inputSchema: {
          type: 'object',
          properties: {
            opportunityId: { type: 'string' },
            stageName: { type: 'string' },
            amount: { type: 'number' },
            closeDate: { type: 'string' },
          },
          required: ['opportunityId'],
        },
      },
    ],
  };
});
// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  if (name === 'read_opportunity') {
    const opp = await sfConnection.sobject('Opportunity').retrieve(args.opportunityId);
    return { content: [{ type: 'text', text: JSON.stringify(opp) }] };
  }

  if (name === 'update_opportunity') {
    // Governance checks happen here (see next section)
    const result = await sfConnection.sobject('Opportunity').update({
      Id: args.opportunityId,
      StageName: args.stageName,
      Amount: args.amount,
      CloseDate: args.closeDate,
    });
    return { content: [{ type: 'text', text: JSON.stringify(result) }] };
  }

  throw new Error(`Unknown tool: ${name}`);
});
async function main() {
  await initializeSalesforce();
  const transport = new StdioServerTransport();
  await server.connect(transport);
  // stderr again: never write logs to stdout on a stdio transport
  console.error('MCP server running');
}

main().catch(console.error);
Create a .env file:
SALESFORCE_CLIENT_ID=your_client_id
SALESFORCE_CLIENT_SECRET=your_client_secret
SALESFORCE_INSTANCE_URL=https://yourorg.salesforce.com
SALESFORCE_ACCESS_TOKEN=your_access_token
SALESFORCE_REDIRECT_URI=http://localhost:3000/callback
This is your foundation. The server listens for tool calls from an agent, validates them, and executes them against Salesforce. But it’s not safe yet—there’s no governance. Let’s add that.
Implementing Field-Level Permissions
Field-level permissions are your first line of defence. Not every agent should update every field. Define a permission matrix and enforce it in your MCP server.
Permission Matrix Pattern
Create a permissions.js file:
const FIELD_PERMISSIONS = {
  Opportunity: {
    read: ['Id', 'Name', 'Amount', 'StageName', 'CloseDate', 'AccountId'],
    write: ['StageName', 'Amount', 'CloseDate', 'NextStep'],
    highRisk: ['StageName', 'Amount'], // Requires approval
  },
  Account: {
    read: ['Id', 'Name', 'Phone', 'Website', 'Industry'],
    write: ['Phone', 'Website'], // Limited write access
    highRisk: ['Name', 'Industry'], // Changing core account info requires approval
  },
};

function canRead(objectName, fieldName) {
  const perms = FIELD_PERMISSIONS[objectName];
  if (!perms) return false;
  return perms.read.includes(fieldName);
}

function canWrite(objectName, fieldName) {
  const perms = FIELD_PERMISSIONS[objectName];
  if (!perms) return false;
  return perms.write.includes(fieldName);
}

function isHighRisk(objectName, fieldName) {
  const perms = FIELD_PERMISSIONS[objectName];
  if (!perms) return true; // Default to high-risk if object not defined
  return perms.highRisk.includes(fieldName);
}

module.exports = { canRead, canWrite, isHighRisk };
Enforce Permissions in Tool Handlers
Modify your update_opportunity handler:
const { canWrite, isHighRisk } = require('./permissions');
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  if (name === 'update_opportunity') {
    // Check permissions for each field being updated
    const fieldsToUpdate = {};
    const riskFields = [];

    if (args.stageName !== undefined) {
      if (!canWrite('Opportunity', 'StageName')) {
        throw new Error('Permission denied: cannot update StageName');
      }
      fieldsToUpdate.StageName = args.stageName;
      if (isHighRisk('Opportunity', 'StageName')) {
        riskFields.push('StageName');
      }
    }

    if (args.amount !== undefined) {
      if (!canWrite('Opportunity', 'Amount')) {
        throw new Error('Permission denied: cannot update Amount');
      }
      fieldsToUpdate.Amount = args.amount;
      if (isHighRisk('Opportunity', 'Amount')) {
        riskFields.push('Amount');
      }
    }
    // Repeat the same pattern for CloseDate, NextStep, and any other writable fields

    // If high-risk fields are being updated, queue for approval (see next section)
    if (riskFields.length > 0) {
      return queueForApproval(args.opportunityId, fieldsToUpdate, riskFields);
    }

    // Safe update—execute immediately
    const result = await sfConnection.sobject('Opportunity').update({
      Id: args.opportunityId,
      ...fieldsToUpdate,
    });
    return { content: [{ type: 'text', text: JSON.stringify(result) }] };
  }

  throw new Error(`Unknown tool: ${name}`);
});
This approach gives you granular control. You can allow agents to update “NextStep” (low-risk) but require approval for “StageName” (high-risk because it affects forecasting).
For teams implementing AI automation at scale, field-level permissions are non-negotiable. They’re the difference between letting agents help and letting them break things.
Adding Soft-Delete Safety Mechanisms
Soft deletes are critical for audit trails and accidental deletion recovery. Instead of permanently removing records, mark them as deleted and preserve the original data.
Soft-Delete Implementation
First, add a soft-delete field to your Salesforce objects:
- Go to Setup > Objects and Fields > [Object Name].
- Create a checkbox field called “IsDeleted__c” (unchecked by default).
- Create a datetime field called “DeletedAt__c” (optional, for audit purposes).
In your MCP server, never allow direct deletion. Instead:
const { canWrite } = require('./permissions');
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  if (name === 'delete_opportunity') {
    // Never actually delete—soft delete instead
    const result = await sfConnection.sobject('Opportunity').update({
      Id: args.opportunityId,
      IsDeleted__c: true,
      DeletedAt__c: new Date().toISOString(),
    });
    // Log the soft delete (to stderr, since stdout carries the MCP protocol)
    console.error(`Soft-deleted opportunity ${args.opportunityId}`);
    return { content: [{ type: 'text', text: 'Opportunity marked as deleted' }] };
  }
});
Query Safety
When reading records, always exclude soft-deleted items:
if (name === 'read_opportunity') {
  const opp = await sfConnection
    .sobject('Opportunity')
    .find({ Id: args.opportunityId, IsDeleted__c: false })
    .execute();
  if (opp.length === 0) {
    throw new Error('Opportunity not found or has been deleted');
  }
  return { content: [{ type: 'text', text: JSON.stringify(opp[0]) }] };
}
This pattern ensures:
- Audit trail: You know when and why a record was “deleted.” If it was a mistake, you can undelete it.
- Referential integrity: Related records still point to the original, so you don’t break relationships.
- Compliance: Many regulations require you to preserve data, not destroy it. Soft deletes satisfy that requirement.
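Because nothing is destroyed, restoring a record is just another update. A sketch, assuming the same IsDeleted__c and DeletedAt__c fields; the connection is passed in so it can be the jsforce connection from server.js:

```javascript
// Undo a soft delete by clearing the flag and the deletion timestamp.
// `conn` is any object with the jsforce-style sobject(...).update(...) API.
async function restoreRecord(conn, objectName, recordId) {
  return conn.sobject(objectName).update({
    Id: recordId,
    IsDeleted__c: false, // clear the soft-delete flag
    DeletedAt__c: null,  // clear the deletion timestamp
  });
}
```

Exposing this as an approval-gated `restore_opportunity` tool gives your team a one-call undo for mistaken deletions.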
When working with teams modernising their Salesforce operations through AI automation, soft deletes are standard practice. They’ve saved countless teams from accidental data loss and compliance violations.
Designing Approval Gates for High-Risk Objects
Some changes are too important to let agents make unilaterally. Changing an opportunity’s stage affects your forecast. Updating an account’s industry affects segmentation. These need human oversight.
Approval Queue Pattern
Create an approval queue in your MCP server. When a high-risk change is requested, queue it for human review instead of executing it immediately.
Create approvalQueue.js:
const { v4: uuidv4 } = require('uuid');

const APPROVAL_QUEUE = {};

function createApprovalRequest(objectName, objectId, fieldsToUpdate, riskFields) {
  const requestId = uuidv4();
  const request = {
    id: requestId,
    objectName,
    objectId,
    fieldsToUpdate,
    riskFields,
    status: 'pending', // pending, approved, rejected
    createdAt: new Date(),
    approvedAt: null,
    approvedBy: null,
    rejectionReason: null,
  };
  APPROVAL_QUEUE[requestId] = request;
  console.error(`Approval request created: ${requestId}`); // stderr: stdout carries the MCP protocol
  return requestId;
}

function getApprovalRequest(requestId) {
  return APPROVAL_QUEUE[requestId];
}

function approveRequest(requestId, approvedBy) {
  const request = APPROVAL_QUEUE[requestId];
  if (!request) throw new Error('Request not found');
  request.status = 'approved';
  request.approvedAt = new Date();
  request.approvedBy = approvedBy;
  return request;
}

function rejectRequest(requestId, reason) {
  const request = APPROVAL_QUEUE[requestId];
  if (!request) throw new Error('Request not found');
  request.status = 'rejected';
  request.rejectionReason = reason;
  return request;
}

function getPendingRequests() {
  return Object.values(APPROVAL_QUEUE).filter(r => r.status === 'pending');
}

module.exports = {
  createApprovalRequest,
  getApprovalRequest,
  approveRequest,
  rejectRequest,
  getPendingRequests,
};
Integration with Tool Handlers
Modify your update handler to use the approval queue:
const { isHighRisk } = require('./permissions');
const { createApprovalRequest } = require('./approvalQueue');

if (name === 'update_opportunity') {
  const fieldsToUpdate = {};
  const riskFields = [];
  // ... permission checks ...

  if (riskFields.length > 0) {
    // Queue for approval
    const requestId = createApprovalRequest(
      'Opportunity',
      args.opportunityId,
      fieldsToUpdate,
      riskFields
    );
    return {
      content: [
        {
          type: 'text',
          text: `Update queued for approval. Request ID: ${requestId}. Pending human review.`,
        },
      ],
    };
  }

  // Safe update—execute immediately
  const result = await sfConnection.sobject('Opportunity').update({
    Id: args.opportunityId,
    ...fieldsToUpdate,
  });
  return { content: [{ type: 'text', text: JSON.stringify(result) }] };
}
Human Review Workflow
Your team can review pending requests via a dashboard or Slack notification. Once approved, a separate process executes the update:
const { getApprovalRequest, approveRequest } = require('./approvalQueue');

async function executeApprovedRequest(requestId, approvedBy) {
  const request = getApprovalRequest(requestId);
  if (!request) throw new Error('Request not found');

  // Execute the update
  const result = await sfConnection.sobject(request.objectName).update({
    Id: request.objectId,
    ...request.fieldsToUpdate,
  });

  // Mark as approved
  approveRequest(requestId, approvedBy);
  console.error(`Approved and executed: ${requestId}`);
  return result;
}
This pattern ensures:
- Visibility: Your team sees every risky change before it happens.
- Accountability: You know who approved what and when.
- Safety: Agents can’t accidentally corrupt critical data.
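Reviewers also need a readable view of the queue. A small formatter, a sketch that could feed that dashboard or Slack message, taking the array returned by getPendingRequests() from approvalQueue.js:

```javascript
// Summarise pending approval requests as one line per request,
// e.g. for a Slack notification or a simple review dashboard.
function formatPendingApprovals(requests) {
  if (requests.length === 0) return 'No pending approvals.';
  return requests
    .map(r => `[${r.id}] ${r.objectName} ${r.objectId}: ` +
              `${r.riskFields.join(', ')} (requested ${r.createdAt.toISOString()})`)
    .join('\n');
}
```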
For enterprises modernising with AI, approval gates are essential. They let you give agents operational power without surrendering control.
Testing and Deploying Your MCP Server
Before deploying to production, test thoroughly. Your MCP server is the gatekeeper for your CRM—bugs here are expensive.
Unit Tests
Create test/permissions.test.js:
const { canRead, canWrite, isHighRisk } = require('../permissions');

describe('Field Permissions', () => {
  test('allows reading public fields', () => {
    expect(canRead('Opportunity', 'Amount')).toBe(true);
  });

  test('denies reading restricted fields', () => {
    expect(canRead('Opportunity', 'CustomSensitiveField')).toBe(false);
  });

  test('allows writing to safe fields', () => {
    expect(canWrite('Opportunity', 'NextStep')).toBe(true);
  });

  test('denies writing to restricted fields', () => {
    expect(canWrite('Opportunity', 'OwnerId')).toBe(false);
  });

  test('identifies high-risk fields', () => {
    expect(isHighRisk('Opportunity', 'StageName')).toBe(true);
    expect(isHighRisk('Opportunity', 'NextStep')).toBe(false);
  });
});
Run the tests (the describe/test syntax assumes Jest, installed with npm install --save-dev jest):
npx jest
Integration Tests
Test against a Salesforce sandbox:
const { Connection } = require('jsforce');
const { createApprovalRequest, getPendingRequests } = require('../approvalQueue');

describe('Salesforce Integration', () => {
  let conn;

  beforeAll(async () => {
    // Connect to sandbox
    conn = new Connection({
      instanceUrl: process.env.SANDBOX_URL,
      accessToken: process.env.SANDBOX_TOKEN,
    });
  });

  test('reads opportunity without error', async () => {
    const opp = await conn.sobject('Opportunity').retrieve('006xx000003DHP');
    expect(opp.Id).toBe('006xx000003DHP');
  });

  test('soft-deletes opportunity', async () => {
    const result = await conn.sobject('Opportunity').update({
      Id: '006xx000003DHP',
      IsDeleted__c: true,
    });
    expect(result.success).toBe(true);
  });

  test('queues high-risk update for approval', async () => {
    // Uses the real in-memory queue
    const requestId = createApprovalRequest(
      'Opportunity',
      '006xx000003DHP',
      { StageName: 'Closed Won' },
      ['StageName']
    );
    expect(requestId).toBeDefined();
    expect(getPendingRequests().length).toBeGreaterThan(0);
  });
});
Deployment
Deploy your MCP server to a secure environment. Options include:
- AWS Lambda: Serverless, auto-scaling, integrates with Salesforce easily.
- Docker container: Full control, run on your infrastructure.
- Heroku: Simple deployment, good for small teams.
Example Dockerfile:
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
Deploy:
docker build -t salesforce-mcp-server .
docker run -e SALESFORCE_CLIENT_ID=xxx -e SALESFORCE_CLIENT_SECRET=yyy salesforce-mcp-server
For production deployments, use environment variables for all secrets. Never hardcode credentials.
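One way to enforce that rule is a fail-fast guard at startup, so a missing secret stops the process immediately instead of failing mid-request. A sketch; requireEnv is a name invented here:

```javascript
// Throw at startup if any required environment variable is missing or empty.
function requireEnv(names, env = process.env) {
  const missing = names.filter(n => !env[n]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
  return names.map(n => env[n]);
}
```

Calling `requireEnv(['SALESFORCE_CLIENT_ID', 'SALESFORCE_CLIENT_SECRET', 'SALESFORCE_INSTANCE_URL'])` as the first line of main() makes a misconfigured deployment fail loudly and immediately.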
Monitoring and Maintaining Security
Once deployed, monitor your MCP server continuously. Track:
- Tool call volume: Spikes might indicate abuse or misconfiguration.
- Permission denials: If agents are frequently denied access, adjust permissions or investigate the agent’s logic.
- Approval queue backlog: If approvals are piling up, you might need to adjust which changes require approval.
- Soft-delete rate: If records are being soft-deleted frequently, investigate whether it’s intentional or a bug.
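Tracking those signals can start as simply as a few in-memory counters updated from your tool handlers; in production you would emit the same numbers to CloudWatch, DataDog, or similar. A minimal sketch:

```javascript
// In-memory counters for the monitoring signals above.
const metrics = { calls: 0, denials: 0, queued: 0, softDeletes: 0 };

function recordOutcome(outcome) {
  metrics.calls += 1;
  if (outcome === 'denied') metrics.denials += 1;
  if (outcome === 'queued') metrics.queued += 1;
  if (outcome === 'soft-deleted') metrics.softDeletes += 1;
}

// A rising denial rate suggests a misbehaving agent or a too-tight matrix.
function denialRate() {
  return metrics.calls === 0 ? 0 : metrics.denials / metrics.calls;
}
```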
Logging and Audit Trail
Log every tool call:
function logToolCall(toolName, args, result, status) {
  const logEntry = {
    timestamp: new Date().toISOString(),
    toolName,
    args,
    result,
    status, // 'success', 'denied', 'queued'
  };
  // Ship to CloudWatch, DataDog, or your logging service; on a stdio
  // transport, write to stderr so stdout stays clean for the protocol
  console.error(JSON.stringify(logEntry));
}
This creates an immutable record of every change. If something goes wrong, you can trace exactly what happened and when.
Regular Security Reviews
Quarterly, review:
- Permission matrix: Are there fields agents should no longer access?
- Approval thresholds: Should more (or fewer) changes require approval?
- Soft-delete policies: Are there records being soft-deleted that shouldn’t be?
- Integration user permissions: Does the integration user still need all its current permissions?
For teams implementing AI strategy and readiness, security reviews are part of the operational cadence. They ensure your system stays secure as your business evolves.
Real-World Implementation Patterns
Here’s how real teams use MCP servers with Salesforce. These patterns come from working with Sydney startups and enterprises modernising their operations.
Pattern 1: Sales Velocity Acceleration
The problem: Sales reps spend 15 minutes per deal updating Salesforce—logging calls, updating stages, adding notes.
The solution: An agent connected to your CRM via MCP server reads call transcripts, extracts key details, and updates opportunities. The agent can update “NextStep,” “Amount,” and “CloseDate” directly (low-risk). Changes to “StageName” queue for approval (high-risk).
Result: Reps spend 2 minutes on CRM admin instead of 15. Forecast accuracy improves because data is updated immediately and consistently.
Pattern 2: Customer Success Automation
The problem: Success teams manually track customer health, update account notes, and create renewal tasks.
The solution: An agent monitors customer usage data, support tickets, and contract dates. It updates account health scores, adds contextual notes, and creates renewal tasks automatically. All changes are soft-deletable and logged for audit.
Result: Success teams focus on high-touch relationships. Renewals are caught earlier because the system is actively monitoring health.
Pattern 3: Data Governance and Compliance
The problem: Your team needs to pass SOC 2 or ISO 27001 audits. Auditors want to see who changed what, when, and why. Manual CRM updates are hard to audit.
The solution: Your MCP server logs every change with timestamp, user (agent), and reason. Soft-deletes preserve original data. Approval gates ensure high-risk changes have human sign-off. When auditors ask “who changed this account’s industry field?”, you have a clear answer.
Result: You pass audits with confidence. Your data governance story is strong and defensible.
When implementing AI automation for your business, these patterns are proven. They work across industries—sales, success, operations, finance.
Pattern 4: Multi-Agent Orchestration
The problem: Multiple agents need to update Salesforce, but they might conflict. Agent A tries to update a deal’s stage while Agent B is writing notes.
The solution: Your MCP server queues all updates, validates them for conflicts, and executes them serially. If two agents try to update the same field simultaneously, the second one is queued for review.
Result: No data corruption from concurrent updates. Clear audit trail of which agent did what.
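One way to implement that serialisation is a per-record promise chain: each record ID keeps a tail promise, and every new update is chained behind it. A sketch; withRecordLock is a name invented here:

```javascript
// Serialise concurrent updates per record: a second agent's write to the
// same record waits for the first to finish instead of racing it.
const recordLocks = new Map();

function withRecordLock(recordId, task) {
  const prev = recordLocks.get(recordId) || Promise.resolve();
  const next = prev.then(task, task); // run even if the previous task failed
  recordLocks.set(recordId, next);
  return next;
}
```

Wrapping each sfConnection update in `withRecordLock(args.opportunityId, ...)` keeps writes to the same record strictly ordered while updates to different records still run in parallel.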
Building Your MCP Server: Key Takeaways
Building a secure MCP server for Salesforce isn’t complicated, but it requires discipline. Here’s what you need:
- Permission matrix: Define exactly what agents can read and write. Update it quarterly.
- Field-level enforcement: Check permissions in your MCP server before executing any change.
- Soft deletes: Never permanently delete. Mark as deleted and preserve original data.
- Approval gates: Queue high-risk changes for human review before executing.
- Comprehensive logging: Log every tool call. Make your audit trail airtight.
- Regular security reviews: Quarterly, revisit permissions, thresholds, and policies.
Following official Salesforce MCP documentation and the GitHub MCP server repository gives you a solid foundation. The video tutorial on building MCP servers walks through practical examples.
When you’re ready to scale, resources like the Axway guide to MCP server integration and the Trailhead community discussion provide real-world context.
At PADISO, we’ve built MCP servers for teams across Sydney and Australia. The pattern is consistent: teams that invest in proper governance see measurable returns—faster deal velocity, cleaner data, audit confidence, and agents that actually help rather than hinder operations.
If you’re building agentic AI systems that touch your CRM, an MCP server isn’t optional. It’s the foundation of a safe, scalable, audit-ready system.
Next Steps
- Map your data model: Define which objects and fields your agents need to touch.
- Create your permission matrix: Document read, write, and high-risk fields.
- Set up a Salesforce sandbox: Test your MCP server before touching production.
- Build your first MCP server: Start with the Node.js example above. Get read and write working.
- Add governance: Layer in field-level permissions, soft deletes, and approval gates.
- Test thoroughly: Unit tests, integration tests, and manual testing in sandbox.
- Deploy and monitor: Push to production with comprehensive logging.
- Iterate: Review quarterly. Adjust permissions and thresholds as your business evolves.
For teams implementing AI automation at enterprise scale, this is foundational work. Done right, it’s the difference between AI that helps and AI that creates liability.
If you need support building, testing, or deploying your MCP server, PADISO specialises in exactly this work. We’ve shipped MCP servers for startups and enterprises, integrated them with Agentforce and Claude, and helped teams pass SOC 2 and ISO 27001 audits with agent-driven workflows. Get in touch to discuss your specific use case.