MCP Resources vs Tools: When to Use Each
Master the MCP resources vs tools decision framework. Learn when to use each approach with real examples from D23.io, Snowflake, Salesforce, and Linear.
The Model Context Protocol (MCP) has fundamentally changed how AI agents interact with external systems. Yet many teams building agentic AI solutions still struggle with a core architectural decision: should you expose capabilities as resources or as tools?
This isn’t academic. The choice directly affects latency, cost, security posture, and whether your AI agent can actually ship to production. Get it wrong, and you’re either building fragile systems that hallucinate tool calls, or paying 10x more in API tokens than necessary.
This guide cuts through the confusion. We’ll walk you through the decision framework, show you when each approach wins, and share concrete examples from real MCP implementations—including D23.io’s Snowflake, Salesforce, and Linear servers—so you can apply this to your own architecture.
Table of Contents
- What Are MCP Resources and Tools?
- Resources: Passive Data Access for Context
- Tools: Active Functions for Agent Decisions
- The Core Tension: Token Cost vs Agent Autonomy
- Decision Framework: How to Choose
- Real-World Examples: Snowflake, Salesforce, and Linear
- Hybrid Patterns: Resources + Tools Together
- Common Mistakes and How to Avoid Them
- Implementation Checklist
- Practical Guidance for Your Architecture
- Next Steps
- Summary
What Are MCP Resources and Tools?
Before we talk strategy, let’s define terms clearly. If you’ve read the MCP Demystified guide or browsed the Awesome MCP repository, you’ve seen these concepts mentioned. But the distinction matters more than the definitions.
Resources are passive, read-only data structures. They live in MCP servers and are made available to the AI model as context. The model reads them—it doesn’t call them. Common examples include:
- A list of available database tables and their schemas
- Documentation or knowledge bases
- Configuration files or environment settings
- Historical data or reference datasets
- API endpoint definitions or capability inventories
Resources are analogous to the File System Access API in web development—they’re data you can read, not functions you can invoke.
Tools are active, executable functions. The AI model decides to call them, passes parameters, and receives results. They represent actions the agent can take. Examples include:
- Execute a SQL query against a database
- Create a new record in a CRM
- Send an email or Slack message
- Trigger a deployment pipeline
- Fetch real-time data from an API
The distinction maps cleanly to the Anthropic tool use model, where Claude decides which tools to invoke and when. MCP generalises this pattern—your AI client (Claude, GPT, or another model) receives the list of available tools and makes autonomous decisions about which to call.
Here’s the key insight: resources inform decisions; tools execute them. Resources are context. Tools are agency.
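In code, the distinction is simply data versus callable. A minimal Python sketch (the resource shape, URI, and tool body are illustrative, not taken from any real MCP server):

```python
# A resource is passive data the model reads as context.
SCHEMA_RESOURCE = {
    "uri": "db://schema/orders",   # hypothetical URI for illustration
    "mimeType": "application/json",
    "contents": {"table": "orders", "columns": ["id", "customer_id", "total"]},
}

# A tool is an executable function the model chooses to call with parameters.
def run_query(sql: str) -> list:
    """Execute a read-only query and return rows (stubbed for illustration)."""
    if not sql.strip().lower().startswith("select"):
        raise ValueError("only SELECT statements are allowed")
    return [{"id": 1, "customer_id": 42, "total": 99.5}]  # stand-in result
```

The model reads SCHEMA_RESOURCE once as context, then decides at each turn whether run_query is worth the round-trip.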
Resources: Passive Data Access for Context
Resources shine when you want to ground your AI agent in factual, static, or slowly changing information without burning tokens on every request.
Why Resources Matter
Every token costs money. When you pass data to an AI model as part of the prompt or context window, you’re paying for:
- Input tokens: The data itself, read once per request
- Output tokens: The model’s reasoning about that data
- Latency: Larger context windows = slower responses
If you expose that same data as a tool (by making the model call a function to fetch it), you’re also paying for:
- Tool invocation tokens: The function call itself
- Model reasoning: The model deciding whether to call it
- Round-trip latency: Waiting for the function to return
Resources bypass most of this. The MCP server sends them once when the client connects (or on-demand, depending on implementation). The model reads them as part of its context window. No extra function calls. No extra decision-making.
When to Use Resources
Use resources when:
- Data changes infrequently: Schema definitions, documentation, configuration, reference data
- The agent always needs this context: Every decision the agent makes is informed by this data
- Token efficiency matters: You’re running high-volume agent loops and cost is a concern
- Real-time updates aren’t critical: The data can be 5 minutes, 1 hour, or 1 day stale
- The data is read-only: The agent doesn’t need to modify it
For example, if you’re building an AI agent that queries a data warehouse, you’d expose the table schema, column names, and data types as resources. The model reads these once, understands the structure, and then uses tools to execute queries. You’re not paying for the model to re-read the schema on every query.
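That schema-as-resource pattern is cheap to build at connect time. Here is a sketch using Python's stdlib sqlite3 as a stand-in warehouse; the table, the version field, and the resource shape are illustrative:

```python
import sqlite3

def build_schema_resource(conn: sqlite3.Connection) -> dict:
    """Snapshot table names, columns, and declared types, i.e. everything
    the model needs to write queries, so it is read once, not per request."""
    tables = {}
    names = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'").fetchall()]
    for name in names:
        cols = conn.execute(f"PRAGMA table_info({name})").fetchall()
        tables[name] = {col[1]: col[2] for col in cols}  # column -> type
    return {"version": 1, "tables": tables}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")
schema = build_schema_resource(conn)
```

The snapshot becomes the resource; query execution stays a tool.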
Resource Implementation Patterns
MCP resources can be:
- Static files: Served directly from the MCP server (e.g., a JSON schema document)
- Dynamic, cached: Generated once and refreshed on a schedule (e.g., database schema pulled hourly)
- Streamed: Sent incrementally to the model if they’re large
The AWS Prescriptive Guidance on MCP strategies covers these patterns in detail. The key is that resources are declarative—the model receives them as-is and incorporates them into its reasoning.
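The "dynamic, cached" pattern amounts to a loader plus a TTL. A generic sketch (the one-hour default is an arbitrary assumption):

```python
import time

class CachedResource:
    """Regenerate a resource at most once per `ttl` seconds, so a dynamic
    source (say, an hourly schema pull) is not rebuilt on every read."""

    def __init__(self, loader, ttl: float = 3600.0):
        self._loader = loader
        self._ttl = ttl
        self._value = None
        self._loaded_at = float("-inf")   # force a load on first read

    def read(self):
        now = time.monotonic()
        if now - self._loaded_at >= self._ttl:
            self._value = self._loader()
            self._loaded_at = now
        return self._value

load_count = []
resource = CachedResource(lambda: load_count.append(1) or {"tables": ["orders"]})
first = resource.read()    # loader runs
second = resource.read()   # served from cache, loader not called again
```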
Tools: Active Functions for Agent Decisions
Tools are where agency lives. They’re the actions your AI agent can take in the world.
Why Tools Matter
Tools enable autonomous decision-making. Instead of the model saying “I think you should run this SQL query,” it runs the query itself and gets the result. This creates a feedback loop:
- Model sees available tools
- Model decides which tool to call based on the user’s request
- Tool executes and returns a result
- Model incorporates the result and decides next steps
- Loop repeats until the goal is achieved
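The steps above are mechanical enough to sketch. Here `model_step` is a stand-in for a real model call, and the step budget guards against runaway loops:

```python
def agent_loop(model_step, tools: dict, goal: str, max_steps: int = 8):
    """Run the see/decide/execute/incorporate cycle. `model_step` stands in
    for a real model call: given the transcript and available tools it
    returns either ("call", tool_name, args) or ("done", answer)."""
    transcript = [("user", goal)]
    for _ in range(max_steps):
        decision = model_step(transcript, tools)
        if decision[0] == "done":
            return decision[1]
        _, name, args = decision
        result = tools[name](**args)                # tool executes
        transcript.append(("tool", name, result))   # model sees the result
    raise RuntimeError("step budget exhausted")     # stop runaway agents
```

Driving it with a scripted stand-in model makes the feedback loop visible without any model API.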
This is fundamentally different from resources. Resources are one-way information flow. Tools enable dialogue between the model and external systems.
When to Use Tools
Use tools when:
- The agent needs to make decisions about whether to take action: “Should I execute this query? Should I create this record?”
- Real-time data is critical: The agent needs fresh information to make good decisions
- The action depends on context: Different user requests lead to different tool calls
- You need audit trails and control: Each tool invocation can be logged, approved, or rate-limited
- The data changes frequently: Caching isn’t viable; the agent needs live data
- The action has side effects: Creating records, sending messages, triggering deployments
For example, when building an AI agent for customer support, you’d expose tools like “search customer records,” “create a ticket,” and “send an email.” The model decides which to call based on the customer’s problem. You’re not pre-loading all customer records as resources—you’re letting the model decide which to fetch.
Tool Implementation Patterns
Tools in MCP follow the pattern established by OpenAI’s tools guide. Each tool has:
- Name: A clear identifier (e.g., query_database)
- Description: What it does, when to use it
- Input schema: JSON schema defining parameters
- Input schema: JSON schema defining parameters
- Implementation: The actual function that runs
The model uses the description and schema to decide whether to call the tool. You control this by writing clear, specific descriptions.
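In Python terms, the four parts are one record per tool: the description and schema are what the model sees; the implementation is what the server runs. Names here are illustrative:

```python
TOOLS = {
    "query_database": {
        # What the model sees when deciding whether to call the tool:
        "description": "Run a read-only SQL query against the warehouse. "
                       "Use only when the answer needs fresh row-level data.",
        "input_schema": {
            "type": "object",
            "properties": {"sql": {"type": "string"}},
            "required": ["sql"],
        },
        # What the server actually executes (stubbed here):
        "fn": lambda sql: {"rows": [], "sql": sql},
    },
}

def dispatch(name: str, args: dict):
    """Route a model's tool call to the registered implementation."""
    return TOOLS[name]["fn"](**args)
```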
The Core Tension: Token Cost vs Agent Autonomy
Here’s where the real trade-off lives.
Resources are cheap but passive. You pay once (or on refresh) to load them. The model reads them as context. But the model can’t decide dynamically whether to use them. It’s all-or-nothing.
Tools are expensive but active. Every tool call costs tokens. The model must reason about whether to call them. But the model gets fresh data and can make autonomous decisions.
Many teams try to optimise for token cost and end up building brittle systems. They expose everything as resources, hoping the model will “just know” what to do. But without tools, the model can’t act. It can only describe what should happen.
Conversely, some teams expose everything as tools, running up massive bills because the model is calling functions constantly—many of them unnecessary.
The winning approach is hybrid: resources for context, tools for decisions.
Here’s a concrete example. Imagine you’re building an AI agent for a sales team. You have:
- Customer database: 100,000 records, each ~2KB of data
- Deal pipeline: 5,000 active deals
- Product catalogue: 500 products
If you expose all of this as resources, you’re loading 200MB+ into the model’s context window on every request. That’s:
- Massive token cost (200MB = ~50 million tokens)
- Terrible latency (model has to parse all that data)
- Inflexible (model can’t decide what to load)
If you expose all of this as tools, the model will call them constantly:
- “Get customer by ID”
- “List deals for this customer”
- “Get product details”
- “Update deal status”
Each call costs tokens. If the model makes 10 calls per request, and you handle 1,000 requests per day, that’s 10,000 tool invocations. At scale, this becomes expensive.
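Putting rough numbers on the two extremes makes the trade-off concrete. The byte-to-token ratio and the per-token price below are placeholder assumptions, not any vendor's actual rates:

```python
PRICE_PER_MTOK = 3.0    # assumed USD per million input tokens
BYTES_PER_TOKEN = 4     # rough rule of thumb for English text

def resources_only_daily_cost(bytes_in_context: int, requests_per_day: int) -> float:
    """Everything as resources: the full dataset rides in every context window."""
    tokens_per_request = bytes_in_context / BYTES_PER_TOKEN
    return tokens_per_request * requests_per_day / 1e6 * PRICE_PER_MTOK

def tools_only_daily_cost(calls_per_request: int, tokens_per_call: int,
                          requests_per_day: int) -> float:
    """Everything as tools: pay per invocation instead of per context load."""
    tokens = calls_per_request * tokens_per_call * requests_per_day
    return tokens / 1e6 * PRICE_PER_MTOK

# The sales-agent example above: 200 MB of records vs 10 calls per request.
all_resources = resources_only_daily_cost(200_000_000, 1_000)
all_tools = tools_only_daily_cost(10, 500, 1_000)
```

Even with generous assumptions, the resources-only figure is orders of magnitude worse for this dataset, which is exactly the gap a hybrid design closes.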
The hybrid approach:
- Resources: Product catalogue (static, small, always relevant)
- Tools: Search customers (dynamic, large, called on-demand), update deal (action with side effects)
Now the model has the product data for context. It decides whether to search for a customer. If it does, it gets fresh data. It can then decide whether to update a deal. Token cost is optimised. Agent autonomy is preserved.
Decision Framework: How to Choose
Here’s the decision tree. For each piece of data or capability you’re considering exposing:
Step 1: Does the Agent Always Need This?
Yes → Consider a resource
No → Go to Step 2
If the agent needs this information for every decision, it’s a candidate for resources. Examples: product catalogue, company configuration, team member list.
If the agent only sometimes needs this, it’s a candidate for tools. Examples: customer records, historical data, real-time metrics.
Step 2: How Large Is This Data?
Small (< 10KB) → Consider a resource
Large (> 10KB) → Go to Step 3
Small datasets are cheap to include in every request. Large datasets aren’t.
Step 3: How Frequently Does It Change?
Infrequently (hours/days) → Consider a resource with caching
Frequently (minutes/seconds) → Use a tool
If data changes slowly, you can refresh the resource on a schedule. If it changes fast, the model needs to fetch it on-demand via a tool.
Step 4: Does the Agent Need to Act on This?
No (read-only) → Use a resource
Yes (create, update, delete) → Use a tool
If the agent only reads data, it can be a resource. If the agent modifies data, it must be a tool (so you can log, audit, and control the action).
Step 5: Is This Sensitive or Regulated?
Yes → Use a tool (with access control)
No → Either resource or tool based on steps 1-4
Sensitive data (PII, financial records, security configs) should always go through tools, so you can enforce access control, audit trails, and approval workflows. When building systems that need to pass SOC 2 or ISO 27001 compliance, this becomes critical. At PADISO, we help teams implement Security Audit readiness via Vanta to ensure these controls are in place before agents touch sensitive data.
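The five steps collapse into a short classifier. This encoding checks the hard disqualifiers (sensitivity, side effects, freshness, size) before the "always needed" test; the thresholds mirror the rules of thumb above:

```python
def classify(always_needed: bool, size_kb: float, change_interval_hours: float,
             agent_acts_on_it: bool, sensitive: bool) -> str:
    """Return 'resource' or 'tool' per the five-step framework."""
    if sensitive:
        return "tool"        # Step 5: gate behind access control and audit
    if agent_acts_on_it:
        return "tool"        # Step 4: side effects need logging and approval
    if change_interval_hours < 1:
        return "tool"        # Step 3: changes too fast to cache usefully
    if size_kb > 10:
        return "tool"        # Step 2: too large to carry in every context
    if always_needed:
        return "resource"    # Step 1: constant context, read once
    return "tool"            # only sometimes needed: fetch on demand
```

A small schema that changes daily classifies as a resource; large, fast-changing, sensitive customer records classify as a tool.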
Real-World Examples: Snowflake, Salesforce, and Linear
Let’s look at how real MCP implementations handle this decision.
D23.io Snowflake Server
The Snowflake MCP server (built by D23.io) exposes:
Resources:
- Database schema (tables, columns, data types)
- Warehouse configuration
- Role and permission definitions
Tools:
- Execute SQL query
- Create table
- Drop table
- Load data
The logic is clear. Schema is static (or changes on a schedule). The model reads it once as context. But executing queries is dynamic—the model decides which queries to run based on the user’s request. Creating or dropping tables are actions with side effects, so they’re tools (with audit trails).
This is the right pattern. The model has the context it needs (schema) without burning tokens on every query. It can autonomously execute queries and manage tables.
D23.io Salesforce Server
The Salesforce MCP server exposes:
Resources:
- Custom object definitions
- Field mappings
- Validation rules
- Org configuration
Tools:
- Query records (SOQL)
- Create record
- Update record
- Delete record
- Search
Again, the distinction is clean. Configuration is static (resources). Operations are dynamic (tools). The model has the context to understand the data model, then decides which records to query or modify.
This aligns with how teams building AI automation solutions structure their integrations. Static metadata as resources. Dynamic queries and mutations as tools.
D23.io Linear Server
The Linear MCP server exposes:
Resources:
- Team structure and members
- Project definitions
- Workflow states
- Custom fields
Tools:
- Create issue
- Update issue
- Search issues
- Comment on issue
- Assign issue
Linear’s schema is relatively stable (resources). But issue operations are frequent and context-dependent (tools). The model reads the team structure once, then decides which issues to create or update based on conversation.
Notice a pattern? All three implementations follow the same logic:
- Metadata and configuration → Resources
- Queries and searches → Tools
- Create/update/delete → Tools
This isn’t coincidence. It’s the optimal pattern for token cost and agent autonomy.
Hybrid Patterns: Resources + Tools Together
The most sophisticated MCP implementations combine resources and tools strategically. Here are patterns that work.
Pattern 1: Context + Action
Expose metadata as resources. Expose actions as tools.
Resources:
- Database schema
- Available tables
- Column definitions
Tools:
- Execute query
- Insert row
- Update row
The model reads the schema (context), then decides which queries to run (action). This is what the Snowflake server does.
Pattern 2: Static Reference + Dynamic Search
Expose reference data as resources. Expose searches as tools.
Resources:
- Product catalogue
- Pricing tiers
- Feature matrix
Tools:
- Search customers
- Search deals
- Get customer history
The model has product knowledge (resources), then searches for relevant customers (tools). This is common in sales and support AI agents.
Pattern 3: Configuration + Operations
Expose configuration as resources. Expose operations as tools.
Resources:
- Team members
- Project settings
- Workflow definitions
Tools:
- Create task
- Assign task
- Update status
- Comment
The model understands the team structure (resources), then creates and manages tasks (tools). This is what the Linear server does.
Pattern 4: Tiered Access
Expose read-only data as resources. Expose write operations as tools with access control.
Resources:
- Public product info
- Published documentation
- Team directory
Tools:
- Create internal record (requires auth)
- Update customer data (requires auth + audit)
- Delete record (requires approval)
This is critical when building agents that handle sensitive data. Resources are public. Tools enforce permissions. This is how teams building agentic AI solutions maintain security posture.
Common Mistakes and How to Avoid Them
Mistake 1: Treating All Data as Resources
Problem: Teams expose large datasets (customer records, transaction history, logs) as resources, hoping to avoid tool calls. Result: massive context windows, high token cost, slow responses.
Solution: Use the decision framework. If data is large and changes frequently, it should be a tool. Let the model decide when to fetch it.
Mistake 2: Treating All Capabilities as Tools
Problem: Teams expose schema, configuration, and metadata as tools. The model must call functions to understand the data structure before it can query it. Result: unnecessary round-trips, high latency, inflated token cost.
Solution: Expose static metadata as resources. The model reads it once. Then it can intelligently use tools.
Mistake 3: Confusing Resources with Tool Descriptions
Problem: Teams write detailed tool descriptions but don’t use resources for supporting context. The model must infer structure from descriptions alone.
Solution: Use resources for schema and metadata. Use tool descriptions for intent and usage. Both are needed.
Mistake 4: Not Versioning Resources
Problem: You update a resource (e.g., new database schema), but the model is still using the old version. Agents make incorrect queries or calls.
Solution: Version your resources. Include a version field. Update clients when resources change. This is especially important when building agentic AI systems in production—stale schema data can cause hallucinated queries and runtime errors.
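A version field only helps if clients actually check it. A minimal sketch of the guard (the exception name is made up for illustration):

```python
class StaleResourceError(RuntimeError):
    """Raised when a client's cached resource no longer matches the server."""

def check_resource_version(client_version: int, server_resource: dict) -> None:
    """Refuse to reason from stale context; the client should re-fetch first."""
    server_version = server_resource.get("version")
    if server_version is None:
        raise ValueError("resource has no version field; add one")
    if client_version != server_version:
        raise StaleResourceError(
            f"client has v{client_version}, server is at v{server_version}"
        )
```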
Mistake 5: Exposing Sensitive Data as Resources
Problem: You expose customer records, API keys, or financial data as resources. Every client that connects gets access. No audit trail.
Solution: Sensitive data should go through tools, with access control and logging. Resources are for public or non-sensitive data. When building systems that need SOC 2 compliance, this is non-negotiable.
Mistake 6: Forgetting Tool Descriptions
Problem: You implement tools but give them vague descriptions. The model doesn’t know when to call them. It either calls them constantly or never calls them.
Solution: Write clear, specific tool descriptions. Include:
- What the tool does
- When to use it
- What parameters it needs
- What it returns
- Any side effects or limitations
Example:
{
  "name": "query_customer_database",
  "description": "Search for customers by name, email, or ID. Use this when you need to find specific customer records. Returns customer ID, name, email, phone, company, and account status. Does not return transaction history—use 'get_customer_transactions' for that.",
  "input_schema": {
    "type": "object",
    "properties": {
      "search_term": {"type": "string"},
      "limit": {"type": "integer", "default": 10}
    }
  }
}
A good description helps the model use the tool correctly. A bad description leads to hallucinated calls and errors.
Implementation Checklist
When designing your MCP server, use this checklist:
Planning Phase
- List all data and capabilities you want to expose
- For each item, answer: Is it always needed? How large? How often does it change? Does the agent act on it? Is it sensitive?
- Classify each as a resource or tool
- Identify hybrid patterns (e.g., schema as resource, queries as tools)
- Document the reasoning for each classification
Resource Design
- Define resource structure (JSON schema)
- Set refresh strategy (static, periodic, on-demand)
- Include version information
- Write clear resource descriptions
- Consider caching and performance
- Plan for large resources (streaming, pagination)
Tool Design
- Define tool name, description, input schema, output schema
- Implement error handling (what if the tool fails?)
- Add access control (who can call this tool?)
- Implement audit logging (what calls were made, when, by whom?)
- Rate-limit if needed (prevent abuse)
- Test edge cases (empty results, large results, errors)
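The audit-logging and rate-limiting items on this list combine naturally into one wrapper. A sketch with an in-memory log and a crude fixed-window limit; a production server would persist the log and use a proper limiter:

```python
import time

def guarded(fn, audit_log: list, max_calls_per_minute: int = 30):
    """Wrap a tool so every call is logged and bursts are throttled."""
    window = []   # timestamps of recent calls

    def wrapper(**kwargs):
        now = time.monotonic()
        window[:] = [t for t in window if now - t < 60]
        if len(window) >= max_calls_per_minute:
            raise RuntimeError(f"{fn.__name__}: rate limit exceeded")
        window.append(now)
        entry = {"tool": fn.__name__, "args": kwargs, "ok": True}
        try:
            return fn(**kwargs)
        except Exception:
            entry["ok"] = False
            raise
        finally:
            audit_log.append(entry)   # log successes and failures alike

    return wrapper
```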
Security
- Identify sensitive data (PII, credentials, financial records)
- Ensure sensitive data goes through tools, not resources
- Implement authentication and authorisation
- Add audit trails for all tool calls
- Plan for compliance (SOC 2, ISO 27001, GDPR, etc.)
- Document security assumptions
Testing
- Test that models can read resources correctly
- Test that models call tools appropriately
- Test error handling (what if a tool fails?)
- Test performance (token cost, latency)
- Test security (can unauthorised users access sensitive data?)
- Test at scale (does it work with many resources/tools?)
Monitoring
- Log all tool calls (name, parameters, result, timestamp, user)
- Track token usage (how much are resources and tools costing?)
- Monitor error rates (are tools failing?)
- Alert on suspicious activity (unusual tool calls, access patterns)
- Review logs regularly (are agents using tools as expected?)
Practical Guidance for Your Architecture
If you’re building agentic AI systems, here’s how to apply this framework:
For Founders and CEOs
You’re evaluating whether to build AI agents or use a platform. Ask your technical team:
- “How will we expose our data to the AI agent? As resources or tools?”
- “What’s the token cost? Have you calculated it?”
- “What happens if the agent calls a tool incorrectly? Do we have safeguards?”
- “Can we audit and control what the agent does?”
If they can’t answer these clearly, they haven’t thought through the architecture. This is where fractional CTO support helps—experienced operators can validate the design before you build it.
For Engineers Building MCP Servers
- Start with the decision framework. Classify your data and capabilities.
- Implement resources for metadata and configuration. Test that they load correctly.
- Implement tools for actions. Write clear descriptions. Add access control.
- Test with a real AI model (Claude, GPT, etc.). See how it uses resources and tools.
- Measure token cost. Optimise if needed.
- Add monitoring and audit trails.
For Teams Modernising Existing Systems
If you’re adding AI agents to legacy systems, consider:
- Don’t expose your entire database as resources. That’s expensive and inflexible.
- Expose schema and metadata as resources. The agent needs to understand your data structure.
- Expose queries and mutations as tools. The agent decides what to fetch and modify.
- Add access control to tools. Not every agent should access every record.
- Implement audit trails. You need to know what the agent did and why.
When building platform engineering solutions that integrate AI, this pattern is standard. Resources for context. Tools for action. Access control and audit trails for safety.
Next Steps
You now understand the resources vs tools decision. Here’s how to move forward:
1. Audit Your Current MCP Implementations
If you’re already using MCP servers, review them:
- Are resources and tools classified correctly?
- Is there data that should be a resource but is a tool (and vice versa)?
- What’s the token cost? Can you optimise it?
- Is sensitive data protected?
Look at the Awesome MCP repository for examples. See how other teams structured their servers. Learn from their patterns.
2. Design Your Next MCP Server
If you’re building new integrations, use the decision framework:
- List all data and capabilities
- Classify each as resource or tool
- Design resources for metadata and configuration
- Design tools for actions and queries
- Add access control and audit trails
- Test with a real AI model
- Measure token cost and latency
- Iterate based on results
Read the AWS MCP strategies guide for deeper technical guidance.
3. Build Safely
When deploying agentic AI to production, safety is non-negotiable. This is where agentic AI production patterns matter. Common issues:
- Runaway loops: Tools calling tools calling tools, spiralling out of control
- Hallucinated calls: Model invents tool names or parameters that don’t exist
- Cost blowouts: Unexpected token usage from excessive tool calls
- Security breaches: Agents accessing data they shouldn’t
Mitigate these by:
- Rate-limiting tool calls
- Validating tool parameters before execution
- Implementing access control on sensitive tools
- Monitoring and alerting on unusual patterns
- Having a kill switch (ability to stop a runaway agent)
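Parameter validation is the cheapest of these mitigations: reject a call before it executes if the tool name, parameters, or required fields don't match the declared schema. A sketch over plain JSON-Schema-shaped dicts (the example tool is hypothetical):

```python
def validate_call(tool_schemas: dict, name: str, args: dict) -> None:
    """Fail fast on hallucinated calls: unknown tools, unknown parameters,
    and missing required parameters are all rejected before execution."""
    if name not in tool_schemas:
        raise ValueError(f"unknown tool: {name!r}")
    schema = tool_schemas[name]
    unknown = set(args) - set(schema.get("properties", {}))
    if unknown:
        raise ValueError(f"unknown parameters: {sorted(unknown)}")
    missing = set(schema.get("required", [])) - set(args)
    if missing:
        raise ValueError(f"missing parameters: {sorted(missing)}")

SCHEMAS = {
    "create_issue": {
        "properties": {"title": {"type": "string"}, "team": {"type": "string"}},
        "required": ["title"],
    }
}
```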
4. Get Expert Review
If you’re building mission-critical agentic AI systems, get a second opinion. This is where AI strategy and readiness assessments help. Experienced operators can:
- Review your resource vs tool classification
- Identify security gaps
- Estimate token cost and latency
- Suggest optimisations
- Validate your architecture before you ship
At PADISO, we work with founders and engineering teams building agentic AI solutions. We help validate architectures, implement best practices, and ship safely. If you’re building AI agents and want expert guidance, let’s talk.
5. Learn From Real Implementations
Study how real MCP servers handle this:
- D23.io Snowflake server: Database schema as resources, queries as tools
- D23.io Salesforce server: Object definitions as resources, CRUD operations as tools
- D23.io Linear server: Team structure as resources, issue operations as tools
Read their source code. Understand their design decisions. Apply those patterns to your own systems.
Summary
The resources vs tools debate ends when you understand the trade-off:
- Resources are cheap, passive, one-way. Use them for metadata, configuration, and reference data that doesn’t change often and that the agent always needs.
- Tools are active, expensive, two-way. Use them for queries, searches, and actions where the agent needs to decide dynamically and where you need audit trails.
The winning approach is hybrid: resources for context, tools for decisions.
Use the decision framework:
- Does the agent always need this? (Yes → resource)
- How large is it? (Small → resource; large → tool)
- How often does it change? (Infrequently → resource; frequently → tool)
- Does the agent act on it? (No → resource; yes → tool)
- Is it sensitive? (Yes → tool with access control)
Apply this to your MCP server design. Classify data and capabilities. Implement resources for metadata. Implement tools for actions. Add access control and audit trails. Test with real AI models. Measure token cost. Iterate.
When building agentic AI systems, this discipline matters. It’s the difference between systems that ship safely and systems that fail in production. It’s the difference between token costs that make sense and costs that spiral. It’s the difference between agents that are trustworthy and agents that are dangerous.
Get this right, and you’ve built the foundation for intelligent, autonomous systems that scale. Get it wrong, and you’ll spend months debugging hallucinated tool calls and cost overruns.
The framework is simple. The execution is where expertise lives. If you’re building agentic AI and want to get it right, that’s where PADISO’s AI strategy and engineering expertise comes in. We help teams architect, build, and deploy agentic AI systems that work.
Now go build something great.