Firecrawl MCP Server

A Model Context Protocol (MCP) server implementation that integrates with Firecrawl for web scraping capabilities.

Big thanks to @vrknetha, @knacklabs for the initial implementation!

Features

  • Web scraping, batch scraping, crawling, and site mapping
  • Web search with optional content extraction
  • Structured data extraction and deep research using LLMs
  • LLMs.txt generation for a domain
  • Automatic retries with exponential backoff and credit usage monitoring
  • Support for the cloud API, self-hosted instances, and SSE local mode

Play around with our MCP Server on MCP.so's playground or on Klavis AI.

Installation

Running with npx

env FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp

Manual Installation

npm install -g firecrawl-mcp

Running on Cursor

Configuring Cursor 🖥️

Note: Requires Cursor version 0.45.6+

For the most up-to-date configuration instructions, please refer to the official Cursor documentation on configuring MCP servers: Cursor MCP Server Configuration Guide.

To configure Firecrawl MCP in Cursor v0.48.6

  1. Open Cursor Settings
  2. Go to Features > MCP Servers
  3. Click "+ Add new global MCP server"
  4. Enter the following code:
    {
      "mcpServers": {
        "firecrawl-mcp": {
          "command": "npx",
          "args": ["-y", "firecrawl-mcp"],
          "env": {
            "FIRECRAWL_API_KEY": "YOUR-API-KEY"
          }
        }
      }
    }

To configure Firecrawl MCP in Cursor v0.45.6

  1. Open Cursor Settings
  2. Go to Features > MCP Servers
  3. Click "+ Add New MCP Server"
  4. Enter the following:
    • Name: "firecrawl-mcp" (or your preferred name)
    • Type: "command"
    • Command: env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp

If you are using Windows and are running into issues, try cmd /c "set FIRECRAWL_API_KEY=your-api-key && npx -y firecrawl-mcp"

Replace your-api-key with your Firecrawl API key. If you don't have one yet, you can create an account and get it from https://www.firecrawl.dev/app/api-keys

After adding, refresh the MCP server list to see the new tools. The Composer Agent will automatically use Firecrawl MCP when appropriate, but you can explicitly request it by describing your web scraping needs. Access the Composer via Command+L (Mac), select "Agent" next to the submit button, and enter your query.

Running on Windsurf

Add this to your ./codeium/windsurf/model_config.json:

{ "mcpServers": { "mcp-server-firecrawl": { "command": "npx", "args": ["-y", "firecrawl-mcp"], "env": { "FIRECRAWL_API_KEY": "YOUR_API_KEY" } } } }

Running with SSE Local Mode

To run the server using Server-Sent Events (SSE) locally instead of the default stdio transport:

env SSE_LOCAL=true FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp

Use the URL: http://localhost:3000/sse
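
To sanity-check that the endpoint is live, you can request the stream and inspect the response headers. A minimal TypeScript sketch (assuming the server is running locally on the default port shown above; uses Node 18+'s built-in fetch):

// sse-check.ts — minimal sketch: confirm the local SSE endpoint is reachable.
// Assumes the server was started with SSE_LOCAL=true as shown above.
async function checkSse(url: string): Promise<void> {
  const controller = new AbortController();
  const res = await fetch(url, {
    headers: { Accept: "text/event-stream" },
    signal: controller.signal,
  });
  // A healthy SSE endpoint responds 200 with a text/event-stream content type.
  console.log(res.status, res.headers.get("content-type"));
  controller.abort(); // close the open stream so the process can exit
}

checkSse("http://localhost:3000/sse").catch(console.error);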

Installing via Smithery (Legacy)

To install Firecrawl for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install @mendableai/mcp-server-firecrawl --client claude

Running on VS Code

For one-click installation, use the "Install with NPX" buttons for VS Code or VS Code Insiders on the repository page.

For manual installation, add the following JSON block to your User Settings (JSON) file in VS Code. You can do this by pressing Ctrl + Shift + P and typing Preferences: Open User Settings (JSON).

{ "mcp": { "inputs": [ { "type": "promptString", "id": "apiKey", "description": "Firecrawl API Key", "password": true } ], "servers": { "firecrawl": { "command": "npx", "args": ["-y", "firecrawl-mcp"], "env": { "FIRECRAWL_API_KEY": "${input:apiKey}" } } } } }

Optionally, you can add it to a file called .vscode/mcp.json in your workspace. This will allow you to share the configuration with others:

{ "inputs": [ { "type": "promptString", "id": "apiKey", "description": "Firecrawl API Key", "password": true } ], "servers": { "firecrawl": { "command": "npx", "args": ["-y", "firecrawl-mcp"], "env": { "FIRECRAWL_API_KEY": "${input:apiKey}" } } } }

Configuration

Environment Variables

Required for Cloud API

  • FIRECRAWL_API_KEY: Your Firecrawl API key (required when using the cloud API)

Optional Configuration

  • FIRECRAWL_API_URL: Custom API endpoint for self-hosted instances; when set, the API key is only needed if your instance requires authentication

Retry Configuration

  • FIRECRAWL_RETRY_MAX_ATTEMPTS: Maximum number of retry attempts (default: 3)
  • FIRECRAWL_RETRY_INITIAL_DELAY: Initial delay in milliseconds before the first retry (default: 1000)
  • FIRECRAWL_RETRY_MAX_DELAY: Maximum delay in milliseconds between retries (default: 10000)
  • FIRECRAWL_RETRY_BACKOFF_FACTOR: Multiplier for exponential backoff (default: 2)

Credit Usage Monitoring

  • FIRECRAWL_CREDIT_WARNING_THRESHOLD: Credit usage level that triggers a warning (default: 1000)
  • FIRECRAWL_CREDIT_CRITICAL_THRESHOLD: Credit usage level that triggers a critical alert (default: 100)

Configuration Examples

For cloud API usage with custom retry and credit monitoring:

# Required for cloud API
export FIRECRAWL_API_KEY=your-api-key

# Optional retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=5        # Increase max retry attempts
export FIRECRAWL_RETRY_INITIAL_DELAY=2000    # Start with 2s delay
export FIRECRAWL_RETRY_MAX_DELAY=30000       # Maximum 30s delay
export FIRECRAWL_RETRY_BACKOFF_FACTOR=3      # More aggressive backoff

# Optional credit monitoring
export FIRECRAWL_CREDIT_WARNING_THRESHOLD=2000    # Warning at 2000 credits
export FIRECRAWL_CREDIT_CRITICAL_THRESHOLD=500    # Critical at 500 credits

For self-hosted instance:

# Required for self-hosted
export FIRECRAWL_API_URL=https://firecrawl.your-domain.com

# Optional authentication for self-hosted
export FIRECRAWL_API_KEY=your-api-key    # If your instance requires auth

# Custom retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=10
export FIRECRAWL_RETRY_INITIAL_DELAY=500    # Start with faster retries

Usage with Claude Desktop

Add this to your claude_desktop_config.json:

{ "mcpServers": { "mcp-server-firecrawl": { "command": "npx", "args": ["-y", "firecrawl-mcp"], "env": { "FIRECRAWL_API_KEY": "YOUR_API_KEY_HERE",

    "FIRECRAWL_RETRY_MAX_ATTEMPTS": "5",
    "FIRECRAWL_RETRY_INITIAL_DELAY": "2000",
    "FIRECRAWL_RETRY_MAX_DELAY": "30000",
    "FIRECRAWL_RETRY_BACKOFF_FACTOR": "3",

    "FIRECRAWL_CREDIT_WARNING_THRESHOLD": "2000",
    "FIRECRAWL_CREDIT_CRITICAL_THRESHOLD": "500"
  }
}

} }

System Configuration

The server includes several configurable parameters that can be set via environment variables. Here are the default values if not configured:

const CONFIG = {
  retry: {
    maxAttempts: 3,      // Number of retry attempts for rate-limited requests
    initialDelay: 1000,  // Initial delay before first retry (in milliseconds)
    maxDelay: 10000,     // Maximum delay between retries (in milliseconds)
    backoffFactor: 2,    // Multiplier for exponential backoff
  },
  credit: {
    warningThreshold: 1000,  // Warn when credit usage reaches this level
    criticalThreshold: 100,  // Critical alert when credit usage reaches this level
  },
};

These configurations control:

  1. Retry Behavior
    • Automatically retries failed requests due to rate limits
    • Uses exponential backoff to avoid overwhelming the API
    • Example: With default settings (reproduced in the sketch after this list), retries will be attempted at:
      * 1st retry: 1 second delay
      * 2nd retry: 2 seconds delay
      * 3rd retry: 4 seconds delay (capped at maxDelay)
  2. Credit Usage Monitoring
    • Tracks API credit consumption for cloud API usage
    • Provides warnings at specified thresholds
    • Helps prevent unexpected service interruption
    • Example: With default settings:
      * Warning at 1000 credits remaining
      * Critical alert at 100 credits remaining
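
As a worked illustration, the delay before the nth retry is initialDelay * backoffFactor^(n-1), capped at maxDelay. A small TypeScript sketch (not the server's internal code) that reproduces the default schedule above:

// backoff.ts — sketch of the exponential backoff schedule described above.
// Mirrors the documented defaults; not the server's actual implementation.
const retry = { maxAttempts: 3, initialDelay: 1000, maxDelay: 10000, backoffFactor: 2 };

function retryDelayMs(attempt: number): number {
  // attempt is 1-based: the 1st retry waits initialDelay, then grows geometrically.
  const delay = retry.initialDelay * Math.pow(retry.backoffFactor, attempt - 1);
  return Math.min(delay, retry.maxDelay);
}

for (let attempt = 1; attempt <= retry.maxAttempts; attempt++) {
  console.log(`retry ${attempt}: ${retryDelayMs(attempt)}ms`);
}
// => retry 1: 1000ms, retry 2: 2000ms, retry 3: 4000ms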

Rate Limiting and Batch Processing

The server utilizes Firecrawl's built-in rate limiting and batch processing capabilities:

  • Automatic rate-limit handling with exponential backoff on retries
  • Parallel processing for batch scrape operations
  • Request queuing for batch and crawl jobs, tracked via job IDs

How to Choose a Tool

Use this guide to select the right tool for your task:

Quick Reference Table

| Tool             | Best for                            | Returns          |
| ---------------- | ----------------------------------- | ---------------- |
| scrape           | Single page content                 | markdown/html    |
| batch_scrape     | Multiple known URLs                 | markdown/html[]  |
| map              | Discovering URLs on a site          | URL[]            |
| crawl            | Multi-page extraction (with limits) | markdown/html[]  |
| search           | Web search for info                 | results[]        |
| extract          | Structured data from pages          | JSON             |
| deep_research    | In-depth, multi-source research     | summary, sources |
| generate_llmstxt | LLMs.txt for a domain               | text             |

Available Tools

1. Scrape Tool (firecrawl_scrape)

Scrape content from a single URL with advanced options.

Best for:

  • Single-page content extraction, when you know exactly which page contains the information

Not recommended for:

  • Multiple known URLs (use batch_scrape)
  • Discovering URLs first (use map)
  • Structured data extraction (use extract)

Common mistakes:

  • Calling scrape repeatedly over a list of known URLs instead of using batch_scrape

Prompt Example:

"Get the content of the page at https://example.com."

Usage Example:

{ "name": "firecrawl_scrape", "arguments": { "url": "https://example.com", "formats": ["markdown"], "onlyMainContent": true, "waitFor": 1000, "timeout": 30000, "mobile": false, "includeTags": ["article", "main"], "excludeTags": ["nav", "footer"], "skipTlsVerification": false } }

Returns:

  • The page content as markdown and/or HTML, per the requested formats
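
Outside of an IDE integration, these tools can be invoked from any MCP client. A minimal TypeScript sketch using the MCP SDK (@modelcontextprotocol/sdk); the import paths and client options follow the SDK's documented layout, so verify them against your installed version:

// client.ts — sketch: invoke firecrawl_scrape through the MCP TypeScript SDK.
// Runs as an ES module (top-level await); verify import paths against your SDK version.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "firecrawl-mcp"],
  env: { FIRECRAWL_API_KEY: "fc-YOUR_API_KEY" }, // same key as in the configs above
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

const result = await client.callTool({
  name: "firecrawl_scrape",
  arguments: { url: "https://example.com", formats: ["markdown"] },
});
console.log(result);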

2. Batch Scrape Tool (firecrawl_batch_scrape)

Scrape multiple URLs efficiently with built-in rate limiting and parallel processing.

Best for:

  • Retrieving content from multiple known URLs

Not recommended for:

  • Discovering URLs first (use map)
  • Whole-site extraction (use crawl)

Common mistakes:

  • Queuing very large URL lists at once, which can hit rate or token limits

Prompt Example:

"Get the content of these three blog posts: [url1, url2, url3]."

Usage Example:

{ "name": "firecrawl_batch_scrape", "arguments": { "urls": ["https://example1.com", "https://example2.com"], "options": { "formats": ["markdown"], "onlyMainContent": true } } }

Returns:

{ "content": [ { "type": "text", "text": "Batch operation queued with ID: batch_1. Use firecrawl_check_batch_status to check progress." } ], "isError": false }

3. Check Batch Status (firecrawl_check_batch_status)

Check the status of a batch operation.

{ "name": "firecrawl_check_batch_status", "arguments": { "id": "batch_1" } }

4. Map Tool (firecrawl_map)

Map a website to discover all indexed URLs on the site.

Best for:

  • Discovering which URLs exist on a site before deciding what to scrape
  • Finding specific sections of a website

Not recommended for:

  • Extracting page content (map returns URLs only; follow up with batch_scrape)

Common mistakes:

  • Using map when you already know the exact URL you need (use scrape)

Prompt Example:

"List all URLs on example.com."

Usage Example:

{ "name": "firecrawl_map", "arguments": { "url": "https://example.com" } }

Returns:

  • An array of URLs discovered on the site

5. Search Tool (firecrawl_search)

Search the web and optionally extract content from search results.

Best for:

  • Finding information across the web when you don't know which site has it

Not recommended for:

  • Extracting content from a known URL (use scrape)

Common mistakes:

  • Using search when the target page is already known

Prompt Example:

"Find the latest research papers on AI published in 2023."

Usage Example:

{
  "name": "firecrawl_search",
  "arguments": {
    "query": "latest AI research papers 2023",
    "limit": 5,
    "lang": "en",
    "country": "us",
    "scrapeOptions": {
      "formats": ["markdown"],
      "onlyMainContent": true
    }
  }
}

Returns:

  • An array of search results, each optionally including scraped page content when scrapeOptions is set

6. Crawl Tool (firecrawl_crawl)

Starts an asynchronous crawl job on a website and extracts content from all pages.

Best for:

  • Extracting content from multiple related pages of a site, with explicit depth and page limits

Not recommended for:

  • Unbounded crawls of large sites (use map + batch_scrape for better control)

Warning: Crawl responses can be very large and may exceed token limits. Limit the crawl depth and number of pages, or use map + batch_scrape for better control.

Common mistakes:

  • Setting limit or maxDepth too high, producing responses that exceed token limits

Prompt Example:

"Get all blog posts from the first two levels of example.com/blog."

Usage Example:

{ "name": "firecrawl_crawl", "arguments": { "url": "https://example.com/blog/*", "maxDepth": 2, "limit": 100, "allowExternalLinks": false, "deduplicateSimilarURLs": true } }

Returns:

{ "content": [ { "type": "text", "text": "Started crawl for: https://example.com/* with job ID: 550e8400-e29b-41d4-a716-446655440000. Use firecrawl_check_crawl_status to check progress." } ], "isError": false }

7. Check Crawl Status (firecrawl_check_crawl_status)

Check the status of a crawl job.

{ "name": "firecrawl_check_crawl_status", "arguments": { "id": "550e8400-e29b-41d4-a716-446655440000" } }

Returns:

  • The job status and, once complete, the content scraped from the crawled pages

8. Extract Tool (firecrawl_extract)

Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.

Best for:

  • Extracting structured data (such as product details) from one or more pages

Not recommended for:

  • Raw page content (use scrape or batch_scrape)

Arguments:

  • urls: Array of URLs to extract information from
  • prompt: Custom prompt for the LLM extraction
  • systemPrompt: System prompt to guide the LLM
  • schema: JSON schema for the structured output
  • allowExternalLinks: Allow extraction to follow external links
  • enableWebSearch: Enable web search for additional context
  • includeSubdomains: Include subdomains in the extraction

When using a self-hosted instance, the extraction will use your configured LLM. For the cloud API, it uses Firecrawl's managed LLM service.

Prompt Example:

"Extract the product name, price, and description from these product pages."

Usage Example:

{ "name": "firecrawl_extract", "arguments": { "urls": ["https://example.com/page1", "https://example.com/page2"], "prompt": "Extract product information including name, price, and description", "systemPrompt": "You are a helpful assistant that extracts product information", "schema": { "type": "object", "properties": { "name": { "type": "string" }, "price": { "type": "number" }, "description": { "type": "string" } }, "required": ["name", "price"] }, "allowExternalLinks": false, "enableWebSearch": false, "includeSubdomains": false } }

Returns:

{ "content": [ { "type": "text", "text": { "name": "Example Product", "price": 99.99, "description": "This is an example product description" } } ], "isError": false }

9. Deep Research Tool (firecrawl_deep_research)

Conduct deep web research on a query using intelligent crawling, search, and LLM analysis.

Best for:

  • Complex, open-ended questions that require gathering and synthesizing multiple sources

Not recommended for:

  • Simple lookups that a single scrape or search can answer

Arguments:

  • query: The research question or topic
  • maxDepth: Maximum depth of research iterations
  • timeLimit: Time limit for the research in seconds
  • maxUrls: Maximum number of URLs to analyze

Prompt Example:

"Research the environmental impact of electric vehicles versus gasoline vehicles."

Usage Example:

{ "name": "firecrawl_deep_research", "arguments": { "query": "What are the environmental impacts of electric vehicles compared to gasoline vehicles?", "maxDepth": 3, "timeLimit": 120, "maxUrls": 50 } }

Returns:

  • A final analysis synthesized by an LLM from the gathered material, along with the sources used

10. Generate LLMs.txt Tool (firecrawl_generate_llmstxt)

Generate a standardized llms.txt (and optionally llms-full.txt) file for a given domain. This file defines how large language models should interact with the site.

Best for:

  • Creating a standardized llms.txt that defines how LLMs should interact with a domain

Not recommended for:

  • General content extraction (use scrape or crawl)

Arguments:

  • url: The site to analyze
  • maxUrls: Maximum number of URLs to include
  • showFullText: Also generate llms-full.txt contents

Prompt Example:

"Generate an LLMs.txt file for example.com."

Usage Example:

{ "name": "firecrawl_generate_llmstxt", "arguments": { "url": "https://example.com", "maxUrls": 20, "showFullText": true } }

Returns:

  • The generated llms.txt contents, plus llms-full.txt when showFullText is true

Logging System

The server includes comprehensive logging:

  • Operation status and progress (server startup, scrape starts, batch queueing)
  • Credit usage warnings when thresholds are reached
  • Rate-limit and retry events
  • Errors

Example log messages:

[INFO] Firecrawl MCP Server initialized successfully
[INFO] Starting scrape for URL: https://example.com
[INFO] Batch operation queued with ID: batch_1
[WARNING] Credit usage has reached warning threshold
[ERROR] Rate limit exceeded, retrying in 2s...

Error Handling

The server provides robust error handling:

  • Automatic retries with exponential backoff for rate-limited requests
  • Credit threshold warnings before service interruption
  • Errors returned to the client as text content with isError set to true

Example error response:

{ "content": [ { "type": "text", "text": "Error: Rate limit exceeded. Retrying in 2 seconds..." } ], "isError": true }

Development

# Install dependencies
npm install

# Build
npm run build

# Run tests
npm test

Contributing

  1. Fork the repository
  2. Create your feature branch
  3. Run tests: npm test
  4. Submit a pull request

Thanks to contributors

Thanks to @vrknetha, @cawstudios for the initial implementation!

Thanks to MCP.so and Klavis AI for hosting and @gstarwd, @xiangkaiz and @zihaolin96 for integrating our server.

License

MIT License - see LICENSE file for details