Pixelle-MCP: An Open-Source Multimodal AIGC Solution based on ComfyUI + MCP + LLM (https://pixelle.ai)

✨ An AIGC solution based on the MCP protocol, supporting both local ComfyUI and cloud ComfyUI (RunningHub) modes, seamlessly converting workflows into MCP tools with zero code.

Pixelle MCP adopts a unified architecture, integrating the MCP server, web interface, and file services into a single application.

🏃‍♂️ Quick Start

Choose the deployment method that best suits your needs, from simple to complex:

🎯 Method 1: One-click Experience

💡 Zero configuration startup, perfect for quick experience and testing

🚀 Temporary Run

First, install uv.

Start with a single command; no system-wide installation required:

uvx pixelle@latest

📚 View uvx CLI Reference →

📦 Persistent Installation

Requires a Python 3.11 environment.

Install to your system:

pip install -U pixelle

Start the service:

pixelle

📚 View pip CLI Reference →

After startup, a configuration wizard launches automatically to guide you through execution engine selection (ComfyUI/RunningHub) and LLM configuration.

🛠️ Method 2: Local Development Deployment

💡 Supports custom workflows and secondary development

📥 1. Get Source Code

git clone https://github.com/AIDC-AI/Pixelle-MCP.git
cd Pixelle-MCP

🚀 2. Start Service

Interactive mode (recommended)

uv run pixelle

📚 View Complete CLI Reference →

🔧 3. Add Custom Workflows (Optional)

Copy example workflows to data directory (run this in your desired project directory)

cp -r workflows/* ./data/custom_workflows/

⚠️ Important: Make sure to test workflows in ComfyUI first to ensure they run properly, otherwise execution will fail.

🐳 Method 3: Docker Deployment

💡 Suitable for production environments and containerized deployment

📋 1. Prepare Configuration

git clone https://github.com/AIDC-AI/Pixelle-MCP.git
cd Pixelle-MCP

Create environment configuration file

cp .env.example .env

Edit .env file to configure your ComfyUI address and LLM settings
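As a rough sketch, a minimal `.env` might look like the following. The variable names below are illustrative assumptions, not confirmed keys; consult `.env.example` in the repository for the actual names.

```shell
# Hypothetical keys -- check .env.example for the real variable names.
# Address of your local ComfyUI instance
COMFYUI_BASE_URL=http://127.0.0.1:8188
# LLM provider settings (at least one provider is required)
OPENAI_API_KEY=sk-...
OPENAI_BASE_URL=https://api.openai.com/v1
# Service port (PORT is documented; default is 9004)
PORT=9004
```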

🚀 2. Start Container

Start all services in background

docker compose up -d

View logs

docker compose logs -f

🌐 Access Services

Regardless of which deployment method you use, you can access the service after startup.

💡 Port Configuration: The default port is 9004; it can be customized via the environment variable PORT=your_port.
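For example, using standard shell environment-variable syntax, you could start the service on a different port like so (8080 is an arbitrary choice):

```shell
# Run Pixelle on port 8080 instead of the default 9004
PORT=8080 uvx pixelle@latest
```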

⚙️ Initial Configuration

On first startup, the system will automatically detect configuration status:

  1. 🚀 Execution Engine Selection: Choose between local ComfyUI or RunningHub cloud service
  2. 🤖 LLM Configuration: Configure at least one LLM provider (OpenAI, Ollama, etc.)
  3. 📁 Workflow Directory: System will automatically create necessary directory structure

🌐 RunningHub Cloud Mode Advantages

🏠 Local ComfyUI Mode Advantages

🆘 Need Help? Join community groups for support (see Community section below)

🛠️ Add Your Own MCP Tool

⚡ One workflow = One MCP Tool, supports two addition methods:

📋 Method 1: Local ComfyUI Workflow: export API-format workflow files
📋 Method 2: RunningHub Workflow ID: use cloud workflow IDs directly

🎯 1. Add the Simplest MCP Tool

🔌 2. Add a Complex MCP Tool

The steps are the same as above; only the workflow itself differs (download the workflow in UI format or API format).

Note: When using RunningHub, you only need to provide the corresponding workflow ID; there is no need to download and upload workflow files.

🔧 ComfyUI Workflow Custom Specification

🎨 Workflow Format

The system supports ComfyUI workflows. Just design your workflow in the canvas and export it as API format. Use special syntax in node titles to define parameters and outputs.

📝 Parameter Definition Specification

In the ComfyUI canvas, double-click the node title to edit, and use the following DSL syntax to define parameters:

$<param_name>.[~]<field_name>[!][:<description>]
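For instance, following the syntax above (the parameter and field names here are made up for illustration), a text-input node's text field could be exposed as a required parameter by setting the node title to:

```
$prompt.text!:The text prompt used for generation
```

An optional URL-uploaded field might instead be titled `$image_url.~image:URL of the input image`.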

🔍 Syntax Explanation:

  * $<param_name>: the name of the MCP tool parameter
  * <field_name>: the node field the parameter maps to
  * ~ (optional): marks the field for URL upload processing
  * ! (optional): marks the parameter as required; parameters without ! are optional and must have a default value set in the node
  * :<description> (optional): a human-readable description of the parameter

💡 Example:

Required parameter example:

URL upload processing example:

📝 Note: LoadImage, VHS_LoadAudioUpload, VHS_LoadVideo, and similar nodes have this functionality built in, so there is no need to add the ~ marker.

🎯 Type Inference Rules

The system automatically infers parameter types based on the current value of the node field:

📤 Output Definition Specification

🤖 Method 1: Auto-detect Output Nodes

The system will automatically detect the following common output nodes:

🎯 Method 2: Manual Output Marking

Usually used for multiple outputs. Use $output.var_name in any node title to mark an output:
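For example (the variable names are illustrative), a workflow producing two outputs could mark them by titling one node and another as follows:

```
$output.final_image
$output.preview_video
```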

📄 Tool Description Configuration (Optional)

You can add a node titled MCP in the workflow to provide a tool description:

  1. Add a String (Multiline) or similar text node (must have a single string property, and the node field should be one of: value, text, string)
  2. Set the node title to: MCP
  3. Enter a detailed tool description in the value field

⚠️ Important Notes

  1. 🔒 Parameter Validation: Optional parameters (without !) must have default values set in the node
  2. 🔗 Node Connections: Fields already connected to other nodes will not be parsed as parameters
  3. 🏷️ Tool Naming: Exported file name will be used as the tool name, use meaningful English names
  4. 📋 Detailed Descriptions: Provide detailed parameter descriptions for better user experience
  5. 🎯 Export Format: Must export as API format, do not export as UI format

💬 Community

Scan the QR codes below to join our communities for latest updates and technical support:

Discord Community | WeChat Group

🤝 How to Contribute

We welcome all forms of contribution! Whether you're a developer, designer, or user, you can participate in the project in the following ways:

🐛 Report Issues

💡 Feature Suggestions

🔧 Code Contributions

📋 Contribution Process

  1. 🍴 Fork this repo to your GitHub account
  2. 🌿 Create a feature branch: git checkout -b feature/your-feature-name
  3. 💻 Develop and add corresponding tests
  4. 📝 Commit changes: git commit -m "feat: add your feature"
  5. 📤 Push to your repo: git push origin feature/your-feature-name
  6. 🔄 Create a Pull Request to the main repo

🎨 Code Style

🧩 Contribute Workflows

🙏 Acknowledgements

❤️ Sincere thanks to the following organizations, projects, and teams for supporting the development and implementation of this project.

License

This project is released under the MIT License (see LICENSE; SPDX-License-Identifier: MIT).

⭐ Star History

Star History Chart