Run ComfyUI as an API

Production-grade, zero-ops, auto-scaling

ComfyUI Interface
ComfyUI API Request

How to Run a ComfyUI API

1. Build/Test in ComfyUI Cloud

Create your own ComfyUI workflow in the cloud, export the workflow's API JSON, and pick the parameters you want to adjust at run time.
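As a rough sketch of the shape of an exported API JSON and how adjustable parameters map to node inputs (the node IDs, class types, and field names below are illustrative, not taken from a real export):

```python
import json

# A tiny workflow in ComfyUI's exported API JSON format. Real exports
# contain your full node graph; these two nodes are illustrative.
workflow_api_json = """
{
  "6": {"class_type": "CLIPTextEncode", "inputs": {"text": "a mountain lake"}},
  "189": {"class_type": "LoadImage", "inputs": {"image": "base.png"}}
}
"""

workflow = json.loads(workflow_api_json)

# Pick the inputs you want to adjust at request time, keyed by node ID.
tweakable = {"6": ["text"], "189": ["image"]}

for node_id, fields in tweakable.items():
    for field in fields:
        print(f"node {node_id}: {field} = {workflow[node_id]['inputs'][field]!r}")
```

The node IDs you select here are the same keys you later target with `overrides` when calling the deployed API.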

Then use Cloud Save to bundle your nodes, models, dependencies, and runtime into a single reproducible container, ready to deploy as a production-grade ComfyUI API.

2. Deploy Workflows as an API

Choose a saved workflow, select the hardware you need, and set simple autoscaling rules. Once deployed, your ComfyUI API gets a unique deployment_id your apps can use to send requests.

Monitor performance, scale up or down as needed, and manage multiple API versions seamlessly.

3. Scale On Demand

Your ComfyUI API scales up automatically as requests come in and scales down to zero when things go quiet. No extra work required.

After deployment, use the API endpoints to submit requests, check progress, fetch results, or cancel jobs.

cURL
curl --request POST \
  --url https://api.runcomfy.net/prod/v1/deployments/{DEPLOYMENT_ID}/inference \
  --header "Authorization: Bearer YOUR_API_KEY" \
  --header "Content-Type: application/json" \
  --data '{
    "overrides": {
      "6": { "inputs": { "text": "futuristic cityscape" } },
      "189": { "inputs": { "image": "https://example.com/new-image.jpg" } }
    }
  }'
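For apps that prefer Python over shell, the same request can be assembled with the standard library. This sketch builds the request without sending it; the deployment ID and API key are placeholders you would replace with your own values.

```python
import json
import urllib.request

API_BASE = "https://api.runcomfy.net/prod/v1"
DEPLOYMENT_ID = "your-deployment-id"  # placeholder; use your real deployment_id
API_KEY = "YOUR_API_KEY"              # placeholder

# The same override payload as the cURL example: patch node inputs by node ID.
payload = {
    "overrides": {
        "6": {"inputs": {"text": "futuristic cityscape"}},
        "189": {"inputs": {"image": "https://example.com/new-image.jpg"}},
    }
}

def build_inference_request():
    """Assemble the POST request for the inference endpoint (not sent here)."""
    return urllib.request.Request(
        f"{API_BASE}/deployments/{DEPLOYMENT_ID}/inference",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_inference_request()
# Sending it would be: urllib.request.urlopen(req)
```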

The Easiest Way to Use a ComfyUI API

Hassle-Free Deployment

Launch a ComfyUI API with one click from Cloud Save. No Docker, CUDA configuration, or Kubernetes required. Everything runs with the exact nodes, models, and libraries you saved, so results are always consistent.

High-Performance GPUs

Choose the GPU power you need. From 16GB (T4/A4000) to 80GB (A100/H100) and up to 141GB (H200), run heavy models smoothly and reliably.

Scale On Demand

Your API scales up automatically for traffic bursts and down to zero when idle. Control queue size and keep-warm settings to keep latency low and costs in check.

Workflow Versioning

Update with confidence. Manage workflow versions and use rolling updates to add features or roll back without interrupting running jobs.

Real-Time Monitoring

Stay on top of performance with a live dashboard. Review request counts, queue times, cold starts, execution speed, and usage patterns to optimize your setup.

200+ Deploy-Ready Templates

Get started fast with 200+ ready-made community workflows. Explore and customize them to fit your needs, save your version to the cloud, and deploy it as your own ComfyUI API in just minutes.

From prototype to production, RunComfy makes the ComfyUI API easier than ever.

Frequently Asked Questions

What is RunComfy, and how does it differ from local ComfyUI for the ComfyUI API?

RunComfy Serverless API turns your ComfyUI workflows into production-grade ComfyUI APIs with auto-scaling and no operations needed. This lets you focus on building generative AI without infrastructure worries. Unlike local ComfyUI setups that require hardware management, CUDA setup, and ongoing monitoring, RunComfy Serverless API handles deployment, scaling, and consistency in the cloud. Your ComfyUI API runs reliably on high-performance GPUs, making it easy to go from prototype to production. For more details, please read the RunComfy Serverless API documentation.

How do I deploy a ComfyUI workflow as a ComfyUI API service?

To deploy a ComfyUI workflow as a ComfyUI API service on RunComfy, start by building it in ComfyUI Cloud and saving it along with your nodes, models, and dependencies. Then, select GPU hardware, set autoscaling rules, and deploy with a few clicks. This creates a serverless ComfyUI API that scales automatically, processes requests asynchronously, and provides endpoints for inference. You'll have a ready-to-use ComfyUI API without dealing with Docker, Kubernetes, or manual configuration; everything is reproducible and consistent.

How do I get started deploying my ComfyUI workflow as a ComfyUI API?

To deploy your ComfyUI workflow as a ComfyUI API on RunComfy, start in ComfyUI Cloud where you can easily create or edit your workflow. Once it's ready, export it as a simple API JSON file and pick the parts you want to tweak during runs, like prompts or seeds—this keeps things flexible. From there, just click Cloud Save. RunComfy takes care of the rest by bundling your workflow, nodes, models, and full setup into a ready-to-use container, so you skip all the technical headaches. Finally, deploy it by selecting your preferred GPU and basic scaling options. You'll instantly get a unique deployment ID to connect your ComfyUI API to your apps or projects. The whole thing is designed to be quick and hassle-free, letting you focus on your creative ideas while getting a scalable ComfyUI API without any DevOps work. For more details, check RunComfy Serverless API - Quickstart documentation.

How do I export a ComfyUI workflow in the ComfyUI API format?

For the latest version of ComfyUI, open the ComfyUI interface, locate the Workflow menu in the upper-left corner, and select "Export (API)" from the options. This will generate a JSON file that includes all your nodes, inputs, default values, and connections. For older versions, you need to enable dev mode in the settings (click the gear icon next to Queue Size or in the menu box, then check the "Enable Dev mode Options" box), which will make the "Save (API Format)" button appear in the menu.

What GPUs are available for the ComfyUI API, and how do I choose the right one for my workflow?

RunComfy offers a range of high-performance GPUs for your ComfyUI API deployments, with VRAM from 16GB for basic workflows to 141GB for intensive models. To choose the right one for your ComfyUI API workflow, consider your model's size and memory needs, start with around 48GB (like X-Large or X-Large Plus) for most typical tasks to ensure smooth performance, then scale up or down based on testing. Monitor usage in the dashboard to optimize. For full details, visit the RunComfy Pricing page.

Can I use custom nodes, models, or dependencies in my deployed ComfyUI API?

Yes, you can easily include custom nodes, models, or dependencies in your deployed ComfyUI API. Simply add them when saving your workflow in ComfyUI Cloud, such as custom nodes, models, or specific libraries, and they'll be bundled into the container. RunComfy automatically recreates your exact environment for consistent, reliable results every time. No extra setup is required after deployment, so you can build advanced ComfyUI APIs that fit your specific needs.

Can I deploy a ComfyUI API using RunComfy templates, and can I customize them?

Yes, RunComfy's 200+ templates let you deploy a ComfyUI API quickly, providing workflows corresponding to the latest models. Browse community workflows, fork one, tweak nodes or parameters, and save it as your own. Then deploy it as a customized ComfyUI API. All your changes stay private.

What are the API endpoints after deploying my ComfyUI API, and how do I use them?

After deploying your ComfyUI API, you have endpoints for key actions: POST to queue inferences, GET to check job status or progress, GET to retrieve results like images or videos, and POST to cancel jobs. Use your deployment_id in HTTP/REST requests, with API keys for security. This asynchronous design keeps your ComfyUI API efficient, so you can track jobs easily. For full details, visit the RunComfy Serverless API - Async Queue Endpoints documentation.
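Because the queue is asynchronous, clients typically poll the status endpoint until a job reaches a terminal state. The sketch below captures that loop with a stand-in callable instead of a real GET request; the status names are illustrative rather than taken from the API reference.

```python
import time

def poll_until_done(get_status, interval=0.5, max_polls=120):
    """Poll a job-status callable until it reports a terminal state.

    `get_status` stands in for a GET to the deployment's status endpoint
    and should return a dict like {"status": "..."}.
    """
    for _ in range(max_polls):
        job = get_status()
        if job["status"] in ("success", "failed", "cancelled"):
            return job
        time.sleep(interval)
    raise TimeoutError("job did not finish within max_polls")

# Simulated status sequence, standing in for real API responses.
states = iter([{"status": "queued"}, {"status": "running"}, {"status": "success"}])
result = poll_until_done(lambda: next(states), interval=0)
print(result["status"])  # prints "success"
```

In a real integration, `get_status` would wrap an authenticated HTTP GET using your deployment_id, and the interval would be tuned to your latency needs.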

Can I integrate the ComfyUI API with my existing tech stack?

Yes, you can easily integrate the ComfyUI API with your existing tech stack. It uses simple HTTP/REST calls and JSON data, so it works with common tools like curl, Python, or JavaScript. Check the Quickstart for ready-to-use code snippets to get started fast.

How does auto-scaling work for the ComfyUI API, and can I control it to manage costs?

Auto-scaling for your ComfyUI API increases instances during busy times and scales to zero when idle, keeping things efficient. You can set min/max instances, queue sizes, and keep-warm times to fine-tune latency and costs. You're only charged for active GPU time, with no fees for downtime. This flexible control helps you run a cost-effective ComfyUI API that matches your traffic patterns.
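As a back-of-the-envelope way to reason about the keep-warm trade-off (the setting names and the per-second rate below are hypothetical; the real knobs live in the deployment dashboard):

```python
# Hypothetical autoscaling settings; configure the real values in the
# RunComfy deployment dashboard.
scaling = {
    "min_instances": 0,        # scale to zero when idle: no idle-GPU charges
    "max_instances": 4,        # cap concurrent GPUs during traffic bursts
    "max_queue_size": 20,      # queued requests before another instance starts
    "keep_warm_seconds": 120,  # keep an instance alive to dodge cold starts
}

def keep_warm_cost(rate_per_second, bursts_per_hour, keep_warm_seconds):
    """Extra billed GPU cost per hour spent keeping instances warm."""
    return rate_per_second * bursts_per_hour * keep_warm_seconds

# Longer keep-warm lowers cold-start latency but adds billed idle seconds.
extra = keep_warm_cost(0.001, 5, scaling["keep_warm_seconds"])
```

The design choice is a latency/cost dial: a longer keep-warm window absorbs bursty traffic without cold starts, while scale-to-zero minimizes spend during quiet periods.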

How can I monitor and optimize my ComfyUI API's performance?

You can monitor your ComfyUI API with a real-time dashboard that shows request counts, queue times, cold starts, execution speeds, and usage patterns. You can also review billing data in the dashboard to track and optimize costs based on GPU time. Use these insights to adjust your GPU choice and scaling rules. This helps you keep your ComfyUI API running smoothly, fix issues fast, and manage expenses effectively.

What if I need to update my ComfyUI workflow without downtime?

To update your ComfyUI workflow without downtime, save your changes as a new version under the same name; this bundles the updates into a fresh container while keeping your live ComfyUI API running on the current version. When ready, edit the deployment to switch to the new version, which rolls out gradually: existing jobs complete on the old one, and new requests use the update. Roll back anytime by selecting a previous version. This ensures your ComfyUI API stays stable and available. For more details, refer to RunComfy Serverless API - Workflow Versions and RunComfy Serverless API - Edit a Deployment.

How is my data kept secure on RunComfy?

Your workflows run on dedicated, isolated GPUs, which guarantees complete resource separation so that no processes or memory are ever shared with other users. This ensures that your computation environment remains private and independent, providing both stability and security. Each ComfyUI execution environment, including the operating system, Python runtime, ComfyUI core, workflow definitions, models, and custom nodes, is encapsulated in its own secure cloud container. These containers are persistent, allowing your entire setup to be reliably reproduced across sessions while remaining fully private to you. Access to these environments is strictly controlled: only you can manage or expose your containerized setup, and no third party, including RunComfy, can access it unless you explicitly choose to share.

Are there limits on ComfyUI workflow complexity or ComfyUI API usage?

Most ComfyUI workflows run smoothly with the ComfyUI API. However, very large models may require GPUs with higher VRAM to avoid memory-related issues. The number of concurrent jobs you can run depends on your scaling configuration, and queue limits can be adjusted to fit your workload. For high-volume or specialized needs, enterprise support is available; please reach out to us at hi@runcomfy.com.

How does billing work for the ComfyUI API?

Billing for the ComfyUI API follows a pay-per-use model. You are only charged for the exact number of seconds your GPU is actively running, giving you full cost efficiency and flexibility. For more details, please see the RunComfy Serverless API - Billing documentation.
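Since billing is per active GPU second, estimating a batch's cost is simple multiplication. The rate below is a made-up example, not a real RunComfy price:

```python
# Hypothetical per-second rate for an example GPU tier (not a real price).
rate_per_second = 0.0005

# Active GPU seconds consumed by three example jobs.
job_seconds = [42, 51, 38]

total_seconds = sum(job_seconds)
cost = total_seconds * rate_per_second
print(f"${cost:.4f} for {total_seconds} billed seconds")
```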

What support is available if I run into issues with the ComfyUI API?

If you encounter issues while using the ComfyUI API, we recommend first checking the official documentation RunComfy Serverless API – Error Handling, which covers common error codes and troubleshooting steps. If the problem persists or you need additional assistance, you can always contact us at hi@runcomfy.com.

Do you offer services for enterprises and teams?

Yes, we provide solutions tailored for enterprises and teams. For more details and customized support, please contact us directly at hi@runcomfy.com.
RunComfy
Copyright 2025 RunComfy. All Rights Reserved.

RunComfy is the best ComfyUI platform, offering a ComfyUI online environment and services, along with ComfyUI workflows featuring stunning visuals. RunComfy also provides an AI Playground, enabling artists to harness the latest AI tools to create incredible art.