
ComfyUI Node: Z-Image API Config

Class Name

Z_ImageAPIConfig

Category
Z-Image
Author
Koko-boya (Account age: 2346 days)
Extension
Comfyui-Z-Image-Utilities
Last Updated
2025-12-22
GitHub Stars
0.1K

How to Install Comfyui-Z-Image-Utilities

Install this extension via the ComfyUI Manager by searching for Comfyui-Z-Image-Utilities
  1. Click the Manager button in the main menu
  2. Select Custom Nodes Manager
  3. Enter Comfyui-Z-Image-Utilities in the search bar and install it
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.


Z-Image API Config Description

Facilitates LLM API configuration in ComfyUI, supporting cloud, local, and direct model loading.

Z-Image API Config:

The Z_ImageAPIConfig node configures Large Language Model (LLM) API connections within the ComfyUI framework. It supports a range of backends: cloud services such as the OpenRouter API, local API servers such as Ollama and LM Studio, and direct model loading from HuggingFace with quantization. By consolidating these options behind one configuration interface, the node lets you connect your preferred LLM backend for image enhancement and prompt generation, while also providing smart VRAM management, model caching, and streaming support.
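
For orientation, here is a minimal sketch of how a configuration node of this kind is typically declared in ComfyUI. The option lists, the LLM_CONFIG return type, and the method names are illustrative assumptions, not the actual Z_ImageAPIConfig source:

```python
# Minimal sketch of a ComfyUI config node (names are assumptions, not the
# real Z_ImageAPIConfig source).
class Z_ImageAPIConfig:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                # Combo boxes are declared as a list of options.
                "provider": (["openrouter", "ollama", "lmstudio", "huggingface"],),
                "model": ("STRING", {"default": ""}),
                "local_endpoint": ("STRING", {"default": "http://localhost:11434/v1"}),
                "quantization": (["Q4", "Q8", "FP16"], {"default": "Q4"}),
                "device": (["auto", "cuda", "cpu", "mps"], {"default": "auto"}),
            }
        }

    RETURN_TYPES = ("LLM_CONFIG",)  # assumed custom type name
    FUNCTION = "build_config"
    CATEGORY = "Z-Image"

    def build_config(self, provider, model, local_endpoint, quantization, device):
        # Bundle all settings into one object for downstream nodes.
        return ({"provider": provider, "model": model,
                 "local_endpoint": local_endpoint,
                 "quantization": quantization, "device": device},)
```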

Z-Image API Config Input Parameters:

provider

The provider parameter specifies the backend service or API that you wish to connect to for LLM processing. This could be a cloud-based service like OpenRouter or a local server such as Ollama. Selecting the appropriate provider is crucial as it determines the source of the LLM capabilities and can impact the performance and features available for your image processing tasks.

model

The model parameter specifies the particular LLM model to use with the chosen provider, whether a model hosted on a local server or one available through a cloud service. The choice of model affects the quality and type of image enhancements or prompt generations you can achieve, so select one that matches your needs.

local_endpoint

The local_endpoint parameter is used to define the URL of the local server if you are using a local API for LLM processing. This is particularly useful when you have a custom setup or are using a local instance of a service like LM Studio. The default value is http://localhost:11434/v1, but it can be adjusted to match your local server configuration.
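
As a sanity check that a local endpoint is wired correctly, the same URL can be exercised directly over HTTP. Ollama and LM Studio both expose an OpenAI-compatible API under /v1, so a plain chat-completions request works; the model name below is an example and must be one you have available locally:

```python
import requests

# POST a chat request to the node's default local endpoint. The path
# /v1/chat/completions is part of the OpenAI-compatible API that Ollama
# and LM Studio both serve.
resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "llama3.1:8b",  # example; substitute a model you have pulled
        "messages": [{"role": "user", "content": "Describe a sunset in one line."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```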

quantization

The quantization parameter determines the bit-width at which the model's weights are loaded, which affects both performance and memory usage. Lower levels such as the default Q4 (4-bit) reduce VRAM usage and can improve processing speed on systems with limited resources, while higher levels such as Q8 (8-bit) preserve more model quality at the cost of additional memory. See the back-of-envelope estimate below.
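
A rough estimate makes the trade-off concrete: weight memory is approximately parameter count times bits per weight divided by 8. This excludes activations and the KV cache, so treat the figures as lower bounds:

```python
# Back-of-envelope VRAM needed just for model weights at each level:
# bytes ≈ parameter_count × bits_per_weight / 8.
def weight_vram_gb(params_billion: float, bits: int) -> float:
    return params_billion * 1e9 * bits / 8 / 1024**3

for level, bits in [("Q4", 4), ("Q8", 8), ("FP16", 16)]:
    print(f"7B model at {level}: ~{weight_vram_gb(7, bits):.1f} GB")
# -> Q4 ≈ 3.3 GB, Q8 ≈ 6.5 GB, FP16 ≈ 13.0 GB
```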

device

The device parameter specifies the hardware device on which the model will be loaded and executed. Options include auto, cuda, cpu, and mps, with auto being the default setting. This parameter is important for optimizing performance based on your system's capabilities, as it allows you to leverage GPU acceleration if available.
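
A common way to implement the auto setting is to probe for CUDA first, then Apple's MPS backend, then fall back to CPU. The sketch below follows that convention; the node's exact logic may differ:

```python
import torch

def resolve_device(choice: str = "auto") -> str:
    # Honor an explicit choice; otherwise prefer CUDA, then MPS, then CPU.
    if choice != "auto":
        return choice
    if torch.cuda.is_available():
        return "cuda"
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"

print(resolve_device())
```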

Z-Image API Config Output Parameters:

config

The config output parameter provides a comprehensive configuration object that encapsulates all the settings and options specified through the input parameters. This configuration is essential for establishing and managing the connection to the LLM API, ensuring that all specified preferences and settings are applied correctly for subsequent image processing tasks.
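
As an illustration of what downstream nodes might read from this output, the sketch below bundles the input parameters into a plain dictionary and derives a base URL from it. The key names mirror the inputs above but are assumptions, not the node's actual schema:

```python
# Hypothetical shape of the config object; the real schema may differ.
config = {
    "provider": "ollama",
    "model": "llama3.1:8b",
    "local_endpoint": "http://localhost:11434/v1",
    "quantization": "Q4",
    "device": "auto",
}

def resolve_base_url(cfg: dict) -> str:
    # Local providers use the user-supplied endpoint; cloud providers use
    # their own base URL (OpenRouter's shown as an example).
    if cfg["provider"] in ("ollama", "lmstudio"):
        return cfg["local_endpoint"]
    return "https://openrouter.ai/api/v1"

print(resolve_base_url(config))  # -> http://localhost:11434/v1
```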

Z-Image API Config Usage Tips:

  • Ensure that the provider and model parameters are correctly set to match your desired LLM backend and model, as this will directly impact the capabilities and performance of your image processing tasks.
  • Utilize the quantization parameter to manage VRAM usage effectively, especially if you are working on a system with limited resources. Adjusting the quantization level can help balance performance and memory consumption.
  • If you are using a local server, double-check the local_endpoint parameter to ensure it matches your server's URL, as incorrect settings can lead to connection issues.

Z-Image API Config Common Errors and Solutions:

"Connection failed to the specified local endpoint"

  • Explanation: This error occurs when the local_endpoint URL is incorrect or the local server is not running.
  • Solution: Verify that the local server is running and that the local_endpoint URL matches your server's address. A minimal reachability check is sketched below.
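
A minimal reachability check, assuming an OpenAI-compatible server (Ollama and LM Studio both answer GET /v1/models):

```python
import requests

endpoint = "http://localhost:11434/v1"  # the node's documented default
try:
    # A 200 from /v1/models means the server is up and the path is right.
    r = requests.get(f"{endpoint}/models", timeout=5)
    r.raise_for_status()
    print("Server reachable; models:", [m["id"] for m in r.json()["data"]])
except requests.RequestException as exc:
    print("Server unreachable:", exc)
```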

"Model not found on the specified provider"

  • Explanation: This error indicates that the specified model is not available on the chosen provider.
  • Solution: Double-check the model name and confirm it is offered by the selected provider (a lookup sketch follows below). You may need to select a different model or provider.
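
For OpenRouter in particular, its public model index can be queried to confirm an exact model ID before entering it into the node. The model ID below is only an example:

```python
import requests

# Fetch OpenRouter's public model list and check for an exact ID match.
resp = requests.get("https://openrouter.ai/api/v1/models", timeout=30)
resp.raise_for_status()
available = {m["id"] for m in resp.json()["data"]}
print("meta-llama/llama-3.1-8b-instruct" in available)
```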

"Insufficient VRAM for the selected quantization level"

  • Explanation: This error suggests that the current quantization setting requires more VRAM than is available on your system.
  • Solution: Lower the quantization bit-width (e.g., switch from Q8 to Q4) to reduce VRAM usage, free up GPU memory, or upgrade your hardware. The snippet below shows how to check available VRAM.
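
To see how much VRAM is actually free before picking a level, PyTorch can report free and total bytes for the current CUDA device:

```python
import torch

if torch.cuda.is_available():
    # mem_get_info returns (free, total) in bytes for the current device.
    free, total = torch.cuda.mem_get_info()
    print(f"Free VRAM: {free / 1024**3:.1f} GB of {total / 1024**3:.1f} GB")
else:
    print("No CUDA device available")
```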

Z-Image API Config Related Nodes

Go back to the extension to check out more related nodes.
Comfyui-Z-Image-Utilities