Hunyuan Video is an open-source AI video generation model developed by Tencent, boasting 13 billion parameters. It transforms detailed text prompts into high-quality videos, delivering smooth scene transitions, realistic cuts, and consistent motion. This makes Hunyuan Video ideal for crafting compelling visual narratives.
Hunyuan Video is typically used through ComfyUI (or similar interfaces) to generate videos from text (T2V) or images (I2V). RunComfy offers several workflows for this, including the Hunyuan Text-to-Video workflow, Hunyuan Image-to-Video workflow, Hunyuan Video-to-Video workflow, and Hunyuan LoRA workflows.
If you're not using ComfyUI, you can still experience Hunyuan Video effortlessly on RunComfy AI Playground, which offers a user-friendly interface—no setup required!
You can try Hunyuan Video for free on the RunComfy AI Playground, where you're given some free credits to explore Hunyuan Video tools along with other AI models and workflows.
Hunyuan video duration is determined by the "num_frames" and "frame rate" parameters, with the duration calculated as num_frames divided by frame rate. For example, if num_frames is set to 85 and the frame rate is set to 16 fps, the video will be approximately 5 seconds long.
To generate a longer video, increase the num_frames value while keeping the frame rate constant, or adjust both parameters to balance duration and smoothness. Keep in mind that longer videos require more computational resources and VRAM.
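As a quick sanity check, here is a minimal Python sketch of the duration arithmetic described above; the values are just the examples from this page, and the parameter names mirror the ComfyUI fields:

```python
def video_duration_seconds(num_frames: int, frame_rate: float) -> float:
    """Hunyuan clip length is simply num_frames divided by the frame rate."""
    return num_frames / frame_rate

print(video_duration_seconds(85, 16))   # 5.3125 -> roughly 5 seconds
print(video_duration_seconds(129, 24))  # 5.375  -> the 129-frame cap at 24 fps
print(video_duration_seconds(129, 16))  # 8.0625 -> the same cap stretched to ~8 s
```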
RunComfy provides a variety of Hunyuan Video workflows for you to explore, including Hunyuan Text-to-Video workflow, Hunyuan Image-to-Video workflow, Hunyuan Video-to-Video workflow, and Hunyuan LoRA workflows.
The maximum video length you can produce with HunyuanVideo is 129 frames. At 24 fps, this results in approximately 5 seconds of video. If you lower the frame rate to 16 fps, the maximum duration extends to approximately 8 seconds.
1. Install Hunyuan Video locally
Step 1: Install or update to the latest version of ComfyUI.
Step 2: Download the required model files (diffusion model, text encoders, VAE) from official sources such as Tencent's GitHub or Hugging Face.
Step 3: Place the downloaded files in their correct directories (refer to the installation guides for the folder structure).
Step 4: Download the Hunyuan Video workflow JSON file and load it into ComfyUI.
Step 5: Install any missing custom nodes using ComfyUI Manager if required.
Step 6: Restart ComfyUI and generate a test video to confirm everything works properly.
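If you prefer to script Step 2, here is a minimal sketch using the huggingface_hub library. The repo ID and target folder below are assumptions for illustration; verify the exact repository name and ComfyUI directory layout against Tencent's official release notes before running it.

```python
# Sketch only: fetch HunyuanVideo weights from Hugging Face with
# huggingface_hub. Both values below are assumptions; confirm the
# official repo and your ComfyUI folder layout before use.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="tencent/HunyuanVideo",               # assumed official repo
    local_dir="ComfyUI/models/diffusion_models",  # assumed target folder
)
```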
2. Use Hunyuan Video online via the RunComfy AI Playground
You can run Hunyuan Video online without installation via the RunComfy AI Playground, where you can access Hunyuan along with other AI tools.
3. Use Hunyuan Video online via RunComfy ComfyUI
For a seamless workflow experience in ComfyUI, explore the following ready-to-use workflows on RunComfy: the Hunyuan Text-to-Video, Hunyuan Image-to-Video, Hunyuan Video-to-Video, and Hunyuan LoRA workflows.
The VRAM requirements for Hunyuan AI Video vary depending on model configuration, output length, and quality. A minimum of 10–12 GB VRAM is needed for basic workflows, while 16 GB or more is recommended for smoother performance and higher-quality outputs, especially for longer videos. Exact requirements may vary based on specific settings and model variants.
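To see whether your GPU clears the 10-12 GB floor mentioned above, a quick check looks like the sketch below; it assumes an NVIDIA card and a PyTorch build with CUDA support.

```python
# Minimal sketch: report total VRAM and compare it against the
# 10-12 GB minimum cited above. Assumes PyTorch with CUDA.
import torch

if torch.cuda.is_available():
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"GPU 0 has {total_gb:.1f} GB VRAM")
    print("Meets the ~10-12 GB minimum" if total_gb >= 10 else "Below the minimum")
else:
    print("No CUDA GPU detected")
```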
Hunyuan LoRA files should be placed in the dedicated LoRA folder of your installation. In most local ComfyUI setups this is the "models/loras" subfolder of your ComfyUI directory. Placing the files there ensures that the system automatically detects and loads them.
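As a small convenience, this sketch lists the LoRA files ComfyUI would pick up. The "models/loras" path is the stock default and is an assumption here; it may differ if you have remapped paths in extra_model_paths.yaml.

```python
# Sketch: list LoRA files in ComfyUI's default folder so you can
# confirm your Hunyuan LoRA landed in the right place.
from pathlib import Path

lora_dir = Path("ComfyUI/models/loras")
for lora in sorted(lora_dir.glob("*.safetensors")):
    print("detected LoRA:", lora.name)
```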
Creating effective prompts is crucial for generating high-quality videos with Hunyuan AI. A well-crafted prompt typically includes the following elements: the main subject, the action or motion it performs, the scene or environment, the camera movement or shot type, and the overall style, lighting, and atmosphere. A worked example is sketched below.
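As a purely illustrative example (every concrete value below is made up, not an official sample), composing those elements into a single prompt string might look like this:

```python
# Hypothetical example: assembling a Hunyuan prompt from the elements
# listed above. All values are illustrative.
subject = "a red fox"
action = "leaping over a frozen stream"
scene = "in a snowy birch forest at dawn"
camera = "slow-motion tracking shot"
style = "cinematic lighting, shallow depth of field"

prompt = f"{subject} {action} {scene}, {camera}, {style}"
print(prompt)
# -> a red fox leaping over a frozen stream in a snowy birch forest at dawn,
#    slow-motion tracking shot, cinematic lighting, shallow depth of field
```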
Skyreels Hunyuan is a specialized variant of the Hunyuan video model, designed for cinematic and stylized video generation. Fine-tuned from the Hunyuan base model on over 10 million high-quality film and television clips, Skyreels excels at producing realistic human movements and expressions. You can experience Skyreels' capabilities firsthand and start creating with it on RunComfy.
Hunyuan Video is primarily a text-to-video (T2V) model developed by Tencent, designed to generate high-quality videos from textual descriptions. To expand its capabilities, Tencent introduced HunyuanVideo-I2V, an image-to-video (I2V) extension that transforms static images into dynamic videos. This extension employs a token replacement technique to effectively reconstruct and incorporate reference image information into the video generation process.
A detailed tutorial on how to use Hunyuan I2V in ComfyUI is available on RunComfy.
Hunyuan-DiT is a diffusion transformer variant focused on text-to-image generation. It shares core technology with Hunyuan Video: both rely on transformer-based diffusion to turn text or image inputs into visual outputs, giving the Hunyuan family a unified approach across image and video modalities.
Yes, Hunyuan Video supports 3D content creation. Tencent has expanded its AI capabilities by releasing tools that convert text and images into 3D visuals. These open-source models, based on Hunyuan3D-2.0 technology, can generate high-quality 3D visuals rapidly, enhancing the scope of creative projects. For a seamless experience in creating 3D content from static images, you can utilize the Hunyuan3D-2 Workflow through RunComfy’s ComfyUI platform.
You can install it locally within ComfyUI by ensuring you have the latest version of ComfyUI, then downloading the required model files and the Hunyuan3D-2 workflow JSON from Tencent’s official sources. After placing these files in their designated folders and installing any missing custom nodes via ComfyUI Manager, simply restart ComfyUI to test your setup. Alternatively, you can use the online Hunyuan3D-2 workflow at RunComfy, a hassle-free, ready-to-use solution for generating 3D assets from images. This online workflow lets you explore the full potential of Hunyuan3D-2 without the need for local installation or setup.
To run Hunyuan Video locally on your system, you'll need to download the official model weights from Tencent's GitHub repository and set them up within your local ComfyUI environment. If you're using a MacBook, ensure that your system meets the hardware and software requirements to handle the model effectively; one quick check is sketched below.
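On a MacBook, one concrete requirement check (assuming a PyTorch-based setup, as in the official repository) is whether Apple's MPS backend is available:

```python
# Sketch: check for Apple's MPS backend before attempting local
# generation on a MacBook. Assumes a PyTorch-based setup.
import torch

if torch.backends.mps.is_available():
    print("MPS available: PyTorch can use the Apple GPU.")
else:
    print("MPS unavailable: generation would fall back to the CPU (very slow).")
```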
Alternatively, you can run Hunyuan Video online without the need for installation via RunComfy AI Playground. It allows you to access Hunyuan and many other AI tools directly, offering a more convenient option if you prefer not to set up the model locally.
The Hunyuan Video wrapper is a ComfyUI node developed by kijai, enabling seamless integration of the Hunyuan Video model within ComfyUI. To generate videos using the Hunyuan Video model, you can explore various workflows, such as the Hunyuan Text-to-Video, Hunyuan Image-to-Video, Hunyuan Video-to-Video, and Hunyuan LoRA workflows.
Explore Hunyuan Video in ComfyUI with these ready-to-use workflows. Each workflow comes pre-configured and includes a detailed guide to help you get started. Simply choose the one that fits your needs: the Hunyuan Text-to-Video, Hunyuan Image-to-Video, Hunyuan Video-to-Video, or Hunyuan LoRA workflows.
RunComfy is the premier ComfyUI platform, offering an online ComfyUI environment and services, along with ComfyUI workflows featuring stunning visuals. RunComfy also provides AI Models, enabling artists to harness the latest AI tools to create incredible art.