
ComfyUI Node: FramePack Sampler (HY)

Class Name: FramePackDiffusersSampler_HY
Category: FramePack
Author: CY-CHENYUE (account age: 737 days)
Extension: ComfyUI-FramePack-HY
Last Updated: 2025-05-08
GitHub Stars: 0.02K

How to Install ComfyUI-FramePack-HY

Install this extension via the ComfyUI Manager by searching for ComfyUI-FramePack-HY
  • 1. Click the Manager button in the main menu
  • 2. Select Custom Nodes Manager button
  • 3. Enter ComfyUI-FramePack-HY in the search bar
  • 4. Click Install on the search result
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.


FramePack Sampler (HY) Description

Samples video with diffusion models in the HunyuanVideo latent format, improving the quality and smoothness of generated sequences while keeping memory use efficient.

FramePack Sampler (HY):

The FramePackDiffusersSampler_HY node drives the sampling stage of video generation with diffusion models. It uses the HunyuanVideo latent format, which keeps a consistent number of latent channels across frames, making it well suited to producing smooth, coherent video sequences. A multiplier applied to the sampling settings and careful memory handling keep the process both effective and resource-conscious: the node actively clears GPU memory and offloads the transformer when it is not needed, which is important for maintaining performance during intensive runs.
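The memory-management behavior described above can be pictured with a short sketch; the function below is illustrative only (the node's internal names are not exposed in this documentation), but it shows the standard PyTorch pattern of offloading a model and releasing cached GPU memory:

```python
import gc
import torch

def offload_and_clear(transformer, offload_device: str = "cpu"):
    """Illustrative sketch: move the transformer off the GPU and release cached memory."""
    # Move the heavy transformer weights to the offload device (typically system RAM).
    transformer.to(offload_device)
    # Drop lingering Python references that may still pin GPU tensors.
    gc.collect()
    # Hand cached allocator blocks back to the CUDA driver so other models can use them.
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```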

FramePack Sampler (HY) Input Parameters:

video_fps

This parameter represents the frames per second (fps) of the video, which is typically obtained from the CreateKeyframes node. It has the highest priority in determining the temporal resolution of the generated video. The fps value directly influences the smoothness and speed of the video playback, with higher values resulting in smoother motion.
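As a quick illustration of how the fps value translates into sampled frames (the clip duration itself is assumed to come from elsewhere in the workflow, e.g. the keyframe setup):

```python
def frame_count(duration_seconds: float, video_fps: int) -> int:
    """Frames required for a clip of the given duration at the given fps."""
    return round(duration_seconds * video_fps)

# A 5-second clip needs 120 frames at 24 fps and 150 frames at 30 fps;
# the 30 fps version plays back more smoothly but takes longer to sample.
print(frame_count(5.0, 24), frame_count(5.0, 30))  # 120 150
```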

shift

The shift parameter affects the amplitude of motion within the video. It is a floating-point value with a default of 0.0, a minimum of 0.0, and a maximum of 10.0, adjustable in steps of 0.1. A higher shift value results in more pronounced movements, allowing for dynamic and expressive video content.
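The default, minimum, maximum, and step quoted here follow ComfyUI's standard float widget declaration. A minimal sketch (the class name is hypothetical; only the widget options mirror the documented values, and the other float parameters below are declared the same way):

```python
class FramePackSamplerSketch:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                # default/min/max/step mirror the documented range for shift.
                "shift": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 10.0, "step": 0.1}),
            },
        }
```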

use_teacache

This boolean parameter, defaulting to True, determines whether the teacache is used to accelerate the sampling process. Enabling teacache can significantly reduce computation time by caching intermediate results, thus speeding up the overall video generation workflow.

teacache_thresh

The teacache_thresh parameter is a floating-point value that sets the relative L1 loss threshold for the teacache. It has a default value of 0.15, with a range from 0.0 to 1.0 and adjustable in steps of 0.01. This threshold determines the sensitivity of the cache to changes in the input, influencing the balance between speed and accuracy.
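The caching decision itself happens inside the model wrapper, but the idea behind a relative L1 threshold can be sketched as follows; the accumulation strategy and names here are assumptions rather than the node's exact implementation:

```python
import torch

def should_reuse_cache(curr_inp: torch.Tensor, prev_inp: torch.Tensor,
                       accumulated: float, thresh: float = 0.15):
    """Decide whether cached transformer outputs can be reused for this step.

    The relative L1 distance measures how much the input changed since the last
    fully computed step; while the accumulated change stays below `thresh`,
    the expensive forward pass is skipped in favor of the cached result.
    """
    rel_l1 = ((curr_inp - prev_inp).abs().mean() / prev_inp.abs().mean()).item()
    accumulated += rel_l1
    if accumulated < thresh:
        return True, accumulated   # reuse the cache and keep accumulating drift
    return False, 0.0              # recompute this step and reset the accumulator
```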

denoise_strength

Although currently unused, the denoise_strength parameter is intended for future use in controlling the denoising intensity in Image-to-Video (I2V) mode. It is a floating-point value with a default of 1.0, ranging from 0.0 to 1.0, adjustable in steps of 0.01. This parameter will eventually help in refining the clarity and quality of the generated video frames.

window_size

An optional parameter that specifies the size of the sampling context window. If not connected, a default value is used. This parameter controls the length of historical information considered by the model during generation, impacting the coherence and continuity of the video.
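What a sampling context window means in practice can be sketched like this; the tensor layout and names are assumptions used only for illustration:

```python
import torch

def context_window(history_latents: torch.Tensor, window_size: int) -> torch.Tensor:
    """Keep only the most recent `window_size` latent frames as conditioning context.

    history_latents is assumed to be laid out as (batch, channels, frames, height, width).
    A larger window gives the model more temporal history to stay consistent with,
    at the cost of extra memory and compute per sampling step.
    """
    return history_latents[:, :, -window_size:]
```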

FramePack Sampler (HY) Output Parameters:

LATENT

The output of the FramePackDiffusersSampler_HY node is a latent representation of the video, which encapsulates the essential features and dynamics of the generated content. This latent output is crucial for further processing or rendering into a final video format, as it contains the encoded information necessary for reconstructing the video frames with the desired characteristics and quality.
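In ComfyUI, a LATENT output is a dictionary carrying the raw tensor under the "samples" key; for video, the tensor has an extra frame dimension. A hedged sketch (the shape below is an arbitrary example; the actual channel count and compression depend on the VAE, with HunyuanVideo-style VAEs using 16 latent channels):

```python
import torch

# Illustrative video latent laid out as (batch, channels, frames, height, width).
# 16 channels matches HunyuanVideo-style VAEs; the other dimensions are arbitrary here.
samples = torch.zeros(1, 16, 33, 64, 64)

# The sampler's LATENT output follows ComfyUI's standard format and can be passed
# on to a matching VAE Decode node to reconstruct the final video frames.
latent_output = {"samples": samples}
```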

FramePack Sampler (HY) Usage Tips:

  • To achieve smoother video transitions, ensure that the video_fps parameter is set to a higher value, which will enhance the fluidity of motion between frames.
  • Utilize the shift parameter to introduce dynamic movements in your video. Experiment with different values to find the right balance between subtlety and expressiveness.
  • Enable use_teacache to speed up the sampling process, especially when working with large video projects, as it can significantly reduce computation time.

FramePack Sampler (HY) Common Errors and Solutions:

"无法导入关键帧辅助节点"

  • Explanation: This error indicates that the keyframe helper nodes could not be imported, usually because of missing files or incorrect import paths; a sketch of the import pattern behind the message follows below.
  • Solution: Ensure that all necessary files are present in the correct directories and that the paths specified in the code are accurate. Reinstalling or updating the node package might also resolve the issue.
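A minimal sketch of that guarded import (the module and class names here are hypothetical, shown only to illustrate the failure mode):

```python
try:
    # Hypothetical module name; the real extension defines its own keyframe helpers.
    from .keyframe_nodes import CreateKeyframes
except ImportError as e:
    # A missing file or a wrong package path lands here; reinstalling or updating
    # the extension restores the module and makes the import succeed again.
    raise RuntimeError(
        f"无法导入关键帧辅助节点 (failed to import the keyframe helper nodes): {e}"
    )
```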

"卸载transformer时出错"

  • Explanation: This error occurs when there is a problem offloading the transformer to a specified device, possibly due to device compatibility or resource availability issues.
  • Solution: Check the device configuration and make sure the target device has sufficient free resources. Verify that the device is correctly specified and accessible by the system; a sketch of a guarded offload of this kind follows below.
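A minimal sketch of such a guarded offload, assuming a PyTorch transformer module (the helper name and printed message are illustrative):

```python
import torch

def safe_offload(transformer, offload_device: str = "cpu"):
    """Try to move the transformer to the offload device without crashing the workflow."""
    try:
        transformer.to(offload_device)
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
    except (RuntimeError, ValueError) as e:
        # Typical causes: an invalid or unavailable device string, or not enough free
        # memory on the target device to receive the transformer weights.
        print(f"卸载transformer时出错 (error while offloading the transformer): {e}")
```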

FramePack Sampler (HY) Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI-FramePack-HY