Self Forcing is an advanced keyframe-driven video generation model. It enables smooth, high-quality video synthesis by generating motion between a start and an end keyframe, guided by descriptive text prompts.
Built upon autoregressive video diffusion architectures with KV caching, Self Forcing excels at generating temporally consistent, identity-preserving motion across frames. Its joint keyframe-text approach allows for fluid transitions while maintaining subject structure and style throughout the generated video.
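The KV caching mentioned above can be illustrated with a toy sketch (this is not the actual model code, and the class name is hypothetical): in autoregressive generation, the key/value projections for past frames are computed once and stored, so each new step only processes the newest frame against the cached history instead of recomputing everything.

```python
class KVCache:
    """Toy illustration of autoregressive KV caching: keys/values for
    past steps are stored once and reused, so step t only computes
    projections for the newest frame."""

    def __init__(self):
        self.keys = []
        self.values = []

    def step(self, new_key, new_value):
        # Cache the newest frame's projections...
        self.keys.append(new_key)
        self.values.append(new_value)
        # ...then attention reads the full cached history without
        # recomputing earlier entries.
        return list(self.keys), list(self.values)


# Each iteration touches only one new entry; the history grows in the cache.
cache = KVCache()
history = None
for t in range(3):
    history = cache.step(f"k{t}", f"v{t}")
```

This is why autoregressive video models can extend a sequence frame by frame at roughly constant per-step cost for the new frame's projections.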
Whether you're generating animations, cinematic sequences, or identity-consistent AI videos, Self Forcing gives you full creative control while ensuring smooth, realistic motion.
In this section, you will upload your Start Keyframe and End Keyframe images for Self Forcing. These two images define the beginning and ending appearance of your generated video.
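Keyframe workflows generally expect both images to share the same resolution. As a minimal sketch (function names are hypothetical, and it assumes PNG inputs), you can verify this before uploading by reading each file's dimensions from the PNG IHDR header:

```python
import struct


def png_size(path):
    """Read (width, height) from a PNG file's IHDR chunk."""
    with open(path, "rb") as f:
        header = f.read(24)
    if header[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    # The IHDR chunk is always first: 8-byte signature, 4-byte length,
    # b'IHDR', then big-endian width and height.
    width, height = struct.unpack(">II", header[16:24])
    return width, height


def check_keyframes(start_path, end_path):
    """Ensure the start and end keyframes share the same resolution."""
    start, end = png_size(start_path), png_size(end_path)
    if start != end:
        raise ValueError(f"keyframe sizes differ: {start} vs {end}")
    return start
```

If the sizes differ, resize one image to match the other before loading them into the workflow.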
Set the total number of frames for your Self Forcing video to generate.
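The frame count determines the clip's duration together with the frame rate. As a quick sketch (the function name is hypothetical, and the 16 fps default is an assumption; check your model's settings for the actual rate):

```python
def video_duration(num_frames: int, fps: float = 16.0) -> float:
    """Return clip duration in seconds for a given frame count and
    frame rate. The 16 fps default is an assumed example rate."""
    if num_frames <= 0 or fps <= 0:
        raise ValueError("num_frames and fps must be positive")
    return num_frames / fps
```

For example, 81 frames at 16 fps yields a clip just over five seconds long.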
This group loads the Self Forcing autoregressive video diffusion model. The Self Forcing workflow automatically selects the correct model version for you.
In this section, you can enter your Text Prompt to guide the Self Forcing generation.
Once Self Forcing generation is complete, your video will be saved in the ComfyUI > output folder inside your ComfyUI directory.

This workflow uses the Self Forcing model developed by guandeh.
The Self Forcing workflow integrates Wan Video Wrapper nodes by kijai to enable seamless Self Forcing video generation inside ComfyUI.
Full credit goes to both authors for their original Self Forcing model development and integration work.
GitHub Repository: https://github.com/guandeh17/Self-Forcing
RunComfy is the premier ComfyUI platform, offering ComfyUI online environment and services, along with ComfyUI workflows featuring stunning visuals. RunComfy also provides AI Playground, enabling artists to harness the latest AI tools to create incredible art.